Abstract and Applied Analysis, Volume 2012 (2012), Article ID 648635, 21 pages

Existence and Multiplicity of Solutions for Some Fractional Boundary Value Problem via Critical Point Theory

1 School of Mathematical Sciences and Computing Technology, Central South University, Changsha, Hunan 410083, China
2 School of Mathematics and Computing Sciences, Hunan University of Science and Technology, Xiangtan, Hunan 411201, China

Received 18 October 2011; Accepted 27 November 2011. Academic Editor: Kanishka Perera. Copyright © 2012 Jing Chen and X. H. Tang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We study the existence and multiplicity of solutions for a fractional boundary value problem in the cases where the potential is superquadratic, asymptotically quadratic, and subquadratic, respectively. Several examples are presented to illustrate our results.

1. Introduction and Main Results

Consider the fractional boundary value problem (BVP for short) of the following form: (1.1), where and are the left and right Riemann-Liouville fractional integrals of order , respectively, and the potential F satisfies the following assumption. (A) F is measurable in t for every x and continuously differentiable in x for a.e. t, and there exist , , such that for all x and a.e. t. In particular, if α = 1, BVP (1.1) reduces to a standard second-order boundary value problem.

Differential equations of fractional order generalize ordinary differential equations to noninteger order. This generalization is not a mere mathematical curiosity; it has interesting applications in many areas of science and engineering, such as viscoelasticity, electrical circuits, and neuron modeling. The need for fractional-order differential equations stems in part from the fact that many phenomena cannot be modeled by differential equations with integer derivatives. Such equations have attracted the attention of many researchers, and considerable work has been done in this regard; see the monographs of Kilbas et al. [1], Miller and Ross [2], Podlubny [3], and Samko et al. [4], the papers [5–20], and the references therein. Recently, many papers have dealt with the existence of solutions (or positive solutions) of nonlinear initial value problems, and of singular and nonsingular boundary value problems, for fractional differential equations, using techniques of nonlinear analysis (fixed-point theorems [12–14], Leray-Schauder theory [15, 16], the lower and upper solution method and the monotone iterative method [17–19], the Adomian decomposition method [20], etc.); see [12–20] and the references therein. Variational methods are very powerful techniques in nonlinear analysis and are extensively used in many disciplines of pure and applied mathematics, including ordinary and partial differential equations, mathematical physics, gauge theory, and geometric analysis.
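The displayed problem (1.1) did not survive extraction. In the setting of the cited paper of Jiao and Zhou [32], which this article follows, it presumably reads (a reconstruction, not the surviving text):

$$
\begin{cases}
\dfrac{d}{dt}\Big(\tfrac12\, {}_0D_t^{-\beta}\big(u'(t)\big) + \tfrac12\, {}_tD_T^{-\beta}\big(u'(t)\big)\Big) + \nabla F\big(t, u(t)\big) = 0, & \text{a.e. } t \in [0,T],\\[4pt]
u(0) = u(T) = 0,
\end{cases}
$$

with β = 2(1 − α) ∈ [0, 1), where ₀D_t^{−β} and _tD_T^{−β} denote the left and right Riemann-Liouville fractional integrals of order β, and F is the potential appearing in assumption (A). For α = 1 (β = 0) this is the second-order problem u″(t) + ∇F(t, u(t)) = 0, u(0) = u(T) = 0 mentioned above.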
The existence and multiplicity of solutions for Hamiltonian systems, Schrödinger equations, and Dirac equations have been studied extensively via critical point theory; see [21–34]. In [32], Jiao and Zhou obtained the existence of solutions for BVP (1.1) by the Mountain Pass theorem under the Ambrosetti-Rabinowitz condition (the A.R. condition). Under the usual A.R. condition, it is easy to show that the energy functional associated with the system has the Mountain Pass geometry and satisfies the (PS) condition. However, the A.R. condition is so strong that many potential functions cannot satisfy it; without it, the problem becomes more delicate and complicated. In this paper, in order to establish the existence and multiplicity of solutions for BVP (1.1) under distinct hypotheses on the potential function by critical point theory, we introduce a functional space , where , and divide the problem into the following three cases.

1.1. The Superquadratic Case. For the superquadratic case, we make the following assumptions. (A1) uniformly for some and a.e. . (A2) uniformly for some and a.e. . (A3) uniformly for some and a.e. , where and . We state our first existence result as follows. Theorem 1.1. Assume that (A1)–(A3) hold and that F satisfies condition (A). Then BVP (1.1) has at least one solution on .

1.2. The Asymptotically Quadratic Case. For the asymptotically quadratic case, we assume the following. (A2′) uniformly for some and a.e. . (A4) There exists such that for all and a.e. . (A5) for a.e. . Our second and third main results read as follows. Theorem 1.2. Assume that F satisfies (A), (A1), (A2′), (A4), and (A5). Then BVP (1.1) has at least one solution on . Theorem 1.3. Assume that F satisfies (A), (A1), (A2′), and the following conditions: (A4′) there exists such that for all and a.e. ; (A5′) for a.e. . Then BVP (1.1) has at least one solution on .

1.3. The Subquadratic Case. For the subquadratic case, we give the following multiplicity result. Theorem 1.4. Assume that F satisfies the following assumption: (A6) , where and is a constant. Then BVP (1.1) has infinitely many solutions on .

2. Preliminaries

In this section, we recall some background material on fractional differential equations and critical point theory. The properties of the space are also listed for the convenience of the reader. Definition 2.1 (see [1]). Let be a function defined on and . The left and right Riemann-Liouville fractional integrals of order for the function , denoted by and respectively, are defined by provided the right-hand sides are pointwise defined on , where is the gamma function. Definition 2.2 (see [1]). Let be a function defined on and . The left and right Riemann-Liouville fractional derivatives of order for the function , denoted by and respectively, are defined by where , and . The left and right Caputo fractional derivatives are defined via the above Riemann-Liouville fractional derivatives. In particular, they are defined for functions belonging to the space of absolutely continuous functions, which we denote by . is the space of functions such that and . In particular, . Definition 2.3 (see [1]). Let and . If and , then the left and right Caputo fractional derivatives of order for the function , denoted by and respectively, exist almost everywhere on . and are represented by respectively, where . Property 2.4 (see [1]). The left and right Riemann-Liouville fractional integral operators have the semigroup property, that is, Definition 2.5 (see [32]). Define and .
The fractional derivative space is defined by the closure of with respect to the norm where denotes the set of all functions with . It is obvious that the fractional derivative space is the space of functions having an -order Caputo fractional derivative and . Proposition 2.6 (see [32]). Let and . The fractional derivative space is a reflexive and separable Banach space. Proposition 2.7 (see [32]). Let and . For all , one has Moreover, if and , then According to (2.8), we can consider with respect to the norm Proposition 2.8 (see [32]). Define and . Assume that and the sequence converges weakly to in , that is, . Then in , that is, , as . Lemma 2.9 (see [32]). Let be defined by where satisfies assumption (A). If , then the functional defined by is continuously differentiable on , and we have Lemma 2.11 (see [32]). Let and be defined by (2.12). If assumption (A) is satisfied and is a solution of the corresponding Euler equation , then is a solution of BVP (2.10), which corresponds to a solution of BVP (1.1). Proposition 2.12 (see [32]). If , then for any , one has Lemma 2.13 (see [24]). Let be a real Banach space and suppose is differentiable. One says that satisfies the (PS) condition if any sequence in such that is bounded and as contains a convergent subsequence. Lemma 2.14 (Mountain Pass theorem [24]). Let be a real Banach space, and suppose is differentiable and satisfies the (PS) condition. Suppose that (i) , (ii) there exist and such that for all with , (iii) there exists in with such that . Then possesses a critical value . Moreover, can be characterized as where . Lemma 2.15 (Clark theorem [24]). Let be a real Banach space, with even, bounded below, and satisfying the (PS) condition. Suppose there is a set such that is homeomorphic to by an odd map, and . Then possesses at least distinct pairs of critical points.

3. Proof of the Theorems

In what follows, is a reflexive Banach space with the norm defined by It follows from Lemma 2.9 that the functional on given by is continuously differentiable on . Moreover, we have Recall that a sequence is said to be a (C) sequence of if is bounded and as . The functional satisfies condition (C) if every (C) sequence of has a convergent subsequence. This condition is due to Cerami [21].

3.1. Proof of Theorem 1.1. We will first establish the following lemma and then give the proof of Theorem 1.1. Lemma 3.1. Assume that (A), (A2), and (A3) hold; then the functional satisfies condition (C). Proof of Lemma 3.1. Let be a (C) sequence of , that is, is bounded and as . Then there exists such that for all . By (A2), there exist positive constants and such that for all and a.e. . It follows from that for all and a.e. . Therefore, we obtain for all and a.e. . Combining (2.14) and (3.8), we get On the other hand, by (A3), there exist and such that for a.e. and . By (A), we have for all and a.e. . Therefore, we obtain for all and a.e. . It follows from (3.5) and (3.12) that thus is bounded. If , then which, combined with (3.9), implies that is bounded. If , then where by (2.8). Since , it follows from (3.9) that is bounded too. Thus is bounded in . By Proposition 2.8, the sequence has a subsequence, also denoted by , such that Then we obtain in by the same argument as in Theorem 5.2 of [32]. The proof of Lemma 3.1 is complete. Proof of Theorem 1.1. By (A1), there exist and such that for a.e. and with . Let Then it follows from (2.8) that for all with . Therefore, we have for all with . This implies that (ii) in Lemma 2.14 is satisfied.
It is obvious from the definition of and (A1) that , and therefore it suffices to show that satisfies (iii) in Lemma 2.14. By (A1), there exist and such that for all and a.e. . It follows from (A) that for all and a.e. . Therefore, we obtain for all and a.e. . Choosing , then For and noting (3.24) and (3.25), we have as , where is a positive constant. Then there exists a sufficiently large such that . Hence (iii) holds. Finally, noting that while for the critical point , . Hence is a nontrivial solution of BVP (1.1), and this completes the proof.

3.2. Proof of Theorem 1.2. The following lemmata are needed in the proof of Theorem 1.2. Lemma 3.2. Assume (A5); then for any , there exists a subset with meas such that uniformly for . Lemma 3.3. Assume (A), (A2′), (A4), and (A5); then the functional satisfies condition (C). Proof of Lemma 3.3. Suppose that is a (C) sequence of , that is, is bounded and as . Then we have which implies that We only need to show that is bounded in . If is unbounded, we may assume, without loss of generality, that as . Put ; we then have . Passing to a subsequence if necessary, we may assume that weakly in , strongly in , and . By (A2′), there exist constants and such that for all and a.e. . By assumption (A), it follows that for all and a.e. . Therefore, we obtain for all and a.e. . Therefore, we have from which it follows that Passing to the limit in the last inequality, we get which yields . Therefore, there exists a subset with meas such that on . By virtue of Lemma 3.2, we can choose a subset with meas such that uniformly for . We assert that meas . If not, meas . Since , it follows that which leads to a contradiction and establishes the assertion. By (A4), we obtain the following: By (3.36), (3.38), and Fatou's lemma, it follows that which contradicts (3.29). This contradiction shows that is bounded in , and this completes the proof. By virtue of Lemmas 3.2 and 3.3, the rest of the proof is similar to that of Theorem 1.1. Theorem 1.3 can be proved similarly.

3.3. Proof of Theorem 1.4. The proof of Theorem 1.4 is divided into a sequence of lemmas. Lemma 3.4. The functional is bounded below on . Lemma 3.5. The functional satisfies the (PS) condition. Proof of Lemma 3.5. Let be a Palais-Smale sequence in , that is, Suppose that is unbounded in , that is, as . From (3.42), we have that is a bounded sequence in . Since is a reflexive space, passing, if necessary, to a subsequence, we may assume that in ; thus we have as . Moreover, according to (2.8) and Proposition 2.8, is bounded in and as . Noting that Combining (3.44) and (3.45), it is easy to verify that as , and hence that in . Thus admits a convergent subsequence. The proof of Lemma 3.5 is complete. Lemma 3.6. For any , there exists a set which is homeomorphic to by an odd map, and . Proof of Lemma 3.6. For every , define where is a positive number to be chosen later. For any , there exist , , such that where and is a real quadratic form. Since So is a real positive definite quadratic form. Then there exist an invertible matrix and , , such that It is easy to prove that the odd mapping defined by is a homeomorphism between and . Since is a finite-dimensional space, there exists such that Otherwise, for any positive integer , there exists such that Set ; then for all , and Since , it follows from the compactness of the unit sphere of that there exists a subsequence, also denoted by , such that converges to some in . It is obvious that .
By the equivalence of norms on a finite-dimensional space, we have in , that is, By (3.54) and the Hölder inequality, we have Thus, there exists such that In fact, if not, we have for all positive integers . This implies that as . Hence which contradicts . Therefore, (3.56) holds. Now let and . By (3.53) and (3.56), we have for all positive integers . Let be large enough that then we have for all large , which contradicts (3.55). Therefore, (3.51) holds. For any , we have by (3.51), where . Choosing , we conclude which completes the proof. Now, by Lemma 2.15, has at least distinct pairs of critical points for every ; therefore, BVP (1.1) possesses infinitely many solutions on . The proof of Theorem 1.4 is complete.

4. Examples

In this section, we give some examples to illustrate our results. Example 4.2. In BVP (1.1), let and , where and will be specified below. Let . Noting that , we see that (A) and (A2′) hold. It is also easy to see that (A1) holds for Furthermore, we have as . Therefore, we have uniformly for all as . Thus (A4′) and (A5′) hold. By virtue of Theorem 1.3, we conclude that BVP (1.1) has at least one solution on . If , then exactly the same conclusions hold by Theorem 1.2.

This paper is partially supported by the NNSF of China (no. 11171351).

- A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, vol. 204 of North-Holland Mathematics Studies, Elsevier Science B. V., Amsterdam, The Netherlands, 2006.
- K. S. Miller and B. Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations, A Wiley-Interscience Publication, John Wiley & Sons, New York, NY, USA, 1993.
- I. Podlubny, Fractional Differential Equations, vol. 198 of Mathematics in Science and Engineering, Academic Press, San Diego, Calif, USA, 1999.
- S. G. Samko, A. A. Kilbas, and O. I. Marichev, Fractional Integrals and Derivatives: Theory and Applications, Gordon and Breach, Longhorne, Pa, USA, 1993.
- M. Benchohra, S. Hamani, and S. K. Ntouyas, “Boundary value problems for differential equations with fractional order and nonlocal conditions,” Nonlinear Analysis: Theory, Methods & Applications A, vol. 71, no. 7-8, pp. 2391–2396, 2009.
- R. P. Agarwal, M. Benchohra, and S. Hamani, “A survey on existence results for boundary value problems of nonlinear fractional differential equations and inclusions,” Acta Applicandae Mathematicae, vol. 109, no. 3, pp. 973–1033, 2010.
- V. Lakshmikantham and A. S. Vatsala, “Basic theory of fractional differential equations,” Nonlinear Analysis: Theory, Methods & Applications A, vol. 69, no. 8, pp. 2677–2682, 2008.
- J. Vasundhara Devi and V. Lakshmikantham, “Nonsmooth analysis and fractional differential equations,” Nonlinear Analysis: Theory, Methods & Applications A, vol. 70, no. 12, pp. 4151–4157, 2009.
- B. Ahmad, “Existence of solutions for irregular boundary value problems of nonlinear fractional differential equations,” Applied Mathematics Letters, vol. 23, no. 4, pp. 390–394, 2010.
- Y. Zhou, F. Jiao, and J. Li, “Existence and uniqueness for fractional neutral differential equations with infinite delay,” Nonlinear Analysis: Theory, Methods & Applications A, vol. 71, no. 7-8, pp. 3249–3256, 2009.
- J. Wang and Y. Zhou, “A class of fractional evolution equations and optimal controls,” Nonlinear Analysis: Real World Applications, vol. 12, no. 1, pp. 262–272, 2011.
- Z. Bai and H.
Lü, “Positive solutions for boundary value problem of nonlinear fractional differential equation,” Journal of Mathematical Analysis and Applications, vol. 311, no. 2, pp. 495–505, 2005. - S. Zhang, “Positive solutions to singular boundary value problem for nonlinear fractional differential equation,” Computers & Mathematics with Applications, vol. 59, no. 3, pp. 1300–1309, 2010. - X.-K. Zhao and W. Ge, “Unbounded solutions for a fractional boundary value problem on the infinite interval,” Acta Applicandae Mathematicae, vol. 109, no. 2, pp. 495–505, 2010. - Y. Zhang and Z. Bai, “Existence of solutions for nonlinear fractional three-point boundary value problems at resonance,” Journal of Applied Mathematics and Computing, vol. 36, no. 1-2, pp. 417–440, 2011. - W. Jiang, “The existence of solutions to boundary value problems of fractional differential equations at resonance,” Nonlinear Analysis: Theory, Methods & Applications A, vol. 74, no. 5, pp. 1987–1994, 2011. - S. Zhang, “Existence of a solution for the fractional differential equation with nonlinear boundary conditions,” Computers & Mathematics with Applications, vol. 61, no. 4, pp. 1202–1208, 2011. - S. Liang and J. Zhang, “Positive solutions for boundary value problems of nonlinear fractional differential equation,” Nonlinear Analysis: Theory, Methods & Applications A, vol. 71, no. 11, pp. 5545–5550, 2009. - Z. Wei, W. Dong, and J. Che, “Periodic boundary value problems for fractional differential equations involving a Riemann-Liouville fractional derivative,” Nonlinear Analysis: Theory, Methods & Applications A, vol. 73, no. 10, pp. 3232–3238, 2010. - H. Jafari and V. Daftardar-Gejji, “Positive solutions of nonlinear fractional boundary value problems using Adomian decomposition method,” Applied Mathematics and Computation, vol. 180, no. 2, pp. 700–706, 2006. - G. Cerami, “An existence criterion for the critical points on unbounded manifolds,” Istituto Lombardo. Accademia di Scienze e Lettere: Rendiconti A, vol. 112, no. 2, pp. 332–336, 1978 (Italian). - P. H. Rabinowitz, “Periodic solutions of Hamiltonian systems,” Communications on Pure and Applied Mathematics, vol. 31, no. 2, pp. 157–184, 1978. - J. Mawhin and M. Willem, Critical Point Theory and Hamiltonian Systems, vol. 74 of Applied Mathematical Sciences, Springer, New York, NY, USA, 1989. - P. H. Rabinowitz, Minimax Methods in Critical Point Theory with Applications to Differential Equations, vol. 65 of CBMS Regional Conference Series in Mathematics, American Mathematical Society, Providence, RI, USA, 1986. - G. Fei, “On periodic solutions of superquadratic Hamiltonian systems,” Electronic Journal of Differential Equations, vol. 2002, no. 8, pp. 1–12, 2002. - Y. H. Ding and S. X. Luan, “Multiple solutions for a class of nonlinear Schrödinger equations,” Journal of Differential Equations, vol. 207, no. 2, pp. 423–457, 2004. - M. J. Esteban and E. Séré, “Stationary states of the nonlinear Dirac equation: a variational approach,” Communications in Mathematical Physics, vol. 171, no. 2, pp. 323–350, 1995. - C.-L. Tang and X.-P. Wu, “Subharmonic solutions for nonautonomous sublinear second order Hamiltonian systems,” Journal of Mathematical Analysis and Applications, vol. 304, no. 1, pp. 383–393, 2005. - C.-L. Tang and X.-P. Wu, “Periodic solutions for second order systems with not uniformly coercive potential,” Journal of Mathematical Analysis and Applications, vol. 259, no. 2, pp. 386–397, 2001. - C. Troestler and M.
Willem, “Nontrivial solution of a semilinear Schrödinger equation,” Communications in Partial Differential Equations, vol. 21, no. 9-10, pp. 1431–1449, 1996. - X. Fan and X. Han, “Existence and multiplicity of solutions for p(x)-Laplacian equations in R^N,” Nonlinear Analysis: Theory, Methods & Applications A, vol. 59, no. 1-2, pp. 173–188, 2004. - F. Jiao and Y. Zhou, “Existence of solutions for a class of fractional boundary value problems via critical point theory,” Computers & Mathematics with Applications, vol. 62, no. 3, pp. 1181–1199, 2011. - S. Ma and Y. Zhang, “Existence of infinitely many periodic solutions for ordinary p-Laplacian systems,” Journal of Mathematical Analysis and Applications, vol. 351, no. 1, pp. 469–479, 2009. - Q. Zhang and C. Liu, “Infinitely many periodic solutions for second order Hamiltonian systems,” Journal of Differential Equations, vol. 251, no. 4-5, pp. 816–833, 2011.
[Fragment of Table 3.27: ptv/rms ratios over an annular pupil, including third-order astigmatism and ripple with (n_z − n_z′) complete sinusoidal wavelengths over the annular pupil, for which ptv/rms = 2√2 = 2.83.]

...that the annular radius contains a complete integral number (n_z − n_z′) of ripple (zonal) wavelengths, i.e. both n_z and n_z′ are integers, where n_z′ is the number of obscured ripple waves. As in Table 3.26, the result for astigmatism in Table 3.27 differs from that given by Schroeder because of the difference in the wavefront functions with cos 2φ and cos² φ respectively. Both formulations are valid and can readily be transformed into each other. For the combined function of spherical aberration and defocus, k_d ρ² + k_s1 ρ⁴, Table 3.27 shows that the ratio ptv/rms is unchanged by the optimum focus combination with k_d = −k_s1, since both quantities are reduced by the factor 4 (ptv = zonal aberration in this case). Similarly, for the coma-tilt combination the ratio ptv/rms is also unchanged for the optimum tilt k_t = −(2/3)k_c. The maximum aberration here is not the zonal aberration but the aberration at the edge of the pupil, (W)_{ρ=1} = ±(1/3)k_c, giving ptv = (2/3)k_c. With rms = k_c/√72, the ratio remains 2√8 = 5.66. These two cases of invariance to changes of the reference sphere are illustrations of the displacement theorem discussed above in the derivation of Eq. (3.459).

3.10.7 The diffraction PSF in the presence of larger aberrations: the Optical Transfer Function (OTF)

The effect of aberrations in optical systems may broadly be classified into three main groups: first, aberrations near the diffraction limit; second, aberrations above the diffraction limit but acceptable for the detector; third, very large aberrations relative to the diffraction limit. The first group has been dealt with in §§ 3.10.5 and 3.10.6 above. It assumes more and more importance with the improvement in quality of ground-based telescopes and sites, and the steadily increasing advance of space telescopes. The third group represents the domain where diffraction effects become negligible and geometrical-optical image sizes give a good description of the image. Until recently, accepted values for external "seeing" (non-local air turbulence) were so far above the diffraction limit that geometrical-optical interpretations of the image quality were reasonable, above all if the detector resolution is much lower than the diffraction image size given by (3.447). This is no longer the case if accurate assessment is desired. The second group is the most complex situation: neither the series approximations used in the first group nor the asymptotic developments of the third are adequate. The best tool for this group is the Optical Transfer Function (OTF). The theory of the OTF, essentially a systematic application of Fourier theory to optical imagery, was initiated by Duffieux in 1946 [3.136]. The application to television, with analogy to filter theory in communication systems, was a powerful impetus. A very extensive literature exists. General treatments of varying depth, all excellent within the aims of the work in question, are given by Maréchal-Françon [3.26(e)], Welford [3.6], Schroeder [3.22(h)] (particularly well adapted to the case of telescopes) and Wetherell [3.137]. For complete treatments of the Fourier theory required, see Maréchal-Françon [3.26(f)] or Goodman [3.138]. Here, we shall confine ourselves to the essential aspects applicable to normal use of telescopes, i.e. the case of incoherent light.
The OTF is based on the concept of the spatial frequency of the intensity in the object plane, represented as the sum of a continuous infinity of sinusoidal components. Each component is transmitted by the optical system with reduced contrast, the contrast being defined by C₀ = (I₀,max − I₀,min)/(I₀,max + I₀,min) for the object and by C_i correspondingly for the image, as shown in Fig. 3.107. If we consider an object consisting of only one sinusoidal frequency in the t-section, then the Modulation Transfer Function (MTF) for this frequency s is given simply by T(s) = C_i/C₀. Following Welford [3.6], we take coordinates in the object plane as ξ₀ and η₀ and consider these as transmitted to the image plane with the magnification factor m, so that for the image ξ = mξ₀, η = mη₀. For a sinusoidal object whose "lines" are parallel to the η₀-axis, we can write for the object intensity function (incoherently illuminated) I₀ = a₀(1 + cos 2πsξ₀), in which s is the spatial frequency in mm⁻¹ (the reciprocal of the period of the sinusoid), if ξ₀ is expressed in mm, and a₀ is the normalized amplitude. According to (3.483), the contrast of the object is C₀ = 1. The transfer through the system then gives at the image where the factor a_i/a₀ represents the constant intensity loss due to absorption etc. As with the diffraction integral, it is most convenient in the general formulation to use the complex form from the Euler relation. This general formulation is given, for example, by Maréchal-Françon [3.26]. For a simple frequency s, (3.486) then becomes where R is the real part of the complex quantity L. It follows that the real part of the OTF L is the MTF and the phase shift is arg L(s). In the general case, we have a two-dimensional situation depending not only on s but on the position in the field and the aberrations affecting that image point. The OTF will depend on whether the intensity function is in the t- or s-section. For symmetrical aberrations, the phase shift will be zero; for coma it is not zero, though it may be small in practice. An important condition for the generalisation is that the aberration function be invariant over the range of the lateral aberration at any point: this is the isoplanatism condition. It will only break down if the aberrations become very large, in which case the optical system is useless in practice. If the wavefront aberration in the pupil is W(x, y), then as for (3.459) the complex amplitude in the exit pupil induced by a point object is proportional to e^{ikW(x,y)} with k = 2π/λ as before. We define the pupil function P(x, y), having the form e^{ikW(x,y)} of (3.488) over the area of the pupil and zero outside it. The diffraction integral (3.439) can then be written, ignoring constants, to give the complex amplitude at the image point ξ, η in terms of rectangular pupil coordinates as the Fourier transform of the pupil function P(x, y).
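The displayed relations of this passage were lost in extraction. For orientation, the standard result to which this construction leads for incoherent light is that the OTF is the normalized autocorrelation of the pupil function. A sketch in the present notation (our reconstruction, with R the radius of the reference sphere, so that a spatial frequency s in the ξ-direction corresponds to a pupil shear λRs):

$$
L(s) \;=\; \frac{\displaystyle\iint P(x,y)\,P^{*}(x-\lambda R s,\,y)\,dx\,dy}{\displaystyle\iint |P(x,y)|^{2}\,dx\,dy},
$$

normalized so that L(0) = 1. For an aberration-free circular pupil of diameter D this autocorrelation falls off monotonically and reaches zero at the incoherent cut-off frequency s = D/(λR) in this normalization.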
Writing slope-intercept equations. A line goes through the points (-1, 6) and (5, -4). What is the equation of the line? Let's just try to visualize this. So that is my x-axis. And you don't have to draw it to do this problem, but it always helps to visualize. That is my y-axis. And the first point is (-1, 6). So negative 1 comma, 1, 2, 3, 4, 5, 6. So it's this point, right over there, it's (-1, 6). And the other point is (5, -4). So 1, 2, 3, 4, 5. And we go down 4. So 1, 2, 3, 4. So it's right over there. So the line that connects them will look something like this. I'll draw a rough approximation. I can draw it a little straighter than that. I will draw a dotted line maybe. It's easier to do a dotted line. So the line will look something like that. So let's find its equation. So a good place to start is that we can find its slope. Remember, we want to find the equation y is equal to mx plus b. This is the slope-intercept form, where m is the slope and b is the y-intercept. We can first try to solve for m. We can find the slope of this line. So m, or the slope, is the change in y over the change in x. Or, we can view it as the y-value of our endpoint minus the y-value of our starting point over the x-value of our endpoint minus the x-value of our starting point. Let me make that clear. So this is equal to change in y over change in x, which is the same thing as rise over run, which is the same thing as the y-value of your ending point minus the y-value of your starting point. This is the same exact thing as change in y. And that's over the x-value of your ending point minus the x-value of your starting point. This is the exact same thing as change in x. And you just have to pick one of these as the starting point and one as the ending point. So let's just make this over here our starting point and make that our ending point. So what is our change in y? So for our change in y: we started at y is equal to 6. And we go down all the way to y is equal to negative 4. So this right here, that is our change in y. You can look at the graph and say, oh, if I start at 6 and I go to negative 4, I went down 10. Or if you just want to use this formula here, it will give you the same thing. We finished at negative 4, and from that we want to subtract 6. This right here is y2, our ending y, and this is our beginning y, this is y1. So y2, negative 4, minus y1, 6. Or negative 4 minus 6. That is equal to negative 10. And all it does is tell us the change in y to go from this point to that point: we have to go down, our rise is negative, we have to go down 10. That's where the negative 10 comes from. Now we just have to find our change in x. So we can look at this graph over here. We started at x is equal to negative 1 and we go all the way to x is equal to 5. So it takes us one to go to zero and then five more. So our change in x is 6. You can look at that visually there, or you can use this formula, same exact idea: our ending x-value is 5 and our starting x-value is negative 1. 5 minus negative 1. 5 minus negative 1 is the same thing as 5 plus 1. So it is 6. So our slope here is negative 10 over 6, which is the exact same thing as negative 5 thirds, as negative 5 over 3. I divided the numerator and the denominator by 2. So we now know our equation will be y is equal to negative 5 thirds, that's our slope, x plus b.
So we still need to solve for the y-intercept to get our equation. And to do that, we can use the information that we know. In fact, we have several pieces of information. We can use the fact that the line goes through the point (-1, 6). You could use the other point as well. We know that when x is equal to negative 1, y is equal to 6. So negative 5 thirds times x, when x is equal to negative 1, gives y equal to 6. So we literally just substitute this x and y value back into this, and now we can solve for b. So let's see, this is negative 1 times negative 5 thirds. So we have 6 is equal to positive 5 thirds plus b. And now we can subtract 5 thirds from both sides of this equation: we have subtracted it from the left-hand side and subtracted it from the right-hand side. And then we get, what's 6 minus 5 thirds? So that's going to be, let me do it over here, we take a common denominator. So 6 minus 5 over 3: 6 is the same thing as 18 over 3, minus 5 over 3. And this is just 13 over 3. And then, of course, these cancel out. So we get b is equal to 13 thirds. So we are done. We know the slope and we know the y-intercept. The equation of our line is y is equal to negative 5 thirds x plus our y-intercept, which is 13 over 3. And we can write these as mixed numbers, if it's easier to visualize. 13 over 3 is 4 and 1 third. So this y-intercept right over here, that's 0 comma 13 over 3, or 0 comma 4 and 1 third. And even with my very roughly drawn diagram, it does look about right. And the slope negative 5 thirds, that's the same thing as negative 1 and 2 thirds. You can see here the slope is downward because the slope is negative. It's a little bit steeper than a slope of negative 1. It's not quite negative 2. It's negative 1 and 2 thirds, if you write it as a mixed number. So, hopefully, you found that entertaining.
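Written out compactly, the computation from the video is:

$$
m = \frac{y_2 - y_1}{x_2 - x_1} = \frac{-4 - 6}{5 - (-1)} = \frac{-10}{6} = -\frac{5}{3},
\qquad
6 = -\frac{5}{3}\,(-1) + b \;\Rightarrow\; b = 6 - \frac{5}{3} = \frac{13}{3},
$$

so the line is y = −(5/3)x + 13/3.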
The role of various symmetries in the evaluation of splitting functions and coefficient functions is discussed. Scale invariance in hard processes is known to be a guiding tool for understanding the dynamics. We discuss the constraints on splitting functions coming from various symmetries such as scale, conformal, and supersymmetry. We also discuss the Drell-Levy-Yan relation among splitting and coefficient functions in various schemes. The relations coming from conformal symmetry are also presented.

DESY 98-70, hep-ph/9806355

Relations among polarized and unpolarized splitting functions beyond leading order*

*Presented at the conference "Loops and Legs in Gauge Theories" at Rheinsberg, Germany, 19-24 April, 1998. Work supported in part by EU contract FMRX-CT98-0194.

J. Blümlein, V. Ravindran and W. L. van Neerven**, DESY-Zeuthen, Platanenallee 6, D-15738 Zeuthen, Germany.

**On leave of absence from Instituut-Lorentz, University of Leiden, P.O. Box 9506, 2300 HA Leiden, The Netherlands.

1 Scale Transformation

Symmetries are known to be a very useful guiding tool for understanding the dynamics of various physical phenomena. In particular, continuous symmetries have played an important role in particle physics in unravelling the structure of the dynamics at low as well as high energies. In hadronic physics, such symmetries at low energies were found to be useful to classify various hadrons. At high energy, where the masses of the particles can be neglected, one finds in addition to the above-mentioned symmetries new symmetries such as conformal and scale invariance. This for instance happens in deep inelastic lepton-hadron scattering (DIS), where the energy scale is much larger than the hadronic mass scale. At these energies one can in principle ignore the mass scale, and the resulting dynamics is purely scale independent. Limiting ourselves to scale transformations, the latter are defined by . An arbitrary quantum field is then transformed as follows where is the unitary operator and is its canonical dimension. Under this transformation the n-point Green's function behaves like where are the momenta and denotes the coupling constant. However, in a perturbative theory like QCD, scale invariance is broken due to the introduction of a regulator scale, which is rigid under conformal and scale transformations. Even if the regulator is removed, in the renormalized Green's function a renormalization scale is left which is rigid too. In this case the Green's function no longer satisfies a simple scaling equation; the latter is replaced by the Callan-Symanzik (CS) equation, which reads where and denote the beta-function and the anomalous dimension, respectively, with the property and as . If the beta-function vanishes at some fixed point, scale invariance is restored and the solution to this equation becomes Let us discuss the beta-function and the anomalous dimensions of composite operators for QCD. The latter are derived from the Green's function Here denotes the composite operator of spin , which is built out of quark and gluon fields with . If one chooses n-dimensional regularization, the renormalized Green's functions and the bare Green's functions (indicated by the subscript ) are related by where is the operator renormalization constant and indicates the ultraviolet pole terms in n-dimensional regularization (). Notice that there is more than one operator involved in the renormalization, so that we have to deal with mixing.
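The displayed equations of this section were lost in extraction. For orientation, the CS equation referred to above has the standard form (a sketch, up to sign conventions for the anomalous-dimension term):

$$
\left[\mu\frac{\partial}{\partial\mu} + \beta(g)\frac{\partial}{\partial g} - n\,\gamma(g)\right] G^{(n)}(p_1,\dots,p_n;\,g,\mu) = 0,
\qquad
\beta(g) = \mu\frac{\partial g}{\partial\mu},
$$

where γ(g) is the anomalous dimension of the field. At a fixed point g = g* with β(g*) = 0, the equation reduces to a pure scaling relation and the Green's function exhibits power-law behavior governed by γ(g*), as described in the text.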
If we amputate the external legs of the Green's function in (1), the anomalous dimension of the composite operator is given by The renormalization constant has an expansion in as Since the beta-function has the following form the are finite in the limit and one gets For the amputated Green's function the CS equation (3) reads In the case of scale invariance, i.e. and no mixing, the above CS equation has the simple solution The splitting functions are related to these anomalous dimensions via a Mellin transformation given by The above analysis, based on scale transformations, suggests that only in a scale-invariant theory does the Green's function have the form given in Eq. (12). This is no longer true in a scale-breaking theory like QCD. The same holds for the anomalous dimension, which in the case of no mixing and scale invariance is independent of the subtraction scheme. This changes when the symmetry is broken, as we show below.

2 Supersymmetric Relations

In this section we discuss some relations among the splitting functions which govern the evolution of the quark and gluon parton densities. These relations are valid when QCD becomes a supersymmetric gauge field theory, where both quarks and gluons are put in the adjoint representation with respect to the local gauge symmetry. In this case one gets a simple relation between the colour factors, which become C_A = C_F = 2n_f T_f. In the case of spacelike splitting functions, which govern the evolution of the parton densities in deep inelastic lepton-hadron scattering, the claim has been made (see ) that the combination defined by is equal to zero. This relation should follow from an N = 1 supersymmetry, although no explicit proof has been given yet. An explicit calculation at leading order (LO) confirms this claim, so that we have However, at next-to-leading order (NLO), when these splitting functions are computed in the MS-bar scheme, it turns out that the combination does not vanish. Actually, one finds in the unpolarized case (see ) whereas in the polarized case one obtains The reason that this relation is violated can be attributed to the regularization method and the renormalization scheme in which these splitting functions are computed. In this case it is n-dimensional regularization and the MS-bar scheme which break the supersymmetry. In fact, the breaking occurs already in the ε-dependent part of the leading-order splitting functions. Although this does not affect the leading-order splitting functions in the limit ε → 0, it leads to a finite contribution at the NLO level via the pole terms which are characteristic of a two-loop calculation (see Eq. (8)). If one carefully and consistently removes such breaking terms at the LO level, one can avoid these terms at the NLO level. They can be avoided if one uses dimensional reduction, which preserves the supersymmetry. Another possibility is to convert the splitting functions from one scheme to another by the following transformation where is a finite renormalization. Under this transformation the anomalous dimensions in the new scheme become After Mellin inversion (see (14)) one gets in the unpolarized case and for the polarized case we have In this new (primed) scheme it turns out that The above observations also apply to the timelike splitting functions, denoted by a tilde, which govern the evolution of the fragmentation functions. Substitution of their expressions into Eq.
(15) yields the MS-bar-scheme results For the polarized case we need the splitting functions in so that we get

3 Drell-Levy-Yan Relation

The Drell-Levy-Yan (DLY) relation relates the structure functions measured in deep inelastic scattering to the fragmentation functions observed in e+e−-annihilation. Here x denotes the Bjorken scaling variable, which in deep inelastic scattering and e+e−-annihilation is defined by x = −q²/(2p·q) and x = 2p·q/q², respectively. Notice that in deep inelastic scattering the virtual photon momentum q is spacelike, i.e. q² < 0, whereas in e+e−-annihilation it becomes timelike, q² > 0. Further, p denotes the in- or outgoing hadron momentum. The DLY relation looks as follows where denotes the analytic continuation from the spacelike region (DIS) to the timelike region (annihilation). At the level of splitting functions we have At LO, one finds . Explicitly, At the leading-order level, one finds that the timelike and spacelike splitting functions coincide, which is nothing but the Gribov-Lipatov relation . This relation is known to be violated at the level of physical observables when one goes beyond leading order . On the other hand, the DLY (analytic continuation) relation defined above does hold at the level of physical quantities, provided the analytic continuation is performed both in x and in the scale () (see below). In the analytic continuation, care is needed when one goes beyond LO if dimensional regularization is adopted. The correct relation in dimensional regularization reads as follows: The extra term arises due to the difference between the spacelike and timelike phase-space integrations. Starting from the definitions of the splitting functions, one finds that they are related by the simple relation The DLY relation between the NLO coefficient functions appearing in DIS and e+e−-annihilation can be worked out in the same way as we did for the splitting functions above. In the remainder of this section we study only the gluonic coefficient functions corresponding to the deep inelastic structure functions and the fragmentation functions; the conclusions apply to the quark coefficient functions as well. The spacelike gluonic coefficient function for the polarized case in DIS originates from the photon-gluon fusion process and is given by Here the collinear singularity is treated in n-dimensional regularization and the scale is the factorization scale. For e+e−-annihilation the timelike coefficient function becomes The violation of the DLY relation is due to the regularization method and the scheme adopted to remove the collinear singularities from the partonic cross sections. This is the reason we get a mismatch between the phase-space integrations in the spacelike and timelike cases, which is equal to . This factor multiplies the lowest-order pole term, which leads to the finite contribution on the right-hand side of Eq. (33). The violation is an artifact of dimensional regularization and the choice of the MS-bar scheme. For example, if one chooses a regularization in which the gluon gets a mass and one removes the mass singularity only, the space- and timelike coefficient functions become, respectively, so that the DLY relation is satisfied. The same happens when the quark gets a mass . After removing the mass singularity one gets Hence the violation of the DLY relation for the splitting functions and the coefficient functions separately is just an artifact of the adopted regularization method and subtraction scheme.
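The displayed LO relations of this section were lost in extraction. For orientation, a sketch of the LO statement in our notation (endpoint terms, i.e. the plus-distribution and δ(1 − x) contributions at x = 1, are omitted): the Gribov-Lipatov relation says that the timelike and spacelike kernels coincide, and the LO non-singlet kernel is self-reciprocal under the DLY continuation x → 1/x,

$$
\tilde P^{(0)}_{qq}(x) = P^{(0)}_{qq}(x),
\qquad
P^{(0)}_{qq}(x) = C_F\,\frac{1+x^{2}}{1-x}\quad (x<1),
\qquad
-\,x\,P^{(0)}_{qq}\!\left(\tfrac{1}{x}\right) = P^{(0)}_{qq}(x),
$$

where the last equality is checked by direct substitution into the unregularized kernel.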
When these coefficient functions are combined with the splitting functions in a scheme-invariant way, as for instance happens for the structure functions and fragmentation functions, the above relation holds. The reason for the cancellation of the DLY-violating terms among the splitting functions and coefficient functions is that the former are generated by simple scheme transformations.

4 Supersymmetric and Conformal Relations

In this section we study the constraints coming from conformal symmetry on the splitting functions in an N = 1 supersymmetry. The following set of relations has been derived between the unpolarized () and polarized () splitting functions. The LO splitting functions satisfy the above relations, but at the NLO level they are violated in the MS-bar scheme. In that scheme the difference between the left- and right-hand sides of Eqs. (38) and (39) is given by respectively. Following the discussion below Eq. (15), these relations can be preserved by making finite scheme transformations. Another interesting relation is the one between the non-diagonal entries of the splitting-function matrix: The known LO splitting functions satisfy this relation, but it is violated by the NLO splitting functions in the MS-bar scheme. Interestingly, the violation comes from terms such as . These terms cannot be removed by a finite scheme transformation, so that the above equation no longer holds at NLO, irrespective of the chosen scheme. We have discussed the relations between the splitting functions coming from various symmetries, such as scale symmetry, conformal symmetry, and supersymmetry, at the level of NLO splitting functions and coefficient functions. The Drell-Levy-Yan relation among them has also been discussed at the NLO level. Most of the relations coming from these symmetries are violated in dimensional regularization with the MS-bar prescription. The breaking terms can be identified at the leading-order level, and by a simple finite renormalization one can preserve the relations coming from the scale and supersymmetric constraints. The breaking due to conformal non-invariant terms (see Eq. (42)) cannot be cured by a simple finite renormalization.

- C. G. Callan, Jr., Phys. Rev. D2 (1970) 1541; K. Symanzik, Comm. Math. Phys. 18 (1970) 227; Comm. Math. Phys. 23 (1971) 49; G. Parisi, Phys. Lett. B39 (1972) 643.
- Yu. L. Dokshitser, Sov. Phys. JETP 46 (1977) 641.
- W. Furmanski and R. Petronzio, Phys. Lett. 97B (1980) 437.
- R. Mertig and W. L. van Neerven, Z. Phys. C70 (1996) 637; W. Vogelsang, Phys. Rev. D54 (1996) 2023; Nucl. Phys. B475 (1996) 47.
- M. Stratmann and W. Vogelsang, Nucl. Phys. B496 (1997) 41.
- S. D. Drell, D. J. Levy and T. M. Yan, Phys. Rev. 187 (1969) 2159; Phys. Rev. D1 (1970) 1617; V. N. Gribov and L. N. Lipatov, Sov. J. Nucl. Phys. 15 (1972) 675; E. G. Floratos, C. Kounnas and R. Lacaze, Nucl. Phys. B192 (1981) 417.
- G. Curci, W. Furmanski and R. Petronzio, Nucl. Phys. B175 (1980) 27.
- G. T. Bodwin and J. Qiu, Phys. Rev. D41 (1990) 2755.
- D. de Florian and R. Sassot, Nucl. Phys. B488 (1997) 367.
- V. Ravindran, Nucl. Phys. B490 (1997) 272.
- J. Blümlein, V. Ravindran and W. L. van Neerven, in preparation.
- A. P. Bukhvostov, G. V. Frolov, L. N. Lipatov and E. A. Kuraev, Nucl. Phys. B258 (1985) 601.
Pocklington primality test

In mathematics, the Pocklington–Lehmer primality test is a primality test devised by Henry Cabourn Pocklington and Derrick Henry Lehmer to decide whether a given number N is prime. The output of the test is a proof that the number is prime or that primality could not be established. The test relies on the Pocklington theorem (Pocklington criterion), which is formulated as follows: Let N > 1 be an integer, and suppose there exist numbers a and q such that (1) q is prime, q divides N − 1, and q > √N − 1; (2) a^(N−1) ≡ 1 (mod N); (3) gcd(a^((N−1)/q) − 1, N) = 1. Then N is prime.

Proof of this theorem: Suppose N is not prime. This means there must be a prime p, where p ≤ √N, that divides N. Therefore q > √N − 1 ≥ p − 1, which implies gcd(q, p − 1) = 1 since q is prime. Thus there must exist an integer u with the property that uq ≡ 1 (mod p − 1). Since p divides N, condition (2) gives a^(N−1) ≡ 1 (mod p); then, by Fermat's little theorem, a^((N−1)/q) ≡ (a^((N−1)/q))^(uq) = (a^(N−1))^u ≡ 1 (mod p). Hence p divides gcd(a^((N−1)/q) − 1, N), contradicting condition (3).

The test is simple once the theorem above is established. Given N, seek to find suitable a and q. If they can be obtained, then N is prime. Moreover, a and q are the certificate of primality. They can be quickly verified to satisfy the conditions of the theorem, confirming N as prime. A problem which arises is the ability to find a suitable q that satisfies (1)–(3) and is provably prime. It is quite possible that such a q does not exist: only 57.8% of odd primes N have such a q. Finding a is not nearly so difficult. If N is prime and a suitable q has been found, each choice of a with 1 < a < N will satisfy a^(N−1) ≡ 1 (mod N), and will also satisfy condition (3) as long as ord(a) does not divide (N − 1)/q. Thus a randomly chosen a is likely to work. If a is a generator mod N, its order is N − 1, and so the method is guaranteed to work for this choice.

Generalized Pocklington method

A generalized version of Pocklington's theorem covers more primes N. Let N − 1 factor as N − 1 = AB, where A and B are relatively prime, A > √N, and the factorization of A is known. If for every prime factor p of A there exists an integer a_p so that a_p^(N−1) ≡ 1 (mod N) and gcd(a_p^((N−1)/p) − 1, N) = 1, then N is prime. The reverse implication also holds: if N is prime, then for every prime factor p of A such an a_p can be found.

Proof of corollary: Let p be a prime dividing A, and let p^e be the maximum power of p dividing A. Let v be a prime factor of N. For the a_p from the corollary, set b ≡ a_p^((N−1)/p^e) (mod v). This means b^(p^e) ≡ a_p^(N−1) ≡ 1 (mod v), and because of gcd(a_p^((N−1)/p) − 1, N) = 1, also b^(p^(e−1)) ≢ 1 (mod v). This means that the order of b modulo v is p^e. Thus p^e divides v − 1. The same observation holds for each prime power factor of A, which implies that A divides v − 1. Specifically, this means v > A > √N. If N were composite, it would necessarily have a prime factor which is less than or equal to √N. It has been shown that there is no such factor, which implies that N is prime. To see the converse, choose a generator of the integers modulo N.

The Pocklington–Lehmer primality test follows directly from this corollary. We must first partially factor N − 1 into A and B. Then we must find an a_p for every prime factor p of A which fulfills the conditions of the corollary. If such a_p's can be found, the corollary implies that N is prime. According to Koblitz, a_p = 2 often works.

As an example, take N = 11351, so that N − 1 = 11350 = 2 · 5² · 227. Choose A = 5² · 227 = 5675 and B = 2. Now it is clear that gcd(A, B) = 1 and A = 5675 > √11351 ≈ 106.5. Next find an a_p for each prime factor p of A, testing small candidates such as 2 and 3 against the two conditions; in this example suitable a_p's exist for both p = 5 and p = 227, and thus N is prime. We have chosen a small prime for calculation purposes, but in practice when we start factoring A we will get factors that themselves must be checked for primality. It is not a proof of primality until we know our factors of A are prime as well. If we get a factor of A where primality is not certain, the test must be performed on this factor as well.
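To make the procedure concrete, here is a minimal Python sketch of the generalized test just described. It is illustrative, not from the article: the function name, the candidate list, and the assumption that the supplied factors of A are already proven prime are all ours.

    # Sketch of the generalized Pocklington check, assuming the caller
    # supplies the distinct prime factors of A (already proven prime,
    # e.g. by a recursive "down-run").
    from math import gcd, isqrt

    def pocklington_witnesses(N, A_factors, candidates=(2, 3, 5, 7)):
        """Try to certify N prime; return {p: a_p} on success, else None."""
        A = 1
        for p in A_factors:                  # collect the full power of each p in N - 1
            while (N - 1) % (A * p) == 0:
                A *= p
        B = (N - 1) // A
        if gcd(A, B) != 1 or A <= isqrt(N):  # hypotheses of the corollary not met
            return None
        witnesses = {}
        for p in A_factors:
            for a in candidates:
                if pow(a, N - 1, N) == 1 and gcd(pow(a, (N - 1) // p, N) - 1, N) == 1:
                    witnesses[p] = a         # a_p found for this prime factor
                    break
            else:
                return None                  # no a_p found; primality not established
        return witnesses

    # The example implied by the text: N = 11351, N - 1 = 2 * 5**2 * 227,
    # so A = 5675 > isqrt(11351) = 106.
    print(pocklington_witnesses(11351, [5, 227]))

The returned dictionary is exactly the certificate discussed below: for each prime factor p of A it records a witness a_p that can be re-verified quickly.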
This gives rise to a so-called down-run procedure, where the primality of a number is evaluated via the primality of a series of smaller numbers. In our case, we can say with certainty that 2, 5, and 227 are prime, and thus we have proved our result. The certificate in our case is the list of a_p's, which can quickly be checked against the conditions of the corollary. If our example had given rise to a down-run sequence, the certificate would be more complicated. It would first consist of our initial round of a_p's, which correspond to the 'prime' factors of A; next, for the factor(s) of A whose primality was uncertain, we would have more a_p's, and so on for factors of these factors until we reach factors of which primality is certain. This can continue for many layers if the initial prime is large, but the important thing to note is that a simple certificate can be produced, containing at each level the prime to be tested and the corresponding a_p's, which can easily be verified. If at any level we cannot find a_p's, then we cannot say that N is prime. The biggest difficulty with this test is the necessity of discovering prime factors of N − 1, in essence factoring N − 1. In practice this could be extremely difficult. Finding a_p's is a less difficult problem.

Extensions and variants

The 1975 paper by Brillhart, Lehmer, and Selfridge gives a proof for what is shown above as the "generalized Pocklington theorem" as Theorem 4 on page 623. Additional theorems are shown allowing less factoring. This includes Theorem 3 (a strengthening of Proth's theorem of 1878): Let N − 1 = mp, where p is an odd prime such that 2p + 1 > √N. If there exists an a for which a^((N−1)/2) ≡ −1 (mod N), but a^(m/2) ≢ −1 (mod N), then N is prime. It also includes Theorem 5 on page 624, which allows a proof when the factored part has reached only N^(1/3). Many additional theorems are provided.

- Henry Cabourn Pocklington (1914–1916). "The determination of the prime or composite nature of large numbers by Fermat's theorem". Proceedings of the Cambridge Philosophical Society. 18: 29–30.
- Derrick Henry Lehmer (1927). "Tests for primality by the converse of Fermat's theorem". Bull. Amer. Math. Soc. 33 (3): 327–340. doi:10.1090/s0002-9904-1927-04368-3.
- Koblitz, Neal (1994). A Course in Number Theory and Cryptography, 2nd ed. Springer.
- Blake, Ian F.; Seroussi, Gadiel; Smart, Nigel Paul (1999). Elliptic Curves in Cryptography. Cambridge University Press.
- Washington, Lawrence C. (2003). Elliptic Curves: Number Theory and Cryptography. Chapman & Hall/CRC.
- Roberto Avanzi; Henri Cohen; Christophe Doche; Gerhard Frey; Tanja Lange; Kim Nguyen; Frederik Vercauteren (2005). Handbook of Elliptic and Hyperelliptic Curve Cryptography. Boca Raton: Chapman & Hall/CRC.
- John Brillhart; Derrick Henry Lehmer; John Selfridge (April 1975). "New Primality Criteria and Factorizations of 2^m ± 1". Mathematics of Computation. 29 (130): 620–647. doi:10.1090/S0025-5718-1975-0384673-1.
Richard Samuel Ward
Born: 6 September 1951
Known for: Penrose–Ward transform
Awards: Whitehead Prize (1989); Fellow of the Royal Society (2005)
Institutions: University of Durham
Doctoral advisor: Roger Penrose
Doctoral students: Paul Sutcliffe

Richard Samuel Ward FRS (born 6 September 1951) is a British mathematical physicist. He is a Professor of Theoretical Physics at the University of Durham. Ward earned his Ph.D. from the University of Oxford in 1977, under the supervision of Roger Penrose. He is best known for his extension of Penrose's twistor theory to nonlinear cases, which he and Michael Atiyah used to describe instantons by vector bundles on three-dimensional complex projective space. He has related interests in the theory of monopoles, topological solitons, and skyrmions. Ward was awarded the Whitehead Prize in 1989 for his work in mathematical physics. He was elected a Fellow of the Royal Society of London in 2005. His certificate of election reads: Richard Ward is distinguished for pioneering and elegant research in mathematical physics. He adapted the twistor transform to the self-dual Yang-Mills (SDYM) equation, and with Atiyah constructed general multi-instanton solutions. His discovery of the toroidal BPS two-monopole was a breakthrough in soliton theory. He showed that virtually all known integrable equations arise from SDYM by dimensional and algebraic reductions, allowing a unified solution method. Ward's twistor transform of SDYM, applied to string theory, is leading to striking progress in quantum Yang-Mills theory.

Sir Michael Francis Atiyah was a British-Lebanese mathematician specialising in geometry. Edward Witten is an American mathematical and theoretical physicist. He is currently the Charles Simonyi Professor in the School of Natural Sciences at the Institute for Advanced Study. Witten is a researcher in string theory, quantum gravity, supersymmetric quantum field theories, and other areas of mathematical physics. In addition to his contributions to physics, Witten's work has significantly impacted pure mathematics. In 1990, he became the first physicist to be awarded a Fields Medal by the International Mathematical Union, awarded for his 1981 proof of the positive energy theorem in general relativity. He is considered to be the practical founder of M-theory. In theoretical physics, twistor theory was proposed by Roger Penrose in 1967 as a possible path to quantum gravity and has evolved into a branch of theoretical and mathematical physics. Penrose proposed that twistor space should be the basic arena for physics from which space-time itself should emerge. It leads to a powerful set of mathematical tools that have applications to differential and integral geometry, nonlinear differential equations, and representation theory, and in physics to general relativity and quantum field theory, in particular to scattering amplitudes. Nigel James Hitchin FRS is a British mathematician working in the fields of differential geometry, gauge theory, algebraic geometry, and mathematical physics. He is a Professor Emeritus of Mathematics at the University of Oxford. Vladimir Gershonovich Drinfeld, surname also romanized as Drinfel'd, is a renowned mathematician from the former USSR, who emigrated to the United States and is currently working at the University of Chicago.
Alexander Markovich Polyakov is a Russian theoretical physicist, formerly at the Landau Institute in Moscow and, since 1990, at Princeton University, where he is the Joseph Henry Professor of Physics. In mathematics, a monopole is a connection over a principal G-bundle together with a section of the associated adjoint bundle. Montonen–Olive duality or electric–magnetic duality is the oldest known example of strong–weak duality or S-duality according to current terminology. It generalizes the electro-magnetic symmetry of Maxwell's equations by stating that magnetic monopoles, which are usually viewed as emergent quasiparticles that are "composite", can in fact be viewed as "elementary" quantized particles, with electrons playing the reverse role of "composite" topological solitons; the viewpoints are equivalent, the situation being dependent on the duality. It was later proven to hold true when dealing with an N = 4 supersymmetric Yang–Mills theory. It is named after Finnish physicist Claus Montonen and British physicist David Olive, who proposed the idea in their academic paper Magnetic monopoles as gauge particles?, where they state: There should be two "dual equivalent" field formulations of the same theory in which electric (Noether) and magnetic (topological) quantum numbers exchange roles. In theoretical physics, Seiberg–Witten theory is a theory that determines an exact low-energy effective action of an N = 2 supersymmetric gauge theory, namely the metric of the moduli space of vacua. In string theory, K-theory classification refers to a conjectured application of K-theory to superstrings, to classify the allowed Ramond–Ramond field strengths as well as the charges of stable D-branes. Wolf Paul Barth was a German mathematician who discovered Barth surfaces and whose work on vector bundles has been important for the ADHM construction. Until 2011 Barth was working in the Department of Mathematics at the University of Erlangen-Nuremberg in Germany. In mathematical physics and gauge theory, the ADHM construction or monad construction is the construction of all instantons using methods of linear algebra by Michael Atiyah, Vladimir Drinfeld, Nigel Hitchin, and Yuri I. Manin in their paper "Construction of Instantons." In differential geometry and gauge theory, the Nahm equations are a system of ordinary differential equations introduced by Werner Nahm in the context of the Nahm transform, an alternative to Ward's twistor construction of monopoles. The Nahm equations are formally analogous to the algebraic equations in the ADHM construction of instantons, where finite-order matrices are replaced by differential operators. In theoretical physics, the Penrose transform, introduced by Roger Penrose, is a complex analogue of the Radon transform that relates massless fields on spacetime to cohomology of sheaves on complex projective space. The projective space in question is the twistor space, a geometrical space naturally associated to the original spacetime, and the twistor transform is also geometrically natural in the sense of integral geometry. The Penrose transform is a major component of classical twistor theory. David Ian Olive was a British theoretical physicist. Olive made fundamental contributions to string theory and duality theory; he is particularly known for his work on the GSO projection and Montonen–Olive duality. The Journal of Nonlinear Mathematical Physics (JNMP) is a mathematical journal published by Atlantis Press.
It covers nonlinear problems in physics and mathematics, including applications, with topics such as quantum algebras and integrability; non-commutative geometry; spectral theory; and instantons, monopoles and gauge theory. Nikita Alexandrovich Nekrasov is a mathematical and theoretical physicist at the Simons Center for Geometry and Physics and the C.N. Yang Institute for Theoretical Physics at Stony Brook University in New York, and a Professor of the Russian Academy of Sciences. Periodic instantons are finite-energy solutions of Euclidean-time field equations which communicate between two turning points in the barrier of a potential and are therefore also known as bounces. Vacuum instantons, normally simply called instantons, are the corresponding zero-energy configurations in the limit of infinite Euclidean time. For completeness we add that "sphalerons" are the field configurations at the very top of a potential barrier. Vacuum instantons carry a winding number; the other configurations do not. Periodic instantons were discovered with the explicit solution of Euclidean-time field equations for double-well potentials and the cosine potential with non-vanishing energy, and are explicitly expressible in terms of Jacobian elliptic functions. Periodic instantons describe the oscillations between the two endpoints of a potential barrier between two potential wells. The frequency of these oscillations, or the tunneling between the two wells, is related to the bifurcation or level splitting of the energies of the states or wave functions associated with the wells on either side of the barrier. One can also interpret this energy change as the contribution to the well energy on either side originating from the integral describing the overlap of the wave functions on either side in the domain of the barrier. Integrable algorithms are numerical algorithms that rely on basic ideas from the mathematical theory of integrable systems. Olaf Lechtenfeld is a German mathematical physicist, academic and researcher. He is a full professor at the Institute of Theoretical Physics at Leibniz University Hannover, where he founded the Riemann Center for Geometry and Physics.
In chaos theory, the correlation dimension (denoted by ν) is a measure of the dimensionality of the space occupied by a set of random points, often referred to as a type of fractal dimension. For example, if we have a set of random points on the real number line between 0 and 1, the correlation dimension will be ν = 1, while if they are distributed on, say, a triangle embedded in three-dimensional space (or m-dimensional space), the correlation dimension will be ν = 2. This is what we would intuitively expect from a measure of dimension. The real utility of the correlation dimension is in determining the (possibly fractional) dimensions of fractal objects. There are other methods of measuring dimension (e.g. the Hausdorff dimension, the box-counting dimension, and the information dimension), but the correlation dimension has the advantage of being straightforwardly and quickly calculated, and is often in agreement with other calculations of dimension. For any set of N points in an m-dimensional space, the correlation integral C(ε) is calculated by C(ε) = g/N² (in the limit N → ∞), where g is the total number of pairs of points which have a distance between them that is less than ε (a graphical representation of such close pairs is the recurrence plot). As the number of points tends to infinity, and the distance between them tends to zero, the correlation integral, for small values of ε, will take the form C(ε) ~ ε^ν. If the number of points is sufficiently large, and evenly distributed, a log-log graph of the correlation integral versus ε will yield an estimate of ν. This idea can be qualitatively understood by realizing that for higher-dimensional objects there are more ways for points to be close to each other, and so the number of pairs close to each other rises more rapidly for higher dimensions. Grassberger and Procaccia introduced the technique in 1983; their article gives the results of such estimates for a number of fractal objects, as well as comparing the values to other measures of fractal dimension. The technique can be used to distinguish between (deterministic) chaotic and truly random behavior, although it may not be good at detecting deterministic behavior if the deterministic generating mechanism is very complex. As an example, in the "Sun in Time" article, the method was used to show that the number of sunspots on the sun, after accounting for the known cycles such as the daily and 11-year cycles, is very likely not random noise, but rather chaotic noise, with a low-dimensional fractal attractor.

- Grassberger, Peter; Procaccia, Itamar (1983). "Measuring the Strangeness of Strange Attractors". Physica D: Nonlinear Phenomena 9 (1–2): 189–208. Bibcode 1983PhyD....9..189G. doi:10.1016/0167-2789(83)90298-1.
- Grassberger, Peter; Procaccia, Itamar (1983). "Characterization of Strange Attractors". Physical Review Letters 50 (5): 346–349. Bibcode 1983PhRvL..50..346G. doi:10.1103/PhysRevLett.50.346.
- Grassberger, Peter (1983). "Generalized Dimensions of Strange Attractors". Physics Letters A 97 (6): 227–230. Bibcode 1983PhLA...97..227G. doi:10.1016/0375-9601(83)90753-3.
- DeCoster, Gregory P.; Mitchell, Douglas W. (1991). "The efficacy of the correlation dimension technique in detecting determinism in small samples". Journal of Statistical Computation and Simulation 39: 221–229.
- Sonett, C.; Giampapa, M.; Matthews, M. (Eds.) (1992). The Sun in Time. University of Arizona Press. ISBN 0-8165-1297-3.
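The log-log slope estimate described above is easy to reproduce numerically. The following is a minimal sketch of the Grassberger–Procaccia procedure, assuming NumPy is available; the point set, the range of ε values, and the function names are illustrative choices, not part of the original article.

```python
import numpy as np

def correlation_integral(points, eps):
    """Fraction of ordered point pairs closer than eps (approximates C(eps) = g / N^2)."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))   # pairwise Euclidean distances
    n = len(points)
    close = (dists < eps).sum() - n              # discard the n zero self-distances
    return close / n ** 2

def correlation_dimension(points, eps_values):
    """Slope of log C(eps) versus log eps estimates the correlation dimension nu."""
    c = np.array([correlation_integral(points, e) for e in eps_values])
    mask = c > 0                                 # keep eps values with at least one close pair
    slope, _ = np.polyfit(np.log(eps_values[mask]), np.log(c[mask]), 1)
    return slope

# Points spread along a line segment embedded in 3-D should give nu close to 1.
rng = np.random.default_rng(0)
t = rng.random((500, 1))
line = np.hstack([t, 2 * t, 3 * t])
print(correlation_dimension(line, np.logspace(-2, -0.5, 10)))
```

For a fractal attractor the same slope comes out fractional, which is the use case discussed in the references above.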
The velocity of a light pulse in a dielectric medium without dispersion is given by the speed of light in that medium. However, at the boundaries of such media some interesting effects occur, especially under conditions of total reflection. If the boundary at which total reflection occurs does not represent an infinitely extended barrier, partial tunneling through the barrier takes place. Experiments with multilayered mirrors and other barriers which reflect almost all the light have shown that transmission of light appears to occur at superluminal speeds and that transmission speeds do not depend on the thickness of the barrier. In this paper we discuss the effects for a special kind of barrier which is hit by a Gaussian pulse.

2. The arrangement of the barrier

As a barrier we define a slab of vacuum which separates two dielectric media with a given refractive index n > 1. The critical angle of incidence is given by sin θc = 1/n. Light pulses with an angle of incidence greater than this critical angle are almost completely reflected. However, there is a part of the electromagnetic field which can tunnel through the slab if its thickness is of the order of magnitude of the average wavelength of the pulse. It is this transmitted part of the pulse which we are interested in. Figure 1 outlines the arrangement of the boundary for the incoming pulse, resulting in a reflected and a transmitted component.

3. The description of the pulse

For the sake of simplicity we take a Gaussian pulse as can be provided by a mode-locked laser. For our investigation we define a two-dimensional coordinate system in the plane of incidence (the x,z-plane in Fig. 1). Let us first introduce the two unit vectors and ; being parallel to the propagation direction of the pulse, being perpendicular to . We choose the coordinate system in such a way that, at the time t=0, the pulse maximum is located at the origin, i.e., the electric field at t=0 is given by the integral (6), a superposition of plane waves. Propagating each plane wave in time gives the complete description of the pulse in time and space.

4. A plane wave at the barrier

We will now investigate what happens to a plane wave which hits the barrier. denotes the wave vector, is the angle of incidence. The plane wave can be written in the form (11). Let us examine what the evanescent wave looks like if the angle of incidence is greater than the critical angle. We use (11) to get an expression for the evanescent wave. The evanescent wave in the slab produces a transmitted wave at the second boundary. Let us transform the amplitude in (18).

5. The transmitted pulse

Let us return to the Gaussian pulse. In order to obtain the description of the transmitted pulse we have to replace the expressions for the plane waves in (8) by (21). The dependence of is shown in Fig. 2. In order to see the influence of the new terms in the exponential function we need some series expansions; for the real part we get an approximation. Let us find the maximum of the transmitted Gaussian wave packet at a specific time t. In a homogeneous medium without dispersion the maximum of a Gaussian pulse travels at a constant speed given by the vacuum speed of light divided by the refractive index of the medium. Its amplitude decays with time because of the diffraction of the pulse, and its width grows as the pulse travels. The transmitted wave propagates in the direction of the new wave vector. The physical interpretation of this directional change is the following.
There are wave components of the incoming pulse that have propagation directions which are more favorable for transmission through the slab than the main direction of the pulse. These preferentially transmitted components decay less in the slab than the other components of the pulse. These pulse components, having the smaller angles of incidence, constitute the main part of the transmitted pulse. This explains why the transmitted pulse leaves the slab at an angle slightly smaller than the initial angle of incidence. Where and when does the transmitted pulse occur on the opposite side of the slab? Equation (32) describes the pulse after transmission through the slab. By way of extrapolation to the time t=0, we may define a virtual origin of the pulse, which is shifted with respect to the true origin (x=0, z=0). From this virtual origin and the propagation vector we obtain the exit position for the transmitted pulse. Let us consider the tunneling time. There are some difficulties in defining a tunneling time. Experimentally it is not possible to directly detect the time point when the pulse leaves the slab; the pulse must be observed somewhere in the adjacent medium behind the slab. The time needed by the pulse to get from its origin to the point of detection can be measured. The question is what we should compare it to. We could do the same experiment without a slab. However, as we have seen, the transmitted pulse has a slightly different direction than the original pulse. Thus, an identical but undisturbed pulse will not be detected at the same place as the transmitted pulse, and the two paths of the pulses cannot be directly compared. It is much better to take as reference a pulse which travels through the homogeneous medium with the same direction as the transmitted pulse (see Fig. 3). In this case the two paths are parallel and differ only in the small lateral shift caused by the slab. Let us define the tunneling time as the difference between the time point when the maximum of the transmitted pulse appears on the far side of the slab and the time point when the undisturbed pulse hits the slab.

Fig. 3. The arrangement of the experiment to measure the time differences between tunneled and undisturbed pulses.

If we calculate the difference between the time needed by a pulse to reach the detector (Fig. 3) either with or without a slab in between, we obtain the tunneling time delay. The question is whether these results violate causality. Apparently the tunneling time for the pulse maximum is superluminal. However, it must be emphasized that it is only the maximum which appears to travel at this speed. Actually the pulse is reshaped in the slab and comes out in a different form. Furthermore, the energy distribution of Gaussian pulses is not limited in space. Therefore, we cannot apply the principle of causality in its simplest form. Only if there were a distinct front of the pulse could one state that at any point no signal can be detected before the pulse front, propagating with the vacuum speed of light, would reach it.

6. Visualization of tunneling

Fig. 5. Development of a pulse reflected at and transmitted through a barrier (viewed along the z-axis).
Fig. 6. Development of a pulse reflected at and transmitted through a barrier (viewed along the x-axis).

The following pictures are not based on the above approximations, but take into account multiple pulse reflections. They show the development of a tunneling pulse in time.
The first series (Fig. 4) gives an overview: the incident pulse approaches from behind (z<0, x>0). In the first picture (Fig. 4a) the pulse maximum is expected to be still in front of the slab, which is located between z=0 and z=3. The z coordinate is scaled in units of vacuum wavelengths of the laser. As the pulse is very close to the slab, parts of the incident and reflected pulse interfere. This causes the disturbances and the high values of the amplitude in the vicinity of the slab. Actually one would find standing waves in front of the slab (z<0); however, this fine structure is not resolved. As can be seen, the field magnitude decreases very rapidly in the slab. The maximum of the transmitted pulse already appears in the second picture (Fig. 4b), when the calculated position of the maximum of the incident pulse has not yet reached the slab. The last two figures (Fig. 4c, d) show the further development. Apart from the transmitted Gaussian pulse we can see a second small pulse, probably due to the multiple reflections or to higher-order terms which we did not take into account in our approximations. The following two series (Fig. 5a-d and 6a-d) show the pulse at the same time points, but in views perpendicular to the z-axis and x-axis, respectively. In these views the two main effects related to tunneling are nicely borne out. In the series of Fig. 5 the backward shift can be observed, whereas in the series of Fig. 6 it is evident that tunneling occurs faster than reflection. The maximum of the transmitted pulse already leaves the slab before the incident pulse hits the slab. In Fig. 6d, the transmitted pulse is farther away from the slab than the reflected one. I would like to thank Prof. J. Mostowski (Polish Academy of Sciences, Warsaw) for introducing me to this problem. I am also very grateful to Dr. Ch. Fattinger (F. Hoffmann-LaRoche Ltd, Basel) for his interest and many helpful comments and discussions.
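The two quantities that control the tunneling discussed above, the critical angle and the evanescent decay inside the vacuum slab, can be computed directly. A minimal sketch follows, assuming a refractive index n = 1.5 and an 800 nm carrier wavelength; the decay constant κ = k0·sqrt(n²·sin²θ − 1) is the standard evanescent-wave expression for frustrated total internal reflection and stands in for the paper's equations (11)-(21), which are not reproduced here.

```python
import numpy as np

n = 1.5                      # refractive index of the dielectric on both sides (assumed)
lam = 800e-9                 # carrier vacuum wavelength in metres (assumed)
theta = np.deg2rad(50.0)     # angle of incidence, chosen above the critical angle

theta_c = np.arcsin(1.0 / n)             # total reflection sets in at sin(theta_c) = 1/n
print(f"critical angle: {np.degrees(theta_c):.2f} degrees")

k0 = 2 * np.pi / lam                     # vacuum wavenumber of the carrier
kappa = k0 * np.sqrt((n * np.sin(theta)) ** 2 - 1)   # evanescent decay constant in the slab

# The amplitude surviving a slab of thickness d scales roughly as exp(-kappa * d),
# which is why the slab must be about as thick as the wavelength for tunneling.
for d in (0.5e-6, 1.0e-6, 2.0e-6):
    print(f"thickness {d * 1e6:.1f} um -> amplitude factor {np.exp(-kappa * d):.3e}")
```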
In the physical sciences, the wavenumber (also wave number or repetency) is the spatial frequency of a wave, measured in cycles per unit distance (ordinary wavenumber) or radians per unit distance (angular wavenumber). It is analogous to temporal frequency, which is defined as the number of wave cycles per unit time (ordinary frequency) or radians per unit time (angular frequency). [Figure: diagram illustrating the relationship between the wavenumber and the other properties of harmonic waves.] Wavenumber can be used to specify quantities other than spatial frequency. For example, in optical spectroscopy, it is often used as a unit of temporal frequency assuming a certain speed of light. Wavenumber, as used in spectroscopy and most chemistry fields, is defined as the number of wavelengths per unit distance, typically centimeters (cm−1): ν = 1/λ, where λ is the wavelength. It is sometimes called the "spectroscopic wavenumber". It equals the spatial frequency. A wavenumber in inverse cm can be converted to a frequency in GHz by multiplying by 29.9792458 (the speed of light in centimeters per nanosecond). An electromagnetic wave at 29.9792458 GHz has a wavelength of 1 cm in free space. In theoretical physics, a wave number defined as the number of radians per unit distance, sometimes called the "angular wavenumber", is more often used: k = 2π/λ. When wavenumber is represented by the symbol ν, a frequency is still being represented, albeit indirectly. As described in the spectroscopy section, this is done through the relationship ν = νs/c, where νs is a frequency in hertz. This is done for convenience as frequencies tend to be very large. Here we assume that the wave is regular in the sense that the different quantities describing the wave, such as the wavelength, frequency and thus the wavenumber, are constants; see wavepacket for discussion of the case when these quantities are not constant. The angular wavenumber satisfies k = 2π/λ = 2πν/vp = ω/vp, where ν is the frequency of the wave, λ is the wavelength, ω = 2πν is the angular frequency of the wave, and vp is the phase velocity of the wave. The dependence of the wavenumber on the frequency (or more commonly the frequency on the wavenumber) is known as a dispersion relation. For the special case of an electromagnetic wave in a vacuum, in which the wave propagates at the speed of light, k is given by k = ω/c = 2πν/c. The historical reason for using the spectroscopic wavenumber rather than frequency is that it is a convenient unit when studying atomic spectra by counting fringes per cm with an interferometer: the spectroscopic wavenumber is the reciprocal of the wavelength of light in vacuum, ν = 1/λvacuum, which remains essentially the same in air, and so the spectroscopic wavenumber is directly related to the angles of light scattered from diffraction gratings and the distance between fringes in interferometers, when those instruments are operated in air or vacuum. Such wavenumbers were first used in the calculations of Johannes Rydberg in the 1880s. The Rydberg–Ritz combination principle of 1908 was also formulated in terms of wavenumbers. A few years later spectral lines could be understood in quantum theory as differences between energy levels, energy being proportional to wavenumber, or frequency. However, spectroscopic data kept being tabulated in terms of spectroscopic wavenumber rather than frequency or energy. The spectroscopic wavenumber can also be converted into the wavelength of light in a medium: λ = 1/(nν), where n is the refractive index of the medium. Note that the wavelength of light changes as it passes through different media; however, the spectroscopic wavenumber (i.e., frequency) remains constant.
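The conversions stated above are one-liners; the sketch below collects them, using an assumed example value of 1000 cm−1 for the spectroscopic wavenumber.

```python
import math

c_cm_per_s = 2.99792458e10        # speed of light in cm/s
nu_tilde = 1000.0                 # spectroscopic wavenumber in cm^-1 (assumed example)

wavelength_cm = 1.0 / nu_tilde            # lambda = 1 / nu
freq_hz = nu_tilde * c_cm_per_s           # nu_s = c * nu
freq_ghz = nu_tilde * 29.9792458          # the GHz shortcut quoted in the text
k_angular = 2 * math.pi * nu_tilde        # angular wavenumber in rad/cm

print(f"{nu_tilde:.0f} cm^-1 -> wavelength {wavelength_cm * 1e4:.2f} um")
print(f"{nu_tilde:.0f} cm^-1 -> frequency {freq_hz:.4e} Hz = {freq_ghz:.1f} GHz")
print(f"angular wavenumber k = {k_angular:.1f} rad/cm")
```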
Conventionally, inverse centimeter (cm−1) units are used for the spectroscopic wavenumber ν, so often that such spatial frequencies are stated by some authors "in wavenumbers", incorrectly transferring the name of the quantity to the CGS unit cm−1 itself.

- Rodrigues, A.; Sardinha, R.A.; Pita, G. (2021). Fundamental Principles of Environmental Physics. Springer International Publishing. p. 73. ISBN 978-3-030-69025-0. Retrieved 2022-12-04.
- Solimini, D. (2016). Understanding Earth Observation: The Electromagnetic Foundation of Remote Sensing. Remote Sensing and Digital Image Processing. Springer International Publishing. p. 679. ISBN 978-3-319-25633-7. Retrieved 2022-12-04.
- Robinson, E.A.; Treitel, S. (2008). Digital Imaging and Deconvolution: The ABCs of Seismic Exploration and Processing. Geophysical references. Society of Exploration Geophysicists. p. 9. ISBN 978-1-56080-148-1. Retrieved 2022-12-04.
- Murthy, V. L. R.; Lakshman, S. V. J. (1981). "Electronic absorption spectrum of cobalt antipyrine complex". Solid State Communications 38 (7): 651–652. Bibcode:1981SSCom..38..651M. doi:10.1016/0038-1098(81)90960-1.
- Fiechtner, G. (2001). "Absorption and the dimensionless overlap integral for two-photon excitation". Journal of Quantitative Spectroscopy and Radiative Transfer 68 (5): 543–557. Bibcode:2001JQSRT..68..543F. doi:10.1016/S0022-4073(00)00044-3.
- US 5046846, Ray, James C. & Asari, Logan R., "Method and apparatus for spectroscopic comparison of compositions", published 1991-09-10.
- "Boson Peaks and Glass Formation". Science 308 (5726): 1221 (2005). doi:10.1126/science.308.5726.1221a. S2CID 220096687.
- Hollas, J. Michael (2004). Modern Spectroscopy. John Wiley & Sons. p. xxii. ISBN 978-0470844151.
IES MASTER GATE MATERIAL PLASTIC ANALYSIS (STEEL STRUCTURES) GATE – PSU – IES – GOVT EXAMS – STUDY MATERIAL FREE DOWNLOAD PDF

- Plastic Analysis – Objective Questions and Solutions
- Plastic Analysis – Conventional Questions and Solutions

In plastic analysis and design of a structure, the ultimate load of the structure as a whole is regarded as the design criterion. The term plastic reflects the fact that the ultimate load is found from the strength of steel in the plastic range. This method is rapid and provides a rational approach for the analysis of the structure. It also provides striking economy as regards the weight of steel, since the sections required by this method are smaller in size than those required by the method of elastic analysis. Plastic analysis and design has its main application in the analysis and design of statically indeterminate framed structures.

Theorems of Plasticity

There are three basic theorems of plasticity from which manual methods for collapse load calculations can be developed. Although attempts have been made to automate these methods on computers, the calculations based on them are still largely performed manually. The basic theorems of plasticity are the kinematic, static, and uniqueness theorems, which are outlined next.

Kinematic Theorem (Upper Bound Theorem)

This theorem states that the collapse load or load factor obtained for a structure that satisfies all the conditions of yield and collapse mechanism is either greater than or equal to the true collapse load. The true collapse load can be found by choosing the smallest value of the collapse loads obtained from all possible collapse mechanisms for the structure. The method derived from this theorem is based on the balance of external work and internal work for a particular collapse mechanism. It is usually referred to as the mechanism method.

Static Theorem (Lower Bound Theorem)

This theorem states that the collapse load obtained for a structure that satisfies all the conditions of static equilibrium and yield is either less than or equal to the true collapse load. In other words, the collapse load calculated from a collapse mode other than the true one can be described as conservative when the structure satisfies these conditions. The true collapse load can be found by choosing the largest value of the collapse loads obtained from all cases of possible yield conditions in the structure. The yield conditions assumed in the structure do not necessarily lead to a collapse mechanism for the structure. The use of this theorem for calculating the collapse load of an indeterminate structure usually considers static equilibrium through a flexibility approach to produce free and reactant bending moment diagrams. It is usually referred to as the statical method.

Uniqueness Theorem

It is quite clear that if a structure satisfies the conditions of both the static and kinematic theorems, the collapse load obtained must be true and unique. Therefore, the uniqueness theorem states that a true collapse load is obtained when the structure is under a distribution of bending moments that is in static equilibrium with the applied forces and no plastic moment capacity is exceeded at any cross section when a collapse mechanism is formed. In other words, a unique collapse load is obtained when the three conditions of static equilibrium, yield, and collapse mechanism are met.
It should be noted that an incremental elasto-plastic analysis such as that described satisfies all three of these conditions: (1) static equilibrium: elastic analysis is based on solving a set of equilibrium equations contained in matrices; (2) yield: the moment capacity for every section is checked and a plastic hinge is inserted if the plastic moment is reached in any section; insertion of a plastic hinge in the analysis ensures that the moment capacity is not exceeded; and (3) mechanism: the formation of a collapse mechanism is checked by (a) determining whether the determinant of the stiffness matrix is zero (a zero value leads to an error message if a computer is used for analysis) and (b) watching for excessive deflections if an exact zero stiffness cannot be detected. Hence, the collapse load obtained from an elasto-plastic analysis is, in general, unique.

Uniformly Distributed Loads (UDL)

When using the mechanism method, the main difficulty in dealing with a distributed load is to calculate the external work, as it normally requires integration for its evaluation. However, some convenient concepts can be developed to circumvent this difficulty. A code sketch of the mechanism-method work balance for a UDL follows this section.

Continuous Beams and Frames

In order to examine all possible collapse modes for continuous beams and frames, the concept of partial and complete collapse is introduced in the following section. In particular, partial collapse often occurs in continuous beams and frames upon which multiple loads are applied.

Partial and Complete Collapse

The discussion in the previous sections focused mainly on simple indeterminate structures. Typically, these structures have n degrees of indeterminacy and require n + 1 plastic hinges to form a collapse mechanism. In such cases, the structures are said to have failed by complete collapse. We define complete collapse as follows: when a structure with n degrees of indeterminacy collapses due to the formation of p plastic hinges where p = n + 1, the structure fails by complete collapse; in this case, determination of the member forces for the whole structure at collapse is always possible. Partial collapse of a structure can be defined as follows: when a structure with n degrees of indeterminacy collapses due to the formation of p plastic hinges where p < n + 1, the structure fails by partial collapse; in this case, it may not be possible to determine the member forces for some parts of the structure. Structures may fail plastically by complete or partial collapse. In either case, the stiffness of the structure at collapse is zero. For continuous beams and frames where the degree of indeterminacy is large, partial collapse is not uncommon. Contrary to the traditional thinking that plastic analysis is performed either by simple manual methods for simple structures or by sophisticated computer programs written for more general applications, this book intends to introduce general plastic analysis methods which take advantage of the availability of modern computational tools, such as linear elastic analysis programs and spreadsheet applications. These computational tools are in routine use in most engineering design offices nowadays. The powerful number-crunching capability of these tools enables plastic analysis and design to be performed for structures of virtually any size.
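As a concrete illustration of the mechanism-method work balance (external work of the loads equals internal work absorbed at the plastic hinges), the sketch below treats a fixed-ended beam, a standard textbook case assumed here for illustration rather than taken from the study material: hinges form at both ends and at midspan, and for a virtual end rotation θ the midspan deflection is θL/2.

```python
# Mechanism-method work balance for a fixed-ended beam with plastic moment
# capacity Mp and span L (standard textbook mechanisms, assumed for illustration).

def collapse_point_load(Mp, L):
    # Central point load W: external work W*(theta*L/2), internal work
    # Mp*(theta + 2*theta + theta) = 4*Mp*theta, hence Wc = 8*Mp/L.
    return 8.0 * Mp / L

def collapse_udl(Mp, L):
    # UDL w: the resultant w*L moves on average delta/2 = theta*L/4,
    # so w*L**2*theta/4 = 4*Mp*theta, hence wc = 16*Mp/L**2.
    return 16.0 * Mp / L ** 2

Mp, L = 120.0, 6.0   # kN*m and m (assumed values)
print(f"collapse point load: {collapse_point_load(Mp, L):.1f} kN")
print(f"collapse UDL:        {collapse_udl(Mp, L):.2f} kN/m")
```

By the kinematic theorem these are upper bounds; for these particular mechanisms they coincide with the true collapse loads.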
The amount of computation required for structural analysis is largely dependent on the degree of statical indeterminacy of the structure. For determinate structures, use of equilibrium conditions alone will enable the reactions and internal forces to be determined. For indeterminate structures, internal forces are calculated by considering both equilibrium and compatibility conditions, through which some methods of structural analysis suitable for computer applications have been developed. The use of these methods for analyzing indeterminate structures is usually not simple, and computers are often used for carrying out these analyses. Most structures in practice are statically indeterminate. Structural analysis, whether linear or nonlinear, is mostly based on matrix formulations to handle the enormous amount of numerical data and computations. Matrix formulations are suitable for computer implementation and can be applied to two major methods of structural analysis: the flexibility (or force) method and the stiffness (or displacement) method. The flexibility method is used to solve equilibrium and compatibility equations in which the reactions and member forces are formulated as unknown variables. In this method, the degree of statical indeterminacy needs to be determined first, and a number of unknown forces are chosen and released so that the remaining structure, called the primary structure, becomes determinate. The primary structure under the externally applied loads is analyzed and its displacement is calculated. A unit value for each of the chosen released forces, called redundant forces, is then applied to the primary structure (without the externally applied loads) so that, from the force-displacement relationship, displacements of the structure are calculated. The structure with each of the redundant forces is called the redundant structure. The compatibility conditions based on the deformation between the primary structure and the redundant structures are used to set up a matrix equation from which the redundant forces can be solved, as sketched below. The solution procedure for the force method requires selection of the redundant forces in the original indeterminate structure and the subsequent establishment of the matrix equation from the compatibility conditions. This procedure is not particularly suitable for computer programming, and the force method is therefore usually used only for simple structures.
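The compatibility step at the heart of the force method reduces to one linear solve. A generic sketch follows, with placeholder numbers for a structure with two redundants; the flexibility coefficients and primary-structure displacements would come from the elastic analysis described above.

```python
import numpy as np

# Compatibility: d0 + F @ X = 0, where d0 holds the primary-structure
# displacements at the released redundants under the external loads, F is the
# flexibility matrix (displacement at release i per unit redundant force j),
# and X holds the unknown redundant forces. All numbers below are placeholders.

F = np.array([[2.0e-3, 0.5e-3],
              [0.5e-3, 1.5e-3]])    # m per kN (assumed)
d0 = np.array([4.0e-3, 1.0e-3])     # m (assumed)

X = np.linalg.solve(F, -d0)         # redundant forces that restore compatibility
print("redundant forces (kN):", X)
```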
- Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this?
- Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be?
- Watch this film carefully. Can you find a general rule for explaining when the dot will be this same distance from the horizontal axis?
- Strike it Out game for an adult and child. Can you stop your partner from being able to go?
- In this problem we are looking at sets of parallel sticks that cross each other. What is the least number of crossings you can make? And the greatest?
- Got It game for an adult and child. How can you play so that you know you will always win?
- In this game for two players, the idea is to take it in turns to choose 1, 3, 5 or 7. The winner is the first to make the total 37.
- In each of the pictures the invitation is for you to: count what you see and identify how you think the pattern would continue.
- Use your addition and subtraction skills, combined with some strategic thinking, to beat your partner at this game.
- Think of a number, square it and subtract your starting number. Is the number you're left with odd or even? How do the images help to explain this?
- How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six...?
- Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread?
- Watch this animation. What do you see? Can you explain why this happens?
- Here are some arrangements of circles. How many circles would I need to make the next size up for each? Can you create your own arrangement and investigate the number of circles it needs?
- Can you find all the ways to get 15 at the top of this triangle of numbers? Many opportunities to work in different ways.
- Can you put the numbers 1-5 in the V shape so that both 'arms' have the same total?
- This challenge focuses on finding the sum and difference of pairs of two-digit numbers.
- Sweets are given out to party-goers in a particular way. Investigate the total number of sweets received by people sitting in different positions.
- Delight your friends with this cunning trick! Can you explain how it works?
- Find the sum and difference between a pair of two-digit numbers. Now find the sum and difference between the sum and difference! What happens?
- Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores.
- Find out what a "fault-free" rectangle is and try to make some of your own.
- Polygonal numbers are those that are arranged in shapes as they enlarge. Explore the polygonal numbers drawn here.
- This challenge, written for the Young Mathematicians' Award, invites you to explore 'centred squares'.
- These squares have been made from Cuisenaire rods. Can you describe the pattern? What would the next square look like?
- Ben's class were cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see?
- In how many different ways can you break up a stick of 7 interlocking cubes? Now try with a stick of 8 cubes and a stick of 6 cubes.
- Use the interactivity to investigate what kinds of triangles can be drawn on peg boards with different numbers of pegs.
- Can you explain the strategy for winning this game with any target?
- Can you find a way of counting the spheres in these arrangements?
- Can you dissect an equilateral triangle into 6 smaller ones? What number of smaller equilateral triangles is it NOT possible to dissect a larger equilateral triangle into?
- Find the sum of all three-digit numbers each of whose digits is odd.
- Take a counter and surround it by a ring of other counters that MUST touch two others. How many are needed?
- Are these statements relating to odd and even numbers always true, sometimes true or never true?
- Find a route from the outside to the inside of this square, stepping on as many tiles as possible.
- In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square?
- This task follows on from Build it Up and takes the ideas into three dimensions!
- Can you make dice stairs using the rules stated? How do you know you have all the possible stairs?
- The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?
- Nim-7 game for an adult and child. Who will be the one to take the last counter?
- How can you arrange these 10 matches in four piles so that when you move one match from three of the piles into the fourth, you end up with the same arrangement?
- One block is needed to make an up-and-down staircase, with one step up and one step down. How many blocks would be needed to build an up-and-down staircase with 5 steps up and 5 steps down?
- You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by...
- Here are two kinds of spirals for you to explore. What do you notice?
- This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning.
- This challenge asks you to imagine a snake coiling on itself.
- Can you work out how to win this game of Nim? Does it matter if you go first or second?
- Triangular numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
- What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles?
- What would be the smallest number of moves needed to move a Knight from a chess set from one corner to the opposite corner of a 99 by 99 square board?
- Open Access

Boundedness of solutions for semilinear Duffing’s equation with asymmetric nonlinear term

Journal of Inequalities and Applications, volume 2013, Article number: 476 (2013)

In this paper we study the following second-order periodic system: where has a singularity. Under some assumptions on , and , by Ortega’s small twist theorem we obtain the existence of quasi-periodic solutions and the boundedness of all solutions.

1 Introduction and main result

In the early 1960s, Littlewood asked whether or not the solutions of Duffing-type equations are bounded for all time, i.e., whether there are resonances that might cause the amplitude of the oscillations to increase without bound. The first positive result on boundedness of solutions in the superlinear case (i.e., as ) was due to Morris. By means of the KAM theorem, Morris proved that every solution of differential equation (1.1) is bounded if , where is piecewise continuous and periodic. This result relies on the fact that the nonlinearity can guarantee the twist condition of the KAM theorem. Later, several authors (see [3, 4]) improved the result for (1.1) and obtained a similar result for a large class of superlinear functions. When differential equation (1.1) is semilinear, similar results also hold, but the proof is more difficult since there may be a resonant case. Liu studied the following equation: where is 2π-periodic in t and has limits as . Under some reasonable assumptions on , Liu proved the existence of quasi-periodic solutions and the boundedness of solutions. Later, Cheng and Xu studied a more general equation where is 2π-periodic in t. They defined a new function , where has limits and a property similar to that of in . Then the authors proved the boundedness of solutions for (1.2). We observe that in is unbounded while in is bounded, and that is the major difference between and . The idea in [5, 6] is to change the original problem into a Hamiltonian system and then apply a twist theorem for area-preserving mappings to the Poincaré map. Recently, Capietto et al. studied the following equation: where is a π-periodic function, , , and ν is a positive integer. Under the Lazer-Leach assumption that , they proved the boundedness of solutions and the existence of a quasi-periodic solution by the Moser twist theorem. This was the first time that boundedness of all solutions was treated in the case of a singular potential. Motivated by the papers [5–7], we observe that in (1.3) is smooth and bounded, so a natural question is to find sufficient conditions on such that all solutions of (1.3) are bounded when is unbounded. The purpose of this paper is to deal with this problem. We consider the following equation: In order to state our main results, we give some notation and assumptions. Let be a π-periodic function and where . We suppose that the following Lazer-Leach assumption holds: Our main result is the following theorem.

Theorem 1 Under assumptions (1.6)-(1.8), all the solutions of (1.5) are defined for all , and for each solution , we have .

The main idea of our proof comes from . The proof of Theorem 1 is based on a small twist theorem due to Ortega. Hypotheses (1.6)-(1.8) of our theorem are used to prove that the Poincaré map of (1.5) satisfies the assumptions of Ortega’s theorem. Moreover, we have the following theorem on solutions of Mather type.

Theorem 2 Assume that satisfies (1.8); then there is such that for any , equation (1.5) has a solution of Mather type with rotation number ω.
More precisely: Case 1: is rational. The solutions , , are independent periodic solutions of period qπ; moreover, in this case, . Case 2: ω is irrational. The solution is either a usual quasi-periodic solution or a generalized one.

2 Proof of the theorem

2.1 Action-angle variables and some estimates

Observe that (1.5) is equivalent to the following Hamiltonian system: with the Hamiltonian function . In order to introduce action and angle variables, we first consider the auxiliary autonomous equation , which is an integrable Hamiltonian system with the Hamiltonian function . The closed curves are just the integral curves of (2.2). Denote by the time period of the integral curve of (2.2) defined by , and by I the area enclosed by the closed curve for every . Let be such that . It is easy to see that . By a direct computation, we get . We then have . We now give the estimates on the functions and .

Lemma 1 We have , where , . Note that here and below we always use C, or to indicate some constants.

Proof We first estimate the first inequality. We choose as the new variable of integration; then we have . Since and , we have . By a direct computation, we have , and then we get . When and h is sufficiently large, there exists such that , so we have . Since , we have . Observing that there is such that when and , we have . By (2.3)-(2.5) we have , . The proof of the second inequality is similar to that of the first one, so we only give a brief proof. We choose as the new variable of integration, so we have . By a direct computation, we have . By (2.6), we can easily get . In a similar way to the estimate of , we get , which means that . Thus Lemma 1 is proved. □

Remark 1 It follows from the definitions of , and Lemma 1 that . Thus the time period is dominated by when h is sufficiently large. From the relation between and , we know is dominated by when h is sufficiently large.

Remark 2 It also follows from the definition of , , and Remark 1 that .

Remark 3 Note that is the inverse function of . By Remark 2, we have .

We now carry out the standard reduction to the action-angle variables. For this purpose, we define the generating function , where C is the part of the closed curve connecting the point on the y-axis and the point . We define the well-known map by , which is symplectic since . From the above discussion, we can easily get . In the new variables , system (2.1) becomes . In order to estimate , we need the estimate on the functions .

Lemma 2 For I sufficiently large and , the following estimates hold: . The lemma was first proved in ; later Capietto et al. gave a different proof, and, using an induction hypothesis, Jiang and Fang gave another proof. So, for concision, we omit the proof.

2.2 New action and angle variables

Now we are concerned with Hamiltonian system (2.10) with the Hamiltonian function given by (2.11). Note that . This means that if one can solve I from (2.11) as a function of H (with θ and t as parameters), then is also a Hamiltonian system with the Hamiltonian function I, and now the action, angle and time variables are H, t and θ. From (2.11) and Lemma 1, we have . So, we assume that I can be written as , where R satisfies . Recalling that is the inverse function of , we have , which implies that . As a consequence, R is implicitly defined by . Now we give the estimates of R. In a way similar to the estimate of Lemma 2.3 in , we have the following lemma.

Lemma 3 The function satisfies the following estimates: . Moreover, by the implicit function theorem, there exists a function such that . By Lemmas 1 and 3, we have the estimates on .
Lemma 4 for .

For the estimate of , we need the estimate on . By Lemma 1 and noticing that , we have the following lemma.

Lemma 5 for .

Now the new Hamiltonian function is written in the form . System (2.12) is of the form . Introduce a new action variable and a parameter by . Then . Under this transformation, system (2.14) is changed into the form , which is also a Hamiltonian system with the new Hamiltonian function . Obviously, if , the solution of (2.15) with the initial data is defined in the interval and . So, the Poincaré map of (2.15) is well defined in the domain .

Lemma 6 (Lemma 5.1 in ) The Poincaré map of (2.15) has the intersection property. The proof is similar to the corresponding one in . For convenience, we introduce the notation and . We say a function if f is smooth in and for , for some constant which is independent of the arguments t, ρ, θ, ϵ. Similarly, we say if f is smooth in and for , uniformly in .

2.3 Poincaré map and twist theorems

We will use Ortega’s small twist theorem to prove that the Poincaré map P has an invariant closed curve if ϵ is sufficiently small. Let us first recall the theorem in .

Lemma 7 (Ortega’s theorem) Let be a finite cylinder with universal cover . The coordinate in is denoted by . Consider the map . We assume that the map has the intersection property. Suppose that , is a lift of and it has the form , where N is an integer and is a parameter. The functions , , and satisfy . In addition, we assume that there is a function satisfying . Moreover, suppose that there are two numbers and such that and . Then there exist and such that if and , the mapping has an invariant curve in ; the constant ϵ is independent of δ.

We make the ansatz that the solution of (2.15) with the initial condition is of the form . Then the Poincaré map of (2.15) is . The functions and satisfy , where , . By Lemmas 4, 6 and 7, we know that . Hence, for , we may choose ϵ sufficiently small such that . Moreover, we can prove that . Similarly to the estimate of , by a direct calculation, we have the following lemma.

Lemma 8 The following estimates hold: .

We now give an asymptotic expression of the Poincaré map of (2.14); that is, we study the behavior of the functions and at as . In order to estimate and , we need to introduce the following definition and lemma. Let .

Proof This lemma was proved in , so we omit the details. □

To estimate and , we need the estimates of x and . We recall that when , we have . When , by the definition of θ, we have , which yields that . Now we can give the estimates of and .

Lemma 10 The following estimates hold true: .

Proof First we consider . By Lemmas 3, 4, 8 and (2.22), we have . Since and means , we have . By the measure of , we have . By (2.26) and (2.27), we have . Now we consider . By Lemmas 3, 4, 8 and (2.22), we have . By (1.7), and since means , we have . By the measure of , we have . By (2.28) and (2.29), we have . Thus Lemma 10 is proved. □

2.4 Proof of Theorem 1

Then there are two functions and such that the Poincaré map of (2.15), given by (2.21), is of the form . Since , , we have . The other assumptions of Ortega’s theorem are easily verified. Hence, there is an invariant curve of P in the annulus , which implies the boundedness of solutions of our original equation (1.5). Thus Theorem 1 is proved.

2.5 Proof of Theorem 2

We apply Aubry-Mather theory. By Theorem B in and the monotone twist property of the Poincaré map P guaranteed by , it is straightforward to check that Theorem 2 holds.

References

Littlewood J: Unbounded solutions of . J. Lond. Math. Soc. 1966, 41: 133–149.
Morris GR: A case of boundedness of Littlewood’s problem on oscillatory differential equations. Bull. Aust. Math. Soc. 1976, 14: 71–93. 10.1017/S0004972700024862
Levi M: Quasiperiodic motions in superquadratic time-periodic potential. Commun. Math. Phys. 1991, 144: 43–82.
Dieckerhoff R, Zehnder E: Boundedness of solutions via the twist theorem. Ann. Sc. Norm. Super. Pisa, Cl. Sci. 1987, 14: 79–95.
Liu B: Boundedness of solutions for equations with p-Laplacian and an asymmetric nonlinear term. J. Differ. Equ. 2004, 207: 73–92. 10.1016/j.jde.2004.06.023
Cheng C, Xu J: Boundedness of solutions for a second-order differential equation. Nonlinear Anal. 2008, 7: 1993–2004.
Capietto A, Dambrosio W, Liu B: On the boundedness of solutions to a nonlinear singular oscillator. Z. Angew. Math. Phys. 2009, 60(6): 1007–1034. 10.1007/s00033-008-8094-y
Liu B: Quasi-periodic solutions of forced isochronous oscillators at resonance. J. Differ. Equ. 2009, 246: 3471–3495. 10.1016/j.jde.2009.02.015
Ortega R: Boundedness in a piecewise linear oscillator and a variant of the small twist theorem. Proc. Lond. Math. Soc. 1999, 79: 381–413. 10.1112/S0024611599012034
Jiang S, Fang F: Lagrangian stability of a class of second-order periodic systems. Abstr. Appl. Anal. 2011, 2011: Article ID 106214
Pei ML: Aubry-Mather sets for finite-twist maps of a cylinder and semilinear Duffing equations. J. Differ. Equ. 1994, 113: 106–127. 10.1006/jdeq.1994.1116

Thanks are given to the referees, whose comments and suggestions were very helpful for revising our paper. The authors declare that they have no competing interests. The article is a joint work of three authors, who contributed equally to the final version of the paper. All authors read and approved the final manuscript.

Cite this article: Jiang, S., Rao, F. & Shi, Y. Boundedness of solutions for semilinear Duffing’s equation with asymmetric nonlinear term. J Inequal Appl 2013, 476 (2013). https://doi.org/10.1186/1029-242X-2013-476

Keywords: boundedness of solutions; small twist theorem
4 editions of Basic Probability Theory and Applications found in the catalog.

Basic Probability Theory and Applications

- Series: Goodyear mathematics series
- LC Classifications: QA273 .K448
- Pagination: xi, 516 p.
- Number of Pages: 516
- LC Control Number: 75011186

Notes on Probability Theory and Statistics. This note explains the following topics: probability theory; random variables, distribution functions, and densities; expectations and moments of random variables; parametric univariate distributions; sampling theory; point and interval estimation; hypothesis testing; statistical inference; asymptotic theory; and the likelihood function.

Probability theory arose originally in connection with games of chance, and then for a long time it was used primarily to investigate the credibility of testimony of witnesses in the “ethical” sciences. Nevertheless, probability has become a very powerful mathematical tool in understanding those aspects of the world that cannot be described by deterministic laws.

Introduction to Probability, Second Edition, discusses probability theory in a mathematically rigorous, yet accessible way. This one-semester basic probability textbook explains important concepts of probability while providing useful exercises and examples of real world applications for students to consider.

The book is an introduction to probability written by one of the famous experts in this area. Readers will learn about the basic concepts of probability and its applications, preparing them for more advanced and specialized works.

Probability Theory Basics and Applications. Every one of us uses the words probable, probability or odds a few times a day in common speech when referring to the possibility of a certain event. Whether we have math skills or not, we frequently estimate and compare probabilities, sometimes without realizing it, especially when making decisions.

This book presents a rigorous exposition of probability theory for a variety of applications. The first part of the book is a self-contained account of the fundamentals. Material suitable for advanced study is then developed from the basic concepts. Emphasis is placed on examples, sound interpretation of results and scope for applications.

This book presents elementary probability theory with interesting and well-chosen applications that illustrate the theory. An introductory chapter reviews the basic elements of differential calculus which are used in the material to follow. The theory is presented systematically, beginning with the main results in elementary probability theory (Springer-Verlag, New York).
Luckily I bought this book based on a previous review, and I can say that this is the best book (life savior) "for engineers" trying to learn probability concepts. It does not cover measure theory (it touches on it lightly in some places) but approaches continuous probability from the Riemann integral approach, so this is a basic probability book.

Probability is the measure of the likelihood that an event will occur in a random experiment. Probability is quantified as a number between 0 and 1, where, loosely speaking, 0 indicates impossibility and 1 indicates certainty. The higher the probability of an event, the more likely it is that the event will occur.

Chapter 1 introduces the probability model and provides motivation for the study of probability. The basic properties of a probability measure are developed. Chapter 2 deals with discrete, continuous, and joint distributions, and the effects of a change of variable. It also introduces the topic of simulating from a probability distribution.

The Best Books to Learn Probability: probability theory is the mathematical study of uncertainty. It plays a central role in machine learning, as the design of learning algorithms often relies on probabilistic assumptions about the data.

"This is a valuable reference guide for readers interested in gaining a basic understanding of probability theory or its applications in problem solving in the other disciplines." Providing cutting-edge perspectives and real-world insights into the greater utility of probability and its applications, the Handbook is a useful reference.

If anybody asks for a recommendation for an introductory probability book, then my suggestion would be the book by Henk Tijms, Understanding Probability, second edition, Cambridge University Press. This book first explains the basic ideas and concepts of probability through the use of motivating real-world examples before presenting the theory in a very clear way.

This text develops the necessary background in probability theory underlying diverse treatments of stochastic processes and their wide-ranging applications. In this second edition, the text has been reorganized for didactic purposes, new exercises have been added and basic theory has been expanded.

In his famous text An Introduction to Probability Theory and Its Applications (New York: Wiley), Feller wrote in the preface about his treatment of fluctuation in coin tossing: “The results are so amazing and so at variance with common intuition that even sophisticated colleagues doubted that coins actually misbehave as theory predicts.”

Most high school standardized tests have a probability and statistics section, e.g., the Alberta Provincial Exam, CHSPE Math, SHSAT and the TACHS.
So, for example, if there are 4 red balls and 3 yellow balls in a bag, the probability of choosing a red ball will be 4/7. In a certain game, players toss a coin and roll a die.

Basic Probability Theory by Robert B. Ash, and a great selection of related books, is available from booksellers. The book is primarily written for high school and college students learning about probability for the first time. In a highly accessible way, a modern treatment of the subject is given, with emphasis on conditional probability and Bayesian probability, on striking applications of the Poisson distribution, and on the interface between probability and statistics.

Geared toward advanced undergraduates and graduate students, this introductory text surveys random variables, conditional probability and expectation, characteristic functions, infinite sequences of random variables, Markov chains, and an introduction to statistics. Complete solutions to some of the problems appear at the end of the book.

Basic Principles and Applications of Probability Theory, by A. V. Skorokhod, recalls that probability theory arose originally in connection with games of chance and was then long used primarily to investigate the credibility of testimony; it goes on to show how the basic laws of mechanics, physics, and astronomy can be formulated probabilistically.

This book provides a clear and straightforward introduction to applications of probability theory with examples given in the biological sciences and engineering. The first chapter contains a summary of basic probability theory.

Leadbetter et al., A Basic Course in Measure and Probability: Theory for Applications, is a new book giving a careful treatment of the measure-theory background. There are many other books at roughly the same "first-year graduate" level.

E-books in the Probability & Statistics category: Probability and Statistics: A Course for Physicists and Engineers by Arak M. Mathai and Hans J. Haubold (De Gruyter Open). This is an introduction to concepts of probability theory, probability distributions relevant in the applied sciences, as well as basics of sampling distributions, estimation, and hypothesis testing.

The classical definition of probability (the classical probability concept) states: if there are m outcomes in a sample space (universal set), and all are equally likely of being the result of an experimental measurement, then the probability of observing an event (a subset) that contains s outcomes is given by P(E) = s/m. From the classical definition, we see that the ability to count the number of outcomes in an event and in the sample space is essential.

In Feller's Introduction to Probability Theory and Its Applications, volume 1, 3rd ed., there is formulated, as an exercise, a version of the local limit theorem which is applicable to the hypergeometric distribution, which governs sampling without replacement.

Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1 to a set of outcomes called the sample space.
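The classical definition above reduces to counting. A minimal sketch in Python (the function name and the Monte Carlo check are illustrative, not taken from any of the books cited):

```python
from fractions import Fraction
import random

def classical_probability(favorable: int, total: int) -> Fraction:
    """Classical definition: P(E) = s/m for m equally likely outcomes."""
    return Fraction(favorable, total)

# The bag example above: 4 red and 3 yellow balls.
p_red = classical_probability(4, 4 + 3)
print(p_red)          # 4/7

# A quick Monte Carlo sanity check of the same probability.
bag = ["red"] * 4 + ["yellow"] * 3
draws = 100_000
hits = sum(random.choice(bag) == "red" for _ in range(draws))
print(hits / draws)   # close to 4/7 ≈ 0.571
```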
calculation for m

- Quick M&A Guide (Ross School of Business): M&A involves using more than one valuation technique to arrive at a valuation that we think is fair (e.g., Price / LTM EPS; a calculation based on net income).
- Convert cm to m, m to cm (length/distance conversions): online calculators to convert centimeters to meters and meters to centimeters, with formulas, examples, and tables, providing quick conversions.
- Calculated Fields Form (WordPress plugin): can be used for creating both single and complex calculations; non-Latin characters aren't displayed in the calculator form.
- How to Calculate GPA (Office of the Registrar, Texas A&M): grades of NG are excluded from the GPA calculation; grades of U are included in the GPA calculation for undergraduate students.
- Standard Form Calculator (standard notation calculator): converts standard form into normal number notation and vice versa.
- Food to Microorganism (F/M) ratio (Pennsylvania DEP): detail on the F/M calculation for activated sludge, with information on solids testing.
- Calculate Your BMI (standard BMI calculator): body mass index (BMI) is a measure of body fat based on height and weight that applies to adult men and women.
- Force Calculator (calculate mass, acceleration): a body with mass 20 kg and acceleration 5 m/s² experiences a force F = ma = 20 × 5 = 100 N.
- IRS Withholding Calculator: due to the tax law changes signed into law on December 22, 2017, the IRS withholding calculator is currently unavailable; the IRS will update the calculator.
- Calculation (Merriam-Webster): the process or an act of calculating; the result of an act of calculating.
- Meters to Centimeters: metric conversion calculator for length, with additional tables and formulas.
- Payroll Calculation — updating direct deposit: submit a completed authorization form and a voided check or other banking document as directed on the form.
- Commercial Electrical Load Calculations: calculate the general lighting load at 125% of the value listed in Table 220.3(A) of the NEC. You don't do all receptacle load calculations the same way; the NEC has separate requirements depending on the application. For service calculations, consider every 5 feet (or less) of multi-outlet receptacle assembly to be 180 VA.
- Meters to Feet converter (m to ft): conversion calculator for length, with additional tables and formulas.
- Calculating and Reporting the Myeloid:Erythroid (M:E) ratio: a simple way to perform the calculation is to always divide the larger value by the smaller.
- Interest calculation (class form) [AX 2012]: the calculations are based on either the Earnings or Payments tab, depending on the net amount on the original interest note; the From date is a starting date for the calculation, which then includes transactions that are due on or after this date.
- Calculate your average speed: a calculation you can use if you have been out jogging, driving or, well, just moving around; it will calculate your average speed during that time.
- Form Calculation — Math Function Reference (JotForm): online forms for generating leads, distributing surveys, collecting payments, and more.
- M&A Valuation: Measures of Return: buyers in an M&A process utilize various measurements for their investments, or at least they should; a wise investor weighs the price of the investment against the expected return.
- Online Form Builder Calculations: build and evaluate an equation using form results.
- Calculation Forms (form templates): calculation forms give your respondent a chance to see totals of previous number entries or general calculations, which comes in handy when placing an order with multiple products, tracking expenses, or estimating costs.
- Tax Calculator (Bankrate): the 1040 income tax calculator helps to determine the amount of income tax due or owed to the IRS; you can also estimate your tax refund if applicable.
- Density Calculator (p = m/V): choose a calculation for density p, mass m, or volume V; enter the other two values and the calculator will solve for the third in the selected units (a code sketch of this three-way solve follows this list).
- Torque Calculator: calculate force, distance, or length; torque (T) in N·m.
- FORM 14 CALCULATION: (2) child care tax credit (see Form 14 directions); 6a. TOTAL adjusted child care costs [Line 6a(1) minus Line 6a(2)]; 6b. reasonable work-related child care costs of the parent paying support; 6c. health insurance costs for children who are the subjects of this proceeding; 6d. uninsured agreed-upon or court-ordered extraordinary medical costs; 6e. …
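The density entry above describes the usual three-way solve on p = m/V. A minimal sketch (the function name is illustrative):

```python
def solve_density(p=None, m=None, V=None):
    """Given any two of density p, mass m, volume V, solve p = m / V for the third."""
    if p is None:
        return m / V
    if m is None:
        return p * V
    if V is None:
        return m / p
    raise ValueError("leave exactly one argument as None")

print(solve_density(m=20.0, V=4.0))   # density = 5.0
print(solve_density(p=5.0, V=4.0))    # mass = 20.0
print(solve_density(p=5.0, m=20.0))   # volume = 4.0
```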
Dipole moments occur when there is a separation of charge. They can occur between two ions in an ionic bond or between atoms in a covalent bond; dipole moments arise from differences in electronegativity. The larger the difference in electronegativity, the larger the dipole moment. The distance between the charge separation is also a deciding factor in the size of the dipole moment. The dipole moment is a measure of the polarity of the molecule.

When atoms in a molecule share electrons unequally, they create what is called a dipole moment. This occurs when one atom is more electronegative than another, resulting in that atom pulling more tightly on the shared pair of electrons, or when one atom has a lone pair of electrons and the difference-of-electronegativity vector points in the same way. One of the most common examples is the water molecule, made up of one oxygen atom and two hydrogen atoms. The differences in electronegativity and the lone electrons give oxygen a partial negative charge and each hydrogen a partial positive charge. The equation for the dipole moment of a molecule is given below:

\[ \mu = q \, r \]

where μ is the dipole moment, q is the magnitude of the charge, and r is the distance between the charges. The dipole moment acts in the direction of the vector quantity. The unit used for dipole moments is the debye (D); 1 D = 3.34 × 10⁻³⁰ C·m.

[Figure 1: Dipole moment of water. Figure 2: Electronegativity of common elements.]

The vector points from positive to negative, on both the molecular (net) dipole moment and the individual bond dipoles. Figure 2 shows the electronegativity of some of the common elements. The larger the difference in electronegativity between the two atoms, the more polar the bond. To be considered a polar bond, the difference in electronegativity must be large. The dipole moment points in the direction of the vector sum of the individual bond dipoles.

Example 1: Water. The water molecule pictured in Figure 1 can be used to determine the direction and magnitude of the dipole moment. From the electronegativities of oxygen and hydrogen, the difference is 1.2 for each of the hydrogen-oxygen bonds. Next, because the oxygen is the more electronegative atom, it exerts a greater pull on the shared electrons; it also has two lone pairs of electrons. From this, it can be concluded that the dipole moment points from between the two hydrogen atoms toward the oxygen atom. Using the equation above, the dipole moment comes out to about 1.85 D: take the bond moment of each O-H bond (1.5 D) and add the components of each that point in the direction of the net dipole moment (remember the bond angle of the molecule is 104.5°): net dipole moment = 2 × 1.5 × cos(104.5°/2) = 1.84 D.

The polarity of a molecule is influenced by its structure. If a molecule is completely symmetric, then the dipole moment vectors of the individual bonds cancel each other out, making the molecule nonpolar. A molecule can only be polar if the structure of that molecule is not symmetric. A basic example of a nonpolar molecule is CO2: it is linear and completely symmetric, so the dipole moment vectors of the two C-O bonds cancel out. An example of a polar molecule is H2O.
Because of the lone pairs on oxygen, the structure of H2O is bent, which means it is not symmetric. The vectors do not cancel each other out, making the molecule polar.

Molecular compounds are either polar or nonpolar. Here are some examples. Polarity can be observed when a stream of liquid is subjected to charged rods: nonpolar CCl4 is not deflected; moderately polar acetone deflects slightly; highly polar water deflects strongly.

The measure of molecular polarity is a quantity called the dipole moment (μ), defined as the magnitude of the charge (Q) times the distance (r) between the charges:

\[ \mu = Q \, r \]

with Q the charge in coulombs (C) and r the distance in meters (m).

Consider a proton and an electron 100 pm apart (1 pm = 10⁻¹² m, so r = 1.00 × 10⁻¹⁰ m); each particle possesses a charge of magnitude 1.60 × 10⁻¹⁹ C. When the proton and electron are close together, the dipole moment (degree of polarity) is small; as they get farther apart, the dipole moment increases. In this case, the dipole moment is calculated as:

μ = Q r = (1.60 × 10⁻¹⁹ C)(1.00 × 10⁻¹⁰ m) = 1.60 × 10⁻²⁹ C·m  [1 debye (D) = 3.336 × 10⁻³⁰ C·m]

The debye characterizes the size of the dipole moment. When a proton and electron are 100 pm apart, the dipole moment is (1.60 × 10⁻²⁹ C·m)(1 D / 3.336 × 10⁻³⁰ C·m) = 4.80 D.

4.80 D is a key reference value: it represents a pure charge of +1 and −1 separated by 100 pm; such a bond would be 100% ionic. (The debye is named after Peter Debye (1884-1966), who was awarded the 1936 Nobel Prize in Chemistry for studies of dipole moments.)

When the proton and electron are separated by 120 pm, μ = (120/100)(4.80 D) = 5.76 D (100% ionic). When separated by 150 pm, μ = (150/100)(4.80 D) = 7.20 D (100% ionic). When separated by 200 pm, μ = (200/100)(4.80 D) = 9.60 D (100% ionic).

It is relatively easy to measure dipole moments: place the substance between charged plates — polar molecules increase the charge stored on the plates, and the dipole moment can be obtained (it has to do with capacitance).

The C-Cl bond, the key polar bond here, is 178 pm long, and measurement reveals a dipole moment of 1.87 D. From this data, the percent ionic character can be computed. If this bond were 100% ionic (based on a proton and electron), μ = (178/100)(4.80 D) = 8.54 D. Since the measurement is 1.87 D, % ionic = (1.87/8.54) × 100 = 22%.

For H-Cl: μ = 1.03 D (measured), and the H-Cl bond length is 127 pm. If 100% ionic, μ = (127/100)(4.80 D) = 6.09 D, so % ionic = (1.03/6.09) × 100 = 17%.
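The arithmetic above is easy to script. A minimal sketch (the constants and function names are mine, not ChemWiki's):

```python
# Constants for the dipole arithmetic worked above.
E_CHARGE = 1.602e-19        # elementary charge, C
DEBYE = 3.336e-30           # 1 debye in C·m

def dipole_debye(q_coulomb: float, r_meters: float) -> float:
    """mu = q * r, converted from C·m to debye."""
    return q_coulomb * r_meters / DEBYE

def percent_ionic(measured_debye: float, bond_length_pm: float) -> float:
    """Measured dipole as a percentage of the 100%-ionic value (+1/-1 at the bond length)."""
    ionic_limit = dipole_debye(E_CHARGE, bond_length_pm * 1e-12)
    return 100 * measured_debye / ionic_limit

print(dipole_debye(E_CHARGE, 100e-12))   # ~4.80 D for +1/-1 charges 100 pm apart
print(percent_ionic(1.87, 178))          # ~22% for the C-Cl bond
print(percent_ionic(1.03, 127))          # ~17% for H-Cl
```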
Find out about magic squares in this article written for students. Why are they magic?!

- Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers?
- Different combinations of the weights available allow you to make different totals. Which totals can you make?
- Use the interactivity to play two of the bells in a pattern. How do you know when it is your turn to ring, and how do you know which bell to ring?
- What happens when you round these numbers to the nearest whole number?
- How many solutions can you find to this sum? Each of the different letters stands for a different number.
- What happens when you round these three-digit numbers to the nearest 100?
- Use two dice to generate two numbers with one decimal place. What happens when you round these numbers to the nearest whole number?
- The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
- Advent Calendar 2011 — a mathematical activity for each day during the run-up to Christmas.
- Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring?
- You have been given nine weights, one of which is slightly heavier than the rest. Can you work out which weight is heavier in just two weighings of the balance?
- Bellringers have a special way to write down the patterns they ring. Learn about these patterns and draw some of your own.
- An irregular tetrahedron is composed of four different triangles. Can such a tetrahedron be constructed where the side lengths are 4, 5, 6, 7, 8 and 9 units of length?
- First Connect Three: a game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.
- Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number? (A brute-force search for this one is sketched after this list.)
- Arrange eight of the numbers between 1 and 9 in the Polo Square below so that each side adds to the same total.
- The NRICH team are always looking for new ways to engage teachers and pupils in problem solving. Here we explain the thinking behind the site.
- This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning.
- An investigation that gives you the opportunity to make and justify predictions.
- Use the clues to work out which cities Mohamed, Sheng, Tanya and Bharat live in.
- Seven friends went to a fun fair with lots of scary rides. They decided to pair up for rides until each friend had ridden once with each of the others. What was the total number of rides?
- Zumf makes spectacles for the residents of the planet Zargon, who have either 3 eyes or 4 eyes. How many lenses will Zumf need to make all the different orders for 9 families?
- Choose four different digits from 1-9 and put one in each box so that the resulting four two-digit numbers add to a total of 100.
- There are 78 prisoners in a square cell block of twelve cells. The clever prison warder arranged them so there were 25 along each wall of the prison block. How did he do it?
- Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it?
- How could you put these three beads into bags? How many different ways can you do it? How could you record what you've done?
- Find the values of the nine letters in the sum: FOOT + BALL = GAME.
- In the multiplication calculation, some of the digits have been replaced by letters and others by asterisks. Can you reconstruct the original multiplication?
- This 100-square jigsaw is written in code. It starts with 1 and ends with 100. Can you build it up?
- Can you substitute numbers for the letters in these sums?
- What can you say about these shapes? This problem challenges you to create shapes with different areas and perimeters.
- How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six…?
- Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread?
- Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores.
- Look carefully at the numbers. What do you notice? Can you make another square using the numbers 1 to 16 that displays the same properties?
- Using the statements, can you work out how many of each type of rabbit there are in these pens?
- This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out with a total of 15!
- The letters in the following addition sum represent the digits 1 … 9. If A=3 and D=2, what number is represented by "CAYLEY"?
- Can you find six numbers to go in the Daisy from which you can make all the numbers from 1 to a number bigger than 25?
- Can you replace the letters with numbers? Is there only one solution in each case?
- The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?
- A game for 2 people. Take turns placing a counter on the star. You win when you have completed a line of 3 in your colour.
- This Sudoku is based on differences. Using the one clue number, can you find the solution?
- Can you work out some different ways to balance this equation?
- Write the numbers up to 64 in an interesting way so that the shape they make at the end is interesting, different, more exciting … than just a square.
- Have a go at balancing this equation. Can you find different ways of doing it?
- Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information.
- Can you complete this calculation by filling in the missing numbers? In how many different ways can you do it?
- Cherri, Saxon, Mel and Paul are friends. They are all different ages. Can you find out the age of each friend using the clues?
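For the 1-to-17 square-sums puzzle above, a small backtracking search settles the existence question (a sketch, not the NRICH solution; the exact row found depends on search order):

```python
# Backtracking search: arrange 1..n so each adjacent pair sums to a perfect square.
def square_sum_row(n=17):
    squares = {k * k for k in range(2, 7)}  # 4, 9, 16, 25, 36 cover sums up to 33
    def extend(row, remaining):
        if not remaining:
            return row
        for x in sorted(remaining):
            if row[-1] + x in squares:
                found = extend(row + [x], remaining - {x})
                if found:
                    return found
        return None
    for start in range(1, n + 1):
        found = extend([start], set(range(1, n + 1)) - {start})
        if found:
            return found
    return None

# One valid arrangement is 16, 9, 7, 2, 14, 11, 5, 4, 12, 13, 3, 6, 10, 15, 1, 8, 17.
print(square_sum_row())
```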
Can I use Real Probabilities to Price Derivatives?

Yes. But you may need to move away from classical quantitative finance. Some modern derivatives models use ideas from utility theory to price derivatives. Such models may find a use in pricing derivatives that cannot be dynamically hedged.

Yes and no. There are lots of reasons why risk-neutral pricing doesn't work perfectly in practice: markets are incomplete and continuous dynamic hedging is impossible. If you can't continuously dynamically hedge then you cannot eliminate risk, and so risk neutrality is not so relevant. You might be tempted to try to price using real probabilities instead. This is fine, and there are plenty of theories on this topic, usually with some element of utility theory about them. For example, some theories use ideas from Modern Portfolio Theory and look at real averages and real standard deviations. You could value options as the certainty-equivalent value under the real random walk, or maybe as the real expectation of the present value of the option's payoff plus or minus some multiple of the standard deviation (plus if you are selling, minus if buying; a code sketch follows below). The 'multiple' represents a measure of your risk aversion. But there are two main problems with this.

1. You need to be able to measure real probabilities. In classical stochastic-differential-equation models this means knowing the real drift rate, often denoted by μ for equities. This can be very hard, much harder than measuring volatility. Is it even possible to say whether we are in a bull or bear market? Often not! And you need to project forward, which is harder again, and harder than forecasting volatility.

2. You need to decide on a utility function or a measure of risk aversion. This is not impossible: a bank could tell all its employees 'From this day forward the bank's utility function is …'. Or tests can be used to estimate an individual's utility function by asking questions about his attitude to various trades; this can all be quantified. But at the moment this subject is still seen as too academic.

Although the assumptions that lead to risk neutrality are clearly invalid, the results that follow, and the avoidance of the above two problems, mean that more people than not are swayed by its advantages.

References and Further Reading
Ahn, H & Wilmott, P 2003 Stochastic volatility and mean-variance analysis. Wilmott magazine, No. 3, 84-90
Ahn, H & Wilmott, P 2007 Jump diffusion, mean and variance: how to dynamically hedge, statically hedge and to price. Wilmott magazine, May, 96-109
Ahn, H & Wilmott, P 2008 Dynamic hedging is dead! Long live static hedging! Wilmott magazine, January, 80-87
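As a concrete illustration of the "real expectation plus or minus a multiple of the standard deviation" recipe above, here is a minimal Monte Carlo sketch. The drift mu, the risk-aversion multiple lam, and the function name are all illustrative assumptions, not a model from the text:

```python
import numpy as np

def real_world_call_value(S0, K, T, r, mu, sigma, lam, side="sell", n=200_000):
    """Discounted REAL-world expected payoff, +/- lam times its standard deviation."""
    rng = np.random.default_rng(0)
    Z = rng.standard_normal(n)
    # Terminal prices simulated under the real drift mu, not the risk-free rate r.
    ST = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    pv = np.exp(-r * T) * np.maximum(ST - K, 0.0)
    sign = 1.0 if side == "sell" else -1.0   # plus if selling, minus if buying
    return pv.mean() + sign * lam * pv.std()

# With lam = 0 this is just the discounted real-world expectation.
print(real_world_call_value(100, 100, 1.0, 0.05, mu=0.08, sigma=0.2, lam=0.0))
```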
What is Volatility?

Volatility is the annualized standard deviation of returns. Or is it? Because that is a statistical measure, necessarily backward looking, and because volatility seems to vary, and we want to know what it will be in the future, and because people have different views on what volatility will be in the future, things are not that simple.

Actual volatility is the σ that goes into the Black-Scholes partial differential equation. Implied volatility is the number in the Black-Scholes formula that makes a theoretical price match a market price.

Actual volatility is a measure of the amount of randomness in a financial quantity at any point in time. It's what Desmond Fitzgerald calls the 'bouncy, bouncy.' It's difficult to measure, and even harder to forecast, but it's one of the main inputs into option-pricing models. It's difficult to measure since it is defined mathematically via standard deviations, which require historical data to calculate, yet actual volatility is not a historical quantity but an instantaneous one.

Realized/historical volatilities are associated with a period of time, actually two periods of time. We might say that the daily volatility over the last sixty days has been 27%. This means that we take the last sixty days' worth of daily asset prices and calculate the volatility. Let me stress that this has two associated timescales, whereas actual volatility has none. This tends to be the default estimate of future volatility in the absence of any more sophisticated model. For example, we might assume that the volatility of the next sixty days is the same as over the previous sixty days. This will give us an idea of what a sixty-day option might be worth.

Implied volatility is the number you have to put into the Black-Scholes option-pricing equation to get the theoretical price to match the market price. It is often said to be the market's estimate of volatility.

Let's recap. We have actual volatility, which is the instantaneous amount of noise in a stock price return. It is sometimes modelled as a simple constant, sometimes as time-dependent, sometimes as stock- and time-dependent, sometimes as stochastic, sometimes as a jump process, and sometimes as uncertain, that is, lying within a range. It is impossible to measure exactly; the best you can do is to get a statistical estimate based on past data. But this is the parameter we would dearly love to know because of its importance in pricing derivatives. Some hedge funds believe that their edge is in forecasting this parameter better than other people, and so profit from options that are mispriced in the market.

Since you can't see actual volatility, people often rely on measuring historical or realized volatility. This is a backward-looking statistical measure of what volatility has been, and one then assumes that there is some information in this data that will tell us what volatility will be in the future. There are several models for measuring and forecasting volatility, and we will come back to them shortly.

Implied volatility is the number you have to put into the Black-Scholes option-pricing formula to get the theoretical price to match the market price. This is often said to be the market's estimate of volatility. More correctly, option prices are governed by supply and demand. Is that the same as the market taking a view on future volatility? Not necessarily, because most people buying options are taking a directional view on the market, and so supply and demand reflect direction rather than volatility. But because people who hedge options are not exposed to direction, only volatility, it looks to them as if people are taking a view on volatility when they are more probably taking a view on direction, or simply buying out-of-the-money puts as insurance against a crash. For example, the market falls, people panic, they buy puts, and the price of puts, and hence implied volatility, goes up. Where the price stops depends on supply and demand, not on anyone's estimate of future volatility, within reason.

Implied volatility levels the playing field so you can compare and contrast option prices across strikes and expirations. There is also forward volatility. The adjective 'forward' is added to anything financial to mean values in the future.
So forward volatility would usually mean volatility, either actual or implied, over some time period in the future. Finally, hedging volatility means the parameter that you plug into a delta calculation to tell you how many of the underlying to sell short for hedging purposes. Since volatility is so difficult to pin down, it is a natural quantity for some interesting modelling. Here are some of the approaches used to model or forecast volatility.
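The two measurable quantities above — realized volatility from past prices, and implied volatility backed out of a market price — are easy to compute. A minimal sketch (the sixty-day window, tolerances, and function names are arbitrary choices of mine):

```python
import numpy as np
from math import log, sqrt, exp, erf

def realized_vol(prices, periods_per_year=252):
    """Annualized standard deviation of log returns (historical/realized volatility)."""
    rets = np.diff(np.log(np.asarray(prices)))
    return rets.std(ddof=1) * np.sqrt(periods_per_year)

def bs_call(S, K, T, r, sigma):
    N = lambda x: 0.5 * (1 + erf(x / sqrt(2)))      # standard normal CDF
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection: the sigma that makes the Black-Scholes price match the market price."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sixty days of fake prices with ~1% daily moves -> realized vol near 0.16.
prices = 100 * np.exp(np.cumsum(0.01 * np.random.default_rng(1).standard_normal(60)))
print(realized_vol(prices))

# A one-year at-the-money call priced at 10.45 implies sigma of about 0.20.
print(implied_vol(10.45, S=100, K=100, T=1.0, r=0.05))
```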
As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches in solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question one will ask is the following:

Q. What is the relationship between the maximum principle and dynamic programming in stochastic optimal controls?

There did exist some research prior to the 1980s on the relationship between these two. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions, which were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the finite-dimensional deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an extended Hamiltonian system. On the other hand, in Bellman's dynamic programming, there is a partial differential equation (PDE), of first order in the finite-dimensional deterministic case and of second order in the stochastic case. This is known as a Hamilton-Jacobi-Bellman (HJB) equation. …

Title: Stochastic Controls (Stochastic Modelling and Applied Probability, Vol. 43)
Publisher: Springer; softcover reprint of the original 1st ed. 1999 (22 June 1999)
Number of pages: 464

Stochastic Controls (Stochastic Modelling and Applied Probability, Vol. 43) — Reviews

This book covers general stochastic control more thoroughly than any other book I could find. This is *not* a book on numerical methods. It is also not on the cases which yield closed-form solutions: there is a chapter on LQG problems, but for the most part this book focuses on the general theory of stochastic controls — which are not the easiest things to solve in general, as you may know. The book handles only diffusion processes with perfect knowledge of the past and present (natural filtration). If these sound like what you want, I doubt there's a more thorough treatment. It starts with a chapter on preliminaries of probability spaces, stochastic processes, and the Ito integral. After that, the book briefly addresses deterministic problems in order to compare solution methods to the stochastic approaches. It approaches the problems using a stochastic maximum principle and a stochastic Hamiltonian system, and also from a dynamic programming point of view using HJB equations, and the authors attempt to show the relationship between the two approaches. This book is technically rigorous. Though it claims to be self-contained, the reader should certainly be familiar with functional analysis and stochastic processes. The authors try to keep the solutions as general as possible, handling non-smooth cases as well as smooth ones. This is fine, except that they don't emphasize well enough (I thought), for instance, that the solutions are much simpler when functions are well behaved on convex bodies (it's mentioned as a note on p.
120), or when diffusions are not dependent on controls, and such. Because of this tendency to present one solution which will handle any case, it can sometimes be difficult to figure out what all the terms are. In the end, it all works out. Each chapter ends with a few pages of "historical background": who did what piece of the theory when, with an excellent list of references. (I found the originals useful to help explain things, on occasion, especially to see simpler ways to do simpler cases.) Altogether, a very thorough piece on general solutions to stochastic control! I was quite impressed.

Very nice book! From every page of the book, it is clear that the two authors know the subject they are writing about! It is assumed that the reader knows something about stochastic calculus and stochastic differential equations, and also about measure-theoretic probability theory. My only exposure to these subjects was the book "Brownian Motion and Stochastic Calculus" by I. Karatzas and S. Shreve, and this was enough. The pace of the book was just right for me (I am an engineer with a lot of interest in mathematics): not too slow, and not too fast. It might be advisable to read chapter 7 right after chapter 2 unless you have had previous exposure to backward-forward stochastic differential equations (BFSDEs), which are extremely well explained there. The book is not free of typos (I found about 30), but given the complexity of the sub/superscripts, it does not seem bad at all.

"Stochastic Controls" by Yong and Zhou is a comprehensive introduction to the modern stochastic optimal control theory. While the stated goal of the book is to establish the equivalence between the Hamilton-Jacobi-Bellman and Pontryagin formulations of the subject, the authors touch upon all of its important facets. The book contains plenty of explicitly worked-out examples, including classic applications of the theory to modern finance. Also, among other things, the book contains a detailed exposition of the general LQ problem and a very readable introduction to backward stochastic differential equations. A minor quibble: the generally very lucid presentation is somewhat overburdened with heavy notation.
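For orientation, the HJB equation referred to throughout the blurb and reviews can be written down concretely. The following is a standard textbook form for a controlled diffusion (a generic statement, not quoted from Yong and Zhou): for state dynamics and cost

\[
dx(s) = b(s, x(s), u(s))\,ds + \sigma(s, x(s), u(s))\,dW(s), \qquad
J(u) = \mathbb{E}\Big[\int_t^T f(s, x(s), u(s))\,ds + h(x(T))\Big],
\]

the value function V(t, x) satisfies the second-order PDE

\[
V_t + \inf_{u \in U}\Big\{ b(t,x,u)\cdot V_x + \tfrac{1}{2}\,\mathrm{tr}\big(\sigma\sigma^{\top}(t,x,u)\,V_{xx}\big) + f(t,x,u) \Big\} = 0, \qquad V(T,x) = h(x).
\]

Setting σ ≡ 0 removes the V_xx term and leaves the first-order equation of the deterministic case — exactly the first-order/second-order distinction drawn in the back-cover text above.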
Presentation transcript: 丁建均 (Jian-Jiun Ding), National Taiwan University. Office: Ming-Da Hall, Room 723; laboratory: Ming-Da Hall, Room 531; phone: (02) 3366-9652. Major: digital signal processing, digital image processing.

Research fields.
[A. Signal Analysis] (1) time-frequency analysis; (2) fractional Fourier transform; (3) wavelet transform; (4) eigenfunctions, eigenvectors, and prolate spheroidal wave functions; (5) signal analysis (cepstrum, Hilbert, CDMA).
[B. Fast Algorithms] (6) integer transforms; (7) fast algorithms; (8) number theory, Haar transform, Walsh transform.
[C. Applications of Signal Processing] (9) optical signal processing; (10) acoustics; (11) bioinformatics.
[D. Image Processing] (12) image compression; (13) edge and corner detection; (14) pattern recognition.
[E. Theories for Signal Processing] (15) quaternions.
(The slides mark some of these as the main topics researched in recent years, and the others as topics researched before.)

1. Time-Frequency Analysis (http://djj.ee.ntu.edu.tw/TFW.htm)
The Fourier transform (FT) maps the time domain to the frequency domain. Some things make the FT impractical: (1) only the interval t0 ≤ t ≤ t1 is of interest; (2) not all signals are suitable for analysis in the frequency domain — it is hard to analyze a signal whose instantaneous frequency varies with time.

Example (an FM signal): x(t) = cos(πt) when t < 10, x(t) = cos(3πt) when 10 ≤ t < 20, x(t) = cos(2πt) when t ≥ 20.

Instantaneous frequency: if f(t) = A exp(jφ(t)), then the instantaneous frequency of f(t) is (1/2π) dφ/dt. Other examples whose instantaneous frequency changes with time: music, speech signals, and chirp signals.

Several time-frequency distributions: the short-time Fourier transform (STFT) with a rectangular mask (avoids cross-terms; less clarity); the Gabor transform (avoids cross-terms); the Wigner distribution function (high clarity, but with cross-terms); the Gabor-Wigner transform (proposed: avoids cross-terms, high clarity). Also: Cohen's class distributions, the S transform, and the Hilbert-Huang transform.

For the FM-signal example above: [Figure: left, gray levels representing the amplitude of X(t, f); right, a slice along t = 15; f-axis vs. t-axis.]

Applications of time-frequency analysis: (1) finding instantaneous frequency; (2) sampling theory; (3) filter design; (4) signal decomposition; (5) modulation and multiplexing; (6) electromagnetic wave propagation; (7) optics; (8) radar system analysis; (9) random process analysis; (10) signal identification; (11) acoustics; (12) biomedical engineering; (13) spread-spectrum analysis; (14) system modeling; (15) image processing; (16) economic data analysis; (17) signal representation; (18) data compression.

Conventional sampling theory: the Nyquist criterion. New sampling theory: (1) Δt can vary with time; (2) the number of sampling points equals the area of the time-frequency distribution. Suppose a signal satisfies x(t) ≠ 0 only for t1 ≤ t ≤ t1 + T, and X(f) ≠ 0 only for f1 ≤ f ≤ f1 + F. According to the sampling theorem, Δt ≤ 1/F (F = 2B, where B is the bandwidth), so the number of sampling points N satisfies N = T/Δt ≥ TF. Important theorem: the lower bound on the number of sampling points a signal requires equals the area of its time-frequency distribution.

Modulation and multiplexing: [Figure: non-overlapping spectra of signal 1 (from −B1 to B1) and signal 2 (from −B2 to B2).]

Improvement of time-frequency analysis: (1) computation time; (2) the tradeoff between the cross-term problem and clarity. [Figure: left, x1(t) = 1 for |t| ≤ 6 and 0 otherwise; right, x2(t) = cos(6t − 0.05t²); WDF vs. Gabor transform on the t-f plane.]

The Gabor-Wigner transform avoids the
cross-term problem while keeping high clarity. [Figure: f-axis vs. t-axis.]

2. Fractional Fourier Transform
Performing the Fourier transform a times, where a can be non-integer: the Fourier transform (FT) generalizes to the fractional Fourier transform (FRFT) with angle α = aπ/2. When α = 0.5π, the FRFT becomes the FT.

When α = 0, the FRFT is the identity; when α = 0.5π, it is the FT. When α is not a multiple of 0.5π, the FRFT is equivalent to performing the Fourier transform α/(0.5π) times: when α = 0.1π, the FT is done 0.2 times; when α = 0.25π, 0.5 times; when α = π/6, 1/3 times.

Physical meaning: the FRFT transforms a signal into a fractional domain, which is intermediate between the time domain and the frequency domain.

Correspondence of operations (reconstructed from the slide's table):
- modulation (time) ↔ shifting (frequency) ↔ modulation + shifting (fractional domain);
- shifting (time) ↔ modulation (frequency) ↔ modulation + shifting (fractional domain);
- differentiation (time) ↔ multiplication by j2πf (frequency) ↔ differentiation combined with multiplication by j2πf (fractional domain);
- multiplication by −j2πf (time) ↔ differentiation (frequency) ↔ differentiation combined with multiplication by −j2πf (fractional domain);
in each fractional-domain case up to some constant phase.

Example: filter design. Conventional filter design: the input is x(t) = s(t) (signal) + n(t) (noise), the output is y(t) (we want y(t) ≈ s(t)), and H(ω) is the transfer function of the filter. Filter design by the fractional Fourier transform (FRFT) replaces the FT and the inverse FT by FRFTs with parameters α and −α. Why use the fractional Fourier transform? To solve problems that cannot be solved by the Fourier transform — for example, x(t) = a triangular signal plus the chirp noise exp[j0.25(t − 4.12)²].

The Fourier transform is suitable for filtering out noise that is a combination of sinusoidal functions exp(jω0t). The fractional Fourier transform (FRFT) is suitable for filtering out noise that is a combination of higher-order exponential functions exp[j(nk t^k + nk−1 t^(k−1) + … + n2 t² + n1 t)] — for example, the chirp function exp(jn2 t²). With the FRFT, many noises that cannot be removed by the FT are filtered out successfully.

From the viewpoint of time-frequency analysis:
(1) the Wigner distribution function (WDF): [Ref 9] S. C. Pei and J. J. Ding, "Relations between the fractional operations and the Wigner distribution, ambiguity function," IEEE Trans. Signal Processing, vol. 49, pp. 1638-1655, 2001.
(2) the Gabor transform: [Ref 10] S. C. Pei and J. J. Ding, "Relations between Gabor transforms and fractional Fourier transforms and their applications for signal processing," IEEE Trans. Signal Processing, vol. 55, no. 10, pp. 4839-4850, Oct. 2007.

[Figure: horizontal t-axis, vertical f-axis; the FRFT corresponds to rotation by angle α; Gabor transforms of the FRFT of the rectangular function.]
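As a quick illustration of the STFT with a rectangular mask introduced above, applied to the piecewise-FM example signal (a sketch using SciPy; sampling rate and segment length are arbitrary choices of mine):

```python
import numpy as np
from scipy.signal import stft

# The piecewise-FM test signal from the slides: cos(pi t), cos(3 pi t), cos(2 pi t).
fs = 100.0
t = np.arange(0, 30, 1 / fs)
x = np.where(t < 10, np.cos(np.pi * t),
    np.where(t < 20, np.cos(3 * np.pi * t), np.cos(2 * np.pi * t)))

# STFT with a rectangular window: slide a mask along t and Fourier-transform
# each segment, giving X(t, f).
f, seg_t, Zxx = stft(x, fs=fs, window="boxcar", nperseg=256)
print(Zxx.shape)  # (frequencies, time segments)
# |Zxx| shows energy near 0.5 Hz, then 1.5 Hz, then 1 Hz as t increases.
```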
[Theorem] The FRFT with parameter α is equivalent to a clockwise rotation by angle α of the Wigner distribution function (or of the Gabor transform). [Figure: rotated distributions for α = 0 (identity), π/6, 2π/6, π/2 (the FT), 4π/6, 5π/6.]

Filter design by the fractional Fourier transform: [Figure: on the t-f plane, the signal and the noise are separated by an oblique cutoff line after the FRFT.] Comparison with a filter designed by the Fourier transform: from the viewpoint of time-frequency analysis, a conventional filter cuts perpendicular to the f-axis (the cutoff line at f0 separating pass band from stop band), whereas a filter designed with the fractional Fourier transform cuts along an oblique direction, the angle between the cutoff line and the f-axis (measured counterclockwise) being α.

[Figure: Gabor transform, t-axis vs. fractional axis, of a signal plus 0.3 exp[j0.06(t − 1)³ + j7t].] Advantage: it is easy to estimate the character of a signal in the fractional domain. We proposed an efficient way to find the optimal parameter α.

In fact, all the applications of the Fourier transform (FT) are also applications of the fractional Fourier transform (FRFT), and using the FRFT instead of the FT in these applications may improve performance: filter design (developed by us, improving previous works); signal synthesis (compression, random processes, the fractional wavelet transform); correlation (space-variant pattern recognition); communication (modulation, multiplexing, the multiple-path problem); sampling; solving differential equations; image processing (asymmetric edge detection, directional corner detection); optical system analysis (system models, self-imaging phenomena); wave propagation analysis (radar systems, GRIN-medium systems).

Invention: [Ref 1] N. Wiener, "Hermitian polynomials and Fourier analysis," Journal of Mathematics and Physics (MIT), vol. 18, pp. 70-73, 1929. Re-invention: [Ref 2] V. Namias, "The fractional order Fourier transform and its application to quantum mechanics," J. Inst. Maths. Applics., vol. 25, pp. 241-265, 1980. Introduction for signal processing: [Ref 3] L. B. Almeida, "The fractional Fourier transform and time-frequency representations," IEEE Trans. Signal Processing, vol. 42, no. 11, pp. 3084-3091, Nov. 1994. Recent development: Pei, Ding (after 1995), Ozaktas, Mendlovic, Kutay, Zalevsky, etc.

Extension 1: the discrete fractional Fourier transform. Type 1, sampling form, complexity 2N + N log2 N: [Ref 4] S. C. Pei and J. J. Ding, "Closed form discrete fractional and affine Fourier transforms," IEEE Trans. Signal Processing, vol. 48, no. 5, pp. 1338-1353, May 2000. Type 2, eigenfunction decomposition form (E: eigenvectors of the DFT, of which there are many choices; D: eigenvalues): [Ref 5] S. C. Pei, W. L. Hsue, and J. J. Ding, "Discrete fractional Fourier transform based on new nearly tridiagonal commuting matrices," accepted by IEEE Trans. Signal Processing.

Extension 2: fractional cosine transforms. [Ref 6] S. C. Pei and J. J. Ding, "Fractional, canonical, and simplified fractional cosine, sine and Hartley transforms," IEEE Trans. Signal Processing, vol. 50, no. 7, pp. 1611-1680, Jul. 2002.

Extension 3: the N-D affine generalized fractional Fourier transform. [Ref 7] S. C. Pei and J. J. Ding, "Two-dimensional affine generalized fractional Fourier transform," IEEE Trans. Signal Processing, vol. 49, no. 4, pp. 878-897, Apr. 2001.

[Ref 8] S. C. Pei and J. J. Ding, "Simplified fractional Fourier transforms," J. Opt. Soc. Am. A, vol. 17, no. 12, pp. 2355-2367, Dec. 2000.
Extension 4: the simplified fractional Fourier transform (one form easier for digital implementation, another easier for optical implementation; see [Ref 8] above).

My works related to the fractional Fourier transform (FRFT) — extensions: the discrete fractional Fourier transform; fractional cosine, sine, and Hartley transforms; two-dimensional and N-D forms; the simplified fractional Fourier transform; the fractional Hilbert transform; solving the problem of implementation. Foundation theory: relations between the FRFT and the well-known time-frequency analysis tools (e.g., the Wigner distribution function and the Gabor transform). Applications: sampling, encryption, corner and edge detection, self-imaging phenomena, bandwidth saving, multiple-path problem analysis.

3. Wavelet Transform
A new research field, useful for JPEG 2000 (image compression), filter design, and edge and corner detection. The spectrum is divided into only a "low-frequency" part and a "high-frequency" part (for a 2-D image, into four parts). [Diagram: x[n] passes through h[n] and downsampling by 2, giving the "low-frequency" part x1,L[n], and through g[n] and downsampling by 2, giving the "high-frequency" part x1,H[n].] [Figure: the result of the wavelet transform for a 2-D image — four subbands: lowpass for x / lowpass for y; lowpass for x / highpass for y; highpass for x / lowpass for y; highpass for x / highpass for y.]

6. Integer Transform Conversion
Integer transform: a discrete linear operation whose entries are summations of powers of two — of the form Σ a_k 2^k with a_k = 0, 1 or −1 — divided by a common integer C.

Problem: most of the discrete transforms are non-integer ones — the DFT, the DCT, the Karhunen-Loeve transform, the RGB-to-YIQ color transform. To implement them exactly, we should use a floating-point processor; to implement them on a fixed-point processor, we should approximate them by an integer transform. However, after such an approximation, the reversibility property is always lost.

[Integer transform conversion]: converting a non-integer transform into an integer transform that achieves the following six goals (A, A⁻¹: the original non-integer transform pair; B, B̃: the integer transform pair). (Goal 1) Integerization: the entries b_k and b̃_k are integers. (Goal 2) Reversibility. (Goal 3) Bit constraint: the denominator 2^k should not be too large. (Goal 4) Accuracy: B ≈ A and B̃ ≈ A⁻¹ (or B proportional to A, B̃⁻¹ proportional to A⁻¹). (Goal 5) Low complexity. (Goal 6) Easy to design.

Development of integer transforms: (A) the prototype matrix method (partially my work; suitable for 2-, 4-, 8- and 16-point DCT, DST, DFT); (B) the lifting scheme (suitable for 2^k-point DCT, DST, DFT); (C) the triangular matrix scheme (suitable for any matrix; satisfies Goals 1 and 2); (D) the improved triangular matrix scheme (my work; suitable for any matrix; satisfies Goals 1-6).

Basic idea of the triangular matrix scheme: any matrix can be decomposed as A = PDLUSQ, where P and Q are permutation matrices, D is a diagonal matrix, L is a lower triangular matrix, U is an upper triangular matrix, and S is a one-row lower triangular matrix. Problems: the number of bits is increased (due to the three triangular matrices) — a tradeoff between the number of bits and accuracy; the number of time cycles is increased (again due to the three triangular matrices); and one must find the optimal decomposition.

References related to the integer transform:
[Ref. 1] W. K. Cham, "Development of integer cosine transform by the principles of dynamic symmetry," Proc. Inst. Elect. Eng., pt. 1, vol. 136, no. 4, pp. 276-282, Aug. 1989.
[Ref. 2] S. C. Pei and J. J. Ding, "The integer transforms analogous to discrete trigonometric transforms," IEEE Trans. Signal Processing, vol. 48, no. 12, pp. 3345-3364, Dec. 2000.
[Ref. 3] T. D. Tran, "The binDCT: fast multiplierless approximation of the DCT," IEEE Signal Proc. Lett., vol. 7, no. 6, pp. 141-144, June 2000.
[Ref. 4] P. Hao and Q.
Shi, "Matrix factorizations for reversible integer mapping," IEEE Trans. Signal Processing, vol. 49, no. 10, pp. 2314-2324, Oct. 2001.
[Ref. 5] S. C. Pei and J. J. Ding, "Reversible integer color transform with bit-constraint," accepted by ICIP 2005.
[Ref. 6] S. C. Pei and J. J. Ding, "Improved integer color transform," in preparation.

9. Optical Signal Processing and the Fractional Fourier Transform
Consider free space of length z1, a lens of focal length f, and free space of length z2. When f = z1 = z2, the system performs a Fourier transform; when f ≠ z1, z2 but z1 = z2, it performs a fractional Fourier transform; when f ≠ z1 ≠ z2, it performs a fractional Fourier transform multiplied by a chirp.

11. Discrete Correlation Algorithm for DNA Sequence Comparison
There are four types of nucleotide in a DNA sequence: adenine (A), guanine (G), thymine (T), and cytosine (C). Unitary mapping: b_x[τ] = 1 if x[τ] = 'A', b_x[τ] = −1 if x[τ] = 'T', b_x[τ] = j if x[τ] = 'G', b_x[τ] = −j if x[τ] = 'C'. For example, y = 'AACTGAA' gives b_y = [1, 1, −j, −1, j, 1, 1].
[Reference] S. C. Pei, J. J. Ding, and K. H. Hsu, "DNA sequence comparison and alignment by the discrete correlation algorithm," submitted.

For two DNA sequences x and y, form the correlation s[n] = Re{Σ_τ b_x[n + τ] b_y*[τ]}; then s[n] counts the nucleotides for which x[n + τ] = y[τ] (each match contributes +1). Example: x = 'GTAGCTGAACTGAAC', y = 'AACTGAA'. Checking: with no shift, no entry of y matches x; with y shifted 2 entries rightward, 6 entries match; with y shifted 7 entries rightward, all 7 entries match.

Advantage of the discrete correlation algorithm: the complexity of conventional sequence alignment is O(N²); for the discrete correlation algorithm, the complexity is reduced to O(N log2 N), or O(N log2 N + b²), where b is the length of the matched subsequences (an FFT-based sketch appears at the end of this section). Experiment: local alignment of two 3000-entry DNA sequences took 87 s with conventional dynamic programming and 4.13 s with the proposed discrete correlation algorithm.

12. Image Compression
Conventional JPEG method: separate the original image into 8×8 blocks, then code each block with the DCT (discrete cosine transform). (PS: thanks to 黃俊德, who graduated in 2008.)

Other ways to do edge detection: convolution with a longer odd function. Taking the difference x[n] − x[n−1] is the same as convolving x[n] with h[n], where h[n] = 1 for n = 0, h[n] = −1 for n = 1, and h[n] = 0 otherwise.

Corner detection — conventional algorithm: observe the variation along the x-axis and the y-axis. Proposed algorithm: observe the variation along the +θ, −θ, +(θ + π/2) and −(θ + π/2) axes. A corner is the edge of an edge.

15. Quaternions
The quaternion (rendered in the slides as 四元素, "four elements") is a generalization of the complex numbers. Complex number: a + ib, with i² = −1 (a real part and an imaginary part). Quaternion: a + ib + jc + kd, with i² = j² = k² = −1 (a real part and three imaginary parts).
[Ref 18] S. C. Pei, J. J. Ding, and J. H. Chang, "Efficient implementation of quaternion Fourier transform," IEEE Trans. Signal Processing, vol. 49, no. 11, pp. 2783-2797, Nov. 2001.
[Ref 19] S. C. Pei, J. H. Chang, and J. J. Ding, "Commutative reduced biquaternions for signal and image processing," IEEE Trans. Signal Processing, vol. 52, pp. 2012-2031, July 2004.
Applications of the quaternion a + ib + jc + kd:
- color image processing: a + iR + jG + kB represents an RGB image;
- multiple-channel analysis: four real channels a, b, c, d, or two complex channels a + jb and c + jd.
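Returning to the discrete correlation algorithm of section 11, here is a minimal FFT-based sketch (the mapping dictionary and function name are mine; note that the real part scores +1 per match but −1 per A/T or G/C cross-pairing, so it is the peaks that identify alignments):

```python
import numpy as np

# Map A, T, G, C to 1, -1, j, -j and correlate with FFTs in O(N log N).
CODE = {"A": 1, "T": -1, "G": 1j, "C": -1j}

def correlation_scores(x: str, y: str) -> np.ndarray:
    bx = np.array([CODE[c] for c in x])
    by = np.array([CODE[c] for c in y])
    n = len(x) + len(y) - 1                     # zero-pad to avoid wraparound
    s = np.fft.ifft(np.fft.fft(bx, n) * np.conj(np.fft.fft(by, n)))
    return np.round(s.real).astype(int)         # s[k]: score with y shifted k right

x = "GTAGCTGAACTGAAC"
y = "AACTGAA"
s = correlation_scores(x, y)
print(s[7])   # 7 -> y shifted 7 entries rightward matches x in all 7 positions
print(s[2])   # 6 -> 6 entries match at shift 2
```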
…and having multiplied and proved as in whole numbers, as there are three decimal places in the two factors, we point off the three right-hand places of the product. Again, we multiply, as in whole numbers, ,1853 by ,013, thus: ,0024089; and, as there are 7 decimals in the two factors, and but 5 figures in the product 24089, we place two ciphers on the left, and prefix the comma.

The student may prove the following multiplications, by multiplying both ways, and by casting out the nines:
6. ,00625 × ,039 × ,05 × ,8.
7. Required the difference between the sum of the products of the six preceding examples, and one million. Answer. Nine hundred and two thousand, seven hundred and nine units; and four hundred and twenty-six millions, seven hundred and thirty-two thousand, four hundred and ninety-eight billionths.

235. When the product is only required to a proposed degree of exactness, the work may be abridged by the following method, given by Bezout: We reverse the order of the figures of the multiplier and write it under the multiplicand, so that its unit figure may stand under the second place on the right of that to which the product is required. We then multiply by each figure of the multiplier, beginning with the figure under which it stands, and place the first figure of each new product under that of the preceding one. Having added the products, we suppress the two right-hand figures, observing however to increase by a unit the last of those which remain when the suppressed figures exceed 50. Lastly, we point off the places of the proposed limit.

Required, the product of ,2345678 × 85,276 within a thousandth: the work gives 20,00289. As the figures suppressed exceed 50, we increase the next figure 2 by a unit, and have 20,003 for the product within a thousandth. The true product is 20,0030037128.

In placing the unit figure of the multiplier under the second grade below the proposed limit, the product is of the order of that grade. Also, by the inverted order of the figures, the next lower order of the multiplicand is multiplied by a tenfold higher order of the multiplier, and the next higher order of the multiplicand by a tenfold lower order of the multiplier, and so on in succession. Therefore, all the products are of the same order, which accounts for placing their first figures under each other. Again, because the part rejected cannot equal a unit of the order multiplied, the product of this part by any figure cannot equal a unit of the next higher order: therefore, each product is within a unit of the next order on the right of the proposed limit. Hence the rule, as an approximation, may in all ordinary cases be relied on.

When there is no unit figure in the multiplier, place 0 in its stead. Also, when there are not enough decimal places in the multiplicand, supply the deficiency with ciphers.

Find the product of ,227538917 × ,5664178 to the seventh decimal inclusive. The work (reversed multiplier, then the contracted partial products):
87146650
113769455
13652334
1365228
91012
2275
1589
Product ,1288821.

The student may prove the following examples by multiplying in the ordinary way:

Examples.
1. Required, the product of 376,273495 multiplied by 2,73486, correct to one-thousandth. Answer, 1029,055+.
2. Required, the product of 83,4679215 × 67,89341, correct to a hundred-thousandth. Answer, 5666,92182.
3. Required, the product of 75,82344 × ,196497, true to five decimals. Answer, 14,89908.
4. Required, the product of ,7358462199 × ,324162549, correct to nine decimals. Answer, ,238533786+.
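Bezout's contracted results can be checked against exact decimal arithmetic (a sketch using Python's decimal module, with modern decimal points in place of the text's commas):

```python
from decimal import Decimal, ROUND_HALF_UP

# Checking the contracted multiplications above with exact decimal arithmetic.
p = Decimal("0.2345678") * Decimal("85.276")
print(p)                                                 # 20.0030037128
print(p.quantize(Decimal("0.001"), ROUND_HALF_UP))       # 20.003, within a thousandth

q = Decimal("0.227538917") * Decimal("0.5664178")
print(q.quantize(Decimal("0.0000001"), ROUND_HALF_UP))   # 0.1288821
```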
Division of Decimals.
236. When the divisor is a whole number, the quotient (150), being of the same order as the dividend, must have the same number of decimals.

237. When the divisor is not a whole number, we place as many ciphers on the right of the dividend, when integral, or remove its comma, when decimal (annexing ciphers if necessary), as many places to the right as there are decimal places in the divisor, in which we then suppress the comma. Both numbers being thus (228) multiplied by the same number, the quotient (165) is not altered. The divisor being now a whole number, we divide as usual, and point off for decimals in the quotient as many places as there are decimal places in the dividend. When the divisor is rendered integral, ciphers on the left of its highest grade are effaced. But ciphers between the comma and the highest order of the dividend, though neglected in the operation, are retained to ascertain the place of the comma in the quotient. Also, we place any number of ciphers at pleasure, as decimal places, on the right of the dividend, when it does not contain the divisor.

Examples.
1. 5236 ÷ ,128 = 5236000 ÷ 128 = 40906,25. We here first multiply both numbers by 1000, which renders the divisor integral, and (165) does not affect the quotient. Then, to avoid confusion in distinguishing the annexed ciphers in integral from those in decimal places, we at once determine the place of the comma, thus: as the part 523, which is tens of thousands, contains the divisor, the quotient figure 4 is (150) tens of thousands; that is, there will be 5 integral figures in the quotient: therefore, having found these, we place the comma on the right.
2. 6,375 ÷ 85 = ,075. Having divided, as in whole numbers, and found 75 for the quotient, as there must be (236) as many decimal figures in the quotient as in the dividend, we place a cipher on the left of 75, and, prefixing the comma, we have ,075 for the true quotient.

238. When the divisor and dividend have each the same number of decimal places, the comma in both may be suppressed, because the equimultiples of any two numbers (165) give the same quotient as the numbers do. Thus: 6,375 ÷ ,075 = 6375 ÷ 75 = 85 (237).

239. If, after we have rendered the divisor integral, the dividend does not contain it, we place a comma in the quotient, and also on the right of the dividend, if integral. We then place just as many ciphers on the right of the dividend as will make it contain the divisor; and, on the right of the comma in the quotient, as many ciphers, less one, as there are now decimal places in the dividend; because the quotient figure itself will occupy the place of the last cipher in the dividend, which (150) is its true order.

Examples.
1. ,128 ÷ 81,92 = 12,8 ÷ 8192 = 12,800 ÷ 8192. Having thus prepared the numbers, as the dividend now contains three decimals, we write ,001 in the quotient, which, when complete, is ,0015625.
2. ,128 ÷ ,0015625 = ,1280000 ÷ ,0015625 = 1280000 ÷ 15625 = 81,92. Having rendered the numbers integral, as the first partial dividend is tens, we know that there will be two integral figures: as soon, therefore, as we obtain these, we place a comma on the right and continue the operation.

240. Ciphers on the right of an integral divisor may always be suppressed by removing the comma in the dividend one place to the left for every such cipher.

Examples.
1. 2248318,477 ÷ 970000 = 224,8318477 ÷ 97 = 2,3178541
2. 27445863,6 ÷ 36000 = 27445,8636 ÷ 36 = 27445,8636 ÷ (6 × 6) = 4574,3106 ÷ 6 = 762,3851
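The rule of section 237 — multiply both numbers by a power of ten that makes the divisor integral — can be expressed directly in code (a sketch; modern decimal points in place of the text's commas):

```python
from decimal import Decimal

# Section 237 in code: scale both numbers so the divisor becomes a whole
# number, then divide as usual; the quotient is unaltered.
def decimal_divide(dividend: str, divisor: str) -> Decimal:
    shift = max(0, -Decimal(divisor).as_tuple().exponent)  # decimal places in divisor
    scale = 10 ** shift
    return (Decimal(dividend) * scale) / (Decimal(divisor) * scale)

print(decimal_divide("5236", "0.128"))       # 40906.25
print(decimal_divide("6.375", "85"))         # 0.075
print(decimal_divide("0.128", "81.92"))      # 0.0015625
print(decimal_divide("0.128", "0.0015625"))  # 81.92
```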
When both numbers terminate with the decimal form of some well-known fraction, having, in its denominator, no factor prime to 10, such as ,25 and ,75, which are 1/4 and 3/4; ,125, ,375, ,625, and ,875, which are 1/8, 3/8, 5/8, and 7/8; and ,0625, ,1875, ,3125, ,4375, ,5625, ,6875, ,8125, and ,9375, which are 1/16, 3/16, 5/16, 7/16, 9/16, 11/16, 13/16, and 15/16; the work may often be greatly facilitated by multiplying both numbers by the denominator of the known fraction, in doing which we recognize, without calculation, the numerator of the fraction, for the product of the terminating figures.

Examples.

1. 536,25 ÷ ,75 = 2145 ÷ 3 = 715

We here consider ,25 and ,75 as the numerators of the fractions which they represent. Wherefore, in multiplying the dividend by 4, we begin on the left of ,25, and say, 4 times 6 is 24, and 1 is 25, &c.
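In modern terms the trick is simply clearing a common denominator. A tiny sketch (dots in place of the text's decimal commas):

```python
from fractions import Fraction

# Art. 241: both numbers end in ,25 / ,75 (i.e. 1/4 and 3/4), so multiply
# both by the denominator 4 and divide the resulting whole numbers.
dividend, divisor = Fraction("536.25"), Fraction("0.75")
print(dividend * 4, divisor * 4)         # 2145 3
print((dividend * 4) / (divisor * 4))    # 715, the same quotient
```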
The Chinese are coming! The Chinese are coming! Lest some object that allowing the Taoist number line to reach our shores might endanger national security or at the very least the security of our hallowed number system, I must be quick to point out that the infiltration has already occurred. It happened well over half a century ago. As was the case with introduction of the Indo-Arabic number system to medieval Europe, the first to climb on the bandwagon were the merchants and moneylenders. Evidence once again that nothing ever really changes. Our already deeply entrenched credit card system is fully conversant with, if not necessarily cognizant of its use of, the Taoist number line. The concept of using a card for purchases was exploited, if only in fiction, as early as 1887 by Edward Bellamy, the American author and socialist, in his utopian novel Looking Backward. [Wikipedia] In just over seventy years the fiction of the late 19th century was on the verge of becoming the fact of the mid 20th century. In 1958 the first successful recognizably modern credit card was introduced and the world has never looked back. Let’s now consider briefly how the credit card system is similar to the Taoist number line in its number usage. First, its central point of reckoning, which corresponds to the origin of the Taoist number line, the taijitu, is not zero but rather what is termed the credit line. This has a value which can be variable, often changing over time. But never in any of its various incarnations is it equal to zero. Unless of course the cardholder fails to make the required payments and loses credit entirely. To the right of the central point credit available exceeds credit used, corresponding to the yang domain of the Taoist number line. To the left of the central point credit used exceeds credit available, corresponding to the yin domain of the Taoist number line. In the example shown above each unit has a value of 100 dollars and the credit line is two thousand dollars. At every point along the line the sum of credit used and credit available (yin plus yang) equals the credit line, in this case 20 x 100 or $2000. To the extreme right no credit has been used, all is still available for use (credit = 20; debit = 0). To the extreme left all credit has been used, none remains available (credit = 0; debit = 20). At this point further use of the credit card requires either making a payment or else obtaining an increase in the credit line. At the central point, credits and debits are perfectly in balance (credit = 10; debit = 10) and there is as much credit remaining for use as has already been used.(1) (1) This example demonstrates among other things that use of the Taoist number line is possible and wholly practicable in ordinary commerce, i.e., the activity embracing all forms of the purchase and sale or exchange of goods and services. The question then arises as to why it would not be equally valid for defining and describing the events of the subatomic realm which are, when all is said and done, simply a commerce of another kind, one involving the exchange or interchange of particles and forces rather than debits and credits.
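The invariant behind the analogy (debit plus credit always equals the credit line, just as yin plus yang always sum to the whole) fits in a few lines of code. A minimal sketch of my own, using the post's $2000/20-unit example; all names are illustrative:

```python
CREDIT_LINE = 2000   # dollars, as in the example above
UNIT = 100           # each interval on the line is worth $100

def position(amount_used: int) -> tuple[int, int]:
    """Return (debit, credit) in units, i.e. (yin, yang)."""
    assert 0 <= amount_used <= CREDIT_LINE
    debit = amount_used // UNIT
    return debit, CREDIT_LINE // UNIT - debit

print(position(0))      # (0, 20)  extreme right: nothing used
print(position(1000))   # (10, 10) central point: perfect balance
print(position(2000))   # (20, 0)  extreme left: all credit used
```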
© 2014 Martin Hauser The strange and convoluted history of our number line Unlike the Taoist number line (1,2) which lacks a zero and which from its inception was entire and holistic, if not explicit, the number line that emerged in Western mathematics was the result of unplanned growth in which zero came to play a multiple and confusing role.(1) First came the positive numbers (the counting numbers).(2) Negative numbers followed by centuries and were not introduced in Europe until the 15th century.(3) A practical quasi-mathematical zero had been used in antiquity (4), but this was not yet either the number zero or the zero of positional notation. When the Hindu-Arabic numeral notation system first entered Europe through Spain in the 10th century it did so without zero. The zero of position came later, and the number zero was a still later arrival to Europe.(5) There was no plan to any of this. It all sort of just materialized as what appears a patchwork of happenstance and pragmatic convenience, more a comedy of circumstance than a result of any rational forethought or perceived necessity. Although today a 10-year-old schoolchild is expected to master the basics of the number line, its birth was an uneasy one. From the start it had its critics and resisters. The notion of negative numbers representing debts must have been difficult enough to comprehend and accept initially, but with zero admitted to the number pantheon there was the additional challenge of dealing with something that was less than nothing. No easy task for a medieval craftsman, or scholar even, in the 13th-16th centuries.(6) Over the centuries that have elapsed since then we have all become fully indoctrinated with the accepted dogma regarding the number line. So much so, we have forgotten about its adulterated origins. Along with the mathematicians we genuflect and call the number line a thing of beauty for its elegant simplicity and structural unity. It is nothing of the sort. The number line of mathematics is a Western cultural construct. There is nothing sacrosanct about it. It has its usefulness.(7) But in some ways it is more confining than a snake’s outgrown skin. In certain contexts it is misleading and counterproductive. To promote further advancement of modern physics, for example, a number line based on egality, balance and equilibrium rather than ascendancy and subordination would likely prove more advantageous. Chemistry and biology, both in their own ways, have confronted and allied themselves with equilibrium as a powerful force and near omnipresent occurrence. Physics has as well to a degree. It needs to go much further though. It needs to go to the heart of the matter. It needs now to take on the Western cultural construct of the number line and detail its limited veracity. It is not enough to declare that the vacuum of space is not empty. The time is ripe to disclose that the zero of mathematics is not vacuous as we have been led to believe.(8) And that the negative sign appended to a class of numbers does not make them inferior or subordinate in magnitude, just different in direction. The signs appended to numbers specify the direction of vectors only and lack information pertaining to magnitude. The numbers themselves specify magnitude only and lack any information pertaining to direction.(9) Western thought’s love of conflation has led us far astray.
(1) By a process of conflation Western thought has used the symbol zero (0) both as a number and a numerical digit used to represent that number in its system of numeration (a writing system or mathematical notation for representing numbers in a consistent manner). As a digit, 0 is used as a placeholder in place value systems such as our decimal or base ten numeral system. The earliest certain use of zero as a decimal positional digit dates to the 5th century in India. [Wikipedia] Confusing the issue still more, the word zero, in common usage, can mean the number, the symbol, or the word for the number. The use of 0 as a number should be distinguished from its use as a placeholder numeral in place-value systems. Mandalic geometry has no argument with the placeholder zero but only with the number zero. (2) A mathematical notation to represent counting is believed to have first developed at least 50,000 years ago. [Wikipedia] (3) Chinese authors had been familiar with the idea of negative numbers by the Han Dynasty (2nd century CE), as seen in The Nine Chapters on the Mathematical Art, much earlier than the first documented introduction of negative numbers in the 15th century in Europe. As recently as the 18th century it was common practice to ignore negative results of equations on the assumption that they were meaningless, just as René Descartes did with negative solutions in a Cartesian coordinate system. [Wikipedia] Gottfried Wilhelm Leibniz was the first mathematician to systematically employ negative numbers as part of a coherent mathematical system, the infinitesimal calculus. Calculus made negative numbers necessary and their dismissal as “absurd numbers” quickly faded. [Wikipedia] The earliest known use of the plus sign (+) occurs ca. 1360; of the minus sign (-), 1489. [Wikipedia] For short summaries of the history of negative numbers see here and here. (4) In ancient Egypt, guidelines were used in construction of the pyramids and earlier still. These were labeled at regular intervals with integers and a zero level or zero reference line was part of this system. [Source] This is not to be construed as use of zero as a number but rather as an engineering and geometric operating principle, one related, it would seem, to constructional or architectural integrity. (5) The concept of zero as a number rather than just a symbol or an empty space for separation is attributed to India, where, by the 9th century AD, practical calculations were carried out using zero, which was treated like any other number, even in the case of division. [Wikipedia] In most cultures, zero as a number quantifying a count or an amount of null size was identified before the idea of negative quantities that go lower than zero was accepted. [Wikipedia] The number zero arrived in the West circa 1200, brought by Italian mathematician Fibonacci who found it, along with the rest of the Arabic numerals, during his travels to North Africa. [Source] He did not yet recognize zero to be a number on equal footing with the other nine numerals of the Hindu-Arabic notational system, though. In 1202 Fibonacci (aka Leonardo of Pisa) used the phrase “sign 0”, indicating it was like a sign to do operations like addition or multiplication. [Wikipedia] For a short history of the number zero see here. For a description of the various attributes of the number zero and rules pertaining to the use of the number zero in mathematics see here.
(6) We need to remember too that Europe during this time period was still reeling from the cultural shock of the newly introduced Hindu-Arabic numeral system which reached Spain in the 10th century (without zero). [Wikipedia] Until the late 15th century, Hindu-Arabic numerals seem to have predominated among mathematicians and moneylenders, while merchants preferred to use the Roman numerals. In the 16th century, they became commonly used in Europe, following on the heels of the printing press in the 15th century and undoubtedly related to the introduction of that disruptive invention. [Wikipedia] In 1550 the first edition of Adam Riese’s Rechnung auff der linihen und federn… [Calculation by counter and pen…] is published. This work describes numerical calculations with Indian-Arabic digits in the vernacular German language rather than scholarly Latin. The intended readers are the apprentices of businessmen and craftsmen. The old Roman notational system has lost its stronghold, never to be regained. (7) In the last few centuries, the European presentation of Arabic numbers has spread around the world and gradually become the most commonly used numeral system. Even in many countries which have their own distinct numeral systems, the European Arabic numerals are widely used in commerce and mathematics. [Wikipedia] (8) Certainly there are concrete situational referents to which the number zero validly applies (I have no food to feed my child; I have no arrows in my quiver…). It is the validity of the abstract mathematical notion of zero which is in question here, particularly the version which treats it as a universal verity (There can be a natural subdivision of the universe that is completely empty, devoid of matter-energy and without space-time). (9) For an excellent discussion about scalars and vectors see here. (There are four sections to the discussion. This link leads to the first section. Click on through from there to the other three.) Back to Basics: polarity and the number lines - II (continued from here) Taoism natively is more inclined to think in terms of 2-, 3-, and higher dimensions than in 1-dimensional linear terms. Taoism has a number line analogue but an implicit one which is treated as an abstraction, more a distant consequence of real processes in the universe than a fundamental building block. Reference to higher dimensions is not fully relinquished even in this 1-dimensional abstraction. It is little used in isolation but features more prominently in Taoist diagrams of the analogue of the 2-dimensional Cartesian plane. In a sense this makes the Taoist number line much more robust than the number line of Western mathematics. Whereas from the narrowly focused perspective of Western mathematics the “number line” of Taoism might be viewed as “hyperdimensional”, from the perspective of Taoism itself it is “dimension poor” and therefore degenerate.(1) Regarding two distinct kinds of change, sequent and cyclic, Western thought is, in general, more concerned with sequent and Eastern thought with cyclic change.(2) Whereas the Western number line stretches out to infinity in both directions as in an orgiastic celebration of sequent change, the Taoist number line, exhibiting more restraint, confines itself to some more realistic terminus of magnitude. It does so first because the taijitu (infinity analogue) of Taoism is non-polarized and exists at the center where Western thought places its “zero”.
But also because it envisions change mainly in terms of cycles and invariably selects more realistic points of maximum and minimum extension than infinity.(3) From the point of view of Taoism, infinity, though unbounded, is also undifferentiated, existing in a non-polarized state of pure potential and potency, whereas all differentiated states having polarity are limited in degree of potential and subject ultimately to constraint of extension.(4) (to be continued) (1) Taoism is a worldview based largely on relationships. From its very beginnings it likely considered a single dimension insufficient to express the full complexity of relationships possible. The I Ching, based largely upon the Taoist worldview, is a treatise which makes use of 64 hexagrams to correlate six dimensions of relationship. It may be the world’s earliest text on combinatorics and dimensionality. The true significance of this seminal work of humankind has sadly been too frequently overlooked. (2) This is entirely a matter of degree and of preferred focus but has nevertheless profound consequences reflected in the resulting respective worldview of the different cultures. From an oversimplified bird’s eye view, Western thought regards significance best revealed by way of historical development through time experienced sequentially; Eastern thought, by way of recursive phenomena of nature expressed through cyclic time. (3) This means also that there can be no single representative number line as there is in Western mathematics. Not at least if distances along the line are marked off in customary units of consecutive digits. For each specific Taoist number line unique complementary terminal maximum and minimum values must be selected. In the case illustrated above the value was chosen to be 20 so as to conform in terms of number of intervals to the Western number line segment shown (ten negative and ten positive intervals). Had the value been chosen as 10 instead, the Taoist line would extend only from yin = 10; yang = 0 to yin = 0; yang = 10 and the number of intervals encompassed would have been a total of ten rather than the required twenty. One way to surmount the labeling difficulty described would be to number the intervals along the Taoist number line in terms of percentages rather than specific sequent intervals. Were this procedure followed every Taoist number line would extend from yin = 100%; yang = 0% to yin = 0%; yang = 100% with the central point of origin (corresponding to “zero” in the Western number line) labeled as yin = 50%; yang = 50%. The two “zeros” that occur at the extreme ends of the Taoist line (yang = 0 to the left; yin = 0 to the right) should not be viewed as numbers but rather in a sense similar to that in which “zeros” are used as unit ten placeholders in our decimal number system. (4) In any case, labeling of the central origin point with either specific sequent intervals having identical absolute values or equal percentages (yin 50%/yang 50%) signifies the potential of the non-polarized and unbounded taijitu (infinity) to change by means of polarization into its polarized, bounded aspect. This process can be viewed also in terms of pair production (as understood by both Taoism and particle physics). © 2014 Martin Hauser Back to Basics: polarity and the number lines - I (continued from here) The number line is represented as a straight line on which every point is assumed to correspond to a single real number and every real number to a single point in a 1:1 mapping.
It can be graphed along either a horizontal axis or a vertical axis. In the former case, negative is shown to the left of zero and positive to the right by convention. In the latter case, negative is shown below zero and positive above. The number lines of Western mathematics and Taoism are similar in that both make constant use of the concept of polarity. That is where the similarities end. Even the manner in which this concept is used in the two worldviews differs. For mathematics the basic polarity entrenched in its number line(1,2) is that between positive and negative. These two polarities are thought of as opposite and mutually exclusive. They are mediated by the concept of “zero”, a sort of no-man’s-land, the boundary between the two which belongs rightfully to neither. It is thought of as being in a sense an empty buffer zone between the polarities of positive and negative and functions not so much to balance or unify the two as to keep them apart from one another, or failing that to nullify both. Hence zero generally is treated as having no sign. Additionally, zero in the number line of mathematics and in the one-dimensional line of Cartesian geometry has neither magnitude nor preferred direction.(1) It should come as no surprise that division by zero is not possible in Western mathematics when “zero” has itself been conceived as a kind of singularity.(2) This fact of itself should have indicated something amiss with the way “zero” has been conceptualized.(3) The “zero” of Western thought works well in the field of finance and most everyday practical fields of endeavor in general. Where the concept falls short is in the attempt to apply it indiscriminately in modern physics and certain other fields of science. A close corollary here is the misapprehension of the actual manner in which mathematical signs (and hence all polarities) operate in the real world. In place of division by a non-existent “zero” Taoism advances the concepts of “polarization”, “depolarization”, and “repolarization”, all of which its “zero” alternative is fully capable of accomplishing. In a very real sense this “zero” alternative represents pure potential, both in the mathematical and physical senses. In the physical world it corresponds to the taijitu which may legitimately be considered pure potential energy as opposed to differentiated matter. Taoism in its way realized long before Einstein that the two are interchangeable. With the preceding as background we are ready to consider next what the Taoist number line equivalent might look like. Image: The Number Line. art: Zach Sterba/mC. writer: Kevin Gallagher/mC. [Source] (1) This combination of parameters, comprising as it were a particular worldview, contrasts markedly with the worldview of Taoism. Unlike the demilitarized buffer zone that “zero” represents in Western mathematics, the taijitu of Taoism which occurs in its place both mediates between the polarities of negative and positive and gives rise to those same polarities repeatedly by the process of polarization. In place of the additive inverse negation operation of “zero” we have the operations of “depolarization” and “repolarization”. Having more in common with the worldview of Taoism than that embraced by Western mathematics, mandalic geometry treats the central buffer zone as a point of equilibrium, balance and potentiality which lacks neither content nor direction. 
It has in fact two directions, one centripetal, one centrifugal, which over the course of time can be alternately chosen as the preferred direction, repeatedly and in cyclic fashion. (2) This naturally brings to mind the informal rule of Computer Science holding that integrity of output is dependent upon the integrity of input. [GIGO (1960-1965): acronym for garbage in, garbage out.] (3) This has far-reaching consequences, more importantly in the fields of modern physics and epistemology than in that of mathematics. It is my contention that physics since the introduction of quantum mechanics and the theory of relativity (or “invariance” as Einstein preferred to refer to it) has been thwarted in its development by (among other things) too strict adherence to a limited concept of “zero” unsuited to its purposes. For Taoism and mandalic geometry “zero” is not a number but a polarizing function, itself without magnitude as commonly understood or permanent sign. Modern physics already partially leans toward this point of view but has not yet entirely shed the old outgrown “skin” that the number “zero” represents. © 2014 Martin Hauser Back to Basics: space, time and dimension (continued from here) Throughout most of the history of Western thought space and time were regarded as independent aspects of reality. (1) Dimension was considered an attribute of both. Space was viewed as consisting of three independent linear dimensions pictured as being mutually perpendicular. Each of these spatial dimensions consisted of two opposite directions and movement in either direction was possible. Different spatial dimensions were independent of one another and space could be traversed in one, two or three dimensions at once. Time was viewed as having a single dimension progressing in a forward direction only. Most often sequent time was emphasized preferentially to cyclic time. (2) Material objects were viewed as distinct entities occupying space and time but independent of them. (3) In the worldview of Taoism space, time and dimension were never viewed as existing apart from one another but as all intimately related. Furthermore, dimensions are viewed as interrelated, not independent of one another. In general neither space nor time is conceived in terms of single linear dimensions but as interrelated composites of two or more dimensions. Direction in Taoism has to do not exclusively with opposing pairs but also with interdependent polarities. Time, like space, is considered to be bidirectional. Cyclic change plays a role of at least equal importance to sequent change. Time in the cyclic sense develops in directions of both expansion and contraction. Both evolution and involution, activation and deactivation are all ever-present possibilities. All possible combinations of relationship are explored and the probable eventual future outcomes of each occurrence are always taken into consideration for purposes of understanding events and planning actions. (4) Image: A simple cycle. Author: Jerome Tremblay, writeLaTeX. (This is used here to illustrate in the most elementary manner possible the basis of cyclic change and cyclic time. The more complex nature of these will be elaborated more fully in future posts.) (1) It was not until the early 20th century, when Einstein introduced his Special theory of relativity, that space and time were fully integrated in a single concept, spacetime. (2) Historically the "cyclic" view of time was of great importance in ancient thought and religions in the West as well as in the East.
Attention was certainly paid to periodic recurring cycles related to the lunar month and, with the rise of agriculture, to the solar year. With the subsequent ascendance of the historically based religions, however, and in more modern times as technological achievements have taken center stage, this acute awareness of periodicity and cyclic time has largely declined in the West. (3) Leibniz believed that space and time, far from having independent existence, were determined by these material objects which he supposed were not contained in space and time but rather created them through their positioning relative to one another. (1,2,3) Leibniz, however, was familiar with the I Ching and it is unlikely that his thought processes would not have been influenced to some degree by the relational and relativistic Taoist worldview he found therein. (4) In fact, the Taoist I Ching can be considered first and foremost an exhaustive compendium of the combinatorial probabilities of spacetime relationships in six dimensions. Its alternate title The Book of Changes attests to this. The fact that it has also been used over the centuries as a method of divination should not in the least detract from its more comprehensive value to human knowledge and epistemology. © 2014 Martin Hauser Back to Basics: the fundamental polarity For Taoism the fundamental polarity is that between yin and yang. This is usually presented along a vertical axis when considered in a single dimension. Yang is always above and associated with the South compass direction. Whenever two dimensions are treated simultaneously the yang polarity of the second dimension is presented along a horizontal axis to the left by convention and is associated with the East compass direction. (1) For Western mathematics the fundamental polarity is that between negative and positive. When considered in the context of one dimension this may be presented either along a horizontal axis (positive to the right) or vertical axis (positive up). When two dimensions are under consideration the horizontal axis is generally referred to as the x-axis and the vertical axis, the y-axis, both with directions labeled as noted above. The two thought systems can be made commensurate in terms of mathematics. For instructional purposes here the Western conventions of direction are followed. (2) Also used here exclusively is the right-hand rule convention of three-dimensional vector geometry. Since the letter “x” is used to refer to the horizontal dimension and the letter “y” to the vertical dimension, the third dimension or “z” dimension must then necessarily have its positive direction toward the viewer. (3) In physics, polarity is an attribute with two possible values. An electric charge, for example, can have either a positive or negative polarity. A voltage or potential difference between two points of an electric circuit has a polarity determined by which of the two points has the higher electric potential. A magnet has a polarity with the two poles distinguished as “north” and “south” pole. More generally, the polarity of an electric or magnetic field can be viewed as the sign of the vectors describing the field. (4) Image: Yin yang. Public domain. (1) Early Chinese cartography traditionally placed South above, North below, East to the left and West to the right. Though all reversed from Western presentations these are clearly conventional choices rather than matters of necessity.
Many other ancient conventional associations of “yin” and “yang” have been preserved in Taoism. Most of the ancient traditional associations of “positive” and “negative” have long since been lost to Western thought. (2) It should be noted that blindmen6.tumblr.com, this blog’s predecessor, presented instead the conventions used in the I Ching. That choice of convention has been abandoned here in favor of the Western convention in order to avoid unnecessary confusion. (3) “Necessarily” only because the die has already been cast by choice of the directions of the horizontal and vertical axes and choice of adherence to the generally accepted right-hand rule. These, though, are all matters of convention. That should be kept in mind, if only because foresight suggests at a certain stage of development mandalic geometry may find it necessary to give the boot to some conventions and possibly as well to the use of any convention at all. Indeed the ultimate goal is a convention-free geometry though we are very far from that at this point in time. (4) Although the text of this blog often equates the “yang” polarity with “+1” and the “yin” polarity with “-1” that is to be taken as a shorthand of sorts used instead of referring to “the positive sign of the vector +1” and “the negative sign of the vector -1” each time. Although doing so is most decidedly a convenience it is not strictly correct as these Taoist concepts actually refer to the entire poles of positivity and negativity. It is possible to use this shorthand only because to this point and for the foreseeable future the discretized number system of mandalic geometry requires only +1, -1 and 0 in terms of Western mathematics. It can be extrapolated to higher scalar values but will not be in the near time frame. © 2014 Martin Hauser Quantum Naughts and Crosses 13 (continued from here) This is the yz-plane with x = +1. It is the face of the mandalic cube that would be presented were the cube rotated 90 degrees clockwise.(1) Its resident tetragrams are formed from lines 2, 3, 5 and 6. Lines 1 and 4 are not included in the resident tetragrams because the x-dimension value is unchanging (+1) throughout this face of the mandalic cube. The z-axis (lines 3 and 6) is presented horizontally here, positive toward the viewer’s left. The y-axis (lines 2 and 5) is presented vertically, positive toward the top. The corresponding Cartesian triples are shown directly beneath the hexagram(s) they relate to. The complementary(2) face of the mandalic cube, the yz-plane with x = -1, can be generated by changing lines 1 and 4 in every hexagram above from yang(+1) to yin(-1). Were we to do that and also view the resulting plane from a vantage point inside the cube we would then see a patterning of resident tetragrams identical to that in the plane above. The only difference apparent would be the substitution of yin lines for yang lines at positions 1 and 4. We might have justifiably started out here by viewing the yz-plane with x = -1 from without the cube and followed with viewing the yz-plane with x = +1 from inside the cube had the die not already been cast. The problem with that attack given the present circumstances is that we have previously begun our consideration of the members of the other two face pairs with the positive member from outside the cube. By preserving that consistency we end up with a jigsaw puzzle the parts of which can readily be fitted together to recreate the whole.
Any inconsistency at this point can only result in failure.(3) (1) This assumes that we begin with the reference face we have been using (xy-plane, z = +1) toward the observer seated at the bridge table. (2) Mandalic geometry views opposite faces of the mandalic cube as being complementary rather than antagonistic or adversarial. This seems almost unnecessary to point out when the six planes that constitute the Cartesian cube are viewed as a single complex whole. There is a synergy of action simultaneously involving all component parts of the whole and there is an even greater degree of complex interactivity involving the component parts of the higher dimension mandalic cube. The parts may indeed at times be in conflict or opposition with one another but at other times work together to create an effect. For a possible analogy think here of the constructive and destructive interference in which two or more wave fronts may participate. The I Ching, although it does not explicitly view the hexagrams and their component trigrams and tetragrams in the context of a geometric cube, nonetheless attributes these alternative and alternating reciprocal capacities to yin and yang and to all the line figures formed from them. (3) This is much more than a simple matter of human convention. In this case we really are dealing with actual laws of nature, however cryptic and concealed they might be. This is not the right time to elaborate fully on what is involved here. Suffice it for now to point out that the approach we have chosen allows the three Cartesian and six additional mandalic dimensions to conform together with one another to certain combinatorial principles that nature demands they do. For example, the three faces of the cube in which the hexagram consisting of six yang lines is found must fit together at a single point which forms one of the eight vertices of the cube by superimposing the three occurrences of this hexagram in the three different Cartesian planes at that single point. A similar requirement exists for all the other vertices of the cube as well. When all these various requirements are met all six faces can fit together snugly to form the cube. Were even just one of the requirements not satisfied the cube as a structural and functional whole would be unable to form. We are talking here not simply about geometric shapes but about energetic physical phenomena as well. Ultimately this is not just a matter of composing a cube but of confronting the reality that dimensions fit together and force fields interact only in specific predetermined ways which we have no power to change. Moreover, this is just one indication that mandalic geometry describes more than literal locations existing in a topological space. It also corresponds in some sense to a state space, an abstract space in which different “positions” represent states of some physical system. © 2014 Martin Hauser Quantum Naughts and Crosses 12 (continued from here) Here we have the xz-plane with y = +1 with all its resident hexagrams. This is the face the mandalic cube presents when we view it from directly above, with all of the planar Cartesian coordinate conventions maintained. The z-axis is positive toward the viewer and the x-axis positive toward the viewer’s right. (1) The xz-plane with y = -1, which is to say the opposite or complementary face of the mandalic cube, could be viewed by simply lifting the roof face above off of the cube and then looking down at what has now become the floor of the cube. 
Once again, this ploy preserves the Cartesian coordinate conventions. Also, this is the only way to view this opposite face of the cube in such a manner that its tetragrams are all congruent to those in the hexagram patterning pictured above. (2) We could always view this lower face from a vantage point outside the cube, as we did the upper face, but not without disregarding one of the Cartesian coordinate conventions, that of either the x-axis or the z-axis. We wouldn’t be breaking any laws of nature were we to do that, but some of us would, initially at least, be somewhat confused. My suggestion in that case would be, “Brush up on your Lewis Carroll (3) and in particular on his Looking Glass House.” (1) It is good always to keep in mind that nature has neither respect for nor allegiance toward human convention. These artificially fabricated coordinate conventions play no fundamental role in any of the geometric descriptions found in this blog, but the pretense that they somehow do really matter must still be maintained to make consistent communication between human minds possible. I’m betting that more highly advanced alien civilizations and reality itself would view this whole formulation of things in a conventional manner as somewhat quaint. (2) As used here the term “congruent” refers to the situation in which all the lines of the figures of concern (here all the tetragrams positioned opposite in the vertical y-dimension) are identical. It is the fact that the hexagrams in the opposite planes differ only in the value of y, which is to say in lines 2 and 5, that gives rise to this congruence. The hexagrams of the lower plane can be generated by substituting yin lines for the yang lines at positions 2 and 5 of the hexagrams shown in the diagram above. (3) In addition to his literary works Lewis Carroll penned a good number of mathematical works under his real name Charles Lutwidge Dodgson as well, mainly in the fields of mathematical logic, geometry, linear and matrix algebra, and recreational mathematics. © 2014 Martin Hauser
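The bookkeeping these posts describe (lines 1 and 4 carrying the x value, lines 2 and 5 the y value, lines 3 and 6 the z value) is easy to make concrete. The sketch below is my own assumption-laden reading of that scheme, not the author's code: a hexagram is a 6-tuple of lines, each +1 (yang) or -1 (yin).

```python
from itertools import product

# 0-based positions of the two lines that carry each Cartesian axis,
# per the convention stated in the posts above.
AXIS_LINES = {"x": (0, 3), "y": (1, 4), "z": (2, 5)}

def face(axis: str, value: int):
    """All hexagrams resident on the cube face where `axis` is fixed."""
    i, j = AXIS_LINES[axis]
    return [h for h in product((-1, 1), repeat=6) if h[i] == value and h[j] == value]

def complementary(hexagram, axis: str):
    """Flip the two lines carrying `axis`, mapping a hexagram to the
    opposite face, e.g. x = +1 to x = -1 by changing lines 1 and 4."""
    i, j = AXIS_LINES[axis]
    h = list(hexagram)
    h[i], h[j] = -h[i], -h[j]
    return tuple(h)

print(len(face("x", +1)))                       # 16 resident hexagrams per face
print(complementary((1, 1, 1, 1, 1, 1), "x"))   # (-1, 1, 1, -1, 1, 1)
```

The 16 hexagrams per face correspond to the 16 tetragrams formed from the four free lines, which is exactly the patterning the posts describe as preserved between opposite faces.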
Lec 1, 8/24 Mon, Sec 1.1: Course details, general principles of enumeration, counting of words & subsets, binomial theorem, multisets/compositions. Lec 2, 8/26 Wed, Sec 1.2: Lattice paths, basic identities, extended binomial coefficient, summing polynomials, Delannoy numbers, Hamming ball/Delannoy correspondence. Lec 3, 8/28 Fri, Sec 1.3: Counting graphs and trees, multinomial coefficients (trees by degrees, Fermat's Little Theorem), Ballot problem, central binomial convolution. Lec 4, 8/31 Mon, Sec 1.3-2.1: Catalan numbers (generalization, bijections, recurrence), Fibonacci numbers and 1,2-lists, derangements. Lec 5, 9/2 Wed, Sec 2.1-2: Recurrences in two indices (distribution problems, Delannoy numbers), characteristic equation method (through repeated roots). Lec 6, 9/4 Fri, Sec 2.2: Characteristic equation method (inhomogeneous terms), generating function method (linear w. constant coefficients, relation to char. eqn. method) Lec 7, 9/9 Wed, Sec 2.3-3.1: Catalan solution, substitution method (factorials, derangements, Stirling's approximation), generating functions (sum/product operations, multisets) Lec 8, 9/11 Fri, Sec 3.1: Generating functions: multisets with restricted multiplicity, functions in two variables (skipped), permutation statistics (by #inversions, #cycles), Eulerian numbers (#runs, Worpitzky's Identity by barred permutations) Lec 9, 9/14 Mon, Sec 3.2: Generating function manipulations: sum & product (A(n,k) formula by inversion from Worpitzky), shifted index, differentiation & evaluation at special values, summing initial coefficients, summation by convolutions. First Snake Oil example. Lec 10, 9/16 Wed, Sec 3.3: More Snake Oil, exponential generating functions: products of EGFs (words), examples and applications of EGFs (flags on poles, restricted words, Stirling numbers) Lec 11, 9/19 Fri, Sec 3.3: EGF applications (binomial inversion, derangements), the Exponential Formula (graphs, partitions, permutations, recurrence), Lagrange Inversion Formula (statement and application to trees). Lec 12, 9/21 Mon, Sec 3.4: Partitions of integers (basic generating functions, bounds on coefficients), combinatorics of partitions (Ferrers diagrams, conjugate, Fallon's Identity (mentioned), congruence classes of triangles, Euler's Identity). Lec 13, 9/23 Wed, Sec 4.1: Basic inclusion-exclusion formula, applications (totients, Stirling numbers, alternating sums, skipped Eulerian numbers) Lec 14, 9/25 Fri, Sec 4.1: Permutations with restricted positions (rook polynomials), OGF by number of properties (skipped probleme des menages), signed involutions (inclusion-exclusion as special case, partitions into distinct odd parts). Lec 15, 9/28 Mon, Sec 4.1-2: Disjoint-path systems in digraphs, application to lattice paths and rhombus tilings. Examples for counting under symmetry, Lagrange's Theorem, Burnside's Lemma. Lec 16, 9/30 Wed, Sec 4.2-3: Examples for Burnside's Lemma, Cycle indices, symmetries of cube, pattern inventory (Polya's Theorem), counting isomorphism classes of graphs. Young tableaux (brief presentation of Hook-length formula, RSK correspondence, and consequences of RSK correspondence). Lec 17, 10/2 Fri, Sec 5.1-3 highlights, Sec 6.1: Properties of Petersen graph, degree-sum formula and rectangle partition, characterization of bipartite graphs, Eulerian circuits (Chapter 5, First Concepts for Graphs, for background reading). Bipartite Matching (Hall's Theorem). Lec 18, 10/5 Mon, Sec 6.1: Marriage Theorem, orientations with specified outdegrees.
Min/max relations (Ore's defect formula, Konig-Egervary Theorem, Gallai's Theorem, Konig's Other Theorem). Lec 19, 10/7 Wed, Sec 6.2: General Matching: Tutte's 1-Factor Theorem from Berge-Tutte Formula, 1-factors in regular graphs, Petersen's 2-Factor Theorem (via Eulerian circuit and Hall's Theorem), augmenting paths, reduction of f-factor to 1-factor in blowup (mentioned only briefly) Lec 20, 10/9 Fri, Sec 7.1: Connectivity (definitions, Harary graphs, cartesian products). Edge-connectivity (definitions, Whitney's Theorem, edge cuts, diameter 2). Bonds and blocks skipped. Lec 21, 10/12 Mon, Sec 7.2: k-Connected Graphs (Independent x,y-paths, linkage and blocking sets, Pym's Theorem, Menger's Theorems (8 versions), Expansion and Fan Lemmas, cycles through specified vertices), Lec 22, 10/14 Wed, Sec 7.2-3: Ford-Fulkerson CSDR, ear decomposition and Robbins' Theorem. Spanning cycles: necessary condition, Ore & Dirac conditions, closure, statement of Chvatal condition & sketch. Lec 23, 10/16 Fri, Sec 7.3-8.1: Chvatal-Erdos Theorem, comments on Lu's Theorem, Fan's Theorem, regularity, pancyclicity, statements for circumference. Vertex coloring: examples, easy bounds, greedy coloring, interval graphs, degree bounds, start Minty's Theorem. Lec 24, 10/19 Mon, Sec 8.1-2: Triangle-free graphs (Mycielski's construction, √n bound), color-critical graphs (minimum degree, edge-connectivity). List coloring for complete bipartite graphs. Lec 25, 10/21 Wed, Sec 8.2-3: List coloring (degree choosability and extension of Brooks' Theorem), edge-coloring (complete graphs, Petersen graph, bipartite graphs). Lec 26, 10/23 Fri, Sec 8.3: Edge-coloring (color fans and Vizing's Theorem for graphs, Anderson-Goldberg generalization of Vizing's Theorem for multigraphs), brief mention of perfect graphs (chordal graphs, PGT). Lec 27, 10/26 Mon, Sec 9.1: Planar graphs and their duals, cycles vs bonds, bipartite plane graphs, Euler's Formula and edge bound, brief application to regular polyhedra, skipped Lec 28, 10/28 Wed, Sec 9.2: Kuratowski's Theorem and convex embeddings, 6-coloring of planar graphs Lec 29, 10/30 Fri, Sec 9.3: Coloring of planar graphs (5-choosability, Kempe), Discharging (approach to 4CT, planar graphs with forbidden face lengths), Tait's Theorem (skipped Grinberg's Theorem) Lec 30, 11/2 Mon, Sec 10.1: Applications of pigeonhole principle (covering by bipartite graphs, divisible pairs, domino tilings, paths in cubes, monotone sublists, increasing trails, girth 6 with high chromatic number). Lec 31, 11/4 Wed, Sec 10.2: Ramsey's Theorem and applications (convex m-gons, table storage). Lec 32, 11/6 Fri, Sec 10.3: Ramsey numbers, graph Ramsey theory (tree vs complete graph), Schur's Theorem, Van der Waerden Theorem (statement and example) Lec 33, 11/9 Mon, Sec 12.1: Partially ordered sets (definitions and examples, comparability graphs and cover graphs), Dilworth's Theorem, equivalence of Dilworth and Konig-Egervary, relation to PGT. Lec 34, 11/11 Wed, Sec 12.2: graded posets & Sperner property, symmetric chain decompositions for subsets and products, bracketing decomposition, application to monotone Boolean functions. Lec 35, 11/13 Fri, Sec 12.2: LYM posets (Sperner's Theorem via LYM, equivalence with regular covering and normalized matching, LYM and symmetric unimodal rank-sizes => symmetric chain decomposition, statement of log-concavity & product result). 
Lec 36, 11/16 Mon, Sec 14.1: existence arguments (Ramsey number, 2-colorability of k-uniform hypergraphs), pigeonhole property of expectation (linearity and indicator variables, Caro-Wei bound on independence number, application of Caro-Wei to Turan's Theorem, pebbling in hypercubes) Lec 37, 11/18 Wed, Sec 14.2: Crossing number (expectation application), Deletion method (Ramsey numbers, dominating sets, large girth and chromatic number) Lec 38, 11/20 Fri, Sec 14.2-3: Symmetric Local Lemma & applications (Ramsey number, list coloring, Mutual Independence Principle). Random graph models, almost-always properties, connectedness for constant p, Markov's Inequality. Lec 39, 11/30 Mon, Sec 14.3: Second moment method, threshold functions for disappearance of isolated vertices and appearance of balanced graphs, comments on evolution of graphs, comments on connectivity/cliques/coloring of random graphs. Lec 40, 12/2 Wed, Sec 13.1: Latin squares (4-by-4 example, MOLS(n,k), upper bound, complete families, Moore-MacNeish construction). Block designs (examples, elementary constraints on parameters, Fisher's Inequality). Lec 41, 12/4 Fri, Sec 13.1: Symmetric designs (Bose), necessary conditions (example of Bruck-Chowla-Ryser), Hadamard matrices (restriction on order, relation to designs, relation to coding theory). Lec 42, 12/7 Mon, Sec 13.2: Projective planes (equivalence with (q^2+q+1,q+1,1)-designs, relation to Latin squares, polarity graph with application to extremal problems). Lec 43, 12/9 Wed, Sec 13.2-3: difference sets and multipliers, Steiner triple systems.
Triangle ABC has equilateral triangles drawn on its edges. Points P, Q and R are the centres of the equilateral triangles. What can you prove about the triangle PQR? Charlie likes tablecloths that use as many colours as possible, but insists that his tablecloths have some symmetry. Can you work out how many colours he needs for different tablecloth designs? Can you give the coordinates of the vertices of the fifth point in the pattern on this 3D grid? Can you find a way to turn a rectangle into a square? Use Excel to investigate the effect of translations around a number grid. Use an interactive Excel spreadsheet to investigate factors and multiples. Use Excel to investigate division. Explore the relationships between the process elements using an interactive spreadsheet. Use an Excel spreadsheet to explore long multiplication. A simple file for the Interactive whiteboard or PC screen, demonstrating equivalent fractions. Use Excel to explore multiplication of fractions. A group of interactive resources to support work on percentages at Key Stage 4. Use Excel to practise adding and subtracting fractions. A collection of resources to support work on Factors and Multiples at Secondary level. This set of resources for teachers offers interactive environments to support work on loci at Key Stage 4. Match pairs of cards so that they have equivalent ratios. The interactive diagram has two labelled points, A and B. It is designed to be used with the problem "Cushion Ball" There are thirteen axes of rotational symmetry of a unit cube. Describe them all. What is the average length of the parts of the axes of symmetry which lie inside the cube? Two circles of equal radius touch at P. One circle is fixed whilst the other moves, rolling without slipping, all the way round. How many times does the moving coin revolve before returning to P? Use an interactive Excel spreadsheet to explore number in this exciting game! An Excel spreadsheet with an investigation. A tool for generating random integers. A Java applet that takes you through the steps needed to solve a Diophantine equation of the form Px+Qy=1 using Euclid's algorithm. A red square and a blue square overlap so that the corner of the red square rests on the centre of the blue square. Show that, whatever the orientation of the red square, it covers a quarter of the. . . . Here is a chance to play a fractions version of the classic Countdown Game. This game challenges you to locate hidden triangles in The White Box by firing rays and observing where the rays exit the Box. Can you find a reliable strategy for choosing coordinates that will locate the robber in the minimum number of guesses? A simple spinner that is equally likely to land on Red or Black. Useful if tossing a coin, dropping it, and rummaging about on the floor have lost their appeal. Needs a modern browser; if IE then at. . . . Explore displacement/time and velocity/time graphs with this mouse motion sensor. Overlaying pentominoes can produce some effective patterns. Why not use LOGO to try out some of the ideas suggested here? Place a red counter in the top left corner of a 4x4 array, which is covered by 14 other smaller counters, leaving a gap in the bottom right hand corner (HOME). What is the smallest number of moves. . . . Help the bee to build a stack of blocks far enough to save his friend trapped in the tower. Show that for any triangle it is always possible to construct 3 touching circles with centres at the vertices.
Is it possible to construct touching circles centred at the vertices of any polygon? This set of resources for teachers offers interactive environments to support work on graphical interpretation at Key Stage 4. Can you put the 25 coloured tiles into the 5 x 5 square so that no column, no row and no diagonal line have tiles of the same colour in them? Show how this pentagonal tile can be used to tile the plane and describe the transformations which map this pentagon to its images in the tiling. Here is a solitaire type environment for you to experiment with. Which targets can you reach? Can you be the first to complete a row of three? Ask a friend to choose a number between 1 and 63. By identifying which of the six cards contains the number they are thinking of it is easy to tell them what the number is. Try entering different sets of numbers in the number pyramids. How does the total at the top change? Can you spot the similarities between this game and other games you know? The aim is to choose 3 numbers that total 15. To avoid losing think of another very well known game where the patterns of play are similar. This problem is about investigating whether it is possible to start at one vertex of a platonic solid and visit every other vertex once only returning to the vertex you started at. When number pyramids have a sequence on the bottom layer, some interesting patterns emerge... Triangular numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers? The aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of moves. What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles? Players take it in turns to choose a dot on the grid. The winner is the first to have four dots that can be joined to form a square. This is an interactivity in which you have to sort the steps in the completion of the square into the correct order to prove the formula for the solutions of quadratic equations. It's easy to work out the areas of most squares that we meet, but what if they were tilted? A circle rolls around the outside edge of a square so that its circumference always touches the edge of the square. Can you describe the locus of the centre of the circle?
The dual of a given linear program (LP) is another LP that is derived from the original (the primal) LP in the following schematic way:
- Each variable in the primal LP becomes a constraint in the dual LP;
- Each constraint in the primal LP becomes a variable in the dual LP;
- The objective direction is inversed – maximum in the primal becomes minimum in the dual and vice versa.

The weak duality theorem states that the objective value of the dual LP at any feasible solution is always a bound on the objective of the primal LP at any feasible solution (upper or lower bound, depending on whether it is a maximization or minimization problem). In fact, this bounding property holds for the optimal values of the dual and primal LPs. The strong duality theorem states that, moreover, if the primal has an optimal solution then the dual has an optimal solution too, and the two optima are equal. These theorems belong to a larger class of duality theorems in optimization. The strong duality theorem is one of the cases in which the duality gap (the gap between the optimum of the primal and the optimum of the dual) is 0.

Form of the dual LP

Suppose we have the linear program:

Maximize cTx subject to Ax ≤ b, x ≥ 0.

We would like to construct an upper bound on the solution. So we create a linear combination of the constraints, with positive coefficients, such that the coefficients of x in the constraints are at least cT. This linear combination gives us an upper bound on the objective. The variables y of the dual LP are the coefficients of this linear combination. The dual LP tries to find such coefficients that minimize the resulting upper bound. This gives the following LP:

Minimize bTy subject to ATy ≥ c, y ≥ 0

This LP is called the dual of the original LP.

Consider a factory that is planning its production of goods. Let x be its production schedule (make xi amount of good i), and let c be the list of market prices (a unit of good i can sell for ci). The constraints it has are x ≥ 0 (it cannot produce negative goods) and raw-material constraints. Let b be the raw material it has available, and let A be the matrix of material costs (producing one unit of good i requires aji units of raw material j). Then, the constrained revenue maximization is the primal LP:

Maximize cTx subject to Ax ≤ b, x ≥ 0.

Now consider another factory that has no raw material, and wishes to purchase the entire stock of raw material from the previous factory. It offers a price vector y (a unit of raw material j for yj). For the offer to be accepted, it should be the case that ATy ≥ c, since otherwise the factory could earn more cash by producing a certain good than by selling off the raw material used to produce it. It also should be y ≥ 0, since the factory would not sell any raw material at a negative price. Then, the second factory's optimization problem is the dual LP:

Minimize bTy subject to ATy ≥ c, y ≥ 0

The weak duality theorem states that the duality gap between the two LP problems is at least zero. Economically, it means that if the first factory is given an offer to buy its entire stock of raw material, at a per-item price of y, such that ATy ≥ c, y ≥ 0, then it should take the offer. It will make at least as much revenue as it could producing finished goods. The strong duality theorem further states that the duality gap is zero.
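To make the primal/dual pair concrete, here is a small numerical sanity check using scipy.optimize.linprog. The 2-good, 2-material factory data are invented for illustration; only the shapes of the two LPs come from the text.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])            # market prices of the two goods
A = np.array([[1.0, 2.0],           # raw material needed per unit of each good
              [3.0, 1.0]])
b = np.array([10.0, 15.0])          # raw material in stock

# Primal: maximize c^T x  s.t.  A x <= b, x >= 0  (linprog minimizes, so negate c).
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
# Dual: minimize b^T y  s.t.  A^T y >= c, y >= 0  (flip signs for linprog's <=).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

print(-primal.fun, dual.fun)   # both 27.0: equal optima, as strong duality promises
```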
With strong duality, the dual solution y is, economically speaking, the "equilibrium price" (see shadow price) for the raw material that a factory with production matrix A and raw material stock b would accept for raw material, given the market price c for finished goods. (Note that the optimal y may not be unique, so the equilibrium price may not be fully determined by A, b, and c.) To see why, suppose the raw material prices y are such that, for some good i, the dual constraint fails, that is, (ATy)i < ci; then the factory would purchase more raw material to produce more of good i, since the prices are "too low". Conversely, if the raw material prices satisfy ATy ≥ c, but y does not minimize bTy, then the factory would make more money by selling its raw material than by producing goods, since the prices are "too high". At the equilibrium price y, the factory cannot increase its profit by purchasing or selling off raw material. The duality theorem has a physical interpretation too.

Constructing the dual LP

In general, given a primal LP, the following algorithm can be used to construct its dual LP. The primal LP is defined by:
- A set of n variables: x1, ..., xn.
- For each variable xi, a sign constraint – it should be either non-negative (xi ≥ 0), or non-positive (xi ≤ 0), or unconstrained.
- An objective function: maximize c1x1 + ... + cnxn.
- A list of m constraints. Each constraint j is: aj1x1 + ... + ajnxn ≤ bj, where the symbol before bj can be one of ≤ or ≥ or =.

The dual LP is constructed as follows.
- Each primal constraint becomes a dual variable. So there are m variables: y1, ..., ym.
- The sign constraint of each dual variable is "opposite" to the sign of its primal constraint. So "≤ bj" becomes yj ≥ 0, "≥ bj" becomes yj ≤ 0, and "= bj" becomes an unconstrained yj.
- The dual objective function is: minimize b1y1 + ... + bmym.
- Each primal variable becomes a dual constraint. So there are n constraints. The coefficient of a dual variable in the dual constraint is the coefficient of its primal variable in its primal constraint. So each constraint i is: a1iy1 + ... + amiym ≥ ci, where the symbol before ci is similar to the sign constraint on variable xi in the primal LP. So xi ≥ 0 becomes "≥ ci", xi ≤ 0 becomes "≤ ci", and an unconstrained xi becomes "= ci".

From this algorithm, it is easy to see that the dual of the dual is the primal. If all constraints have the same sign, it is possible to present the above recipe in a shorter way using matrices and vectors. The following table shows the relation between various kinds of primals and duals.

|Maximize cTx subject to Ax ≤ b, x ≥ 0||Minimize bTy subject to ATy ≥ c, y ≥ 0||This is called a "symmetric" dual problem|
|Maximize cTx subject to Ax ≤ b||Minimize bTy subject to ATy = c, y ≥ 0||This is called an "asymmetric" dual problem|
|Maximize cTx subject to Ax = b, x ≥ 0||Minimize bTy subject to ATy ≥ c|

The duality theorems

Below, suppose the primal LP is "maximize cTx subject to [constraints]" and the dual LP is "minimize bTy subject to [constraints]". The weak duality theorem says that, for each feasible solution x of the primal and each feasible solution y of the dual: cTx ≤ bTy. In other words, the objective value in each feasible solution of the dual is an upper bound on the objective value of the primal, and the objective value in each feasible solution of the primal is a lower bound on the objective value of the dual.
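The sign-tracking recipe is mechanical enough to write down directly. Below is a sketch of my own (not a standard library routine) that applies it to a maximization primal; constraint senses and variable signs are passed as strings.

```python
def dualize(c, A, b, senses, var_signs):
    """Apply the recipe above to a maximization primal.
    senses[j] in {'<=', '>=', '='}; var_signs[i] in {'>=0', '<=0', 'free'}.
    Returns the minimization dual: (objective, constraint rows of A^T,
    right-hand sides, dual constraint senses, dual variable signs)."""
    dual_sign = {'<=': '>=0', '>=': '<=0', '=': 'free'}   # primal sense -> dual var sign
    dual_sense = {'>=0': '>=', '<=0': '<=', 'free': '='}  # primal var sign -> dual sense
    m, n = len(b), len(c)
    dual_rows = [[A[j][i] for j in range(m)] for i in range(n)]   # rows of A^T
    return (b, dual_rows, c,
            [dual_sense[s] for s in var_signs],
            [dual_sign[s] for s in senses])

# Symmetric example: maximize 3x1 + 5x2, both constraints '<=', x >= 0.
obj, rows, rhs, con_senses, var_signs = dualize(
    [3, 5], [[1, 2], [3, 1]], [10, 15], ['<=', '<='], ['>=0', '>=0'])
print(obj, rows, rhs, con_senses, var_signs)
# -> minimize 10y1 + 15y2  s.t.  y1 + 3y2 >= 3,  2y1 + y2 >= 5,  y >= 0
```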
Here is a proof for the primal LP "Maximize cTx subject to Ax ≤ b, x ≥ 0":
- cTx = xTc [this is just a scalar product of the two vectors]
- ≤ xT(ATy) [since ATy ≥ c by the dual constraints, and x ≥ 0]
- = (xTAT)y [by associativity]
- = (Ax)Ty [by properties of transpose]
- ≤ bTy [since Ax ≤ b by the primal constraints, and y ≥ 0]

Weak duality implies: maxx cTx ≤ miny bTy

In particular, if the primal is unbounded (from above) then the dual has no feasible solution, and if the dual is unbounded (from below) then the primal has no feasible solution.

The strong duality theorem says that if one of the two problems has an optimal solution, so does the other one, and that the bounds given by the weak duality theorem are tight, i.e.: maxx cTx = miny bTy

The strong duality theorem is harder to prove; the proofs usually use the weak duality theorem as a sub-routine. One proof uses the simplex algorithm and relies on the proof that, with a suitable pivot rule, it provides a correct solution. The proof establishes that, once the simplex algorithm finishes with a solution to the primal LP, it is possible to read off from the final tableau a solution to the dual LP. So, by running the simplex algorithm, we obtain solutions to both the primal and the dual simultaneously.

1. The weak duality theorem implies that finding a single feasible solution is as hard as finding an optimal feasible solution. Suppose we have an oracle that, given an LP, finds an arbitrary feasible solution (if one exists). Given the LP "Maximize cTx subject to Ax ≤ b, x ≥ 0", we can construct another LP by combining this LP with its dual. The combined LP has both x and y as variables and no objective; it asks for a pair (x, y):

subject to Ax ≤ b, ATy ≥ c, cTx ≥ bTy, x ≥ 0, y ≥ 0

If the combined LP has a feasible solution (x, y), then by weak duality, cTx = bTy. So x must be a maximal solution of the primal LP and y must be a minimal solution of the dual LP. If the combined LP has no feasible solution, then the primal LP has no optimal solution: it is either infeasible or unbounded.

2. The strong duality theorem provides a "good characterization" of the optimal value of an LP in that it allows us to easily prove that some value t is the optimum of some LP. The proof proceeds in two steps:
- Show a feasible solution to the primal LP with value t; this proves that the optimum is at least t.
- Show a feasible solution to the dual LP with value t; this proves that the optimum is at most t.

Consider the primal LP, with two variables and one constraint:

Maximize 3x1 + 4x2 subject to 5x1 + 6x2 = 7, x1, x2 ≥ 0

Applying the recipe above gives the following dual LP, with one variable and two constraints:

Minimize 7y1 subject to 5y1 ≥ 3 and 6y1 ≥ 4 (y1 unconstrained in sign, since the primal constraint is an equality)

It is easy to see that the maximum of the primal LP is attained when x1 is minimized to its lower bound (0) and x2 is maximized to its upper bound under the constraint (7/6). The maximum is 4 · 7/6 = 14/3. Similarly, the minimum of the dual LP is attained when y1 is minimized to its lower bound under the constraints: the first constraint gives a lower bound of 3/5 while the second constraint gives a stricter lower bound of 4/6, so the actual lower bound is 4/6 and the minimum is 7 · 4/6 = 14/3. In accordance with the strong duality theorem, the maximum of the primal equals the minimum of the dual.

We use this example to illustrate the proof of the weak duality theorem. Suppose that, in the primal LP, we want to get an upper bound on the objective 3x1 + 4x2. We can use the constraint multiplied by some coefficient, say y1. For any y1 we get: y1 · (5x1 + 6x2) = 7y1. Now, if 5y1 ≥ 3 and 6y1 ≥ 4, then y1 · (5x1 + 6x2) ≥ 3x1 + 4x2, so 7y1 ≥ 3x1 + 4x2. Hence, the objective of the dual LP is an upper bound on the objective of the primal LP.
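The small example above can be checked numerically. A sketch assuming SciPy (linprog minimizes, so the primal objective is negated; the free dual variable gets bounds (None, None)):

```python
from scipy.optimize import linprog

# primal: max 3*x1 + 4*x2  s.t.  5*x1 + 6*x2 = 7,  x1, x2 >= 0
primal = linprog([-3, -4], A_eq=[[5, 6]], b_eq=[7], bounds=[(0, None)] * 2)

# dual:   min 7*y1  s.t.  5*y1 >= 3, 6*y1 >= 4   (y1 free)
# ">=" constraints are rewritten as "<=" by negation for linprog.
dual = linprog([7], A_ub=[[-5], [-6]], b_ub=[-3, -4], bounds=[(None, None)])

print(-primal.fun, dual.fun)  # both print 14/3 ~ 4.6667
```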
Consider a farmer who may grow wheat and barley with a set provision of L land, F fertilizer and P pesticide. To grow one unit of wheat, one unit of land, F1 units of fertilizer and P1 units of pesticide must be used. Similarly, to grow one unit of barley, one unit of land, F2 units of fertilizer and P2 units of pesticide must be used. The primal problem would be the farmer deciding how much wheat (x1) and barley (x2) to grow if their sell prices are S1 and S2 per unit.

|Maximize:|S1·x1 + S2·x2|(maximize the revenue from producing wheat and barley)|
|subject to:|x1 + x2 ≤ L|(cannot use more land than available)|
| |F1·x1 + F2·x2 ≤ F|(cannot use more fertilizer than available)|
| |P1·x1 + P2·x2 ≤ P|(cannot use more pesticide than available)|
| |x1, x2 ≥ 0|(cannot produce negative quantities of wheat or barley).|

In matrix form this becomes: Maximize cTx subject to Ax ≤ b, x ≥ 0, with x = (x1, x2)T, c = (S1, S2)T, b = (L, F, P)T, and A the 3×2 matrix with rows (1, 1), (F1, F2) and (P1, P2).

For the dual problem, assume that unit prices y = (yL, yF, yP) for each of these means of production (inputs) are set by a planning board. The planning board's job is to minimize the total cost of procuring the set amounts of inputs while providing the farmer with a floor on the unit price of each of his crops (outputs), S1 for wheat and S2 for barley. This corresponds to the following LP:

|Minimize:|L·yL + F·yF + P·yP|(minimize the total cost of the means of production as the "objective function")|
|subject to:|yL + F1·yF + P1·yP ≥ S1|(the farmer must receive no less than S1 for his wheat)|
| |yL + F2·yF + P2·yP ≥ S2|(the farmer must receive no less than S2 for his barley)|
| |yL, yF, yP ≥ 0|(prices cannot be negative).|

In matrix form this becomes: Minimize bTy subject to ATy ≥ c, y ≥ 0.

The primal problem deals with physical quantities. With all inputs available in limited quantities, and assuming the unit prices of all outputs are known, what quantities of outputs should be produced so as to maximize total revenue? The dual problem deals with economic values. With floor guarantees on all output unit prices, and assuming the available quantity of all inputs is known, what input unit pricing scheme should be set so as to minimize total expenditure?

To each variable in the primal space corresponds an inequality to satisfy in the dual space, both indexed by output type. To each inequality to satisfy in the primal space corresponds a variable in the dual space, both indexed by input type. The coefficients that bound the inequalities in the primal space are used to compute the objective in the dual space (input quantities in this example). The coefficients used to compute the objective in the primal space bound the inequalities in the dual space (output unit prices in this example). Both the primal and the dual problems make use of the same matrix. In the primal space, this matrix expresses the consumption of physical quantities of inputs necessary to produce set quantities of outputs. In the dual space, it expresses the creation of the economic values associated with the outputs from set input unit prices.

Since each inequality can be replaced by an equality and a slack variable, each primal variable corresponds to a dual slack variable, and each dual variable corresponds to a primal slack variable. This relation allows us to speak about complementary slackness.

An LP can also be unbounded or infeasible. Duality theory tells us that:
- If the primal is unbounded, then the dual is infeasible;
- If the dual is unbounded, then the primal is infeasible.

However, it is possible for both the dual and the primal to be infeasible. For instance, the program "maximize 2x1 - x2 subject to x1 - x2 ≤ 1, -x1 + x2 ≤ -2, x1, x2 ≥ 0" is infeasible, since its two constraints require x1 - x2 to be both at most 1 and at least 2; its dual, "minimize y1 - 2y2 subject to y1 - y2 ≥ 2, -y1 + y2 ≥ -1, y1, y2 ≥ 0", is infeasible for the same reason.

The max-flow min-cut theorem is a special case of the strong duality theorem: flow-maximization is the primal LP, and cut-minimization is the dual LP. See Max-flow min-cut theorem#Linear program formulation.
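A sketch of the farmer's problem with made-up numbers (the prices, per-unit input requirements, and stocks below are hypothetical, chosen only so that both LPs are feasible and bounded):

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([30.0, 25.0])            # sell prices of wheat, barley (hypothetical)
A = np.array([[1.0, 1.0],             # land per unit of wheat/barley
              [2.0, 1.5],             # fertilizer per unit
              [1.0, 0.5]])            # pesticide per unit
r = np.array([100.0, 180.0, 80.0])    # available land L, fertilizer F, pesticide P

farmer = linprog(-S, A_ub=A, b_ub=r, bounds=[(0, None)] * 2)    # max revenue
board = linprog(r, A_ub=-A.T, b_ub=-S, bounds=[(0, None)] * 3)  # min input cost

print("max revenue:", -farmer.fun)
print("min cost:   ", board.fun)      # equal, by strong duality
print("input prices y:", board.x)     # the planning board's unit prices
```

The dual solution printed last is exactly the vector of shadow prices of land, fertilizer and pesticide discussed above.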
Sometimes, one may find it more intuitive to obtain the dual program without looking at the program matrix. Consider the following linear program:

We have m + n conditions and all variables are non-negative. We shall define m + n dual variables: yj and si. We get:

Since this is a minimization problem, we would like to obtain a dual program that is a lower bound of the primal. In other words, we would like the sum of all the right-hand sides of the constraints to be maximal under the condition that, for each primal variable, the sum of its coefficients does not exceed its coefficient in the objective function. For example, x1 appears in n + 1 constraints. If we sum its constraints' coefficients we get a1,1y1 + a1,2y2 + ... + a1,nyn + f1s1. This sum must be at most c1. As a result, we get:

Note that we assume in our calculation steps that the program is in standard form. However, any linear program may be transformed to standard form, so this is not a limiting factor.

- Gärtner, Bernd; Matoušek, Jiří (2006). Understanding and Using Linear Programming. Berlin: Springer. Pages 81–104. ISBN 3-540-30697-8.
- Sakarovitch, Michel (1983). "Complements on Duality: Economic Interpretation of Dual Variables". Springer Texts in Electrical Engineering. New York: Springer. pp. 142–155. ISBN 978-0-387-90829-8.
- Dorfman, Robert; Samuelson, Paul A.; Solow, Robert M. (1987). Linear Programming and Economic Analysis. New York: Dover Publications. ISBN 0-486-65491-5. OCLC 16577541.
- Lovász, László; Plummer, M. D. (1986). Matching Theory. Annals of Discrete Mathematics, vol. 29. North-Holland. ISBN 0-444-87916-1. MR 0859549.
- Ahmadi, A. A. (2016). "Lecture 6: Linear Programming and Matching" (PDF). Princeton University.
Modeling and optimization under a fuzzy environment are called fuzzy modeling and fuzzy optimization. Fuzzy multi-objective linear programming is one of the most frequently applied fuzzy decision-making techniques. Although it has been investigated and extended for decades by many researchers, and from various points of view, it is still useful to develop new approaches that better fit real-world problems within the framework of fuzzy multi-objective linear programming.

When formulating a multi-objective programming problem that closely describes and represents the real decision situation, various factors of the real system should be reflected in the description of the objective functions and the constraints. Naturally, these objective functions and constraints involve many parameters, whose possible values may be assigned by experts. In the traditional approaches, such parameters are fixed at some values in an experimental or subjective manner, through the expert's understanding of their nature.

Unfortunately, real-world situations are often not deterministic. There exist various types of uncertainty in social, industrial and economic systems, such as randomness of occurrence of events, imprecision and ambiguity of system data, and linguistic vagueness. These arise in many ways, including errors of measurement, deficiencies in historical and statistical data, insufficient theory, incomplete knowledge representation, and the subjectivity and preference of human judgment. As pointed out by Zimmermann (1978), these kinds of uncertainty can be categorized as stochastic uncertainty and fuzziness. Stochastic uncertainty relates to the uncertainty of occurrences of phenomena or events: the descriptions of information are crisp and well defined, but they vary in their frequency of occurrence. Systems with this type of uncertainty are called stochastic systems, and they can be treated by stochastic optimization techniques using probability theory. In other situations, however, a probability distribution is not appropriate, especially when the information is vague: the uncertainty may be related to human language and behavior, or to imprecise and ambiguous system data. This type of uncertainty is called fuzziness. It cannot be formulated and solved effectively by traditional mathematics-based optimization techniques or probability-based stochastic optimization approaches.

Multi-objective Linear Programming (MOLP) Problem: The MOLP problem is an active area of research, since most real-life problems involve a set of conflicting objectives. A mathematical model of the MOLP problem can be written as follows:

max Z(x) = Cx subject to Ax ≤ b, x ≥ 0, (1)

where x is an n-dimensional vector of decision variables, Z1(x), ..., Zk(x) are k distinct linear objective functions of the decision vector x (the rows of the k×n objective matrix C), A is an m×n constraint matrix, and b is an m-dimensional constant vector.

Definition 3.1 (Complete Optimal Solution). The point x* is said to be a complete optimal solution of the MOLP problem (1) if Zi(x*) ≥ Zi(x) for all feasible x and all i = 1, ..., k.

In general, when the objective functions conflict with one another, a complete optimal solution may not exist; hence a new concept of optimality, called Pareto optimality, is considered.

Definition 3.2 (Pareto Optimal Solution). The point x* is said to be a Pareto optimal solution if there does not exist a feasible x such that Zi(x) ≥ Zi(x*) for all i, with Zj(x) > Zj(x*) for at least one j.
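A standard way to obtain one Pareto optimal solution of problem (1) is to maximize a positively weighted sum of the objectives: any optimum of the scalarized LP with strictly positive weights is Pareto optimal. A minimal sketch with hypothetical data (assuming SciPy):

```python
import numpy as np
from scipy.optimize import linprog

C = np.array([[3.0, 1.0],     # two conflicting linear objectives (hypothetical)
              [1.0, 4.0]])
A = np.array([[1.0, 1.0]])    # single resource constraint x1 + x2 <= 10
b = np.array([10.0])

w = np.array([0.5, 0.5])      # strictly positive weights
res = linprog(-(w @ C), A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print("Pareto optimal point:", res.x, "objective values:", C @ res.x)
```

Sweeping w over the positive simplex traces out (a subset of) the Pareto frontier.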
In the model (1), all coefficients of A, b and C are crisp numbers. However, in real-world decision problems a decision maker does not always know the exact values of the coefficients taking part in the problem, and the vagueness in the coefficients may not be of a probabilistic type. In this situation, the decision maker can model the inexactness by means of fuzzy parameters. In this section we consider a FMOLP problem with fuzzy technological coefficients and fuzzy resources. A mathematical model of the FMOLP problem can be written as

max Z(x) = Cx subject to Ãx ≤ b̃, x ≥ 0, (2)

where x is an n-dimensional vector of decision variables, Z1(x), ..., Zk(x) are k distinct linear objective functions of the decision vector x with n-dimensional cost factor vectors, Ã is an m×n fuzzy constraint matrix, and b̃ is an m-dimensional constant fuzzy vector (the fuzzy resources), each entry specified by a membership function.

Solution Methodology and Algorithm: We first fuzzify the objective function in order to defuzzify the problem (2). This is done by calculating the lower and upper bounds of the optimal values, which are obtained by solving standard linear programming problems. The objective function takes values between these bounds, while the technological coefficients and the right-hand-side numbers take values between their respective lower and upper bounds; the fuzzy set of the optimal value is then defined by a membership function over this range, and the fuzzy set of each constraint is defined analogously. By using the definition of the fuzzy decision proposed by Bellman and Zadeh, the optimal fuzzy decision is a solution of the problem of maximizing the minimum membership value λ. Consequently, the problem (2) is reduced to the following optimization problem (11). Notice that problem (11) contains cross-product terms and is therefore not convex; its solution requires a special approach adopted for solving general non-convex optimization problems.

The Algorithm of the Fuzzy Decisive Set Method: This method is based on the idea that, for a fixed value of λ, problem (11) is converted into a linear programming problem. Obtaining the optimal solution λ* is therefore equivalent to determining the maximum value of λ for which the feasible set is nonempty. The algorithm for problem (11) is presented below.

Set λ = 1 and test whether a feasible solution satisfying the constraints of problem (11) exists, using phase one of the simplex method. If a feasible solution exists, λ* = 1 and the algorithm terminates. Otherwise, set λL = 0 and λR = 1 and go to the next step. For λ = (λL + λR)/2, update the bounds using the bisection method as follows: λL = λ if the feasible set is nonempty for λ; λR = λ if the feasible set is empty for λ. For each such λ, test whether a feasible solution of problem (11) exists using phase one of the simplex method, and thereby determine the maximum value λ* satisfying the constraints of problem (11).

Consider the following FMOLPP (12). For the defuzzification of problem (12), we first fuzzify the objective function, by calculating the lower and upper bounds of the optimal values. These bounds are obtained by solving the corresponding standard linear programming problems. By using these optimal values, the problem (12) can be reduced to the following non-linear programming problem (17). Let us solve problem (17) by using the fuzzy decisive set method.
For λ = 1, the problem (17) can be written as a linear feasibility problem. Since the feasible set is empty, by taking λL = 0 and λR = 1, the new value λ = 0.5 is tried.

For λ = 0.5, the problem (17) can be written in the same way. Since the feasible set is empty, by taking λL = 0 and λR = 0.5, the new value λ = 0.25 is tried.

For λ = 0.25, the problem (17) can be written in the same way. Since the feasible set is empty, by taking λL = 0 and λR = 0.25, the new value λ = 0.125 is tried.

For λ = 0.125, the problem (17) can be written in the same way. Since the feasible set is nonempty, by taking λL = 0.125 and λR = 0.25, the new value λ = 0.1875 is tried.

For λ = 0.1875, the problem (17) can again be written as a linear feasibility problem, and the bisection continues in this manner until λ* is determined to the desired accuracy.
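The iterations above are plain bisection on λ with a feasibility test at each step. A minimal sketch, where feasible(lam) is a placeholder standing in for phase one of the simplex method applied to the λ-parametrized constraints of problem (11):

```python
def max_feasible_lambda(feasible, tol=1e-6):
    """Fuzzy decisive set method: largest lambda in [0, 1] whose
    parametrized constraint set is nonempty (assumes feasible(0) holds)."""
    if feasible(1.0):
        return 1.0
    lo, hi = 0.0, 1.0           # invariant: feasible(lo), not feasible(hi)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if feasible(lam):
            lo = lam            # feasible: raise the lower bound
        else:
            hi = lam            # infeasible: lower the upper bound
    return lo

# Starting from [0, 1], this generates the trial values 0.5, 0.25,
# 0.125, 0.1875, ... exactly as in the worked example above.
```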
Since commodities generally become inferior after a certain level of income, the income consumption curve has a positive slope up to some point. Income is shown on the Y-axis and the quantity demanded for the selected good or service is shown on the X-axis. An Engel curve plots the optimal amount of one good x1 as income increases and prices remain constant. Alternatively, Engel curves can also describe how real expenditure varies with household income.

Budget share Engel curves describe how the proportion of household income spent on a good varies with income. For an inferior good, as the consumer gains more income, they buy less of the good, because they are able to purchase better goods instead. Empirical Engel curves are close to linear for some goods, and highly nonlinear for others.

For goods whose demand is generated from certain utility functions (Cobb–Douglas utility, for example), the Engel curve is a straight line. However, the curve can also be obtained for a group of consumers. For a normal good, as income increases, the quantity demanded increases. An income offer curve plots the optimal bundle of goods chosen as income increases and the prices of both goods remain constant. For example, some success has been achieved in understanding how social status concerns have influenced household expenditure on highly visible goods. As a result, many scholars acknowledge that influences other than current prices and current total expenditure must be systematically modeled if even the broad pattern of demand is to be explained in a theoretically coherent and empirically robust way.

In this example, X1 is a normal good: its income elasticity is greater than zero. Amongst normal goods, there are two possibilities: necessities and luxuries. There are two varieties of Engel curves: quantity curves and budget share curves. The consumer increases the demand for some goods (luxury items) more than proportionately as his money income rises. Lastly, according to Engel, there are some items for which the expenditure of an average family would increase proportionately with rises in money income. An Engel curve is shown below. Although the Engel curve remains upward sloping in both cases, it bends toward the Y-axis for necessities and towards the X-axis for luxuries. For example, Gorman (1981) proved that a system of Engel curves must have a matrix of coefficients with rank three or less in order to be consistent with utility maximization. For example, necessities like bread are often inferior goods.

Accounting for the shape of Engel curves

No established theory exists that can explain the observed shape of Engel curves and their associated income elasticity values. As household income rises, some motivations become more prominent in household expenditure, as the more basic wants that dominate consumption patterns at low income levels, such as hunger, eventually become satiated at higher income levels. For normal goods, the Engel curve has a positive gradient. In contrast, if X1 were an inferior good, consumption of it would decline as income increases: an inferior good's income elasticity is less than zero, and its Engel curve has a negative gradient. Engel curves may also depend on demographic variables and other consumer characteristics.
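As a concrete illustration of the linear case mentioned above, assume Cobb–Douglas utility u(x1, x2) = x1^a · x2^(1-a); then the demand for good 1 is x1 = a·m/p1, so the Engel curve is a straight line through the origin and the budget share is constant in income:

```python
# Engel curve under Cobb-Douglas utility (a and p1 are hypothetical values).
a, p1 = 0.3, 2.0
for m in [100, 200, 400, 800]:            # income levels
    x1 = a * m / p1                       # optimal quantity of good 1
    print(f"m={m:4d}  x1={x1:6.1f}  budget share={p1 * x1 / m:.2f}")
```

Every line prints the same budget share (0.30), which is exactly the constant-share behaviour that a linear Engel curve through the origin implies.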
Thus the difference between an income offer curve with x2 on the y-axis and an Engel curve with income m on the y-axis is a factor of p2. Luxury goods are a subset of normal goods, with income elasticities greater than +1. Any good or service could be an inferior one under certain circumstances (Klenow 2001b, "Quantifying Quality Growth", The American Economic Review, 91(4), 1006–1030). Engel curves for normal goods slope upwards: the flatter the slope, the more luxurious the good, and the greater the income elasticity. That is, the Engel curve is (x(w), y(w)), where w is wealth and x and y are the amounts of each of the goods purchased at those levels of wealth; the locus of such points is the Engel curve, the mapping from wealth into the space of the two goods. This relationship can similarly be shown in the case of inferior goods, where the Engel curve would be downward sloping. Engel curves are also of great relevance in the measurement of inflation and in tax policy.

Problems

Low explanatory power is a well-known problem in the estimation of Engel curves: as income rises, the difference between actual observations and the estimated expenditure level tends to increase dramatically. This curve would give the expenditure on the good of an average family belonging to a particular income class. Many Engel curves feature saturation properties, in that their slope tends to diminish at high income levels, which suggests that there exists an absolute limit on how much expenditure on a good will rise as household income increases. This saturation property has been linked to slowdowns in the growth of demand for some sectors of the economy, causing major changes in an economy's sectoral composition to take place. Some Engel curves rise (with positive slope) initially but bend backward beyond a point; the Engel curve for such a good will be upward sloping and convex downwards (see "Quadratic Engel Curves and Consumer Demand").

In order to be consistent with the standard model of utility maximization, Engel curves must possess certain properties. Ernst Engel himself argued that households possessed a hierarchy of wants that determined the shape of Engel curves.

Engel curves

Engel curves, named after the 19th-century German statistician Ernst Engel (1821–1896), illustrate the relationship between consumer demand and household income. Engel was the first to investigate this relationship between goods expenditure and income systematically, in 1857. In microeconomics, an Engel curve describes how household expenditure on a particular good or service varies with household income. The best-known single result from his article is Engel's law, which states that the poorer a family is, the larger the budget share it spends on nourishment.
If dark matter is thermally decoupled from the visible sector, the observed relic density can potentially be obtained via freeze-in production of dark matter. Typically in such models it is assumed that the dark matter is connected to the thermal bath through feeble renormalisable interactions. Here, rather, we consider the case in which the hidden and visible sectors are coupled only via non-renormalisable operators. This is arguably a more generic realisation of the dark matter freeze-in scenario, as it does not require the introduction of diminutive renormalisable couplings. We examine general aspects of freeze-in via non-renormalisable operators in a number of toy models and present several motivated implementations in the context of Beyond the Standard Model (BSM) physics. Specifically, we study models related to the Peccei-Quinn mechanism and Z′ portals.

225 Nieuwland Science Hall, Notre Dame, IN 46556. 22nd October 2014.

In the freeze-out paradigm of dark matter (DM), such as the much studied WIMP scenario, the DM is initially in thermal equilibrium and its abundance evolves with its equilibrium distribution until it decouples from the thermal bath. After decoupling the DM comoving number density is constant and (for appropriate parameter values) can give the observed relic density. Models of freeze-in DM [Hall:2009bx; Cheung:2010; Blennow:2013jba; Kolda:2014ppa; axion; FI; Asaka; 3/2; McDonald:2001vt; McDonald:2008ua; Harling:2008px; Mambrini] provide a very different picture of the evolution of the DM abundance. In this setting it is supposed that the DM number density is initially negligible, but over time an abundance suitable to match the relic density is produced due to interactions in the thermal bath involving a suppressed portal operator. This is illustrated in Fig. 1. For the DM abundance to be initially negligible, and subsequently set by the freeze-in mechanism (rather than freeze-out), the hidden sector must be thermally decoupled from the visible sector bath at all times, which implies that the portal operators must be extremely small. For instance, for TeV scale DM produced via scattering of bath states involving a renormalisable portal interaction, the coupling dressing this operator must typically be minuscule in order to avoid equilibration of the DM with the visible sector [Cheung:2010]. Such DM states are sometimes referred to as feebly interacting massive particles, or FIMPs.

Freeze-in, as a general mechanism for DM production, was proposed only recently [Hall:2009bx] (footnote 1: This framework builds upon earlier specific realisations, most notably the production of right-handed neutrinos [Asaka], axinos [axion], and gravitinos [3/2]; see also [McDonald:2001vt; McDonald:2008ua; Harling:2008px].) and thus many important aspects remain to be studied. In particular, a huge class of models has been largely neglected, and the purpose of this paper is to rectify this. Freeze-in using renormalisable interactions has been considered in some detail [Hall:2009bx; Cheung:2010; FI]; here instead we examine the alternative possibility, that freeze-in production proceeds via non-renormalisable operators. A suitable DM abundance can potentially be generated by freeze-in via such effective operators, which we refer to as UltraViolet (UV) freeze-in, and in this case the DM abundance depends sensitively on the reheat temperature.
Conversely, we use InfraRed (IR) freeze-in to refer to the class of models in which the sectors are connected via renormalisable operators, in which case the DM abundance is set by IR physics and is independent of the reheat temperature. The different thermal histories associated to these DM frameworks are illustrated in Fig. 1. The two basic premises of the general freeze-in picture are that:
- the hidden and visible sectors are thermally disconnected;
- the inflaton decays preferentially to the visible sector, not reheating the hidden sector.

Consequently, it is a model independent statement that, due to the out-of-equilibrium dynamics, DM production will proceed through freeze-in via any non-renormalisable operator which is not forbidden by symmetries. Further, the expectation from UV completions of the SM is that distinct sectors of the low energy theory are generically connected by UV physics. On the other hand, IR freeze-in relies on a rather special construction in which the (renormalisable) portal operators have diminutive couplings, whereas the naïve expectation is that dimensionless parameters should be near unity. Whilst such feeble couplings are not inconceivable (the electron Yukawa is one example), such decoupling is readily achieved if the visible sector and hidden sector are only connected via high dimension operators.

The UV freeze-in scenario bears some resemblance to models of non-thermal DM [Chung:1998zb]. The two frameworks both require the DM to be thermally decoupled from the visible sector, they both rely on particular realisations of inflation, and both lead to a DM abundance which is dependent on the reheat temperature. However, there are also important distinctions between these frameworks. In non-thermal DM, the DM has sufficiently small couplings with the visible sector that energy exchange between the sectors is negligible and the DM relic density is set primarily by inflaton decay. In contrast, in UV freeze-in the DM is dominantly populated by energy transfer from the visible sector to the hidden sector.

In this paper we examine a range of motivated operators for UV freeze-in and discuss potential connections with other aspects of high scale physics. The paper is structured as follows: In Sect. 2 we develop the physics behind UV freeze-in using a number of toy models which exemplify several interesting features. In particular, we discuss high dimension operators with many body final states. Further, we examine examples in which a field involved in the portal operator develops a vacuum expectation value (VEV). We consider the constraints which arise from avoiding sector equilibration in Sect. 3 and find that this leads to bounds on the parameter space, but that large classes of viable models can be constructed. Subsequently, in Sect. 4 we propose a number of simple models, motivated by beyond the Standard Model (BSM) physics, which realise the UV freeze-in picture. Specifically, we consider possible connections with axion models and Z′ portals. We also comment on the prospect of identifying the scale of UV physics given the DM mass, portal operator and the magnitude of the reheat temperature. In Sect. 5 we provide a brief summary, alongside our closing remarks.

2 General possibilities for UV freeze-in

The possibility of freeze-in via non-renormalisable operators has been briefly discussed in [Hall:2009bx; Blennow:2013jba; Harling:2008px; Kolda:2014ppa; Mambrini].
One of the distinguishing features of UV freeze-in is that DM production is dominated by high temperatures, and so the abundance is sensitive to the reheat temperature T_RH. Whilst this possibility has previously been remarked upon as less aesthetic, due to the dependence on the unknown value of T_RH, it is nevertheless very well motivated, as it is a generic expectation that sectors which are decoupled at low energy may communicate via high dimension operators. In this section we examine some general classes of toy models in which the hidden and visible sectors are connected only by effective contact operators.

2.1 Dimension-five operators with two and three-body final states

We shall start by discussing the simple toy model of UV freeze-in outlined by Hall, March-Russell & West [Hall:2009bx] (see also [McDonald:2008ua] for a similar model in the context of supersymmetry); this will provide a basis from which to examine more realistic scenarios in subsequent sections. In this toy model a scalar DM state χ freezes-in due to a dimension five operator, schematically of the form (1/Λ) φ χ f̄f, (1) (footnote 2: As χ appears linearly in the operator, it cannot be stabilised by a simple Z2 symmetry. We examine the issue of stability in Sect. 4 for specific models, but note here that an enlarged symmetry could accommodate this operator and stabilise χ. Alternatively, χ could be an unstable hidden sector state which decays to the DM. At present we use this example as a simple toy model to illustrate freeze-in via non-renormalisable operators.) where φ is a boson in the thermal bath, f are bath fermions, and Λ is the mass scale at which the effective operator is generated. Throughout this section we shall use χ and ψ to denote scalar and fermion hidden sector states, respectively, and use φ and f to indicate scalars and fermions in the thermal bath.

Let us suppose, for the time being, that φ does not develop a VEV. An abundance of χ can freeze-in via scattering processes involving the bath states, such as f f̄ → φχ. The change in the χ number density can be described by a Boltzmann equation (see e.g. [Kolb:1990vq]) involving the distribution function of each state. We shall assume that the various states are in thermal equilibrium and thus Maxwell-Boltzmann distributed, with the visible sector states distributed with respect to the temperature T of the thermal bath, whereas the DM is part of a cold hidden sector at a much lower initial temperature. Correspondingly, the initial number density of χ is negligible. Therefore, the term in the Boltzmann equation proportional to the χ distribution (the back-reaction) can be neglected. This is the standard picture of the freeze-in scenario, which we shall adopt throughout. Further, we assume here that the portal operator is always sufficiently feeble that it does not bring the hidden sector into thermal equilibrium with the visible sector. We shall examine the specific requirement of this condition in Sect. 3.

The collision term can be expressed in terms of modified Bessel functions of the second kind. In the limit that the particle masses involved in the scattering are negligible compared to the temperature, scattering via the dimension five operator is described by a matrix element of the form |M|² ∼ s/Λ², where √s is the centre of mass energy of the scattering at temperature T. Unless otherwise stated, throughout this paper we assume that the masses of the various states are substantially smaller than both Λ and the reheat temperature T_RH. It follows from eq.
(4) & (6) that the Boltzmann equation reduces to a simple form [Hall:2009bx; 1d] (footnote 3: Neglecting the mass in the lower limit of the integral leads to only percent-level deviations in the result.). Using the standard relations [Kolb:1990vq], this can be re-expressed in terms of the yield Y = n/s (where s is the entropy density) and the effective number of degrees of freedom g* in the bath [Kolb:1990vq]. Writing M_Pl for the (non-reduced) Planck mass and integrating with respect to temperature (from T_RH downwards) gives [Hall:2009bx] a yield which scales parametrically as Y ∼ M_Pl T_RH/Λ². The important thing to note is that the yield depends on the reheat temperature of the visible sector. This is in contrast to the case of freeze-in via renormalisable interactions, in which the yield only depends on the coupling and particle masses [Hall:2009bx]. As we reproduce in Appendix A, the DM yield due to IR freeze-in is parametrically Y ∼ λ² M_Pl/m, for a dimensionless coupling λ and relevant mass scale m.

With the above example in mind, we extend this analysis to consider a range of effective operators of varying mass dimension and involving different combinations of fields. It is important to recognise that operators of large mass dimension typically lead to many-body final states. Indeed, as we examine below, even the simplest extension of the previous example to a dimension five operator involving four bath scalars and the scalar DM, schematically (1/Λ) φ⁴χ, which allows freeze-in production via the scattering φφ → φφχ, leads to a 3-body phase space. The Boltzmann equation describing DM production via this scattering involves the differential Lorentz invariant phase space for 3-body final states, with a numerical prefactor accounting for permutations of initial and final states. An evaluation of the 3-body phase space (see Appendix B) allows the Boltzmann equation to be rewritten in a form reminiscent of eq. (7). We have assumed here that the final state masses can be neglected. By dimensional analysis the associated matrix element is parametrically M ∼ 1/Λ. Substituting this into the Boltzmann equation, expressing the result in terms of the yield, and performing the integrals over the centre-of-mass energy and, subsequently, temperature, we find the form of the DM yield. Up to a numerical phase-space suppression, this is similar in form to eq. (9) and, notably, also depends linearly on the reheat temperature.

The DM yield may be related to the relic density in the standard manner, in terms of the critical density ρ_c and the present day entropy density s₀, evaluated today. We can choose judicious parameter values such that the observed relic density (Ω h² ≈ 0.12) is obtained for a given value of the DM mass. For example, choosing a canonical DM mass of 1 TeV, eq. (15) can be rewritten to give the required combination of Λ and T_RH.

It should be noted that in non-minimal models the DM produced via freeze-in might be able to subsequently annihilate. This would introduce further terms in the Boltzmann equation. If there are additional hidden sector interactions, and light hidden sector states into which the DM can annihilate, then this can give rise to a period of annihilation and freeze-out in the hidden sector, which may dilute the DM relic density. To maintain a degree of predictability, throughout we shall assume that there are no such additional hidden sector interactions which can lead to DM pair annihilation. On the other hand, if the DM only interacts via the suppressed portal operator then, as the DM never enters thermal equilibrium, the rate of annihilation back to the visible sector is always negligible compared to the rate of production. In some sense, the DM is immediately frozen-out on production.
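For orientation, the standard conversion between yield and relic abundance is Ω h² ≈ 2.74 × 10⁸ (m/GeV) Y. A two-line numeric sketch for the canonical 1 TeV example (the prefactor is the usual entropy/critical-density conversion; treat the result as order-of-magnitude):

```python
# Required freeze-in yield for a 1 TeV DM candidate.
m_dm = 1.0e3                              # DM mass in GeV
omega_h2 = 0.12                           # observed relic abundance
Y_required = omega_h2 / (2.74e8 * m_dm)
print(f"required yield Y ~ {Y_required:.2e}")   # ~ 4e-13 for TeV dark matter
```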
2.2 High dimension operators with many-body final states

We would like to understand how this generalises to operators of increasing mass dimension. For UV freeze-in involving an operator of mass dimension n, the cut-off enters in the denominator of the yield as Λ^(2n-8). Thus, still assuming that none of the fields involved acquire non-zero VEVs, the generic expectation is that the DM yield due to this operator should scale as Y ∼ M_Pl T_RH^(2n-9)/Λ^(2n-8), (18) the factor of M_Pl coming from the Hubble parameter. One important issue which arises, however, is that the phase space becomes increasingly large and complicated. Here we shall make certain assumptions and approximations to obtain an order-of-magnitude estimate of the yield.

Consider the dimension-n operator (1/Λ^(n-4)) φ^(n-1) χ, which corresponds to an (n-2)-body final state phase space for scattering events φφ → (n-3)φ + χ. Variant operators might be considered, but this toy example should be illustrative of a more general issue regarding the interplay between mass dimension and phase space. For simplicity here we assume that there is only a single relevant high dimension operator; the converse scenario would imply a sum over the various operators in the Boltzmann equation. The Boltzmann equation describing DM production via this scattering is given in the usual manner (footnote 4: We neglect here permutations of initial states. If included, this leads to a combinatorial enhancement, but as the DM abundance is highly sensitive to the other scales involved, this is of lesser importance.). The differential phase space grows with the final-state multiplicity, and we make an approximation in which a bracketed factor provides a parametric estimate of the additional phase space suppression due to the (n-2)-body final state. This is a somewhat crude approximation, but for low n it should give an order-of-magnitude estimate. For increasing n, the suppression of the cross section coming from the phase space becomes more severe and strongly suppresses operators of high mass dimension. From dimensional analysis the matrix element scales as M ∼ 1/Λ^(n-4), and thus the Boltzmann equation can be expressed accordingly. We will check the resulting estimate against the 3-body result calculated in Sect. 2.1. The integral over the centre-of-mass energy has a closed form expression, which for integer n ≥ 5 can be evaluated explicitly. Using this result the Boltzmann equation can be rewritten, and from it we obtain an expression for the DM yield, eq. (25). The form of eq. (25) conforms with our expectations for the parametric scaling discussed in eq. (18). Moreover, this shows that a range of operators of varying mass dimension should be able to reproduce the observed relic density. We consider some examples below.

Firstly, we can check this result against our 3-body calculation by setting n = 5. Comparing with eq. (15), we find that these two expressions agree up to an O(1) factor. For suitable parameter values the observed relic density can be reproduced for a large range of DM masses, comparing with eq. (16). As a concrete example, consider a model with 1 TeV DM; a yield of appropriate magnitude is found for suitable combinations of Λ and T_RH. Observe that, unlike typical models of freeze-out and IR freeze-in, the yield is independent of the DM mass, provided the DM mass lies well below T_RH. Thus in UV freeze-in one can find the observed DM relic density for different values of the DM mass by simply rescaling the yield. Taking a few further examples, consider dimension-six (n = 6) and dimension-seven (n = 7) operators (with 4 and 5 body final states, respectively); the estimates for the freeze-in yield in these cases are Y ∼ M_Pl T_RH³/Λ⁴ and Y ∼ M_Pl T_RH⁵/Λ⁶, respectively. Thus for a given operator and fixed UV scale, the observed DM abundance can typically be obtained by adjusting the reheat temperature. We shall comment in Sect. 4.3 on motivated values of T_RH.
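A sketch of the parametric scaling of eq. (18), with all O(1) factors and the phase-space suppression deliberately dropped, so the numbers are only indicative of how steeply the yield falls with operator dimension (the T_RH and Λ values are hypothetical):

```python
M_PL = 1.22e19                      # (non-reduced) Planck mass in GeV

def yield_estimate(n, T_RH, Lam):
    """Order-of-magnitude UV freeze-in yield for a dimension-n operator,
    following the parametric scaling Y ~ M_Pl * T_RH^(2n-9) / Lambda^(2n-8)."""
    return M_PL * T_RH**(2 * n - 9) / Lam**(2 * n - 8)

for n in (5, 6, 7):
    print(n, yield_estimate(n, T_RH=1e10, Lam=1e16))
```

The steep drop between n = 5, 6 and 7 shows why, with several portal operators at a common scale, the lowest-dimension operator generically dominates (as noted in the next paragraph).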
Moreover, we learn that in the presence of multiple high dimension portal operators of varying mass dimension (but common Λ), generally the physics will be determined by the operator(s) with smallest mass dimension. This conforms with the standard intuition regarding effective field theory.

2.3 VEV expansions of high dimension operators

If any of the fields involved in the high dimension operator acquire VEVs then, by expanding the operator around the vacuum, one can construct a sequence of terms dressed by couplings of different mass dimension [Hall:2009bx]. Let us again examine the simple example given in [Hall:2009bx], involving the dimension-five operator in eq. (1). Suppose that the bath scalar field φ has a non-zero VEV ⟨φ⟩ = v; expanding around this VEV gives a renormalisable term (v/Λ) χ f̄f together with the residual dimension-five piece, (29) where we have identified the dimensionless coupling λ = v/Λ. To ensure validity of the effective field theory we will assume that v ≪ Λ and therefore λ ≪ 1. In this example the yield will receive both an IR contribution from the first term (which appears as an operator with a dimensionless coupling after symmetry breaking) and a UV contribution from the latter term. The IR contribution is assumed to be generated by decays of bath states; this is reproduced in eq. (67) of the Appendix. By calculating the two contributions, it can be shown [Hall:2009bx] that the yield is dominated by the IR contribution if the reheat temperature is sufficiently low. It is notable that this condition does not depend on Λ.

Let T_c be the critical temperature associated to the spontaneous breaking of some symmetry, due to a scalar field involved in the UV freeze-in operator developing a VEV; it should be expected that T_c ∼ v. The VEV expansion of eq. (29) is only valid for T < T_c, but as the yield due to IR operators is temperature independent, it will not depend on the temperature at which the IR operator is generated. Thus it is not required that T_RH < T_c in this case. The situation is more complicated if the VEV expansion leads to additional UV freeze-in operators. For T_RH > T_c, one can find UV contributions which depend on T_c, rather than T_RH. This is because for temperatures above T_c there is no VEV expansion, until thermal evolution (due to expansion) causes the temperature to drop below T_c. We shall illustrate this with an example below. Importantly, if the phase transition happens after the point at which DM freeze-in terminates, i.e. below the mass of the bath states involved in the freeze-in process, then no further (UV or IR) contributions will be generated.
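A small symbolic check of the expansion in eq. (29); a sketch using SymPy, with symbols standing in for the fields and scales of the toy model:

```python
import sympy as sp

v, h, Lam = sp.symbols('v h Lambda')
chi, ff = sp.symbols('chi ffbar')      # DM field and the fermion bilinear

# Dimension-five portal (phi * chi * fbar f)/Lambda with phi = v + h:
operator = (v + h) * chi * ff / Lam
print(sp.expand(operator))
# -> chi*ffbar*h/Lambda + chi*ffbar*v/Lambda
# The v/Lambda piece is the renormalisable (IR) term with dimensionless
# coupling lambda = v/Lambda; the h/Lambda piece remains a UV portal.
```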
Let us consider an example in which the VEV insertion does not lead to operators with dimensionless coefficients, but results in several UV contributions. One manner of realising this scenario is by dressing the SM Yukawa operators with a pair of DM states, schematically (1/Λ³)(ψ̄ψ)(H f̄f). (31) Note that the DM can be stabilised by the assumption of a ψ-parity, such that the Lagrangian is invariant under ψ → -ψ. After electroweak symmetry breaking (EWSB) one makes the appropriate expansion around the vacuum to obtain the four fermion operator (v/Λ³)(ψ̄ψ)(f̄f), together with the term (1/Λ³)(ψ̄ψ)(h f̄f) involving the physical Higgs boson h. Henceforth we shall call the operator dressed by the VEV O_v and the latter term O_h. As one might anticipate, the VEV expansion leads to two operators which provide UV contributions to the yield. The operator O_v leads to freeze-in via scattering processes such as f f̄ → ψψ. From dimensional analysis the matrix element is of the form M ∼ vs/Λ³. As previously, we can describe the production of DM through the Boltzmann equation given in eq. (12). It follows that the DM yield due to O_v is parametrically Y ∼ M_Pl v² T_RH³/Λ⁶. Similarly, the operator O_h results in DM production via scattering processes such as f f̄ → h ψψ. The relevant Boltzmann equation is analogous to eq. (7), and hence the yield follows in the same manner. The maximum temperature T_max, which is the upper limit of the temperature integral, depends on whether the reheating temperature is above or below the critical temperature T_c at which the Higgs develops a VEV, and can be expressed as T_max = min(T_RH, T_c). Thus there are two possible cases, which we examine below, depending on whether the reheat temperature is above or below the phase transition. As the operator in eq. (31) involves the Higgs VEV, the critical temperature is around the electroweak scale, associated to the electroweak phase transition (EWPT).

For T_RH < T_c, the VEV expansion is valid, and thus the portal operator is active, for all physically relevant temperatures. Therefore T_max = T_RH, and the ratio of the two contributions is parametrically (v/T_RH)². More generally, if T_RH ≫ v, the expectation is that the contribution coming from the operator in the VEV expansion dressed by the coefficient with smallest (negative) mass dimension will dominate the yield.

For T_RH > T_c, the phase transition takes place during the cooling of the thermal bath; in this case production via the VEV-expanded operator only occurs for T < T_c, and the dominant contribution will be generated at T ∼ T_c. Further, thermal fluctuations may be important in determining the field expectation value, and it is expected (footnote 5: A more careful study of these thermal effects would be of interest, but it is beyond the scope of this work.) that the effective VEV is temperature dependent near the transition. Evaluating eq. (36), the ratio of the contributions coming from eq. (31) in this case is instead controlled by T_c. Since by assumption T_RH > T_c, the operator O_h is typically dominant. More generally, we expect that for alternative operators, typically the term with no explicit VEVs in the expansion around the vacuum will provide the most significant contribution to the yield. In the case that the VEV expansion generates a dimensionless coupling, the associated yield is independent of T_RH and T_c, and the criteria under which this operator dominates will be described by an equation analogous to eq. (30).

For scattering processes to occur in the thermal bath it is required that the SM states can be thermally produced. This implies that T_RH ≳ m_h ≈ 125 GeV if the boson involved is the SM Higgs. In the case that the Higgs is thermally produced, this pushes the model into the regime in which the operator O_h always gives the dominant contribution. In the converse scenario, in which the Higgs is not thermally produced, production via O_h will be exponentially suppressed, but O_v can still potentially lead to DM production. Further complications can arise if the VEV expansion gives several terms, and thus multiple contributions to the DM yield, or if there are multiple scalar fields, especially if the scalars develop VEVs due to spontaneous symmetry breaking taking place at different scales. We shall leave these more complicated possibilities until they arise in motivated examples.

3 Sector equilibration constraints

For the DM relic density to be established by freeze-in production (rather than freeze-out), it is imperative that the DM is not brought into thermal equilibrium with the visible sector by its interactions through the portal operator. For the case of IR freeze-in via renormalisable interactions, for instance a coupling of the form λχf̄f, it has been argued [Cheung:2010] that the hidden sector does not thermalise with the visible sector provided the coupling λ is sufficiently feeble. (footnote 6: In the case that a VEV expansion gives a renormalisable operator, then, identifying λ = v/Λ, one should apply this IR constraint in addition to the requirements on high dimension operators derived in this section.)
In this section we derive an analogous condition for the case that the portal is due to a non-renormalisable operator, leading to UV freeze-in. It will be useful to introduce the freeze-out temperature T_f, the temperature at which a state in thermal equilibrium decouples from the thermal bath, defined such that the interaction rate matches the Hubble rate at T_f. The requirement that the DM is always out of thermal equilibrium is equivalent to requiring that its number density always stays below the corresponding equilibrium number density. This implies two conditions: for temperatures above the DM mass, the equilibrium number density scales as T³, and thus the DM number density is out of equilibrium provided it remains well below this; for temperatures below the DM mass, the equilibrium number density is Boltzmann suppressed, and thus to avoid the frozen-in abundance coinciding with it, it is required that (hypothetical) DM freeze-out occurs for T_f above the DM mass.

The UV operator freezes-in a DM abundance at T ∼ T_RH, which is then effectively frozen-out. Clearly, an upper bound is given by the scenario in which a near thermal abundance is generated in the early universe, subsequently decouples, and then evolves to the present, modified only by entropy conservation. Including the entropy factor, this gives (for a real scalar DM particle) a yield Y ≈ 0.278/g*S. Further, we use that the DM relic density is given by Ω h² ≈ 2.74 × 10⁸ (m/GeV) Y. Comparing with the observed value Ω h² ≈ 0.12, this gives a bound on the DM mass at the keV scale, where we have used g*S ∼ 100. Thus this places a model independent bound on the DM mass.

Next we examine the second requirement: that freeze-out occurs above the mass threshold. The freeze-out temperature can be found by solving eq. (39), which we expand below. Thus the requirement that freeze-out occurs before the mass threshold is given by eq. (42). For a specific portal operator this can be re-expressed in terms of the UV scale at which the operator is generated. Let us consider an example. Recall the Lagrangian term studied previously for UV freeze-in by scattering: (1/Λ) φχf̄f. In this case the matrix element is given by eq. (6), and the associated cross section follows directly. It follows that the requirement of eq. (42) can be expressed for this operator as a lower bound on Λ. Moreover, when combined with the constraint of eq. (40), this implies an extreme lower bound on the scale of new physics, as indicated above.

In Fig. 2 we display the constraints on the parameter space of various models. In particular, we look at the UV freeze-in portals considered in Sect. 2.2 & 2.3. Using the derived forms of the yields, we present contours of Λ and T_RH which give the observed relic density for a range of DM masses. We overlay these contour plots with the relevant constraints. Towards the lower right of the parameter space in each plot the yields are low and require increasingly large DM masses in order to reproduce the observed relic density. In certain regions of parameter space the DM mass which would be required is larger than the reheat temperature, and thus such models cannot give the correct DM abundance; this is indicated by the grey shaded regions in Fig. 2. The purple shaded regions indicate parameter values which lead to sector equilibration, as given in eq. (40): the DM enters thermal equilibrium, and thus the abundance will not be set via the freeze-in mechanism. In the red highlighted region the effective field theory breaks down, and the dark matter abundance is not set by UV freeze-in. Note also that, because the DM is never in thermal equilibrium, the unitarity bound on freeze-out DM [Griest:1989wd] does not apply to freeze-in DM [Hall:2009bx], and thus there is no upper bound on the DM mass.
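The overclosure bound just derived can be reproduced in a few lines (a sketch; the value of g*S is an assumption and the prefactors are approximate):

```python
# If the DM had once thermalised and then simply decoupled, Y ~ 0.278/g*S
# for a real scalar, and Omega h^2 ~ 2.74e8 * (m/GeV) * Y caps the DM mass.
ZETA3 = 1.202
g_star_s = 100.0                          # assumed relativistic d.o.f. in entropy
Y_thermal = 45 * ZETA3 / (2 * 3.1416**4 * g_star_s)   # ~ 2.8e-3
m_max_gev = 0.12 / (2.74e8 * Y_thermal)
print(f"m_DM <~ {m_max_gev * 1e6:.2f} keV to avoid overclosure if thermalised")
```

This is why a frozen-in abundance for heavy (e.g. TeV scale) DM must remain far below its equilibrium value at all times.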
4 UV freeze-in and BSM physics

Having discussed a range of possibilities in the context of toy models, we now turn to constructing explicit models based on extensions of the SM, motivated by outstanding problems. These BSM scenarios generally require additional states to be introduced at new physical scales above the weak scale. One interesting possibility is that the operator(s) responsible for UV freeze-in are generated at this scale of new physics. We shall also discuss motivated BSM models which lead to high dimension operators with VEV expansions at temperatures above the weak scale. It was remarked in Hall:2009bx that the high dimension operator responsible for UV freeze-in might arise from GUT-scale physics; here we examine some alternative scenarios. First we consider the scenario in which the SM gauge group is extended by an additional U(1) gauge symmetry, which is broken at some high scale (for a general review see e.g. Langacker:2008yv). If some visible sector states and the DM are both charged under this new gauge symmetry, then the associated massive gauge boson can provide a portal that links these two sectors. Additional U(1) gauge groups are a generic expectation of string theory compactifications, see e.g. str; Blumenhagen:2006ci; Cvetic:1997wu, as supported by scans of vacua of Heterotic string theory Anderson. Further, GUTs based on or SO(10) can introduce extra U(1) factors from the breaking of these larger groups London:1986dk. In type IIB theories extra U(1)'s can arise from isolated branes; moreover, brane stacks are associated to symmetry groups , where the U(1) factor is (pseudo-)anomalous str. This U(1) anomaly is cancelled via the Green-Schwarz mechanism, and as a result the acquires a mass near the string scale. It is interesting to note that in type IIB theories the string scale can be lowered substantially compared to the Planck mass if the moduli are stabilised at LARGE volume Balasubramanian:2005zx. Alternatively, from an IR perspective, it is conceivable that a global quantum number of the SM might be gauged. In the SM, baryon number and lepton number appear as accidental symmetries and are typically broken in extensions of the SM, for instance GUTs. However, it is possible that some global quantum number of the SM may arise from an exact gauged symmetry. An appealing possibility is that the combination is gauged, as this is anomaly-free provided the spectrum includes right-handed neutrinos. If one assumes that DM is charged under , then the U(1) gauge boson can provide a portal operator which connects the DM and the SM fermions, see e.g. b-l. If is gauged, typically it must be broken at a high scale to give masses to the neutrinos via the seesaw mechanism. Once the is integrated out, this generates effective operators which connect the SM fermions and the DM, suppressed by the (intermediate) scale at which the U(1) is broken. This can potentially lead to UV freeze-in for appropriate parameter choices. More generally, suppose that the DM and the SM fermions are charged under some new group U(1). It follows that the SM states can pair-annihilate and produce DM states via . In the UV, as usual, interactions mediated by the (heavy) appear in the Lagrangian through the covariant derivative in the gauge-invariant kinetic terms where , the ellipsis denotes the gauge fields of the SM, and is the U(1) charge. Once the is integrated out, this leads to four-fermion interactions, suppressed by , in the effective Lagrangian of the form This is similar to that studied in eq. (31); however, the prefactor is different, as is the Lorentz structure, since here we have integrated out a Lorentz vector. DM production will proceed fairly analogously and the DM yield is given by (cf. eq. (25)).
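The matching of heavy gauge-boson exchange onto the four-fermion operator can be summarised in a trivial numerical sketch; the coupling and charge names below are placeholders rather than notation fixed by the text.

```python
# Illustrative matching of heavy gauge-boson exchange onto a four-fermion
# operator: below the mass of the heavy vector the interaction is
# suppressed by g^2 q_f q_X / M^2, i.e. an effective scale Lambda_eff.
# All names (g_z, q_f, q_x) are placeholders, not fixed by the text.
import math

def lambda_eff(m_vector, g_z, q_f, q_x):
    """Effective suppression scale of the induced four-fermion operator:
    1/Lambda_eff^2 = g_z^2 |q_f q_x| / M^2."""
    return m_vector / (g_z * math.sqrt(abs(q_f * q_x)))

# A heavy vector at 10^12 GeV with an O(0.1) gauge coupling, unit charges:
print(f"Lambda_eff ~ {lambda_eff(1e12, 0.1, 1.0, 1.0):.2e} GeV")
```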
It should be noted that the presence of an additional U(1) gauge group will generically lead to kinetic mixing via operators of the form . Such interactions can provide a renormalisable portal operator between the visible and hidden sectors. As we are primarily concerned here with UV freeze-in, we shall assume that such operators are negligible. This kinetic-mixing portal has been previously studied in the context of IR freeze-in Blennow:2013jba. Discussions of UV freeze-in via also appear in Mambrini.

4.2 The axion portal

We shall next consider a realisation of the simple toy model of UV freeze-in originally considered in Hall:2009bx, and discussed here in eq. (29), based on the axion solution to the strong CP problem. This will provide an example in which a VEV expansion introduces additional freeze-in portals. It is widely thought that the most viable solution to the strong CP problem is the Peccei-Quinn (PQ) mechanism, which dynamically sets the -parameter to zero Peccei:1977hh. Such 'axion portals' have been contemplated previously within the context of DM freeze-out, e.g. Nomura:2008ru. We shall take a DFSZ-type model DFSZ, in which a type II two-Higgs-doublet model is supplemented with an additional SM singlet scalar transforming under the PQ symmetry. The SM fermions transform as and . We further supplement this model with SM singlet Weyl fermions , which transform with equal charges under the PQ symmetry. (It is useful for our purposes that the states are Dirac, as a Majorana fermion would necessarily be a singlet under the PQ symmetry, and thus, in the absence of additional symmetries (e.g. lepton number), this would allow the renormalisable operator .) The state is the DM candidate; it can be stabilised by a -parity. Potentially, a stabilising discrete symmetry might arise as a subgroup of the PQ symmetry. This might be considered a 'toy' setting, as we shall not confront the various naturalness problems Volkas:1988cm; Kamionkowski:1992mf which arise in such axion models, but it will illustrate the general principle. In addition to the Yukawa couplings we can build the following Lorentz, gauge, and PQ invariant combinations of these fields: These field combinations allow us to construct the following Lagrangian terms involving the DM bilinear The scalar field develops a non-vanishing VEV at scale , which preserves electroweak symmetry but spontaneously breaks the PQ symmetry. Let us assume the following mass hierarchy: There is a similar -dependent contribution to the yield from the operators for (), which has the same form as that given above. The correct relic density is found for . A further model-dependent limit comes from the assumed mass hierarchy, eq. (53). As we expect from the Lagrangian that and , this implies the following consistency constraint on the hierarchy of scales: . The above requirements can be simultaneously satisfied with reasonable parameter values. At energies above the EWPT, but after PQ breaking, one can re-examine the operators appearing in eq. (52) following a VEV expansion around the vacuum of . The singlet field can be decomposed into radial and axial components The axial field is identified with the axion. Expanding around the VEV of , the radial component provides the following operators For the state is part of the thermal bath, as it is kept in thermal contact via interactions involving products of the bilinear operators in eq. (51) involving , and if these have coefficients.
Once the temperature drops below the PQ breaking scale (but whilst still above ), freeze-in can proceed via direct decay of heavy states to DM pairs, leading to an IR contribution to the yield. Comparing with the form of the IR freeze-in yield given in Hall:2009bx, and reproduced in eq. (67) of Appendix A, one finds where is the partial width of . The condition under which the UV contribution will be dominant is Note also that after EWSB there is a further VEV expansion involving the Higgs fields, which leads to portal operators of the form It would be of interest to embed the above model into a supersymmetric extension of the SM, similar to e.g. Nomura:2008ru. An advantage of a supersymmetric implementation is that the type II structure of the Higgs sector (required to employ the DFSZ model and to avoid constraints from flavour-changing processes) is an automatic consequence of the holomorphy of the superpotential. In addition, this might also alleviate the naturalness problem Volkas:1988cm associated with destabilising the weak scale, and the DM can be stabilised by R-parity if it is the lightest supersymmetric particle.

4.3 The reheat temperature

In Sect. 4.1 & 4.2 we have discussed motivated scales of new physics which might generate the high dimension operators. An interesting alternative to this approach is to consider special values for the reheat temperature and to use this, in conjunction with the DM relic density, to identify the unknown UV scale. One drawback of this approach is that very little is known about the reheat temperature. Precision measurements of the primordial element abundances from Big Bang nucleosynthesis are thought to imply that is at least a few MeV. Models of inflation typically suggest an upper bound around , see e.g. Linde:2005ht. Moreover, if is high then in principle this can lead to problems with long-lived exotic relics which can over-close the Universe, the classic example being the cosmological gravitino problem of supergravity Moroi:1993mb. This implies that the upper bound on the reheat temperature in models of supergravity is typically , with substantially stronger bounds if the gravitino is light. (It has been further argued Cheung:2011mg that, with some assumptions, once the contribution from axinos is also included this can lead to an even more stringent upper bound: .) The following extreme cases may be of particular interest: the maximum expected from simple models of inflation, ; the upper bound from the cosmological gravitino problem, ; and the lower bound from precision measurements of primordial elements, . The first two scenarios might be motivated through considerations of environmental selection if there is some anthropic pressure which favours high , with the cosmological gravitino problem imposing a catastrophic boundary at in supersymmetric models. Let us consider a specific example involving the dimension- operator of the scalar toy model studied in Sect. 2.2. For DM with mass around 100 GeV the yield required to obtain the observed relic density is , as discussed in eq. (16). To obtain the correct relic abundance via freeze-in through the dimension-five operator with a reheat temperature of requires a UV scale as indicated below The magnitude of the UV scale in this example may be suggestive of a connection with Planck-scale physics. It is not implausible that future observations might indicate the reheat temperature (given some assumptions regarding the model of inflation).
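As a hedged numerical sketch of this logic, one can convert the observed relic density into a required yield via the standard relation Ωh² ≈ 2.74 × 10⁸ (m/GeV) Y, and then invert an assumed dimension-5 scaling Y ~ c M_Pl T_RH/Λ² for the UV scale. The prefactor c, which absorbs couplings and phase-space factors, and the chosen T_RH are illustrative, not values taken from the text.

```python
# Hedged sketch: observed relic density -> required freeze-in yield ->
# UV scale, assuming a dimension-5 scaling Y ~ c M_Pl T_RH / Lambda^2.
# The prefactor c is unknown here and set to 1 for illustration.
M_PL = 2.4e18          # reduced Planck mass [GeV]
OMEGA_H2_OBS = 0.12    # observed DM relic density

def yield_required(m_dm_gev):
    """Omega h^2 ~ 2.74e8 (m/GeV) Y  =>  Y = Omega h^2 / (2.74e8 m)."""
    return OMEGA_H2_OBS / (2.74e8 * m_dm_gev)

def lambda_from_yield(Y, T_RH, c=1.0):
    """Invert the assumed scaling Y = c M_Pl T_RH / Lambda^2."""
    return (c * M_PL * T_RH / Y) ** 0.5

Y_req = yield_required(100.0)         # ~4e-12 for a 100 GeV candidate
Lam = lambda_from_yield(Y_req, 1e16)  # T_RH near the inflationary bound
print(f"Y_req ~ {Y_req:.1e}, Lambda ~ {Lam:.1e} GeV")
```

With O(1)-uncertain prefactors, the inferred scale lands within a few orders of magnitude of the Planck mass, consistent with the qualitative conclusion drawn above.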
Ultimately, to test UV freeze-in DM and disambiguate it from other frameworks, it will be necessary to determine the DM mass, the UV scale and . Recently the BICEP2 collaboration claimed to have observed primordial tensor modes Ade:2014xna, and thus could infer ; however, this result is currently disputed Flauger:2014qra. If the BICEP2 signal survives further scrutiny, we shall comment on this in a dedicated paper.

This work has provided an exploratory study of the model-building opportunities which arise for the UV freeze-in mechanism. We considered general aspects of this scenario in the context of toy models and demonstrated that interesting and phenomenologically viable models can be constructed in motivated settings of BSM physics. In Sect. 2, we examined various toy models which encapsulate the fundamental features of UV freeze-in. Typically high dimension operators lead to many-body final states, and the case of DM production via UV freeze-in was carefully studied. Subsequently, we attempted to quantify DM production associated with more complicated phase spaces. Further, we discussed the potential impact of spontaneous symmetry breaking on UV freeze-in; in particular, we identified a new case of interest in which the VEV expansion leads only to additional UV contributions, and does not generate an IR freeze-in portal. In this scenario the DM yield depends on both the reheat temperature and the critical temperature of symmetry breaking. Sect. 3 examined the constraints on UV freeze-in from the requirement that the hidden sector and visible sector do not equilibrate, and we argued that this can lead to bounds on the DM mass. In Sect. 4 we presented realistic models of UV freeze-in and related these to interesting BSM scenarios. We suggested that UV freeze-in might be connected to motivated solutions of prominent puzzles of the SM; specifically, we considered an example involving the Peccei-Quinn mechanism. A further example was presented in which the UV freeze-in portal is generated by integrating out a heavy . This is appealing, as such gauge bosons arise in many extensions of the SM and additional U(1) gauge groups are common in realistic string compactifications. It should be evident from our discussions that UV freeze-in offers a large range of possibilities for DM model building and that there are many interesting aspects yet to be explored. UV freeze-in presents a new manner of obtaining non-thermal DM, with a relic abundance directly related to the reheat temperature, and provides an interesting alternative to the conventional ideas regarding the DM thermal history. For the DM relic density to be set through UV freeze-in it is required that reheating of the hidden sector is negligible and that the DM is connected to the visible sector via non-renormalisable operators. Once one assumes that the abundance of DM is initially depleted, one might argue that freeze-in via high dimension contact operators presents a more generic mechanism than IR freeze-in portals, which require very small renormalisable couplings or complicated effective operators involving several scalar fields with non-vanishing VEVs. Moreover, from a UV perspective, it is a fairly general expectation that distinct sectors in the low energy theory may become coupled through high-scale physics, and we have presented some examples of this principle above.

Acknowledgements

We would like to thank Joe Bramante, John March-Russell, Adam Martin, and Bibhushan Shakya for useful discussions.
Also, we are grateful to the JHEP referee for their insightful comments. This research was supported by the National Science Foundation under Grant No. PHY-1215979. JU is grateful for the hospitality of the Centre for Future High Energy Physics, Beijing, where some of this work was undertaken.

Appendix A IR freeze-in of dark matter

The IR yield due to scattering via a four-point scalar interaction with the matrix element was calculated in Hall:2009bx. This result is referenced in the text, so we give it here for completeness. The abundance of the DM is initially zero, and it is produced via the operator , where is a feeble dimensionless coupling. Consider scattering in which the momenta of the incoming bath particles are labelled and the outgoing state momenta labelled . The matrix element associated with scattering via this four-point interaction is . The Boltzmann equation which describes DM production in this set-up is As previously, we re-express this as an integral with respect to centre-of-mass energy
Progressive Iterative Approximation for Extended B-Spline Interpolation Surfaces

In order to improve the computational efficiency of data interpolation, we study the progressive iterative approximation (PIA) for tensor product extended cubic uniform B-spline surfaces. By solving for the optimal shape parameters, we can minimize the spectral radius of the PIA's iteration matrix, and hence the convergence rate of the PIA is accelerated. Numerical examples show that the optimal shape parameters give the PIA the fastest convergence rate.

Data interpolation plays an important role in scientific research and engineering applications. How to compute interpolation curves/surfaces efficiently has been one of the most popular topics in computer-aided geometric design (see [1–3]). Oftentimes, one has to solve a linear system to obtain the interpolation curves or surfaces, so efficient and accurate algorithms are required to guarantee the computational efficiency. For small-scale systems, direct methods are typically the preferred choice. However, for large-scale systems, it becomes necessary to employ iterative methods. In recent years, an iterative method, namely progressive iterative approximation (PIA), has attracted considerable attention and has become a very active research area. The PIA stands out because it has the advantages of clear geometric meaning, stable convergence, a simple iterative format, local modification, and so on. Furthermore, it avoids solving a linear system directly. For more details about the PIA, we refer the reader to a recent survey . Despite the fact that the PIA offers many advantages, it has one disadvantage: a slow rate of convergence. To overcome this limitation and further improve the computational efficiency, a great number of acceleration techniques have been developed; examples include [5–14] and the references therein. The emergence of blending bases with shape parameters has enriched the theory and methods of geometric modeling [1, 15–17]. Due to their flexibility in shape adjustment, splines with shape parameters have drawn much attention for decades, and a large number of splines with shape parameters have been developed (see, for example, [18–20]). Very often, the aim of shape parameters is to adjust the shapes of splines, while in , shape parameters are introduced to speed up the convergence rate of the PIA. In that paper, the eigenvalues of the collocation matrix were expressed explicitly, and hence the optimal shape parameters were solved for to give the PIA the fastest convergence rate. Based on this conclusion, we further study the PIA format for tensor product extended cubic uniform B-spline surfaces, which extends the PIA for extended cubic uniform B-spline curves to the surface case. By solving for the optimal shape parameters, the convergence rate of the PIA is accelerated, and thus the computational efficiency of data interpolation can be improved. The rest of this paper is organized as follows. After recapping the definition of the extended cubic uniform B-spline surfaces with shape parameters, we develop the PIA format for extended cubic uniform B-spline surfaces in Section 2. In Section 3, we derive the optimal shape parameters that give the PIA the fastest convergence rate. Some numerical examples are given to illustrate the acceleration effect in Section 4. Finally, we give some concluding remarks in Section 5.

2. PIA for Extended Cubic Uniform B-Spline Surface
2.1. Extended Cubic Uniform B-Spline Surface

We begin with the definition of the extended cubic uniform B-spline basis with a shape parameter.

Definition 1 (see ). For , the extended cubic uniform B-spline basis (abbr. -B-spline basis) is where is the so-called shape parameter. The -B-spline basis has the properties of non-negativity and symmetry, and it degenerates into the classic cubic B-spline basis if .

Definition 2 (see ). Given knot vectors and such that and , let be the control points, and let and be the -B-spline bases defined as in (1). Then, for , and , , we can define extended cubic uniform B-spline patches with shape parameters and as All these patches comprise an entire extended cubic uniform B-spline surface (abbr. -B-spline surface): Due to the degeneracy property of the -B-spline basis, it is easy to verify that the -B-spline surface degenerates into the classic bicubic B-spline surface if . If we want the -B-spline surface to interpolate the boundary control points, we have to add several control vertices according to

Let and be the -B-spline bases defined as in (1). Given a set of organized data points to be interpolated, we assign a pair of parameters to the th point , where and . Firstly, the points as well as the added points (4) are interpreted as the control points of a -B-spline surface. Therefore, we can obtain the initial approximate interpolation surface where . Secondly, let be the adjusting vectors of the control points. Then, we can adjust the control points according to Suppose that we have obtained the th, , approximate interpolation surface ; then, the th approximate interpolation surface can be defined as where Therefore, we obtain a sequence of approximate interpolation surfaces . The initial surface is said to have the PIA property if the limit of interpolates the points . It was shown in that tensor product surfaces generated by normalized and totally positive bases have the PIA property. We note from that the -B-spline basis is normalized and totally positive; therefore, the initial -B-spline surface has the PIA property. Then, equation (9) can be written as where is the Kronecker product, is the identity matrix, and and are the collocation matrices resulting from the -B-spline basis; in detail, We refer to the iterative format (12) as the PIA format. The matrix in (12) is the iteration matrix. It is well known that the PIA converges if and only if the spectral radius of the iteration matrix is less than 1. Moreover, the smaller the spectral radius, the faster the PIA converges. According to (13), the spectral radius is a function of the shape parameters and . This indicates that we can minimize the spectral radius by selecting optimal shape parameters and , and hence the convergence of the PIA can be improved. We will discuss the optimal shape parameters and in the following section. Finally, we give some notation. The spectral radius of a matrix is denoted by , the eigenvalues of are denoted by , and the smallest eigenvalue of a matrix is denoted by .
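To make the iterative format concrete, the following is a minimal sketch of the PIA update for a tensor-product surface. It assumes the collocation matrices have already been assembled (their entries depend on the shape parameters through the basis in (1), which is not reproduced here); all variable names are illustrative.

```python
# A minimal sketch of the PIA iteration (12) for a tensor-product surface:
# the control points are repeatedly corrected by the interpolation
# residuals at the assigned parameters.
import numpy as np

def pia(Q, B_u, B_v, iters=50):
    """Q: (m, n, 3) array of data points; B_u: (m, m) and B_v: (n, n)
    collocation matrices of the bases at the assigned parameters.
    Returns control points P such that B_u P B_v^T approaches Q."""
    P = Q.copy()                       # initial control points = data points
    for _ in range(iters):
        # surface values at the data parameters, coordinate by coordinate:
        # S[i, j, :] = sum_{k, l} B_u[i, k] * P[k, l, :] * B_v[j, l]
        S = np.einsum('ik,klc,jl->ijc', B_u, P, B_v)
        P = P + (Q - S)                # adjust by the interpolation residual
    return P
```

Stacking the control points into a vector shows that each sweep multiplies the residual by the iteration matrix of (12), so the residual contracts geometrically at a rate given by its spectral radius.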
3. Optimal Shape Parameters

In order to give the PIA the fastest convergence rate, we have to solve for the optimal shape parameters and that minimize the spectral radius of the PIA's iteration matrix, i.e.,

Lemma 1 (see ). Suppose that and are square matrices of size and , respectively. Let and be the eigenvalues of and , respectively. Then, the eigenvalues of are .

Lemma 2 (see ). Let be an collocation matrix resulting from the -B-spline basis. Then, the eigenvalues of are

By direct deduction, we have the following corollary.

Corollary 1. Let be an collocation matrix resulting from the -B-spline basis. Then, (1) the eigenvalues of are distributed in the interval ; (2) given , the smallest eigenvalue of is ; (3) decreases as increases and reaches its minimum when .

Theorem 1. Let and be the collocation matrices defined as in (13). For fixed , the spectral radius of the iteration matrix of the PIA is The PIA has the fastest convergence rate when , and in this case the spectral radius is

Proof. According to Corollary 1, for ; , we have , so is the product of and , i.e., . Combined with Lemma 1, we have From Corollary 1, and minimize at and , respectively. By substituting into (16), the result (17) follows straightforwardly.

4. Numerical Examples

In this section, several numerical examples are presented to assess the effectiveness of the optimal shape parameters. All experiments were performed in Matlab R2012b. Let be the points to be interpolated, and let be the th approximate interpolation -B-spline surface. Then, the interpolation error of can be defined as where is the Euclidean norm.

Example 1. Consider the data interpolation of points sampled from the peaks function in the following way:

Example 2. Consider the data interpolation of 16 points.

Example 3. Consider the data interpolation of points sampled from the function at .

Example 4. Consider the data interpolation of points sampled from the function at .

The PIA for -B-spline surfaces with different and is employed to interpolate the points in Examples 1–4. It should be pointed out that the PIA for -B-spline surfaces degenerates into the PIA for the classic bicubic B-spline surfaces if . As an illustration, we show in Figure 1 the spectral radii of the PIA's iteration matrices with different shape parameters in Examples 1–4. In Table 1, we list the spectral radii of the PIA's iteration matrices. For convenience, the notation in Table 1 and the subsequent tables represents the values of the shape parameters and . We can see from Figure 1 and Table 1 that the spectral radii of the iteration matrices are less than 1 for any and are minimized at ; hence the PIA converges for and has the fastest convergence rate when . These results coincide with the conclusions of Theorem 1. Thus, for the optimal shape parameters and , the convergence rate of the PIA for -B-spline surfaces achieves a great acceleration compared with that for the classic bicubic B-spline surfaces. Given prescribed interpolation errors, we list in Table 2 the number of required iterations when we test Example 1 with different shape parameters. It is evident from Table 2 that, under the same precision requirement, the number of iterations of the PIA with is the smallest. In Tables 3 and 4, we list the interpolation errors of Examples 2–4 obtained by implementing the PIA for -B-spline surfaces with different and . We can see that, with the same number of iterations, the interpolation errors obtained by the PIA with are the smallest. Figures 2–9 display the -B-spline surfaces with different shape parameters when we employ the PIA to interpolate the data given in Examples 1–4. All the numerical results shown in these tables and figures indicate that the PIA for -B-spline surfaces has the fastest convergence rate when .
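The minimization underlying Theorem 1 can also be reproduced numerically: compute the spectral radius of the iteration matrix directly from the eigenvalues of the collocation matrices and scan over the shape parameter. The sketch below assumes a user-supplied `collocation_matrix(lam)` routine, a placeholder for the matrices of eq. (13) whose explicit entries are not reproduced here.

```python
# Hedged sketch of the optimal-shape-parameter search. By Lemma 1, the
# eigenvalues of the Kronecker product are the pairwise products of the
# eigenvalues of the two collocation matrices, so the spectral radius of
# the iteration matrix I - kron(B_v, B_u) is max |1 - mu_i * nu_j|.
import numpy as np

def spectral_radius_pia(B_u, B_v):
    """rho(I - kron(B_v, B_u)), computed from eigenvalue products."""
    eig_u = np.linalg.eigvals(B_u)
    eig_v = np.linalg.eigvals(B_v)
    prods = np.outer(eig_u, eig_v).ravel()   # eigenvalues of the Kronecker product
    return np.max(np.abs(1.0 - prods))

def best_shape_parameter(collocation_matrix, lam_grid):
    """Scan a grid of shape parameters and return the minimizer of rho
    (taking the same parameter in both directions, for simplicity)."""
    radii = [spectral_radius_pia(collocation_matrix(lam), collocation_matrix(lam))
             for lam in lam_grid]
    i = int(np.argmin(radii))
    return lam_grid[i], radii[i]
```

For totally positive collocation matrices whose eigenvalues lie in (0, 1], this reduces to 1 − λ_min(B_u) λ_min(B_v), in agreement with the expression in Theorem 1.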
In this paper, we have developed the PIA format for -B-spline surfaces. Due to the introduction of shape parameters, we can give the PIA the fastest convergence rate by solving for the optimal shape parameters, while the amount of computation does not increase. Therefore, it inherits the merits of the PIA for the classic bicubic B-spline surfaces, e.g., a simple iterative scheme, stable convergence, clear geometric meaning, and local modification. More importantly, the computational efficiency of data interpolation is improved by accelerating the convergence rate.

The data are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

This research was supported by the Natural Science Foundation of Hunan Province (grant no. 2020JJ5267) and the Scientific Research Funds of Hunan Provincial Education Department (grant nos. CX20201192 and 19B301).

H. Lin, T. Maekawa, and C. Deng, "Survey on geometric iterative methods and their applications," Computer-Aided Design, vol. 95, pp. 40–51, 2017.
J. M. Carnicer, J. Delgado, and J. Peña, "Richardson method and totally nonnegative linear systems," Linear Algebra and Its Applications, vol. 11, 2010.
S. Deng and G. Wang, "Numerical analysis of the progressive iterative approximation method," Computer Aided Geometric Design, vol. 7, pp. 879–884, 2012.
Y. Yi, L. Hu, C. Liu et al., "Progressive iterative approximation for extended cubic uniform B-splines with shape parameters," Bulletin of the Malaysian Mathematical Sciences Society, vol. 10, 2020.
L. Yan and X. Han, "The extended cubic uniform B-spline curve based on totally positive basis," Journal of Graphics, vol. 37, pp. 329–336, 2016.
R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1991.
Slope Intercept Form Quadratic Equation Ten Doubts You Should Clarify About Slope Intercept Form Quadratic Equation

The full syllabus is as follows. Details of topics and sub-topics to be covered in each unit:

Unit-I: Sets and Functions

1. Sets: Sets and their representations. Empty set. Finite and infinite sets. Equal sets. Subsets. Subsets of the set of real numbers, especially intervals (with notations). Power set. Universal set. Venn diagrams. Union and intersection of sets. Difference of sets. Complement of a set. Properties of the complement of a set.

2. Relations & Functions: Ordered pairs, Cartesian product of sets. Number of elements in the Cartesian product of two finite sets. Cartesian product of the set of reals with itself (up to R × R × R). Definition of relation, pictorial diagrams, domain, co-domain and range of a relation. Function as a special type of relation. Pictorial representation of a function, domain, co-domain and range of a function. Real valued functions, domain and range of these functions: constant, identity, polynomial, rational, modulus, signum, exponential, logarithmic and greatest integer functions, with their graphs. Sum, difference, product and quotient of functions.

3. Trigonometric Functions: Positive and negative angles. Measuring angles in radians and in degrees and conversion from one measure to the other. Definition of trigonometric functions with the help of the unit circle. Truth of the identity sin²x + cos²x = 1, for all x. Signs of trigonometric functions. Domain and range of trigonometric functions and their graphs. Expressing sin(x ± y) and cos(x ± y) in terms of sin x, sin y, cos x and cos y and their simple applications. Deducing identities like the following: identities related to sin 2x, cos 2x, tan 2x, sin 3x, cos 3x and tan 3x. General solution of trigonometric equations of the type sin y = sin a, cos y = cos a and tan y = tan a.

Unit-II: Algebra

1. Principle of Mathematical Induction: Process of proof by induction, motivating the application of the method by looking at natural numbers as the least inductive subset of real numbers. The principle of mathematical induction and simple applications.

2. Complex Numbers and Quadratic Equations: Need for complex numbers, especially √−1, to be motivated by inability to solve some of the quadratic equations. Algebraic properties of complex numbers. Argand plane and polar representation of complex numbers. Statement of the Fundamental Theorem of Algebra, solution of quadratic equations (with real coefficients) in the complex number system. Square root of a complex number.

3. Linear Inequalities: Linear inequalities. Algebraic solutions of linear inequalities in one variable and their representation on the number line. Graphical solution of linear inequalities in two variables. Graphical solution of a system of linear inequalities in two variables.

4. Permutations and Combinations: Fundamental principle of counting. Factorial n (n!). Permutations and combinations, derivation of formulae for nPr and nCr and their connections, simple applications.

5. Binomial Theorem

6. Sequence and Series: Sequence and series. Arithmetic Progression (A.P.). Arithmetic Mean (A.M.). Geometric Progression (G.P.), general term of a G.P., sum of n terms of a G.P., infinite G.P. and its sum, geometric mean (G.M.), relation between A.M. and G.M.
Formula for the following special sums:

Unit-III: Coordinate Geometry

1. Straight Lines: Brief recall of two-dimensional geometry from earlier classes. Shifting of origin. Slope of a line and angle between two lines. Various forms of equations of a line: parallel to axis, point-slope form, slope-intercept form, two-point form, intercept form and normal form. General equation of a line. Equation of a family of lines passing through the point of intersection of two lines. Distance of a point from a line.

2. Conic Sections: Sections of a cone: circles, ellipse, parabola, hyperbola; a point, a straight line and a pair of intersecting lines as a degenerate case of a conic section. Standard equations and simple properties of parabola, ellipse and hyperbola. Standard equation of a circle.

3. Introduction to Three-dimensional Geometry: Coordinate axes and coordinate planes in three dimensions. Coordinates of a point. Distance between two points and section formula.

Unit-IV: Calculus

1. Limits and Derivatives: Intuitive idea of limit. Limits of polynomials and rational functions, trigonometric, exponential and logarithmic functions. Definition of derivative, relating it to the slope of the tangent to the curve, derivative of sum, difference, product and quotient of functions. Derivatives of polynomial and trigonometric functions.

Unit-V: Mathematical Reasoning

1. Mathematical Reasoning: Mathematically acceptable statements. Connecting words/phrases – consolidating the understanding of "if and only if (necessary and sufficient) condition", "implies", "and/or", "implied by", "and", "or", "there exists" and their use through a variety of examples related to real life and Mathematics. Validating the statements involving the connecting words, difference among contradiction, converse and contrapositive.

Unit-VI: Statistics and Probability

Measures of dispersion: range, mean deviation, variance and standard deviation of ungrouped/grouped data. Analysis of frequency distributions with equal means but different variances. Random experiments: outcomes, sample spaces (set representation). Events: occurrence of events, 'not', 'and' and 'or' events, exhaustive events, mutually exclusive events. Axiomatic (set theoretic) probability, connections with the theories of earlier classes. Probability of an event, probability of 'not', 'and' and 'or' events.

Recommended textbooks: 1. Mathematics Textbook for Class XI, NCERT Publication. 2. Mathematics Exemplar Problems for Class XI, Published by NCERT.
Operations with Scientific Notation Worksheets

Scientific notation is a simple yet brilliant way of writing very large and very small numbers. A number in scientific notation has two parts: a coefficient and a power of ten. For the number to be correctly written in scientific notation, the coefficient must be between one and ten. The exponent holds the place of the zeroes that come after a whole number or before a fraction.

To convert a number to scientific notation, move the decimal point so that the first factor is greater than or equal to 1 but less than 10, count the number of decimal places the point was moved, and write the number as a product with a power of 10. For example, 0.0000004 written in scientific notation is 4.0 × 10⁻⁷. To convert a number in scientific notation back to standard notation, reverse the process: move the decimal n places to the right if n is positive, or n places to the left if n is negative, adding zeros as needed. For example, 2.5 × 10⁻⁴ = 0.00025.

The arithmetic rules are as follows. To add or subtract numbers written in scientific notation, first check that the exponents of the powers of 10 are the same; if not, adjust the decimal numbers and the exponents. Then add or subtract the decimal numbers and write the sum or difference with the common power of 10. To multiply, multiply the coefficients and add the exponents, making a minor adjustment if the coefficient goes above 10. To divide, divide only the coefficients and subtract the exponents; for example, 9.60 × 10⁷ divided by 1.60 × 10⁴ gives 6.00 × 10³.

Sample problems include: expressing a number such as 2.57 × 10ⁿ in standard notation; one angstrom is 1 × 10⁻⁷ millimeter — when written in standard notation, how many zeros will your answer have?; and one light year is approximately 5.87 × 10¹² miles — use scientific notation to express this distance in feet (hint: 5,280 feet = 1 mile).

Many practice resources cover this material: Kuta Software's Infinite Algebra 1 worksheet "Operations With Scientific Notation" (e.g., (1.08 × 10⁻³)(9.3 × 10⁻³) ≈ 1.004 × 10⁻⁵); maze activities, jeopardy-style review games, wheel foldables, and coloring worksheets on operations with scientific notation; grade 6 worksheets on converting between scientific notation and normal form; place-value and expanded-form worksheets; and randomly generated worksheet makers whose answer keys are produced automatically on a second page. One teacher's acronym, "LARS," helps students remember how changing the exponent affects the direction the decimal will move.
Simplifying Rational Expressions Worksheet Answers

Simplifying rational expressions requires good factoring skills. There is a sequence of three steps to follow. First, factor the numerator and denominator; in some cases this requires factoring by grouping. Second, look for factors that are common to the numerator and denominator and cancel them from the top and bottom of the fraction. Third, before simplifying, determine which values make each fraction undefined (the excluded values). Once you are done factoring, check whether the fraction can be reduced further by dividing the numerator by the denominator, then rewrite the expression. A typical exercise is to simplify the rational expression x / (x² − 4x). When adding or subtracting rational expressions, simplify your answers whenever possible, and check each answer by plugging it back into the given equation. A rational equation, for comparison, is a math equation that contains at least one rational expression and has a variable in at least one denominator.

Practice resources include: a puzzle of 24 scaffolded questions that start relatively easy and end with some real challenges, with a free video on how to approach these problems; worksheets in which 19 rational expressions are simplified completely to find the answer to a joke; a simplifying-rational-expressions scavenger hunt; free worksheets and answer keys on multiplying rational expressions; a worksheet and key on rational exponents, with a multiple-choice puzzle covering exponential and radical expressions; printable worksheets for solving linear equations (pre-algebra or algebra 1) as PDF or HTML files; and worksheets on factoring a GCF from an expression and on factoring trinomials. Related materials cover absolute value expressions and equations, including a 16-problem multiple-choice worksheet in which each answer should be checked by plugging it into the given equation. These sixth-grade-and-up algebraic expression worksheets are intended to help children better understand and identify relations between variables and constants.
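For readers who want to verify worksheet answers, the factor-and-cancel procedure can be reproduced with SymPy. This is an illustrative check, not part of the worksheets themselves: `cancel()` factors the numerator and denominator and removes their common factors.

```python
# Quick check of the factor-and-cancel procedure using SymPy.
from sympy import symbols, cancel

x = symbols('x')
# (x^2 - 4)/(x^2 - 4x + 4) = (x-2)(x+2)/(x-2)^2 -> (x+2)/(x-2)
print(cancel((x**2 - 4) / (x**2 - 4*x + 4)))
# x/(x^2 - 4x) = x/(x(x-4)) -> 1/(x-4); note the excluded values x = 0, 4
print(cancel(x / (x**2 - 4*x)))
```

Note that the simplified form agrees with the original only away from the excluded values, which is why the worksheets ask for them before any cancelling is done.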
Equations Homework Help. Homework help services cover balancing chemical equations as well as algebra: a chemical equation must be balanced so that each element has the same number of atoms on both sides, as in the reaction of nitrogen and hydrogen to form ammonia. Differential equations homework help is also common. As the name specifies, differential equations involve equations relating functions and their derivatives; the calculus itself is difficult to understand, so differential equations, as a result, take more time to master. One service uses simple and fun videos of about five minutes' duration. How it works: determine which concepts of algebra or trigonometry the problem requires, then work through the solution step by step. When balancing chemical equations, make sure you fill in the coefficients that best represent your model. When students complete their modeling practice, they test their comprehension by playing a PhET simulation game; they then take a screenshot of their final result and upload it to Google Classroom as homework.
- Mechanics Homework Help
- Math Homework Help
- Homework Help Balancing Chemical Equations
- Differential Equations Homework Help
Quadratic equations in your homework have three coefficients (a, b, c) that determine the solutions. Polynomial equations have roots where y = 0, and x can take multiple values there. The degree of the polynomial is defined by the highest exponent.
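To make the quadratic facts above concrete, here is a minimal sketch using only Python's standard library; the coefficients at the bottom are arbitrary example values, not taken from any particular assignment:

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x**2 + b*x + c = 0 (the x where y = 0)."""
    disc = b**2 - 4*a*c              # the discriminant decides how many real roots exist
    if disc < 0:
        return []                    # no real roots
    r1 = (-b - math.sqrt(disc)) / (2*a)
    r2 = (-b + math.sqrt(disc)) / (2*a)
    return [r1] if disc == 0 else [r1, r2]

print(solve_quadratic(1, -5, 6))     # [2.0, 3.0]
```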
To request math homework help, or logic homework help, or other education-related services, just visit a homework site and chat with an agent: you get advice on how to choose the desired service and find the best specialist, no matter how hard your technical assignment is or how close the deadline. The free math problem solver Mathway answers your algebra homework questions with step-by-step explanations. Visit Mathway on the web, or download it for free from Google Play, iTunes, Amazon, or the Windows Store. Subjects covered: Algebra, Basic Math, Pre-algebra, Trigonometry, Precalculus, Calculus, Statistics, Finite Math, Linear Algebra, Chemistry, and Graphing. Hope this helps! A Factoring and Graphing Quadratic Equations course helps students complete homework on factoring quadratic equations and graphing them.

Algebra homework help: if you are lost in homework, you will not be able to find your way out unless you get appropriate algebra help. Such help is like a boon to students who cannot master the subject on their own and see many problems with it.

Homework help with balancing chemical equations: the reactants are nitrogen and hydrogen, and the product is ammonia. If we look at this kind of equation, we can see that it is not balanced as written, because the reactant side contributes nitrogen atoms in pairs while each product molecule contains a single nitrogen atom and three hydrogen atoms.

Homework help: calculating mass from chemical reaction equations (Chemistry). Homework statement: a) The first step in producing sulfuric acid is burning sulfur to produce sulfur(IV) oxide. What mass of sulfur is required to produce … t of sulfur(IV) oxide? b) A reaction is then carried out between sulfur(IV) oxide and oxygen in the presence of a catalyst to form sulfur(VI) oxide.
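Part (a) of the problem above is a mole-ratio calculation: one mole of sulfur yields one mole of sulfur(IV) oxide, so the sulfur mass is the SO2 mass scaled by the ratio of molar masses. Since the tonnage is elided in the source, the sketch below uses a placeholder value of 1.0 t purely to show the arithmetic:

```python
# S + O2 -> SO2 : one mole of S per mole of SO2
M_S   = 32.06               # molar mass of sulfur, g/mol
M_SO2 = 32.06 + 2 * 16.00   # molar mass of SO2, g/mol

so2_tonnes = 1.0            # placeholder -- the actual tonnage is elided in the problem
sulfur_tonnes = so2_tonnes * M_S / M_SO2
print(f"{sulfur_tonnes:.3f} t of sulfur per {so2_tonnes} t of SO2")  # ~0.500 t
```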
For each a > 0, … and hence E → E as a → 0. We see in particular that if U(t) is a complex-valued sum of the above kind, ℜU(t) and ℑU(t) are also sums of the same form. Here, f(z) has the same meaning as in the preceding two results. Care must therefore be taken when consulting the formulas in other publications not to confound what we call A(…, G) with its reciprocal. This result will not be needed in the present §; it is deeper than the one just found, because L₁(ℝ) is not the dual of any Banach space. In the last expression, the second double integral is zero, as we have just seen. Completeness of sets of imaginary exponentials: this important question was investigated by Paley and Wiener, Levinson, and others; Beurling and Malliavin obtained a complete solution of this problem around 1961. We write H(ℝ) or, as we frequently do, simply H. By definition of H₁ we have the Lemma. The second observation is that G(z) is … in the domain 𝒟. Because the function is entire and of exponential type, it must, however, coincide a.e. For this reason, we should require the class of trigonometric sums U(t) under consideration to only contain terms involving frequencies bounded away from zero, as we did in §D. Here, we now know that the function F(z) is nothing but the f(z) figuring in the preceding theorem. There are hence arbitrarily large numbers R such that n((1 + c)R) − n(R) ≤ cR. Regarding it, we have the important Lemma. This yields a result about the real zeros of such functions which is best formulated in terms of the effective density D introduced in §D. The theorem just proved has an important converse (Theorem). … and, as already remarked, Fₕ(t + iy) → fₕ(t) a.e. U(x) may be continuous and the above Dirichlet integral finite, and yet the boundary value U(x + i0) may exist almost nowhere on ℝ. Use now the preceding theorem! Near ∞, it equals log|z| (sic!). But we chose f with |f(x₀)| close to Q₁(x₀), indeed, as close as we like. Control of Hilbert transforms by weighted norms: in case g(t) is not a.e. … Suppose that we have a rectangle! The reader should carry out this verification. This will follow if we can show that such a φ belongs to H₂, for then the products gφ with g ∈ H₂ will be in H₁. Extremal length and harmonic measure: … than the latter. It was observed by Hadamard that if many of the coefficients aₙ are zero, i.e. … Therefore, V(z) ≤ V(z₀). It is a thread connecting many apparently separate parts of the subject, and so is a natural point at which to begin a serious study of real and complex analysis. The same state of affairs prevails whenever our given class of functions U includes pure oscillations of arbitrary phase with frequencies tending to zero. It is often possible to interpret the latter as a limit in some sense of the … The uniform convergence just established makes U₀(z) harmonic in both the upper and the lower half planes. The second of the above boxed formulas shows that if the whole cylinder carries one unit of electric charge per unit of length (measured along a generator), the electrostatic potential at equilibrium is proportional to log a. That set is, however, closed in Ω's relative topology on account of the continuity of U.
Uses the minimum property of the conductor potential. It is usually true that their extensions F and G to the upper half plane satisfy |G(z)| ≤ …, where f is a given function in H. Our first observation about these is the Lemma. By reversing the order of integration, show that this is O(…) …, where Y(x) denotes the largest value of y for which … ≤ x. If the inequality in the conclusion of the last theorem holds with any function σ, 0 ≤ σ(t) ≤ 1, it certainly does so when σ(t)/2 stands in place of σ(t). To evaluate this quantity we will use harmonic estimation, guided by the knowledge that 𝔐F, if finite, must be harmonic in both the upper and lower half planes (first lemma of §B). But the reverse inequality was already noted above. From part (f) of this problem we have in particular Γ(z + 1) ~ √(2πz) zᶻe⁻ᶻ for |arg z| ≤ … when |z| is large. The presentation is straightforward, so this, the first of two volumes, is self-contained; but more importantly, by following the theme, Professor Koosis has produced a work that can be read as a whole. From the Hilbert transform theory referred to (even a watered-down version of it will do here!) … Taking a larger O(1) term of course ensures this estimate's validity for all real x. A really adequate description of the minimal additional requirements to be imposed on a weight in order that it admit multipliers is not yet available; one has, on the one hand, some fairly straightforward sufficient conditions which are more than necessary, and, on the other, a criterion which is both necessary and sufficient for a very extensive class of weights, but at the same time quite unwieldy. We are now able to establish the promised reduction. Hint: f(z) is bounded on the boundary of 𝒟, since the part of that boundary lying outside some large circle consists of points either on γ₁ or on γ₂. If now aₖ > 0 is small enough … According to the second lemma of the preceding article, the existence of an ω having the properties in question is equivalent to that of a ρ not a.e. … The reader may therefore prefer this approach involving a preliminary reduction to the even case, which bypasses some fussy details of the one followed above, but yields less precise estimates for the exponential types of the multipliers obtained. Denote by Λ the set of zeros λ figuring in the above product with ℜλ > 0. This function F is then continuous, and the material of the preceding article applies to it; the smallest superharmonic majorant, 𝔐F, of F is thus at our disposal. Fix any such p, taken near enough to 1.
Kaluza Ansatz applied to Eddington-inspired Born-Infeld Gravity

We apply Kaluza’s procedure to the Eddington-inspired Born-Infeld action for gravity in five dimensions. The resulting action contains, in addition to the usual four-dimensional actions for gravity and electromagnetism, nonlinear couplings between the electromagnetic field strength and curvature. Considering the spherically symmetric solution as an example, we find the lowest-order corrections to the Reissner-Nordström metric and the electromagnetic field.

Eddington-Born-Infeld gravity arose out of a desire to find a gravitational analog of the determinantal action for electromagnetism proposed by Born and Infeld Born:1934gh, with the hope that such an action would tame the singularities arising in gravity in much the same way as the Born-Infeld action does for electromagnetism. Early approaches in this area Feigenbaum:1997pf; Deser:1998rj; Feigenbaum:1998wy proposed a determinantal Lagrangian of the same form as the Born-Infeld Lagrangian, but with the electromagnetic tensor replaced by the curvature tensor. In particular, models with the general structure suggested in Deser:1998rj have been investigated over the years for their cosmological implications Comelli:2004qr, have been shown to indeed alleviate the initial cosmological singularity that arises in standard General Relativity Comelli:2004qr; Banados:2010ix; Scargill:2012kg, and have been shown to allow the regulation of the Schwarzschild singularity for positive energies Wohlfarth:2003ss. When this theory is taken to be not purely metric, but rather metric-affine Banados:2010ix, it has been suggested to have novel implications for the matter coupling paradigm Delsate:2012ky; Delsate:2013bt. However, as has been demonstrated Pani:2012qd, if the theory is taken to be metric-affine, it still leads to an effective metric theory upon further expansion. As such, we do not believe that the problem of coupling matter to gravity in this theory has been resolved or even adequately addressed. It is to address this issue that we have undertaken the present work.

We will work with Eddington-inspired Born-Infeld theory, with an action similar to that of Banados:2010ix, but in five dimensions. This is then reduced to four dimensions à la Kaluza by compactifying one dimension on a circle. We find corrections to the four-dimensional Eddington-Born-Infeld theory: highly nonlinear terms which can be written in the form of infinite sums.

In Sec. II we provide a brief review of the Eddington-Born-Infeld Lagrangian and its equations of motion. We compare the purely affine Eddington action and the metric-affine action of Born-Infeld type, written in the form of Banados:2010ix. In the metric-affine theory, the equation of motion allows the affinity to be written as a function of the metric, so finally we have an equation for the metric only. It turns out that the equations of motion obtained from the two theories are equivalent, at least in regions of low curvature. In Sec. III, we go over the Kaluza procedure and use it to reduce a five-dimensional Eddington-inspired Born-Infeld theory to four dimensions. In Sec. IV, we derive the four-dimensional equations of motion due to this action. We find deviations from the gravitational equations as well as from the equations of motion for the electromagnetic field, compared to the case when the electromagnetic field action is simply added to the gravitational action.
II The Eddington-Born-Infeld Action

Faced with the problem of quantizing the electromagnetic field, while at the same time ensuring that the theory remain non-singular at short distances, Born and Infeld Born:1934gh introduced an action where a flat spacetime metric, the electromagnetic field strength tensor, and a constant scale appear; the scale ensures that higher-order terms of the field strength get smoothed out in the expansion of the square root of the determinant. This theory, while being a nonlinear generalization of Maxwell’s, has a number of promising features which ensure its viability. In particular, these include the absence of birefringence in wave propagation and duality invariance BialynickiBirula:1984tx; Plebanski:1970zz; Gibbons:1995cv; Deser:1997gq; Deser:1998wv.

Since any attempt to quantize gravity faces an insurmountable problem with divergences, it is tempting to try the Born-Infeld route of ameliorating classical short-distance singularities. A determinantal action for gravity had been proposed earlier by Eddington Eddington:1924. There, the symmetric part of the Ricci tensor is constructed as a function of a torsionless affine connection, but the connection components are treated as independent fields, not as functions of the metric and its derivatives. Since Eq. (2) is purely affine, we will denote the Ricci tensor simply by its affine form. The Ricci tensor is in general non-symmetric, so we have to specify that we take its symmetric part, with the inverse defined accordingly. Varying the connection, we find the equation of motion. Here and in what follows, we have used boldfaced letters to indicate matrices, and the determinant notation to mean the absolute value of the determinant of the matrix A. We will adopt the matrix notation wherever convenient and when no confusion can arise, as in Eq. (4) above, where we have denoted the matrix of the Ricci tensor by R. We will write g when we mean the matrix of the metric, but its determinant will be written in accordance with common practice.

The second term in Eq. (4) vanishes identically, as can be seen by tracing over either pair of indices. Thus we are left with the equation of motion for the connection, Eq. (5). This equation shows that the connection is the one compatible with an auxiliary ‘metric’, and we may define the metric by a rescaling. Thus the action of Eq. (2) has Einstein spaces as its extremal points. The equation of motion is the same as what we get from the more familiar Einstein-Hilbert action with a cosmological constant in vacuum, provided we identify the constants suitably; here and below, we choose units accordingly. Eddington’s theory thus reproduces Einstein’s equation with a cosmological constant, but only in the absence of matter.

One way of including matter is to generalize the action in the manner of Deser and Gibbons Deser:1998rj, where the action contains terms quadratic or higher in the curvature, together with a ‘fudge tensor’ introduced by hand in order to cancel out the quadratic curvature terms that arise out of expanding the determinant, and hence to render the theory ghost free. Matter can then be added to the theory via a contribution from the matter fields, e.g. for the Maxwell field. A different approach was taken by Banados and Ferreira Banados:2010ix, who introduced, based on earlier investigations Vollick:2003qp; Vollick:2005gc; Banados:2008fj, what is now known as the Eddington-inspired Born-Infeld action, containing a dimensionful constant. Here again the Ricci tensor is a function of an independent connection, and thus now there are two equations of motion: one from varying with respect to the metric, and one from varying with respect to the connection.
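Since the displayed equations did not survive in this copy, it may help to recall the commonly quoted forms of the three actions discussed above. Conventions (signs, factors of the coupling) vary between papers, so the following LaTeX is a reference sketch rather than a transcription of the authors' equations:

```latex
% Born-Infeld electrodynamics (flat metric \eta, scale b):
\[
S_{\rm BI} = b^2 \int d^4x \left[ \sqrt{-\det(\eta_{\mu\nu})}
           - \sqrt{-\det\!\left(\eta_{\mu\nu} + \tfrac{1}{b} F_{\mu\nu}\right)} \right]
\]

% Eddington's purely affine action, with R_{(\mu\nu)} built from the connection \Gamma:
\[
S_{\rm Edd} = \frac{2}{\kappa} \int d^4x \, \sqrt{\det R_{(\mu\nu)}(\Gamma)}
\]

% Eddington-inspired Born-Infeld (Banados-Ferreira) action:
\[
S_{\rm EiBI} = \frac{2}{\kappa} \int d^4x \left[ \sqrt{-\det\!\left(g_{\mu\nu}
             + \kappa R_{(\mu\nu)}(\Gamma)\right)} - \lambda \sqrt{-\det g_{\mu\nu}} \right]
\]
```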
Let us consider the case where the matter action is added as a separate term. In the general formalism, the matter action need not be standard or minimal, but can also depend on the independent connection. In the limit where the curvature term dominates, the action of Eq. (9) becomes proportional to Eq. (2), and we get Einstein’s equations with a cosmological constant by a suitable identification of constants; in finding this limit, we simply consider the metric term to be negligible in comparison with the curvature term. In the opposite regime, we can expand the determinant in a power series, and the coupling counts the power of curvature appearing in each term of the expansion. At the lowest order we find the Einstein-Hilbert action Eq. (7), as we should, provided we fix the constants appropriately. Since in this paper we are concerned with corrections to Einstein gravity stemming from Eq. (9), we will keep this choice fixed in what follows.

To understand the difficulties of adding matter to this theory, let us follow Banados:2010ix for the moment and take the matter action in Eq. (10) to be standard. Then the stress-energy tensor is calculated by varying with respect to the metric, Eq. (11). For numerical factors, we will adopt the conventions of Wald, in which the electromagnetic stress-energy tensor and the Maxwell action take the forms recalled below. These will be relevant in the sections to follow. For now, we will continue the general exposition for any matter field, for which only Eq. (11) will be of relevance.

With the assumption of a standard matter action, we obtain the equations of motion from Eq. (10) after varying with respect to the metric and the connection (both being independent of each other at this point), Eqs. (13) and (14). As explained earlier, the boldfaced letters symbolize the corresponding matrices. Since the matter action is independent of the connection, it is possible to solve for the connection in the same way as was done in the Eddington case. We introduce an auxiliary metric and require that it satisfy Eq. (14). This gives a connection, and Eq. (13) takes the form of Eq. (16). The left-hand side of this equation depends on both the metric and the independent connection, since the auxiliary metric introduced here is a function of both, whereas the right-hand side depends only on the metric. This suggests that the connection is not truly independent of the metric; and this is indeed the case, as was shown in Banados:2010ix; Pani:2012qd. First we find the determinant of Eq. (16), giving Eq. (17). Substituting this in Eq. (16), we find the expression of Eq. (18). We can now expand this result by using the standard formulæ for the square root of the determinant and for the inverse of a sum of matrices, where I is the identity matrix. Using these, we acquire the expansion of Eq. (19). However, Eq. (19) can also be used to find the expression for the Ricci tensor as a function of the metric: inverting Eq. (18) and keeping only terms to the relevant order, we find Eq. (21). One can now use these expressions in Eq. (15) and expand to the same order. After a bit of algebra, this produces Eq. (22), where the semicolon in the subscript denotes a covariant derivative calculated using the Christoffel symbols of the metric; the Ricci tensor calculated using these is given by Eq. (23). Equating the right-hand sides of Eq. (19) and Eq. (23), we find Eq. (24). We see that this expression contains at least third derivatives of the matter fields.
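The Wald conventions referred to above are standard; in Gaussian units the Maxwell action and the electromagnetic stress-energy tensor read as follows. This is quoted from general knowledge of Wald's textbook rather than from the lost display equations, so treat the normalization as an assumption:

```latex
\[
S_{\rm Maxwell} = -\frac{1}{16\pi} \int d^4x \, \sqrt{-g}\, F_{ab} F^{ab},
\qquad
T_{ab} = \frac{1}{4\pi} \left( F_{ac} F_b{}^{c}
       - \tfrac{1}{4}\, g_{ab}\, F_{cd} F^{cd} \right).
\]
```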
A consequence of this is that there could exist singularities in the curvature invariants if the matter distribution is sufficiently discontinuous, and that there are surface singularities in the case of polytropic stars Pani:2012qd. This has brought the viability of this theory into question. We note here that there is another way of writing the equation of motion, which follows from the fact that Eq. (23) holds to leading order, with all corrections at higher order. Thus we can rewrite Eq. (23) as Eq. (25). Putting this back into Eq. (19), we can write the equation of motion as Eq. (26). Any solution of Eq. (24) is a solution of Eq. (26) and vice versa. Although both these equations have been derived from Eq. (19) by neglecting higher-order terms, it is obvious that we can follow the procedure to get two equations at higher order as well. The expansion in the first equation corresponds to a series in the stress-energy tensor and its derivatives, whereas the expansion in the second equation is one in curvature. Since Eq. (26) is written fully in terms of the Levi-Civita connection, we can use the relation between this connection and the independent one to rewrite it as Eq. (27).

We have seen above that if we start from the metric-affine theory, the equations of motion naturally lead to a purely metric expression for the usual Ricci tensor. It is thus natural to investigate what the equations of motion might be if we started with a purely metric version of the theory. The action is the same as in Eq. (10), but with the connection taken to be the Levi-Civita one from the outset. As before, we vary this action with respect to the inverse metric to find Eq. (29), where we have defined a new auxiliary matrix to distinguish it from the earlier case. In going from the first line to the second line of Eq. (29), we made use of the Palatini identity and exploited the Leibniz rule for covariant derivatives to eliminate total derivatives. We will now make use of the resulting equation of motion to find the required expansion. Substituting into the expression above, Eq. (32) clearly shows that such terms are absent at the lowest order; Eq. (31) will thus also yield Einstein’s equation with a cosmological constant in the corresponding case. To make things more explicit, we proceed as before and compute the determinant of Eq. (31), shown in Eq. (33). The expansion of Eq. (33) to the relevant order reveals Eq. (34). This is the same equation that we found in the metric-affine theory when we wrote the equation of motion in terms of quantities derived from the metric. Thus the metric theory and the metric-affine theory are equivalent.

This observation brings us to the main motivation for this work. We ask if there is a natural way of incorporating the matter part of the action into the theory, other than simply adding it, such that we still reproduce Einstein’s theory in the weak limit. There exist still further proposals for the incorporation of matter in this theory. In Vollick:2003qp; Vollick:2005gc, the tensor inside the determinant was allowed to have an antisymmetric component, leading to the action for a massive vector field. In a different approach Delsate:2012ky; Delsate:2013bt, matter was coupled to the “metric” appearing in the field equations Eq. (13) and Eq. (14). Since the vacuum equations are the same as in usual general relativity, this coupling plays out only in significantly matter-dense regions, as in the interior of stars. Here we take a geometric approach, while staying close to the original Born-Infeld idea of ameliorating singularities.
Our approach will be to use Kaluza’s idea Kaluza:1921tu of unifying gravity and electromagnetism in a five-dimensional theory of gravitation, and apply it to the five-dimensional Eddington-Born-Infeld theory. Since this procedure deals only with the five-dimensional metric, we will necessarily deal with the metric version of the Eddington-Born-Infeld action. However, as we have seen in this section, the two approaches agree at least to the order we work at, so we may consider the resulting action a natural way of incorporating electromagnetic fields in the four-dimensional Eddington-inspired Born-Infeld gravitational theory.

III The Kaluza Ansatz

We start by writing the five-dimensional metric in the Kaluza form, recalled below. Here and later, uppercase Latin indices are five-dimensional, while Greek indices are four-dimensional. Five-dimensional objects will be written with hats, and a normalization parameter appears which will be fixed later; the inverse of this matrix follows directly. While the consequences of including the scale of the fifth dimension as an independent scalar field Jor are interesting in their own right, our interest lies in the coupling of electromagnetism to gravity, so we will set the scalar to 1. In Appendix A we have given the expression for the Ricci scalar for a non-trivial scalar. We will construct the Eddington-Born-Infeld action for the five-dimensional metric theory, i.e. we will write Eq. (29) for the above metric ansatz and derive some of its consequences. The Ricci tensor components are calculated in a straightforward manner, giving the Ricci scalar. Writing the radius of compactification of the fifth dimension and the five-dimensional Newton’s constant appropriately, a suitable choice of the normalization parameter leads to the reduction of the five-dimensional Einstein-Hilbert action to the four-dimensional Einstein-Maxwell form. The factor in front of the electromagnetic action agrees with our conventions, shown in Eq. (12). We will use this value of the parameter in the remainder of the paper.

We now write the Eddington-inspired Born-Infeld action in the metric form, i.e. the action of Eq. (28), but in five dimensions. Then we will need to find two determinants. Using the usual decomposition of a block matrix, we find the first one directly; the other block matrix has the components written out in the text. Now we make a formal expansion in powers of the coupling, and obtain the Eddington-Born-Infeld action in the five-dimensional space-time, Eq. (44). We have used Eq. (39) in going from the first to the second line in Eq. (44), analogous to the Einstein-Hilbert treatment above. Remember that the coupling counts the powers of curvature, so keeping terms up to some given order is the same as neglecting higher powers of curvature. We will be interested in determining the lowest-order corrections to the equations of motion, which means that we need only expand to second order. To get the relevant contribution from the first term, we need only consider the lowest term in the sum. The action to this order is given by Eq. (45).

IV Equations of motion

We will first expand Eq. (45) up to first order, before proceeding to its expansion to second order. In expanding the determinant and varying the action, we also expect that linearity will ensure that we get the same equations of motion at the lowest order as Eq. (9), which is nothing but Einstein’s equation with a cosmological constant. If we expand all terms to first order, we recover the equations of motion of the Einstein-Maxwell theory, and dynamics is unaffected at the lowest order, as it should be.
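For orientation, the Kaluza metric ansatz described in words above is usually written as follows, with the scalar already frozen to 1 and a normalization parameter β on the vector field. This is the textbook form of the ansatz and of the Einstein-Hilbert reduction it produces, not a transcription of the authors' equations, and the parameter names are illustrative:

```latex
\[
\hat g_{AB} =
\begin{pmatrix}
 g_{\mu\nu} + \beta^2 A_\mu A_\nu & \beta A_\mu \\
 \beta A_\nu & 1
\end{pmatrix},
\qquad
\int d^5x \, \sqrt{-\hat g}\, \hat R
= 2\pi \ell \int d^4x \, \sqrt{-g}
  \left( R - \frac{\beta^2}{4}\, F_{\mu\nu} F^{\mu\nu} \right),
\]
% with F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu and \ell the
% compactification radius; fixing \beta then matches the Maxwell normalization.
```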
Expanding to second order gives us the action (48), which is extremized with respect to the inverse metric to get the equation of motion (49) at this order. Here the usual Einstein tensor and the usual energy-momentum tensor of electrodynamics appear, together with additional tensors: one collecting all the terms which do not contain the field strength tensor, and one collecting the terms that do. Variation of the action with respect to the gauge potential gives the equation of motion (52). Even at first order, what we get from the Eddington-Born-Infeld theory in five dimensions differs from what we would get by adding the usual Maxwell action to the four-dimensional theory as in Eq. (9). This is true for all the equations derived thence, be it Eq. (19) derived via a Palatini variation, the expansion of that as in Eq. (24), or Eq. (34) derived through the variation of a completely metric theory. The difference lies primarily in the fact that we have couplings between the electromagnetic field strength and the curvature. Note however that if we switch off the electromagnetic field in Eq. (49), the resulting equation agrees with Eq. (34). Taking the trace of the equations brings out the difference rather dramatically. We have seen earlier that if we add electromagnetism as a separate matter action, the trace of Eq. (24) fixes the curvature scalar, since the stress-energy tensor of electromagnetism is traceless. The trace of Eq. (34) gives the same result if we formally expand it. In contrast, the trace of Eq. (49) produces additional terms at this order. Thus the Kaluza-Klein prescription leads to the incorporation of electromagnetic fields in the theory as expected, but also to novel non-trivial couplings which get naturally introduced because of the determinantal form of the action.

V Iterative solutions

The equations of motion for the metric and the vector potential will have even more complicated couplings at higher orders, as they come from a higher-order expansion of the action (44). However, it is possible to find solutions to these equations to any order via an iterative procedure, which we will now describe. Let us rewrite the equations of motion Eq. (49) and Eq. (52) in a compact form. We can split the fields into their zeroth- and first-order parts, where the zeroth-order fields satisfy the zeroth-order equations, and the source terms, defined in terms of the zeroth-order fields, are linear in the corrections. Let us consider the spherically symmetric case; then we have the Reissner-Nordström-de Sitter solution for the lowest-order equations, together with the corresponding gauge field. We can then write the equations at the next order, where the first-order parts of the fields appear, sourced by functions of the zeroth-order fields.
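The Reissner-Nordström-de Sitter seed solution invoked above has the familiar form, written here in standard notation as a reminder; M and Q are integration constants (mass and charge parameters) and Λ is the cosmological constant generated at zeroth order:

```latex
\[
ds^2 = -f(r)\, dt^2 + \frac{dr^2}{f(r)} + r^2\, d\Omega^2,
\qquad
f(r) = 1 - \frac{2M}{r} + \frac{Q^2}{r^2} - \frac{\Lambda}{3}\, r^2,
\qquad
A_t = \frac{Q}{r}.
\]
```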
One-loop Effective Action of the Holographic Antisymmetric Wilson Loop

We systematically study the spectrum of excitations and the one-loop determinant of holographic Wilson loop operators in antisymmetric representations of supersymmetric Yang-Mills theory. Holographically, these operators are described by D5-branes carrying electric flux and wrapping a sphere in the bulk background. We derive the dynamics of both bosonic and fermionic excitations for such D5-branes. A particularly important configuration in this class is the D5-brane with electric flux on its worldvolume, which is dual to the circular Wilson loop in the totally antisymmetric representation. For this Wilson loop, we obtain the spectrum, show explicitly that it is supersymmetric, and calculate the one-loop effective action using heat kernel techniques.

Alberto Faraggi, Wolfgang Mück and Leopoldo A. Pando Zayas. Michigan Center for Theoretical Physics, Randall Laboratory of Physics, The University of Michigan, Ann Arbor, MI 48109-1040, USA; Dipartimento di Scienze Fisiche, Università degli Studi di Napoli “Federico II”, Via Cintia, 80126 Napoli, Italy; INFN, Sezione di Napoli, Via Cintia, 80126 Napoli, Italy.

- 1 Introduction
- 2 Background geometry and classical D5-brane solutions
- 3 Fluctuations
- 4 Spectrum of operators on circular Wilson loops
- 5 One-loop effective action
- 6 Conclusions
- A Conventions
- B Geometry of embedded manifolds
- C Scalar heat kernel
- D Integrals and infinite sums

Wilson loop operators play a central role in gauge theories, both as formal variables and as important order parameters. In the context of the AdS/CFT correspondence, expectation values of Wilson loops were first formulated by Maldacena and Rey-Yee. One of the most exciting developments early on was the realization that the expectation value of the BPS circular Wilson loop can be computed using a Gaussian matrix model [3, 4]. This conjecture was later rigorously proved in [5]. In a beautiful, now classic work by Gross and Drukker, the matrix model was evaluated, and its leading large ’t Hooft coupling limit was successfully compared with the string theory answer. One of the most intriguing windows opened by this problem is the question of quantum corrections in their entire variety. For example, having an exact field theory answer (the Gaussian matrix model) prompted Gross and Drukker to speculate that the exact matrix model result was the key to understanding higher genera on the string theory side. The quantum corrections on the string theory side have been the subject of much investigation, starting with earlier efforts in [6, 7] and continuing in more recent works such as [8, 9]. Despite these concerted efforts, it is fair to say that a crisp picture of matching the BPS Wilson loop at the quantum level on both sides of the correspondence has not yet been achieved. More recently, BPS Wilson loops in more general representations have been successfully tackled at leading order. The introduction of general representations gives a new probing parameter, thus expanding the possibilities initiated in the context of the fundamental representation. In the holographic framework, a half BPS Wilson loop in supersymmetric Yang-Mills (SYM) theory in the fundamental, symmetric or antisymmetric representation is best described by a fundamental string, a D3-brane or a D5-brane with fluxes in their worldvolumes, respectively.
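For context, the Gaussian matrix model result mentioned above gives, at planar level, a well-known closed form; it is quoted here as background, with λ the ’t Hooft coupling and I₁ a modified Bessel function:

```latex
\[
\langle W_{\rm circle} \rangle = \frac{2}{\sqrt{\lambda}}\, I_1\!\left(\sqrt{\lambda}\right)
\;\xrightarrow[\;\lambda \to \infty\;]{}\;
\sqrt{\frac{2}{\pi}}\; \frac{e^{\sqrt{\lambda}}}{\lambda^{3/4}},
\]
% the exponential e^{\sqrt{\lambda}} is the piece matched by the classical
% string action; the prefactor is where one-loop corrections enter.
```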
Drukker and Fiol computed, using a holographic D3-brane description, the expectation value of a multiply wound circular string which, to leading order, coincides with the symmetric representation. A more rigorous analysis of the role of the representation was elucidated in [11, 12]. Some progress on the questions of quantum corrections to these configurations immediately followed, with a strong emphasis on the field theory side [13, 14, 15]. Developing the gravity side of this correspondence is one of the main motivations for this work. In particular, we derive the spectrum of quantum fluctuations in the bosonic and fermionic sectors for a D5-brane with units of electric flux in its world volume. This gravity configuration is the dual of the half BPS Wilson loop in the totally antisymmetric representation in SYM. Although our main motivation comes from the study of Wilson loops, there is another strong motivation for our study of quantum fluctuations. String theory has heavily relied on the understanding of extended objects in the context of the gauge/gravity correspondence. They have played a key role in interpreting and identifying various hadronic configurations (quarks, baryons, mesons, k-strings). A more general approach to the quantization of these objects is a natural necessity. The long history of failed attempts at quantizing extended objects around flat space might have found its right context. Although largely motivated by holography, it is important by itself that the quantum theory of extended objects in asymptotically AdS world volumes seems to be much better behaved than naively expected. In our simplified setup we are faced with various divergences, but many of them allow for some quite natural interpretations. Although we do not attack the problem of divergences in a general context, we hope that our analysis could serve as a first step in this more fundamental direction of quantization of extended objects. In this paper, we systematically study small fluctuations of D5-branes embedded in asymptotically AdS backgrounds, with flux in their world volumes and wrapping a sphere [16, 17, 18, 19]. The formalism we develop readily applies to more general backgrounds than just the holographic Wilson loop, including holographic Wilson loop correlators [20, 21] and related finite-temperature configurations [22, 23]. Using this general formalism, we obtain the spectrum of both bosonic and fermionic excitations of D5-branes dual to the half BPS circular Wilson loop. Our analysis is explicit by nature and falls nicely into the group-theoretic framework put forward earlier. We also compute the one-loop effective action using heat kernel techniques. The paper is organized as follows. In section 2, we introduce the class of D5-brane configurations to which our analysis applies. For completeness, the bulk background geometries and the main features of the D5-brane background configurations are reviewed in sections 2.1 and 2.2, respectively. Section 3 contains the general analysis of the bosonic and fermionic excitations of these D5-branes. The second-order actions for the bosonic and fermionic degrees of freedom are constructed in sections 3.1 and 3.2, respectively, and their classical field equations are analyzed in sections 3.3 and 3.4. Sections 4 and 5 deal with the holographic Wilson loop. The spectrum of fluctuations is obtained in section 4. Section 5 presents the calculation of the one-loop effective action using the heat kernel method. We conclude in section 6.
Technical material pertaining to our notation, to the geometry of embeddings, and to aspects of the heat kernel method is relegated to a series of appendices. Note: while our work was in progress, a paper appeared in which the spectrum of the bosonic fluctuations was derived.

2 Background geometry and classical D5-brane solutions

We begin by briefly reviewing the bulk geometry and the classical D5-brane configurations we are interested in. Although we will eventually focus on the Wilson loop background, we emphasize that the methods developed in this paper are more general and apply to other solutions of type-IIB supergravity, including the near-horizon limit of black D3-branes. Throughout the paper we will work in Lorentzian signature and switch to Euclidean signature only to discuss functional determinants in Sec. 5. We refer the reader to Appendix A for notation and conventions.

2.1 Bulk background

We are interested in probe D5-branes embedded in a solution of type-IIB supergravity in which all other background fields vanish. The metric function satisfies an equation fixing the 5-form flux, whose expression involves the volume measure of a 5-sphere, as it appears in the metric (2.1). In terms of the ’t Hooft coupling, (2.1) describes D3-branes, generically at finite temperature. The black hole horizon “radius” is related to the inverse temperature in the standard way. The zero-temperature solution is recovered by setting the horizon radius to zero; in this case, a change of coordinates brings the metric to the standard Poincaré form with the boundary at the origin of the radial coordinate. Anticipating the embedding of the D5-branes, the metric of the 5-sphere has been written in terms of a 4-sphere at some azimuth angle.

2.2 Background D5-branes

In the background (2.1), the bosonic part of the D5-brane action is given by (2.6) (see the reference form below), where the D-brane tension, the induced metric on the world volume, and the field strength living on the brane appear. We consider D5-brane configurations such that four coordinates wrap the 4-sphere at a constant angle and the remaining two coordinates span an effective string world sheet, with its own induced metric, in the AdS part of the bulk. By symmetry, only certain components of the field strength are non-vanishing (with a slight abuse of notation, the same symbol denotes also the antisymmetric component), and we can fix a gauge such that only one component is non-trivial. It follows that the action (2.6) can be reduced accordingly; the prefactor involves the volume of the unit 4-sphere. Although the fundamental string charge dissolved on the D5-brane is quantized, we can treat it as a continuous variable in the large-N limit. The prime denotes a derivative with respect to the worldsheet coordinate. Putting everything together, one finds that the action of the background D5-brane can be reduced to that of an effective string living in the AdS portion of the 10-dimensional geometry.

In this section, we consider the fluctuations of the bosonic and fermionic degrees of freedom of the D5-brane solutions described above. We construct the quadratic actions and derive the classical field equations. As a first result, the spectrum of fluctuations of the circular Wilson loop operators in the antisymmetric representations, predicted earlier, is fully derived. Our formalism readily applies to more general backgrounds, including holographic Wilson loop correlators [20, 21] and related finite-temperature configurations [22, 23].

3.1 Bosonic fluctuations

Let us start by defining the dynamical variables that parameterize the physical fluctuations. We will make use of well-known geometric relations for embedded manifolds, which are reviewed in appendix B.
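The bosonic D5-brane action referred to in (2.6) is the standard Dirac-Born-Infeld plus Wess-Zumino combination. Since the display equations were lost in this copy, the following is a reference form from the general D-brane literature rather than the authors' exact expression; g here is the induced metric, C₄ the Ramond-Ramond potential sourcing the five-form, and the worldvolume flux is taken to absorb the usual factors of 2πα′:

```latex
\[
S_{\rm D5} = -T_5 \int d^6\xi \, \sqrt{-\det\left(g_{ab} + F_{ab}\right)}
           \;+\; T_5 \int F \wedge C_4,
\qquad
T_5 = \frac{1}{(2\pi)^5\, \alpha'^3\, g_s}.
\]
```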
The fields present in (2.6) are the target-space coordinates of the D5-brane and the gauge field components living on the brane. Both are functions of the D5-brane world volume coordinates. We now recall a few facts from differential geometry that, although known to the reader, we bring to bear explicitly in our calculations. We shall parameterize the fluctuations around the background coordinates by the generating vector of an exponential map, thereby obtaining a formulation that is manifestly invariant under bulk diffeomorphisms. Recall that, as familiar from General Relativity, differences of coordinates are not covariant objects, but vector components are. Here and henceforth, all quantities except the fluctuation variables are evaluated on the background. Locally, the vector components coincide with the Riemann normal coordinates centered at the origin of the exponential map. Riemann normal coordinates are also helpful for performing the calculations, because of a number of simplifying relations that hold at the origin; for example, the expansion of a covariant tensor can be written down explicitly up to second order in the fluctuations. In the equations that follow, we will implicitly assume the use of a Riemann normal coordinate system. Moreover, we shall drop terms of higher than second order in the fluctuations. The tangent vectors along the world volume (see appendix B), which serve to calculate the pull-back of bulk tensor fields, are given accordingly. Reparametrization invariance allows us to gauge away the fluctuations that are tangent to the world volume. This leaves us with the fields that parameterize the fluctuations orthogonal to the world volume, or normal fluctuations, with an index running over all normal directions. The expression above is the natural geometric object related to fluctuations; it has appeared in previous works, and more explicitly in later ones. We found it appropriate to provide an explicit account of the origin of this parametrization of the fluctuations. Using the relations summarized in appendix B, this gives rise to an expansion involving the second fundamental form of the background world volume and the covariant derivative including the connections in the normal bundle. The fluctuations of the gauge field are introduced additively, and we recall that the background gauge field only lives on the 2-d part of the world volume, as shown in (2.7). Following these preliminaries, we now consider fluctuations of the degrees of freedom of the D5-brane. The goal is to expand the action (2.6) to second order in the fluctuation fields. For the Born-Infeld term, we make use of the standard determinant-expansion formula, and we introduce an effective 6-d metric which is independent of the fluctuations. Henceforth, the normal index in (3.11) refers only to the three normal directions within the AdS part of the bulk, as we have indicated explicitly for the normal direction within the sphere. Note that (3.12) is not the induced metric on the worldvolume. Rather, the background flux deforms the metric, and the fluctuations see the deformed geometry, as appropriate for open string fluctuations. In order to expand the Chern-Simons term in (2.6), we make use of (3.3), (3.4), (3.7) and the background relations. One soon finds that, to quadratic order in the fluctuations, only the components of the four-form that live on the sphere contribute. After some calculation one obtains (3.13). Replacing (3.11) and (3.13) in (2.6), the linear terms in the normal fluctuations are found to cancel, as expected for an expansion around a classical solution.
The linear term in the gauge field fluctuation is a total derivative and is cancelled by a boundary term similar to (2.12). Thus, one ends up with the quadratic terms in the action given in (3.14). The dynamical fields present in (3.14) are a scalar, a triplet of scalars transforming under the symmetry of the normal bundle, and the gauge fields.

3.2 Fermionic fluctuations

We now consider fluctuations of the fermionic degrees of freedom of the D5-brane. This is somewhat easier than the bosonic part, because one just needs the fermionic part of the action, in which all bosonic fields assume their background values. Our starting point is the fermionic part of the D5-brane action with kappa-symmetry, which was derived earlier. (We thank L. Martucci for pointing out to us that, in order to correctly interpret the symbol in the gauge-fixed action (30) of that reference, one should start from their equation (17). We have also omitted a term which vanishes in the background (2.1).) Here, as before, the fermion is a doublet of 32-component left-handed Majorana-Weyl spinors, with Pauli matrices acting on the doublet notation. Moreover, in the background (2.1), the derivative operator takes a specific form, which it is useful to rewrite as in (3.19). Finally, the inverse of the relevant matrix is found explicitly when acting on a left-handed spinor. Putting everything together, and using also that the extrinsic curvature terms in (3.17) and (3.18) simplify because the 2-d part of the background is a minimal surface, we find after some calculation that the action (3.15) becomes (3.24), where the abbreviated derivative denotes the covariant spinor derivative including the connections in the normal bundle, the gamma matrices are normalized for a 4-sphere, and the 6-d metric is that of (3.12), which we used also for the bosons. To simplify (3.24) slightly, we introduce a rotated double spinor; its conjugate is easily found, and it is just the combination that appears in (3.24). Henceforth, we shall work with the rotated spinor and drop the prime for brevity. Now we fix the kappa-symmetry. The covariant gauge-fixing condition reduces to a simple projection, because the spinor is left-handed. The terms in the action that survive this projection are given in (3.26), where the field is now a single, 32-component spinor. Next, let us write (3.26) in terms of 6-d spinors. For this purpose, we choose a chiral representation of the 10-d gamma matrices built from Euclidean 4-d gamma matrices, Lorentzian 2-d gamma matrices, and a set of Pauli matrices. Notice the peculiar representation of one factor, which will turn out to be handy for reconstructing 6-d gamma matrices. It follows from (3.27) that the 10-d chirality matrix is simple. Then, after reconstructing the 6-d gamma matrices for the D5-brane world volume and using the left-handedness of the spinor, the action (3.26) becomes (3.31), where the field now represents a doublet of 8-component, 6-d Dirac spinors that stems from the spinor components surviving the chiral projection. The 6-d gamma matrices act on the spinors, and the Pauli matrices act on the doublet. Performing a chiral rotation, one can obtain other, equivalent ways of writing the action (3.31), in which the “mass” term changes its appearance. In particular, for one choice of rotation angle, one obtains (3.32). In contrast to (3.31), in which the mass term commutes with the 4-d part of the kinetic term and anti-commutes with the 2-d part, in (3.32) it commutes with the 2-d part of the kinetic term and anti-commutes with the 4-d part. In Sec.
5, (3.31) and (3.32) will give rise to two different ways of calculating the heat kernel, with slightly different results. To conclude the 6-d reduction of the fermionic action, we consider the 10-d Majorana condition. In the decomposition (3.27), the intertwiner is given by (3.33). (It is irrelevant which sign of the intertwiner we use, because the two choices differ by a constant factor. Notice that the choice of intertwiners is unique for the odd-dimensional parts, but we have indicated the signs for clarity.) Moreover, from (3.27) it is evident how the intertwiner acts on the doublet. Therefore, the first two factors on the right-hand side of (3.33) just form the 6-d intertwiner. Thus, if we write the 6-d spinor doublet in terms of a doublet of constant (and normalized) 3-d symplectic Majorana spinors, then the 10-d Majorana condition gives rise to the symplectic Majorana condition on the two 6-d spinors. A similar analysis can be done for the chirally rotated spinor. This completes the 6-d formulation of the fermionic action.

3.3 Classical field equations - bosons

We shall work in the Lorentz gauge (3.38), where the covariant derivative is taken with respect to the metric (3.12) and, if acting on fields with normal indices, contains also the appropriate connections for the normal bundle. Condition (3.38) leaves a residual gauge symmetry. Taking this into account, the field equations that follow from (3.14) are (3.39) to (3.41). Note that the 4-d part of the operator is the Laplacian on a 4-sphere, while the covariant derivative on the 2-d part of the world volume includes the connections for the normal bundle in the case of (3.39). In (3.41), the curvature scalar of the 2-d part of the world volume appears. So far, the two sets of gauge field components are not entirely decoupled from each other, because of the gauge condition (3.38). However, we can use the residual gauge freedom to decouple them on-shell. To see this, contract (3.41) with the derivative, which yields (3.43). Thus, for any solution of (3.43), one can find a residual gauge transformation making the fields transverse, as in (3.45). This still leaves us with the residual gauge transformations satisfying (3.46). To continue, we decompose the fields into scalar and transverse vector harmonics. (Notice the index shift for the vector harmonics, which is used to have all sums start from the same value; the sums over other quantum numbers are implicit.) Here the scalar and transverse vector eigenfunctions of the Laplacian on the sphere appear, respectively. The corresponding eigenvalues and their degeneracies are given by
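The eigenvalue data the text ends on are the standard spherical-harmonic results; for a 4-sphere of radius a they read as follows. These are quoted from the general Sⁿ formulas, so treat them as the expected values rather than the authors' displayed ones:

```latex
% Scalar harmonics on S^4:
\[
-\nabla^2_{S^4}\, Y_\ell = \frac{\ell(\ell+3)}{a^2}\, Y_\ell,
\qquad
d^{(0)}_\ell = \frac{(\ell+1)(\ell+2)(2\ell+3)}{6}, \qquad \ell \ge 0,
\]
% Transverse vector harmonics on S^4:
\[
-\nabla^2_{S^4}\, Y^{(T)}_{\ell\,i} = \frac{\ell(\ell+3)-1}{a^2}\, Y^{(T)}_{\ell\,i},
\qquad
d^{(1)}_\ell = \frac{\ell(\ell+3)(2\ell+3)}{2}, \qquad \ell \ge 1.
\]
```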
While snow, which is a crystalline form of ice, is lighter in weight than solid ice, each cubic foot of snow can still add roughly 17 pounds per square foot to the roof load, up to about one and a half tons in total, and roofs in areas prone to heavy snowfall require special engineering to support that load. For estimating purposes, most contractors consider the yield to be 3,000 pounds per cubic yard, or 1.5 tons per cubic yard.

To find the weight of water, convert the volume to liters and then multiply by the density. For example, volume = 10 mL ÷ 1000 = 0.01 L. One cubic foot of sand weighs 3527.396 / 35.314 = 99.887 pounds. Cooler water is denser than warmer water, so if the temperature of the water is less than 70 °F, a cubic foot of it weighs slightly more. Use density calculators to calculate the weight, volume, or density of various materials.

Material weight, in pounds per cubic yard: asphalt 2,700 lb; water (fresh) 1,700 lb. One cubic yard of dry sandy soil weighs about 2,600 pounds, while 1 cubic yard of dry clay soil weighs in around 1,700 pounds. A square meter of gravel with a depth of 5 cm weighs about 84 kg, or 0.084 tonnes. One cubic yard of water at 68 degrees Fahrenheit weighs 1,681.297 pounds. You can calculate the equivalent water volume for snow and learn the steps to figure the snow water equivalent (SWE) manually, without a calculator. A 12-cubic-foot object made of concrete would weigh about 1,778 pounds.

A cubic yard is 1 yard (3 ft) wide, long, and high. A yard of dirt actually refers to the cubic yard measurement of volume. Damp sand is usually moist to the touch; sand can look dry, but if you pick up a handful and water drips from it, it is wet packed. No sand is created equal, and the more moisture content in the material, the more it will weigh.

How much does concrete weigh? A typical concrete mix weighs 150 lbs per cubic foot, 4,050 lbs per cubic yard, or 2,400 kg per cubic meter. The weight of one cubic foot of water is 7.48052 gallons times 8.3453 pounds per gallon, which equals 62.427 pounds. One cubic foot of topsoil weighs around 40 pounds. The density of water changes ever so slightly depending on its temperature. Use a pipe volume calculator to calculate the volume and weight of the water in your plumbing system. Although the weight change is minimal, if 10 to 12 gallons of water are used up and/or evaporate from curing concrete, that is about 100 pounds per yard of concrete. You can also use weight conversion calculators to convert from grams and kilograms to pounds and ounces. A cubic yard of pea gravel weighs between 2,500 and 2,600 pounds (1,133 to 1,179 kg). All posted weights were gathered from the EPA & NTEA. Find how much water weighs given a volume in teaspoons, tablespoons, cups, quarts, pints, gallons, liters, or milliliters, as in the sketch below.
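The cubic-yard figures quoted above follow directly from the gallon and pound conversions in the text. Here is a small sketch of the arithmetic; 8.3249 lb/gal is the text's value for 68 °F water and 8.3453 lb/gal its value for 39.2 °F water:

```python
CUBIC_FEET_PER_CUBIC_YARD = 27        # 3 ft * 3 ft * 3 ft
GALLONS_PER_CUBIC_FOOT = 7.48052

def cubic_yard_weight_lb(lb_per_gallon):
    """Weight of one cubic yard of water, given a density in lb/gallon."""
    return CUBIC_FEET_PER_CUBIC_YARD * GALLONS_PER_CUBIC_FOOT * lb_per_gallon

print(round(cubic_yard_weight_lb(8.3249), 1))  # ~1681.4 lb at 68 F
print(round(cubic_yard_weight_lb(8.3453), 1))  # ~1685.5 lb at 39.2 F
```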
A cubic yard of mulch, which visually is 3 feet long by 3 feet wide by 3 feet tall, weighs between 400 and 800 pounds; the more moisture content in the mulch, the more it will weigh. The weight of ice and snow on a roof has the potential to cause winter disasters. A cubic yard is 3′ x 3′ x 3′, and 3 x 3 x 3 = 27 cubic feet per cubic yard (CFPCY); this suggests that you ought to divide a cubic-foot figure by 27 to get cubic yards, and multiply cubic feet per cubic yard by pounds per cubic foot (PPCF) to get pounds per cubic yard. In general, a pure cubic yard of fill dirt will weigh somewhere between 2,000 and 2,700 pounds depending on moisture content and composition. Most of Harmony Sand & Gravel’s products will weigh approximately 2,840 pounds per cubic yard, or about 1.42 tons per cubic yard. You can calculate how much snow weighs using dimensions, area, or volume measurements. Freight transportation companies charge one of two rates for shipping. To find the weight of a volume of water, start by finding the density; don't take these numbers literally, but just as a general reference, because the temperature affects the water's density. The weight of a volume of water can be found given the density, which is the mass compared to the volume. At 39.2 °F, where water is densest, 1 cubic yard of water weighs 1,684.8 pounds. An exact weight depends on the components the material is made of; the weight of metal, for example, depends on the size, shape, and type of metal.
Metric cross-checks: a yard is 36 inches and an inch is 2.54 cm, so a yard is 91.44 cm and a cubic yard is (91.44)^3 = 764,554.857984 cubic centimeters. A cubic meter of typical gravel weighs 1,680 kilograms, or 1.68 tonnes. Using the round figure of 62.4 pounds per cubic foot, a cubic yard of water weighs 27 x 62.4 = 1,684.8 pounds, and a cubic yard of seawater, which is denser, weighs about 1,728 pounds. Wet concrete weighs about 2,400 kg per cubic meter, and some heavy, wet materials can run as high as roughly 3,600 pounds per cubic yard; the exact weight depends on the type of concrete and the amounts of the components it's made of, so for an exact figure ask the supplier when you purchase the material, or ask a concrete company to sell you that volume. Because the temperature affects the water's density, and the density affects the weight, the amount and kind of fluid in a container is what determines its weight for the same volume. There are 27 cubic feet in 1 cubic yard, so to go from cubic feet to cubic yards, divide by 27; a yard of gravel provides enough material to cover a sizeable area. You can also use weight conversion calculators to convert from grams and kilograms to pounds and ounces, use a metal weight calculator to find the weight of a metal object given its size, shape, and type of metal, get quotes from home improvement professionals, and look up the densities of common metal alloys.
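As a quick cross-check of that metric chain, the cubic-centimeter figure and the resulting water weight can be recomputed directly:

```python
# Cross-check of the metric conversion chain quoted above:
# 1 yd = 36 in, 1 in = 2.54 cm, so 1 yd = 91.44 cm.
cm_per_yard = 36 * 2.54            # 91.44
cc = cm_per_yard ** 3              # 764554.857984 cubic centimeters
liters = cc / 1000                 # 764.554857984 L
kg = liters * 0.9982               # water density ~0.9982 kg/L near 68 F
lb = kg / 0.45359237
print(f"{cc:.6f} cc -> {lb:.1f} lb")  # ~1682.5 lb, near the 1,681.297 figure
```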
7 editions of Modern General Topology found in the catalog. Published 1985 by North-Holland (sole distributors for the U.S.A. and Canada: Elsevier Science Pub. Co.) in Amsterdam and New York. Written in English.

Series: North-Holland Mathematical Library, v. 33
LC Classifications: QA611 .N25 1985
Pagination: x, 522 p.
Number of Pages: 522
LC Control Number: 85004415

John L. Kelley, General Topology, D. Van Nostrand Company Inc. (Acrobat 7 PDF, scanned by artmisa using a Canon DR-C with flatbed option.)

Books shelved as topology: Topology by James R. Munkres; Algebraic Topology by Allen Hatcher; Geometry, Topology and Physics by M. Nakahara; Euler's Gem.

The book is a valuable source of data for mathematicians and researchers interested in modern general topology. Bibliotheca Mathematica: A Series of Monographs on Pure and Applied Mathematics, Volume VII: Modern General Topology focuses on the processes, operations, principles, and approaches employed in pure and applied mathematics.

Modern General Topology (ISSN Book 33), Kindle edition by J.-I. Nagata: download it once and read it on your Kindle device, PC, phones or tablets, using features like bookmarks, note taking and highlighting while reading. (Manufacturer: North Holland.)

The book was prepared in connection with the Prague Topological Symposium. During the last 10 years the focus in general topology changed, and therefore the selection of topics differs slightly from earlier editions.

"The book is both a literate treatment by a world-class topologist of significant portions of Modern General Topology, and a personal judgemental statement as to what does and what does not deserve to be recorded for posterity. On each count it is a valued addition to the literature."

Purchase Modern General Topology, 2nd Edition (print book and e-book). Edited by Jun-iti Nagata, Vol. 33. Chapter V: Paracompact spaces (full text access).

Get this from a library: Modern General Topology [Jun-iti Nagata]. This classic work has been fundamentally revised to take account of recent developments in general topology; the first three chapters remain unchanged except for numerous minor corrections.

Additional Physical Format: online version: Nagata, Jun-iti, Modern General Topology. Amsterdam: North-Holland; New York: American Elsevier. Purchase Modern General Topology, Volume 33, 3rd Edition (print book and e-book).
A List of Recommended Books in Topology, by Allen Hatcher. Here are two books that give an idea of what topology is about, aimed at a general audience, without much in the way of prerequisites. A fine reference book on point-set topology, now out of print.

Read Modern General Topology by J.-I. Nagata, available from Rakuten Kobo: this classic work has been fundamentally revised to take account of recent developments in general topology. (Elsevier Science.)

General Topology and Its Relations to Modern Analysis and Algebra II comprises papers presented at the Second Symposium on General Topology and its Relations to Modern Analysis and Algebra, held in Prague. The book contains expositions and lectures that discuss various subject matters in the field of general topology.

The goal of this part of the book is to teach the language of mathematics, and more specifically one of its most important components: the language of set-theoretic topology, which treats the basic notions related to continuity. The term general topology means: this is the topology that is needed and used by most mathematicians.

General Topology by Stephen Willard. Basic Topology by M.A. Armstrong. Perhaps you can take a look at Allen Hatcher's webpage for more books on introductory topology; he has a file containing some very good books. A note about Munkres: for me, there was very little in the ...

General Topology by Shivaji University. This note covers the following topics: topological spaces, bases and subspaces, special subsets, different ways of defining topologies, continuous functions, compact spaces, first axiom spaces, second axiom spaces, Lindelof spaces, separable spaces, T0 spaces, T1 spaces, T2 spaces, regular spaces and T3 spaces, normal spaces.

Modern General Topology PDF download: download a free ebook of Modern General Topology in PDF format, or read online, by J.-I. Nagata, published by Elsevier. Find many great new and used options and get the best deals for North-Holland Mathematical Library: Modern General Topology 33 by J. Nagata (Hardcover, Revised) at the best online prices at eBay, with free shipping for many products.
Modern topology often involves spaces that are more general than uniform spaces, but the uniform spaces provide a setting general enough to investigate many of the most important ideas in modern topology, including the theories of Stone-Cech compactification, Hewitt realcompactification and Tamano-Morita paracompactification.

So, someone recommended the book General Topology by Kelley. I bought it because of the recommendation and because it happened to be dirt cheap for a new copy on Amazon. When I read it, I had had some exposure to the topology of the real line, so I was at least familiar with things like open sets (though only on the real line and R^n).

Buy the Modern General Topology ebook. This acclaimed book by J.-I. Nagata (North Holland) is available in several formats for your eReader.

Among the best available reference introductions to general topology, this volume is appropriate for advanced undergraduate and beginning graduate students. Its treatment encompasses two broad areas of topology: "continuous topology," represented by sections on convergence, compactness, metrization and complete metric spaces, uniform spaces, and function spaces.

A Working Textbook, by Aisling McCluskey and Brian McMaster (OUP Oxford): this textbook offers an accessible, modern introduction at undergraduate level to an area known variously as general topology, point-set topology or analytic topology, with a particular focus on helping students.

In mathematics, general topology is the branch of topology that deals with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. Another name for general topology is point-set topology.

MODERN GENERAL TOPOLOGY, Jun-iti Nagata, University of Amsterdam, second revised edition, North-Holland (Amsterdam, New York, Oxford). Contents, Chapter I, Introduction: 1. Sets; 2. Cardinal numbers; 3. Ordinal numbers; 4. Zermelo's theorem and Zorn's lemma; 5. Topology of the Euclidean plane.

Overrated and outdated. Truth be told, this is more of an advanced analysis book than a topology book, since that subject began with Poincare's Analysis Situs (which introduced, in a sense, and dealt with the two functors: homology and homotopy). The only point of such a basic, point-set topology textbook is to get you to the point where you can work through an (algebraic) topology text.

This classic book is a systematic exposition of general topology. It is especially intended as background for modern analysis.
Based on lectures given at the University of Chicago, the University of California and Tulane University, this book is intended to be a reference and a text. (Springer-Verlag New York.)

General Topology by John L. Kelley: aimed at graduate math students, this classic work is a systematic exposition of general topology and is intended to be a reference and a text. As a reference, it offers a reasonably complete coverage of the area, resulting in a more extended treatment than normally given in a course.

Introduction to Topology and Modern Analysis by Simmons. Chapters include Hilbert Spaces, Finite-Dimensional Spectral Theory, and Part III: Algebras of Operators, with General Preliminaries on Banach Algebras. This is an ex-library book and may have the usual library/used-book markings; the book has soft covers.

These notes are intended as an introduction to general topology. They should be sufficient for further studies in geometry or algebraic topology. Comments from readers are welcome. Thanks to Michal Jablonowski and Antonio Diaz Ramos for pointing out misprints and errors in earlier versions of these notes.

Schaum's Outline of General Topology by Seymour Lipschutz, available at Book Depository with free delivery worldwide. Schaum's Outlines present all the essential course information in an easy-to-follow, topic-by-topic format.

"This book, the updated journal of a continuing expedition to the never-never land of general topology, should appeal to the latent naturalist in every mathematician." Notation: several of the naming conventions in this book differ from more accepted modern conventions, particularly with respect to the separation axioms. (Lynn Arthur Steen, J. Arthur Seebach, Jr.)

General Topology by John L. Kelley and Sam Sloan (Amazon listing). John L. Kelley (December 6, 1916 - November 26, 1999) was an American mathematician at the University of California, Berkeley.
Common gauge origin of discrete symmetries in observable sector and hidden sector

An extra Abelian gauge symmetry is motivated in many new physics models, in both the supersymmetric and the nonsupersymmetric case. Such a new gauge symmetry may interact with both the observable sector and the hidden sector. We systematically investigate the most general residual discrete symmetries in both sectors arising from a common Abelian gauge symmetry. Those discrete symmetries can ensure the stability of the proton and of the dark matter candidate. A hidden sector dark matter candidate (the lightest U-parity particle, or LUP) interacts with the standard model fields through the extra gauge boson, which may selectively couple to quarks or to leptons only. We comment on the implications of the discrete symmetry and of the leptonically coupling dark matter candidate, which has been highlighted recently due to the possibility of a simultaneous explanation of the DAMA and PAMELA results. We also show how to construct the most general charges for a given discrete symmetry, and discuss the relation between the gauge symmetry and R-parity.

For many beyond-standard-model scenarios, discrete symmetries are invaluable ingredients for making the models phenomenologically viable. For example, in the minimal supersymmetric standard model (MSSM), R-parity is usually assumed for proton stability. R-parity also guarantees the stability of the lightest superparticle (LSP), which can be a good dark matter candidate. It has been argued, however, that discrete symmetries are vulnerable to Planck scale physics unless they have a gauge origin. An extra Abelian gauge symmetry is also predicted in many new physics scenarios such as superstrings, extra dimensions, little Higgs, and grand unification. Therefore, it is useful to understand which discrete symmetries are allowed as residual discrete symmetries of the extra gauge symmetry.

The first systematic study of residual discrete symmetries in a supersymmetry (SUSY) framework was performed by Ibanez and Ross, who found three independent generators. They studied all possible Z_2 and Z_3 discrete symmetries descending from a U(1), and found that matter parity (which is equivalent to R-parity) as well as another Z_3 symmetry can be a residual discrete symmetry of the gauge symmetry, a.k.a. a discrete gauge symmetry. Complementary, more general Z_N discrete symmetries with a U(1) origin were also studied [4, 5]. In the special case where the mu-problem is addressed by a TeV-scale U(1)', the discrete symmetries were investigated in Refs. [7, 8], which allow R-parity-violating models without fast proton decay. Nevertheless, these discrete symmetries concerned only the observable sector (the MSSM sector).

Many theories need exotic chiral fields for various reasons. For example, the SUSY breaking mechanism requires additional fields. Exotic fields are also often necessary to make the model anomaly free when an additional gauge symmetry is added. Even when they do not carry standard model (SM) charges, such hidden sector fields may have charges under the extra gauge symmetry. The SM-neutral hidden sector fields can be natural dark matter candidates if they are stable. It was shown that the same gauge symmetry that provides the discrete symmetry for the MSSM sector can simultaneously be the source of the discrete symmetry for the hidden sector. Another independent generator was introduced for the hidden sector discrete symmetry.
The lightest U-parity particle (LUP) from the hidden sector is stable under this U-parity, and it was shown that the experimental constraints from the relic density and from direct detection can be satisfied in a large parameter space with the LUP dark matter candidate. However, that study was not completely general, since the hidden sector field was assumed to be Majorana, with a bare Majorana mass term, and only the factorizable extension was exploited. In this paper, we first generalize the discussion by including Dirac-type hidden sector fields and possible nonrenormalizable mass terms. Dirac-type fields allow a Z_N discrete symmetry with N > 2, while Majorana-type fields allow only a Z_2. This leads to the possibility of multiple hidden sector dark matter candidates, stable due to the hidden sector discrete symmetry. We also start from the general form of the discrete symmetry, taking the factorizable case as a special limit. We then present a method to construct the most general charges for a given discrete symmetry of the MSSM and hidden sector, with illustrations for specific examples. In Appendix A, we discuss the origin of the popular R-parity and its relation to the solution of the mu-problem, which is one of the motivations to extend the supersymmetric standard model by an extra U(1)'. In Appendix B, we discuss the compatibility of discrete symmetries with a leptonically coupling dark matter candidate.

2. Residual discrete symmetries from the gauge symmetry

In this section, we review the general discrete symmetries in the MSSM sector which are remnants of an Abelian gauge symmetry. Starting with a U(1)' gauge symmetry which is broken spontaneously by a Higgs singlet S, one is generically left with a residual Z_N discrete symmetry. In a normalization where all particles of the theory have integer charges, the value of N is directly determined by the U(1)' charge of the Higgs singlet. The resulting discrete charges of the fields are then given by the mod-N part of their original charges. By definition, the Higgs singlet has vanishing discrete charge, so that giving a vacuum expectation value (vev) to S keeps the discrete symmetry unbroken. Note that in the case N = 1, we formally obtain a Z_1, which corresponds to no remnant discrete symmetry.

The possible (family-independent) discrete symmetries of the MSSM (we include 3 right-handed neutrinos, which does not change our argument) which can emerge from an anomaly-free gauge symmetry have been identified and investigated in Refs. [3, 4, 5]. Demanding invariance of the MSSM superpotential operators, one can express any discrete symmetry among the MSSM particles in terms of two generators, whose charges are defined in Table 1. Different discrete symmetries of the observable sector are then obtained by multiplying various integer powers of these generators. Compared to Refs. [3, 4], the generator which gives a nonzero discrete charge to only one of the two Higgs doublets is omitted, because its presence would forbid the mu term in Eq. (3). As the invariance of the mu term requires opposite discrete charges for H_u and H_d, one can always find an equivalent set of discrete charges, by adding some amount of hypercharge, such that the discrete charges of H_u and H_d vanish simultaneously. Thus requiring the existence of the mu term guarantees the absence of domain walls after electroweak symmetry breaking. A more intuitive way of writing Eq. (6) is obtained by defining a new generator, under which the discrete charges of the MSSM fields are related to the familiar baryon number (B) by a hypercharge shift; here the hypercharge is taken in an integer normalization.
On the other hand, the discrete charges of the MSSM fields under the second generator are nothing but the negative of the lepton number (L). Hence, the general discrete symmetry of Eq. (6), rewritten in these terms as in Eq. (9), can be understood in terms of the well-known baryon number and lepton number; the exponents in Eqs. (6) and (9) are related to each other. Specific values of the exponents define a symmetry of the MSSM particles for which a particular combination of B and L is conserved modulo N, and the lightest particle with a nonzero value of this conserved quantity will be stable due to the discrete symmetry. A list of example discrete symmetries is obtained for given exponent values. Note that the symmetry in the last line of that list corresponds to matter parity, because the conserved quantity is an integer for any invariant term. As long as spin angular momentum is conserved, matter parity is equivalent to R-parity.

The discussion so far has been completely independent of any assumption about the origin of the discrete symmetry. Requiring that the Z_N arise as a remnant of an anomaly-free gauge symmetry, we have to impose the discrete anomaly conditions (the cubic anomaly condition is disregarded), where the sums run over MSSM particles only. Additional exotic fields, which may or may not be singlets under the SM gauge group, do not contribute to these anomaly conditions as long as they acquire a mass term when the U(1)' is broken, i.e., if they are vectorlike under the SM gauge groups while they are not under the U(1)'. The consequence of Eqs. (11)-(13) is that some sets of parameters correspond to symmetries which are discrete anomaly free, while others are anomalous and therefore ruled out (see Refs. [3, 4, 5]). For instance, the symmetries of the first type automatically satisfy Eqs. (11) and (13) for all exponents; Eq. (12), however, yields the nontrivial constraint of Eq. (14), where N_f and N_H denote the number of families of fermions and of Higgs pairs, respectively. For three families, only a few choices remain for the cyclic symmetry: unless the symmetry is trivial (which is not a real symmetry), we are led to two particular allowed symmetries, and similarly only two discrete anomaly free symmetries of the second type exist. This conclusion does not depend on whether there are massive charged exotics or not, since their corresponding mass terms imply a vanishing contribution to the discrete anomaly condition.

3. Hidden sector discrete symmetries

We now wish to extend the concept of discrete symmetries to the hidden sector, i.e., the SM-neutral particles. (The discrete symmetry argument does not change even if the Dirac-type exotics are SM-charged.) To do so, we introduce generators which assign nontrivial discrete charges to, respectively, the Majorana and the Dirac particles of the hidden sector, while the MSSM fields remain uncharged. Note that we label hidden sector fields by indices which can refer to fields with different or identical (i.e., family) charges. These new generators, as well as the previous ones, extended to include the hidden sector particles, are shown in Table 1. Introducing the discrete symmetry of the hidden sector, the generalized discrete symmetry over the observable and the hidden sectors can be written as a product of generators; it is uniquely determined by its integer exponents, entailing the discrete charges of all fields (summation over repeated indices is assumed as usual). Under the assumption that the hidden sector particles acquire their mass only after the gauge symmetry is broken down to the discrete symmetry, invariance of the bilinear mass terms constrains the exponents, which makes the symmetry effectively a Z_2 parity for a Majorana-type field. A small numerical illustration of this modular bookkeeping follows below.
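The checks described in this section amount to modular arithmetic: an operator is allowed when its discrete charges sum to zero mod N. A minimal Python sketch, where the Z_6 assignment, the field labels, and the hidden Majorana field chi are made up for illustration and are not the paper's actual Table 1:

```python
# Modular bookkeeping behind a residual Z_N symmetry: an operator is
# allowed when its discrete charges sum to 0 (mod N). The Z_6 charges
# below (including the hidden Majorana field chi) are illustrative
# only, not the paper's actual assignment.

N = 6
charge = {"Q": 0, "u": 5, "d": 1, "L": 4, "e": 3, "Hu": 1, "Hd": 5, "chi": 3}

def allowed(fields):
    """True if the product of the given fields is Z_N invariant."""
    return sum(charge[f] for f in fields) % N == 0

print(allowed(["Q", "u", "Hu"]))    # up-type Yukawa: 0+5+1 = 6 -> allowed
print(allowed(["Hu", "Hd"]))        # mu term: 1+5 = 6 -> allowed
print(allowed(["L", "Hu"]))         # R-parity violating bilinear -> forbidden
print(allowed(["chi", "chi"]))      # Majorana mass: 3+3 = 6 -> allowed,
                                    # yet chi itself stays charged (stable)
```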
Starting with an anomaly-free discrete symmetry in the observable sector, the extended discrete symmetry can also originate in an anomaly-free gauge symmetry, regardless of the values chosen for the hidden sector exponents. In other words, due to the invariance of the mass terms in Eq. (19), the observable and hidden sector pieces jointly either satisfy or do not satisfy the discrete anomaly conditions of Eqs. (11)-(13).

Now we consider the case where the Z_N can be factorized into two smaller discrete symmetries, Z_N = Z_p x Z_q with N = pq. This decomposition is only possible if p and q have no common prime factor, i.e., they must be coprime to each other. Let us apply this method to separate the discrete symmetries of the observable and the hidden sector. To do so, we have to assume that the observable sector exponents are multiples of q, while the hidden sector exponents are multiples of p. Eq. (17) can then be written as a product, yielding a Z_p symmetry in the observable sector and a Z_q in the hidden sector. Both originate in the underlying gauge symmetry and are conserved separately. The observable sector symmetry can be used to forbid certain processes whose external states comprise only MSSM particles. On the other hand, the hidden sector symmetry can stabilize the lightest charged particle, leading to a dark matter candidate in the hidden sector [10, 9].

Depending on the order of the hidden sector symmetry as well as the charges, there could be even more than one hidden sector particle stable due to the discrete symmetry. Assume that the order factorizes into several factors which are all coprime to each other; evidently, all but perhaps one of them are necessarily odd. The decomposition of the discrete symmetry in the hidden sector then reads as a product of the corresponding cyclic factors. What are the charges of the Majorana and Dirac particles under these individual factors? Due to the invariance of its mass term, the charge of a Majorana particle must be zero under every odd factor; in the case where there is an even factor, the Majorana particle carries the parity-like charge under it. For Dirac particles, the charges under the factors are uniquely fixed by the original Z_N charge, since all factors are coprime to each other. (If there were a second charge assignment for the same Z_N value, the mismatch would have to be a multiple of every factor, which is only possible in the trivial case.)

Consider for example three hidden sector particles whose charges are such that the symmetry breaks up into three coprime factors. One particle is the only one charged under the first factor, and is thus stable; similarly, another is stable because it is the only particle charged under the second factor. Finally, the third factor stabilizes the lighter of the two particles charged under it; if this is one of the already-stable particles, then no further particle is stabilized by the discrete symmetry. In that way, it is possible that different symmetries stabilize the same particle. The important point in this discussion is that a single gauge symmetry can effectively give rise to more than one discrete symmetry. One part of it might be used to forbid unwanted processes involving the MSSM fields only, while other parts lead to stable hidden sector particles, i.e., multiple dark matter candidates. (Of course, we can also have multiple dark matter candidates shared between the MSSM sector and the hidden sector: for example, the LSP dark matter, stable under R-parity, together with a Dirac-type hidden sector dark matter particle, stable under the hidden sector symmetry.) This setup is schematically sketched in Figure 1. The discussion here is basically a generalization of the earlier work, which dealt with only the Majorana case with a specific mass term. An example of a purely hidden sector discrete symmetry in the non-SUSY case can be found in the literature, where an additional U(1) was introduced to explain the neutrino mass and dark matter simultaneously. A toy numerical decomposition in the spirit of this section is sketched below.
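The factorization just described is the Chinese remainder theorem in disguise. A minimal sketch, assuming a Z_60 symmetry and assumed example charges (the paper's actual example values are not reproduced here):

```python
# Splitting a Z_N charge into charges under coprime factors
# (Chinese remainder theorem), as in the decomposition above.
# N = 60 and the particle charges are assumed for illustration only.

from math import gcd

def split_charge(z, factors):
    """Z_N charge z -> tuple of charges under each coprime Z_n factor."""
    assert all(gcd(a, b) == 1 for i, a in enumerate(factors)
               for b in factors[i + 1:]), "factors must be pairwise coprime"
    return tuple(z % n for n in factors)

factors = [4, 3, 5]          # Z_60 ~ Z_4 x Z_3 x Z_5
for name, z in [("chi_1", 45), ("chi_2", 20), ("chi_3", 36)]:
    print(name, split_charge(z, factors))
# chi_1 -> (1, 0, 0): the only particle charged under Z_4, hence stable;
# chi_2 -> (0, 2, 0) and chi_3 -> (0, 0, 1) are stabilized analogously.
```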
Figure 1. The MSSM sector and the hidden sector.

4. General charges

Having discussed the most general symmetries that can arise from a gauge symmetry, we now want to derive the most general charges within our setup. Including the possibility that the superpotential terms of Eqs. (3), (4), and (19) originate from higher-dimensional operators, the underlying theory before the breaking of the gauge symmetry generally includes terms suppressed by powers of a high mass scale M (e.g., the GUT or the Planck scale) at which new physics generates the nonrenormalizable operators, with generation-independent integer exponents and dimensionless coefficients. (In addition to these suppression factors, one could also have powers of additional fields multiplying the effective superpotential terms; for the sake of clarity, we omit this possibility.) These terms yield severe constraints on the allowed charges of the chiral matter fields. From them we obtain the general solution for the charges in terms of continuous real parameters. In writing Eq. (39), we have chosen a specific basis in which the first basis vector is B - L and the second is hypercharge; the remaining parameters are related to the exponents of the discrete symmetry. Furthermore, our basis is suitable for discussing the anomaly condition easily: several of the parameters do not enter the mixed anomaly condition at all. Plugging in the charges of Eq. (39), we find that, in the case where there are no exotic states charged under the strong interaction, the remaining parameter must vanish due to the anomaly condition. Of course, to be free from gauge anomalies, the other anomaly conditions should also be satisfied with a specified particle spectrum; to be as general as possible, we do not consider these full gauge anomaly conditions in this paper (however, see Refs. [14, 15, 16, 17, 7, 18] for some examples).

Note that Eq. (39) is a generalization of the discussion presented in Refs. [7, 8]. This general charge assignment is consistent with the following well-known fact: assuming (i) the presence of the Yukawa couplings, (ii) no SM-charged particles other than quarks and leptons, and (iii) vanishing of the mixed anomalies, the most general generation-independent U(1)' which can be defined on the quarks and leptons is a superposition of B - L and Y, the first and the second basis vectors of Eq. (39) (see also Refs. [19, 20]). Relaxing these conditions would allow different symmetries. Conversely, the parameters can be written in terms of the charges, and in a normalization in which all charges are integer, these parameters are automatically integer as well. Note that the contribution of one of them can be absorbed effectively into the number of Higgs doublet pairs, though it is not guaranteed in general that the result remains integer.

Eq. (39) is useful for obtaining general charges in various limits. For example, the quark-phobic case (vanishing quark charges) fixes several of the parameters, after which the lepton charges are expressed in terms of the remaining ones; the lepto-phobic case (vanishing lepton charges) analogously determines the quark charges. (See Appendix B for further discussion related to the DAMA/PAMELA results.) Depending on the value of the Higgs charge, we can categorize the models. In particular, a suitable choice can solve the mu-problem by generating an effective mu parameter, proportional to the vev of S, from the S H_u H_d coupling. This is one of the most interesting cases for phenomenology, since the new gauge boson and the exotic colored particles, which are necessary to cancel the anomaly, are then at the O(TeV) scale, which can be explored at the LHC.
A TeV-scale gauge boson has implications also in cosmology, such as providing a venue for the right-handed sneutrino LSP dark matter candidate or the LUP dark matter candidate to be a thermal dark matter candidate through the resonance of the new gauge boson [21, 10]; a review of this model can be found in the references. It might appear that this type of U(1)' cannot have matter parity (R-parity) as its residual discrete symmetry, but there are ways to achieve this (see Appendix A).

5. Construction of the charges for a given discrete symmetry

We now discuss how to construct the most general charges which contain a given discrete symmetry as their residual symmetry. The SM-charged exotics are highly model dependent, and they may be obtained by scanning (see, e.g., Refs. [7, 8]); here, we limit ourselves to the MSSM particles and the SM-singlet exotics. The specific discrete symmetries we cover in this paper are listed in Table 2. An overall sign change does not affect the discrete symmetry.

The general charges, before any discrete symmetry is assumed, are given in Eq. (39), with integer normalization achieved through its coefficients; the order N of the residual symmetry is then determined by additionally fixing the parameter of Eq. (43). Since invariance under a hypercharge transformation is implicitly assumed throughout the paper, the hypercharge column of Eq. (39) has no effect on the discrete symmetry. However, in order to obtain integer charges, its coefficient must be chosen in a particular way. As a general procedure, we suggest the following (a toy numerical scan in this spirit is sketched after the list):

1. Take the order N of the desired Z_N.
2. Identify some terms which are allowed by the given discrete symmetry as well as by the SM gauge group.
3. Extract an additional condition on the charges from these allowed terms (MSSM sector only).
4. Using this additional relation, obtain the charges from Eq. (39), the most general charge assignment before imposing any particular discrete symmetry.
5. Require the charges to be integer.

The resulting set of equations is the most general solution that contains the given discrete symmetry, up to an arbitrary hypercharge shift and overall scaling. We illustrate our method on three examples.
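A toy version of this scan, under the simplifying assumption that only the B - L and hypercharge basis vectors of Eq. (39) are kept (the full basis has more parameters), would loop over small integer coefficients and keep those whose mod-N charges reproduce a target pattern:

```python
# Toy charge-construction scan: combine integer multiples of two basis
# charge vectors (3(B-L) and 6Y, in integer normalizations), reduce
# mod N, and keep combinations matching a target Z_3 pattern. This is
# a simplified stand-in for the full basis of Eq. (39).

fields = ["Q", "u", "d", "L", "e"]
B_minus_L = {"Q": 1, "u": -1, "d": -1, "L": -3, "e": 3}   # 3(B-L), integer
Y6 = {"Q": 1, "u": -4, "d": 2, "L": -3, "e": 6}           # 6Y, integer

N = 3
target = {f: (-B_minus_L[f]) % N for f in fields}  # a B-L based Z_3 pattern

hits = []
for a in range(-N, N + 1):
    for b in range(-N, N + 1):
        z = {f: a * B_minus_L[f] + b * Y6[f] for f in fields}
        if all(z[f] % N == target[f] for f in fields):
            hits.append((a, b))
print(hits)  # all (a, b) realizing the target Z_3, up to hypercharge shifts
```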
Physics is the natural science that studies matter and its motion and behaviour through space and time, together with the related entities of energy and force. What is a physics class? Students explore complex scientific concepts and make real-world connections to understand their impact on daily life. Our physics curriculum focuses on making sure students get a clear understanding of motion, energy, electricity, magnetism, and the laws that govern the physical universe. What is included in a physics class? The physical concepts include mechanical energy, thermodynamics, the Carnot cycle, electricity and magnetism, quantum mechanics, and nuclear physics. What is physics, and what are some examples? Physics is the science of energy and matter and how they relate to each other. One example of physics is the study of quantum mechanics; another is electrocution. The word can also refer to physical properties or processes. Is physics an easy class? Students and researchers alike have long understood that physics is challenging. But only now have scientists managed to prove it. It turns out that one of the most common goals in physics (finding an equation that describes how a system changes over time) is defined as "hard" by computer theory. Why is physics so hard, and why is it harder than math? Physics demands problem-solving skills that can be developed only with practice. It also involves theoretical concepts, mathematical calculations, and laboratory experiments, all of which add to the challenge. Is physics or chemistry easier? Physics is considered comparatively harder than chemistry and various other disciplines such as psychology, geology, biology, astronomy, computer science, and biochemistry. It is deemed difficult compared to other fields because of the variety of abstract concepts and the level of math involved. What do you expect to learn in physics? Courses in physics reveal the mathematical beauty of the universe at scales ranging from subatomic to cosmological. Studying physics strengthens quantitative reasoning and problem-solving skills that are valuable in areas beyond physics. Is physics a science or math? Physics is the natural science that studies matter, its fundamental constituents, its motion and behavior through space and time, and the related entities of energy and force. Physics is one of the most fundamental scientific disciplines, with its main goal being to understand how the universe behaves. What are the three main branches of physics? Classical physics, modern physics, and nuclear physics. What is the easiest way to understand physics? One of the easiest ways of learning physics is to master the basic theories. Learning the basic laws will help you in solving complex problems later on in the advanced stages. Even better, create a graphical mind map that includes an overview of all the concepts and relate them to the complex problems. What should I know for physics? To study physics, you should take as much high school and college mathematics as you can reasonably fit into your schedule. In particular, take the entire run of algebra, geometry/trigonometry, and calculus courses available, including Advanced Placement courses if you qualify. What are five examples of physical science? Physical science involves the study of the non-living aspects of natural science, including physics, chemistry, astronomy, geology, meteorology, and oceanography. Why is it important to study physics?
Physics is the cornerstone of the other natural sciences (chemistry, geology, biology, astronomy) and is essential to understanding our modern technological society. At the heart of physics is a combination of experiment, observation, and the analysis of phenomena using mathematical and computational tools. How is physics used in daily life? We use physics in daily activities such as walking, cutting, watching, cooking, and opening and closing things. Physics is one of the most elementary sciences and contributes directly to the development of science and of new technologies. Is physics harder than math? Physics might be more challenging because of the theoretical concepts, the mathematical calculations, the laboratory experiments, and even the need to write lab reports. What is the hardest subject? The hardest degree subjects are Chemistry, Medicine, Architecture, Physics, Biomedical Science, Law, Neuroscience, Fine Arts, Electrical Engineering, Chemical Engineering, Economics, Education, Computer Science and Philosophy. What kind of math is in physics? Calculus. Calculus will help you solve many physics equations. You'll start with single-variable calculus, then progress to multivariable calculus. The latter is extremely relevant to physics because you'll work with directional derivatives and similar concepts in three-dimensional space. Is physics harder than biology? Beginning university students in the sciences usually consider biology to be much easier than physics or chemistry. From their experience in high school, physics has math and formulae that must be understood before they can be applied correctly, while the study of biology relies mainly on memorization. Is physics easy if you know math? If you're already proficient in mathematics, is physics much easier to learn? Probably. More advanced physics is just more advanced applied math. You can take graduate quantum mechanics if you struggle with physics but love linear algebra and group theory. What is the hardest college class? Organic chemistry: it shouldn't surprise you that organic chemistry takes the No. 1 spot as the hardest college course. This course is often referred to as the "pre-med killer" because it has actually caused many pre-med majors to switch their major. Which science is hardest?
- Chemistry. A chemistry degree is famous for being one of the hardest subjects.
- Biomedical Science.
- Molecular Cell Biology.
Is physics harder than calculus? Physics is generally harder than calculus. Calculus is an intermediate level of mathematics usually taught during the first two years of most STEM majors, while physics is an advanced, difficult, and heavily researched field. How difficult is physics? Physics is usually among the toughest classes someone may encounter in their academic studies, since it requires a conceptual understanding of physics, along with both the mechanical application and the conceptual understanding of mathematics. Is physics all math? While physicists rely heavily on math for calculations in their work, they don't work towards a fundamental understanding of abstract mathematical ideas in the way that mathematicians do. Physicists "want answers, and the way they get answers is by doing computations," says mathematician Tony Pantev.
During the third hour after midnight the hands on a clock point in the same direction (so one hand is over the top of the other). At what time, to the nearest second, does this happen? (A computational check of this one appears after this list of problems.) Bernard Bagnall recommends some primary school problems which use numbers from the environment around us, from clocks to house numbers. If the numbers 5, 7 and 4 go into this function machine, what numbers will come out? Investigate the different distances of these car journeys and find out how long they take. Can you design a new shape for the twenty-eight squares and arrange the numbers in a logical way? What patterns do you notice? Find the next number in this pattern: 3, 7, 19, 55 ... These sixteen children are standing in four lines of four, one behind the other. They are each holding a card with a number on it. Can you work out the missing numbers? This article for teachers suggests ideas for activities built around 10 and 2010. Mr. Sunshine tells the children they will have 2 hours of homework. After several calculations, Harry says he hasn't got time to do this homework. Can you see where his reasoning is wrong? A lady has a steel rod and a wooden pole and she knows the length of each. How can she measure out an 8 unit piece of pole? Where can you draw a line on a clock face so that the numbers on both sides have the same total? Write the numbers up to 64 in an interesting way so that the shape they make at the end is interesting, different, more exciting ... than just a square. Look carefully at the numbers. What do you notice? Can you make another square using the numbers 1 to 16 that displays the same pattern? Which times on a digital clock have a line of symmetry? Which look the same upside-down? You might like to try this investigation. Ben's class were cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see? This number has 903 digits. What is the sum of all 903 digits? In this investigation, you are challenged to make mobile phone numbers which are easy to remember. What happens if you make a sequence adding 2 each time? EWWNP means Exploring Wild and Wonderful Number Patterns Created by Yourself! Investigate what happens if we create number patterns using some simple rules. Arrange eight of the numbers between 1 and 9 in the Polo Square below so that each side adds to the same total. Ben has five coins in his pocket. How much money might he have? Ram divided 15 pennies among four small bags. He could then pay any sum of money from 1p to 15p without opening any bag. How many pennies did Ram put in each bag? Annie cut this numbered cake into 3 pieces with 3 cuts so that the numbers on each piece added to the same total. Where were the cuts and what fraction of the whole cake was each piece? Find out what a Deca Tree is and then work out how many leaves there will be after the woodcutter has cut off a trunk, a branch, a twig and a leaf. Add the sum of the squares of four numbers between 10 and 20 to the sum of the squares of three numbers less than 6 to make the square of another, larger, number. Can you put plus signs in so this is true? 1 2 3 4 5 6 7 8 9 = 99. How many ways can you do it? Can you make square numbers by adding two prime numbers together? Ten cards are put into five envelopes so that there are two cards in each envelope. The sum of the numbers inside it is written on each envelope. What numbers could be inside the envelopes? If you had any number of ordinary dice, what are the possible ways of making their totals 6?
What would the product of the dice be in each case? A game for 2 people: use your skills of addition, subtraction, multiplication and division to blast the asteroids. The clockmaker's wife cut up his birthday cake to look like a clock face. Can you work out who received each piece? Zumf makes spectacles for the residents of the planet Zargon, who have either 3 eyes or 4 eyes. How many lenses will Zumf need to make all the different orders for 9 families? Use 4 four times with simple operations so that you get the answer 12. Can you make 15, 16 and 17 too? This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out with a total of 15! Fill in the missing numbers so that adding each pair of corner numbers gives you the number between them (in the box). There are three buckets, each of which holds a maximum of 5 litres. Use the clues to work out how much liquid there is in each bucket. Can you arrange 5 different digits (from 0 - 9) in the cross? On a calculator, make 15 by using only the 2 key and any of the four operations keys. How many ways can you find to do it? There are 44 people coming to a dinner party. There are 15 square tables that seat 4 people. Find a way to seat the 44 people using all 15 tables, with no empty places. 48 is called an abundant number because it is less than the sum of its factors (without itself). Can you find some more abundant numbers? Try adding together the dates of all the days in one week. Now multiply the first date by 7 and add 21. Can you explain what happens? On the table there is a pile of oranges and lemons that weighs exactly one kilogram. Using the information, can you work out how many lemons there are? We can arrange dots in a similar way to the 5 on a dice and they usually sit quite well into a rectangular shape. How many altogether in this 3 by 5? What happens for other sizes? Rocco ran in a 200 m race for his class. Use the information to find out how many runners there were in the race and what Rocco's finishing position was. Amy has a box containing domino pieces but she does not think it is a complete set. She has 24 dominoes in her box and there are 125 spots on them altogether. Which of her domino pieces are missing? Tim had nine cards, each with a different number from 1 to 9 on it. How could he have put them into three piles so that the total in each pile was 15? This challenge focuses on finding the sum and difference of pairs of two-digit numbers. Well now, what would happen if we lost all the nines in our number system? Have a go at writing the numbers out in this way and have a look at the multiplication tables. There is a clock-face where the numbers have become all mixed up. Can you find out where all the numbers have got to from these ten statements? Here are the prices for 1st and 2nd class mail within the UK. You have an unlimited number of each of these stamps. Which stamps would you need to post a parcel weighing 825g? I throw three dice and get 5, 3 and 2. Add the scores on the three dice. What do you get? Now multiply the scores. What do you notice?
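For the first puzzle in the list above (the overlapping clock hands): the minute hand gains 360 degrees on the hour hand every 12/11 hours, so the overlap during the third hour after midnight is the second one after 12:00. A quick check in Python:

```python
# The minute hand laps the hour hand every 12/11 hours, so the n-th
# overlap after midnight is at n * 12/11 hours. The overlap during the
# third hour (between 2:00 and 3:00) is the second one after midnight.
overlap_hours = 2 * 12 / 11
h = int(overlap_hours)
m = int((overlap_hours - h) * 60)
s = round(((overlap_hours - h) * 60 - m) * 60)
print(f"{h}:{m:02d}:{s:02d}")  # 2:10:55 (to the nearest second)
```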
Revista Mexicana de Astronomía y Astrofísica, 45, 139-142 (2009)

SIMPLE MODEL WITH TIME-VARYING FINE-STRUCTURE "CONSTANT"

Marcelo Samuel Berman
Instituto Albert Einstein, Curitiba, PR, Brazil
Received 2009 March 2; accepted 2009 May 11

RESUMEN (translated): As an extension of the work published with L.A. Trevisan, we study the generalization of Dirac's LNH, so that the time variation of the fine-structure constant, due to variations of the electric and magnetic permittivities, is included together with other variations (the gravitational and cosmological "constants", etc.). We consider the present Universe and also an inflationary scenario. Rotation of the Universe can be accommodated in the model.

ABSTRACT: Extending the original version written in collaboration with L.A. Trevisan, we study the generalisation of Dirac's LNH, so that time variation of the fine-structure constant, due to varying electric and magnetic permittivities, is included along with other variations (cosmological and gravitational "constants", etc.). We consider the present Universe, and also an inflationary scenario. Rotation of the Universe is a given possibility in this model.

Key Words: cosmology: theory

1. INTRODUCTION

Considering macrophysics and microphysics, one can build non-dimensional "large" numbers of the order N ~ 10^80, so that we find, for the present Universe:

cH^(-1) / [e^2/(4πε₀ m_e c^2)] ≅ √N,   (1)
e^2/(4πε₀ G m_p m_e) ≅ √N,   (2)
ρ (cH^(-1))^3 / m_p ≅ N,   (3)
(c/h)(m_p m_e/Λ)^(1/2) ≅ √N,   (4)

where m_e, m_p, and N stand, respectively, for the electron's and proton's mass and the total number of nucleons in the Universe. The first three relations were found by Dirac (1938, 1974), and Eddington (1933, 1939) then proposed relation (4). Berman (1992a,b, 1994, 1996, 2007a,b) has shown the consequences for the time variation of G (gravitational constant), Λ (cosmological constant), N and ρ (energy density), with a time-varying Hubble parameter H, constituting the Generalised Large Number Hypothesis, GLNH (Barrow 1990).

At the end of the 20th century (Webb et al. 1999), and at the beginning of this century (Webb et al. 2001), there were reports of a possible variation of the fine-structure constant with the age of the Universe. Berman & Trevisan (2001a,b,c) wrote three papers posted on www.arXiv.org dealing with that subject. We have enlarged one of them by analyzing, from a theoretical point of view, the consequences of a given α-variation, due to a variation of ε₀, the electric permittivity, and consequently of μ₀, the magnetic permeability, in order to find exponential inflationary or presently accelerating models of the Universe. Webb and collaborators provided experimental data on quasars that span 23% to 87% of the age of the Universe, finding a deviation from the average in the fine-structure constant given by Δα/α ≅ -0.72 × 10^(-5). In SI units, the fine-structure constant α is given by

α ≡ e^2/(2ε₀hc),   (5)

where e, ε₀, and h stand respectively for the charge of the electron, the electric permittivity, and Planck's constant.

Because α is defined through other constants, one can ask which constant provokes the α variation. Another interesting remark is that this discovery relates micro and macro phenomena, and can provide the link between quantum and classical theories of gravity.
Bekenstein (1982) proposed a theory with varying e. An alternative theory involves a varying speed of light (Moffat 1993; Albrecht & Magueijo 1999; Barrow 1998; Berman & Trevisan 2001a,b,c; Berman 2007a). We now present the scenario of time-varying ε₀ while c, h and e are strictly constant, resorting to the GLNH. One could claim that a specific gravitational theory is needed in order to make the analysis correct; however, it is not certain at the present time which is the correct theory of gravity to be adopted, so our naive analysis can help in visualizing what is going on. If overdots stand for time derivatives, from equation (5) we find that

α̇/α = -ε̇₀/ε₀.   (6)

2. POWER-LAW VARIATIONS

Let us suppose now, tentatively, that ε₀ varies as a power law of time, say

ε₀ = A tⁿ,   (7)

with A, n constants. Then

α̇/α = -n t⁻¹.   (8)

On the other hand, the experimental value found by Webb et al. (2001) may be interpreted as

Δα/(α Δt) ≅ -0.72 × 10⁻⁵ / [(0.87 - 0.23) t] ≅ -1.1 × 10⁻⁵ t⁻¹.   (9)

From relations (8) and (9) we find

n ~ 10⁻⁵.   (10)

This is the way in which the permittivity has to vary in our framework. From electromagnetism, we know that

c = (ε₀ μ₀)⁻¹ᐟ².   (11)

In order to keep c constant, μ₀ must vary like (ε₀)⁻¹. One can check that the following solution applies, with Hubble's parameter proportional to t⁻¹:

N ∝ t^(2-2n),   (12)
G ∝ t⁻¹,   (13)
ρ ∝ t^(-1-2n),   (14)
Λ ∝ t^(-2+2n).   (15)

We must remember that n is a very small number; when it is null, we recover the results of Berman's papers cited above. With the numerical value of n according to relation (10), we would find:

N ∝ t^1.99998,   (16)
G ∝ t^(-1.0),   (17)
ρ ∝ t^(-1.00002),   (18)
Λ ∝ t^(-1.99998).   (19)

Remark 1. One might ask how the number of nucleons can grow with time if baryon number is conserved in nuclear reactions. The reason is that cosmological phenomena are not ruled by nuclear physics; globally, what matters is the conservation of the total energy of the Universe. As the radius grows with time in an expanding Universe, the potential energy grows, and the number of nucleons must also increase, so that its sum with the (negative) potential energy remains constant. On the other hand, we must remember that the time scale of the cosmological growth of N is billions of years, while nuclear reactions proceed in a comparatively instantaneous mode.

Remark 2. Relation (17) must be compared with Hubble's constant. According to the formula H = [(1 + q) t]⁻¹, the fact that experimental observations point to the result Ġ/G < 10⁻¹² per year only means that the deceleration parameter q of the present Universe should be negative and not much larger than -1. If Hubble's constant is given by H⁻¹ ≅ 14 × 10⁹ years, a value like q ≈ -0.95 would yield the desired result, for instance.

Remark 3. The present model is also unable to cope with the baryon-antibaryon asymmetry of the Universe. We suggest that this topic could eventually be related to the rotation of the Universe, which could explain such asymmetry.

We hope that the next generation of experimentalists will provide checks on the above results. It is doubtful whether, in the near future, the powers in relations (18) and (19) could be experimentally distinguished from the obvious laws ρ ∝ t⁻¹, Λ ∝ t⁻². Nevertheless, it may happen that the variation law for α and ε₀ could be of major importance in astrophysical or nuclear physics. The numerical steps of this section are reproduced in the sketch below.
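A short Python check of Eqs. (9), (10) and (16)-(19), using only the numbers quoted in the text:

```python
# Reproducing Eqs. (9), (10) and (16)-(19): the Webb et al. (2001)
# deviation, spread over 23%-87% of the age of the Universe, fixes the
# power n, which then sets the exponents of N, G, rho and Lambda.
delta_alpha = -0.72e-5
span = 0.87 - 0.23
n = -delta_alpha / span               # ~1.1e-5, Eq. (10)
print(f"n ~ {n:.2e}")
print(f"N      ~ t^{2 - 2*n:.5f}")    # t^1.99998
print(f"G      ~ t^-1.0")
print(f"rho    ~ t^{-1 - 2*n:.5f}")   # t^-1.00002
print(f"Lambda ~ t^{-2 + 2*n:.5f}")   # t^-1.99998
```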
3. EXPONENTIAL INFLATION

We now turn our attention to inflationary scenarios. We recall that in equation (1) the causally related radius was

R_U ~ cH⁻¹,   (20)

where H stands for Hubble's parameter. For exponential inflation, we substitute for R_U, as it stands in formulae (1) and (3), the scale factor

R = R(t) = R₀ e^(Ht) = A e^(γt)   (R₀, H, A, γ constants).   (21)

The following solution can be checked to fulfill the model, according to relations (1), (2), (3) and (4) (Eqs. (22)-(26)):

N ∝ e^(2(H-γ)t),  G ∝ e^(-Ht),  Λ ∝ e^(-2(H-γ)t),  ρ ∝ e^(-(H+2γ)t),

together with the corresponding exponential law for ε₀.

4. ROTATION OF THE UNIVERSE

Remark 4. The purpose of this section is to show that a time-varying fine-structure constant is compatible with rotation of the Universe.

Consider the Newtonian definition of angular momentum,

L = R M v,   (27)

where R and M stand for the scale factor and the mass of the Universe. For Planck's Universe, the obvious dimensional combination of the constants ħ, c, and G is

L_Pl = ħ.   (28)

From relations (27) and (28), we see that Planck's Universe spins with speed v = c. For any other time we then take the spin of the Universe as given by

L = R M c.   (29)

In the first place, we take the known values of the present Universe, R ≈ 10²⁸ cm and M ≈ 10⁵⁵ g, so that

L = 10⁹³ g cm² s⁻¹ = 10¹²⁰ ħ.   (30)

We thus have another large number,

L/ħ ∝ N³ᐟ².   (31)

For instance, for the power law, as in standard cosmology, we would have

L ∝ t^(3(1-n)),   (32)

while for exponential inflation,

L ∝ e^(3(H-γ)t).   (33)

We now may guess a possible angular speed of the Universe on the basis of Dirac's LNH. For Planck's Universe the obvious angular speed would be

ω_Pl = c/R_Pl ≈ 2 × 10⁴³ s⁻¹,   (34)

because Planck's Universe is composed of dimensional combinations of the fundamental constants. In order to get a time-varying function for the angular speed, we recall the Newtonian angular momentum formula,

L = R² M ω.   (35)

We have found, from relation (31), that L ∝ N³ᐟ², but we also see from relation (35) that L ∝ ρR⁵ω, because R = cH⁻¹ ∝ √N and M ∝ ρR³ ∝ N. Then we find

ω = ω₀ t^(-(1+n))   (ω₀ = constant).   (36)

We are led to admit the following relation, because n ≪ 1:

ω ≲ c/R.   (37)

For the present Universe, we find

ω ≲ 3 × 10⁻¹⁸ s⁻¹.   (38)

It can be seen that the present angular speed is too small to be detected by present technology. For the inflationary model, we carry out a similar procedure:

ω ∝ N³ᐟ²/(R⁵ρ) = e^(-(H+γ)t).   (39)

The condition for a decreasing angular speed in the inflationary period is then

γ > -H.   (40)

For the accelerating power-law case, the condition for decreasing angular speed is n > -1. An order-of-magnitude check of these numbers appears below.
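The order-of-magnitude claims in Eqs. (29)-(31) and (38) can be checked directly, using the round cgs values quoted in the text:

```python
# Order-of-magnitude check of Eqs. (29)-(31) and (38), in cgs units.
c    = 3e10       # cm/s
R    = 1e28       # cm, present scale factor
M    = 1e55       # g, mass of the Universe
hbar = 1.05e-27   # erg s

L = R * M * c                            # Eq. (29): spin L = R M c
print(f"L ~ {L:.0e} g cm^2/s")           # ~3e93
print(f"L/hbar ~ {L/hbar:.0e}")          # ~3e120, i.e. ~10^120 hbar
print(f"omega <~ c/R ~ {c/R:.0e} s^-1")  # ~3e-18, Eq. (38)
```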
5. CONCLUSIONS AND COMMENTS

Prior work with a time-varying ε0 should be credited to Gomide (1976), who nevertheless worked with α = constant, in view of Bahcall & Schmidt's paper (1967). We also point out that the origin of c = c(t) theories can be traced to Gomide's paper (1976), and that α̇ ≠ 0 theories with a variable speed of light were also considered several times later by Barrow & Magueijo (see, for instance, 1999).

Our results are compatible with the experimental indication that the Universe is accelerating, as the supernovae results confirmed (Perlmutter et al. 1997, 1998; Garnavich et al. 1998; Schmidt et al. 1998; Riess et al. 1998; etc.). We have shown that the GLNH, the time variation of the fine-structure constant, the accelerating Universe, and the variable lambda or variable permittivity are all mutually consistent. Rotation of the Universe was shown to be a possibility, both for power-law and for exponential variation of the "radius of the Universe". The "non-rotation" condition for inflationary scenarios is γ = −H, and for the present Universe it is n = −1; both conditions are excluded on the experimental side.

One final comment remains necessary: it will be the task of a superunification theory to explain this or any other time variation of the fundamental constants. We employed the GLNH tentatively; we simply hope that some gravitational theory may be found that applies better than our present naive approach.

The author thanks the referee for important suggestions, which were included in the final version. Thanks also go to the author's intellectual mentors, Fernando de Mello Gomide and the late M. M. Som, and to Marcelo Fermann Guimarães, Nelson Suga, Mauro Tonasse, Antonio F. da F. Teixeira, and to Albert, Paula and Geni, for encouragement.

Marcelo Samuel Berman: Instituto Albert Einstein, Av. Candido Hartmann, 575, No. 17, 80730-440, Curitiba, PR, Brazil ([email protected]).

REFERENCES

Albrecht, A., & Magueijo, J. 1999, Phys. Rev. D, 59, 043516
Bahcall, J. N., & Schmidt, M. 1967, Phys. Rev. Lett., 19, 1294
Barrow, J. D. 1990, in Modern Cosmology in Retrospect, ed. B. Bertotti, R. Balbinot, S. Bergia, & A. Messina (Cambridge: Cambridge Univ. Press), 67
Barrow, J. D. 1998, in 3rd RESCEU Symp. on Particle Cosmology, ed. K. Sato, T. Yanagida, & T. Shiromizu (Tokyo: Universal Academic Press), 221
Barrow, J. D., & Magueijo, J. 1999, Class. Quantum Grav., 16, 1435
Bekenstein, J. D. 1982, Phys. Rev. D, 25, 1527
Berman, M. S. 1992a, Int. J. Theor. Phys., 31, 1447
Berman, M. S. 1992b, Int. J. Theor. Phys., 31, 1217
Berman, M. S. 1994, Ap&SS, 215, 135
Berman, M. S. 1996, Int. J. Theor. Phys., 35, 1789
Berman, M. S. 2007a, Introduction to General Relativity and the Cosmological Constant Problem (New York: Nova Science)
Berman, M. S. 2007b, Introduction to General Relativistic and Scalar-Tensor Cosmologies (New York: Nova Science)
Berman, M. S., & Trevisan, L. A. 2001a, arXiv:gr-qc/0112011
Berman, M. S., & Trevisan, L. A. 2001b, arXiv:gr-qc/0111102
Berman, M. S., & Trevisan, L. A. 2001c, arXiv:gr-qc/0111101
Dirac, P. A. M. 1938, Proc. Roy. Soc. London A, 165, 199
Dirac, P. A. M. 1974, Proc. Roy. Soc. London A, 338, 439
Eddington, A. S. 1933, Expanding Universe (Cambridge: Cambridge Univ. Press)
Eddington, A. S. 1939, Science Progress, 34, 225
Garnavich, P. M., et al. 1998, ApJ, 493, L53
Gomide, F. M. 1976, Lett. Nuovo Cimento, 15, 595
Moffat, J. W. 1993, Int. J. Mod. Phys. D, 2, 351
Perlmutter, S., et al. 1997, ApJ, 483, 565
Perlmutter, S., et al. 1998, Nature, 391, 51
Riess, A. G., et al. 1998, AJ, 116, 1009
Schmidt, B. P., et al. 1998, ApJ, 507, 46
Webb, J. K., Flambaum, V. V., Churchill, C. W., Drinkwater, M. J., & Barrow, J. D. 1999, Phys. Rev. Lett., 82, 884
Webb, J. K., et al. 2001, Phys. Rev. Lett., 87, 091301
2nd edition of "Some fundamental operators in harmonic analysis" found in the catalog.

Some fundamental operators in harmonic analysis
Jean G. Dhombres
Series: Technical note - Asian Institute of Technology; no. 35

Project topics listed alongside the catalog entry:

Topics on pseudodifferential operators. This project would expand on further aspects of your choice; additional steps define the final result.

The index of elliptic operators on the circle. This project continues the previous one by studying pseudodifferential operators on a particularly simple manifold, the circle; it can also be combined with the previous one.

Related chapter and topic fragments from the same listing: The Fourier-Laplace Transformation; More General Fourier-Laplace Transforms; Fourier Transforms on R^d; Test Functions; Distributions with Compact Support; Spectral Analysis of Singularities; General Hyperfunctions; Heat Equation; Diagonalization of convolution operators; Legendre transform and Hamiltonian formalism; Symplectic form, Poisson bracket; Link to classical mechanics via path integrals and Wigner transform.

Reader and reviewer comments: "They constitute the most complete and up-to-date account of this subject, by the author who has dominated it and made the most significant contributions in the last decades." "It is a superb book, which must be present in every mathematical library, and an indispensable tool for all - young and old - interested in the theory of partial differential operators." "This will make the learning experience more meaningful for graduate students who are just beginning to forge a path of research. Explorations in Harmonic Analysis is ideal for graduate students in mathematics, physics, and engineering." See also Schlag, Classical and Multilinear Harmonic Analysis. We learn probability mixed in with measure theory. To say that various conjectures of Langlands or others are adequate to explain the relevance is, I think, significantly inadequate.

Arithmetic notes appearing on the same page: Positional notation (also known as "place-value notation") refers to the representation or encoding of numbers using the same symbol for different orders of magnitude; the value of any single digit in a numeral depends on its position. Division is essentially the inverse operation to multiplication; one says that 0 is not contained in the multiplicative group of the numbers. Modern methods for the four fundamental operations (addition, subtraction, multiplication and division) were first devised by Brahmagupta of India. An on-going normalization method is one in which each unit is treated separately and the problem is continuously normalized as the solution develops.

Properties of harmonic functions: some important properties of harmonic functions can be deduced from Laplace's equation; similar properties can be shown for subharmonic functions. Exercises are given primarily to the sections of general interest; there are none to the last two chapters.
Electrostatic inequalities; stability of matter of the 1st and 2nd kind.

Book Description: This book contains an exposition of some of the main developments of the last twenty years in the following areas of harmonic analysis: singular integral and pseudo-differential operators, the theory of Hardy spaces, L^p estimates involving oscillatory integrals and Fourier integral operators, relations of curvature to maximal inequalities, and connections with analysis on the Heisenberg group.

"Foundations of Time-Frequency Analysis provides a clear and thorough exposition of some of the fundamental results in the theory and gives some important perspectives on a rapidly growing field. An important feature of the book is complete, detailed proofs of all claims and extensive motivation of topics."

Arithmetic is an elementary part of number theory, and number theory is considered to be one of the top-level divisions of modern mathematics, along with algebra, geometry, and analysis. The terms "arithmetic" and "higher arithmetic" were used until the beginning of the 20th century as synonyms for number theory, and are sometimes still used to refer to it.

HARMONIC ANALYSIS RELATED TO SCHRÖDINGER OPERATORS. Gestur Ólafsson and Shijun Zheng. Abstract: In this article we give an overview of some recent developments in Littlewood-Paley theory for Schrödinger operators. We extend the Littlewood-Paley theory for the special potentials considered in the authors' previous work.

This set of notes is an activity-oriented companion to the study of linear functional analysis and operator algebras. It is intended as a pedagogical companion for the beginner, an introduction to some of the main ideas in this area of analysis, and a compendium of problems the author thinks are useful.

This textbook offers, in a concise, largely self-contained form, an introduction to the theory of distributions and its applications to partial differential equations, including computing fundamental solutions for the most basic differential operators such as the Laplace operator.
Color object detection using spatial-color joint probability functions. Object detection in unconstrained images is an important image understanding problem with many potential applications. There has been little success in creating a single algorithm that can detect arbitrary objects in unconstrained images; instead, algorithms typically must be customized for each specific object. Consequently, it typically requires a large number of exemplars (for rigid objects) or a large amount of human intuition (for nonrigid objects) to develop a robust algorithm. We present a robust algorithm designed to detect a class of compound color objects given a single model image. A compound color object is defined as having a set of multiple, particular colors arranged spatially in a particular way, including flags, logos, cartoon characters, people in uniforms, etc. Our approach is based on a particular type of spatial-color joint probability function called the color edge co-occurrence histogram. In addition, our algorithm employs perceptual color naming to handle color variation, and prescreening to limit the search scope. Experimental results demonstrated that the proposed algorithm is insensitive to object rotation, scaling, partial occlusion, and folding, outperforming a closely related algorithm based on color co-occurrence histograms by a decisive margin.

Exact joint density-current probability function for the asymmetric exclusion process. We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright American Physical Society.

Evaluation of joint probability density function models for turbulent nonpremixed combustion with complex chemistry. Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulations of homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and the model calculations (a) make use of exactly the same chemical mechanism, (b) do not involve non-unity Lewis number transport of species, and (c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean sub-model in the case studied. Both mixing sub-models were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.

Joint probabilities and quantum cognition. In this paper we discuss the existence of joint probability distributions for quantum-like response computations in the brain.
We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.

This fully automated, PDF-regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.

Excluding joint probabilities from quantum theory. Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after the Born probability for a single observable. Instead, various definitions have been suggested. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies for any quantum state and is consistent with imprecise probabilities. This contrasts with theorems by Bell and Kochen-Specker that exclude joint probabilities for more than two noncommuting observables, in Hilbert spaces with dimension larger than two. If measurement contexts are included in the definition, joint probabilities are not excluded anymore, but they are still constrained by imprecise probabilities.

The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. On the basis of the mapping method developed by Huang et al., the joint probability density function of slope and elevation for a nonlinear gravity wave field is obtained. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

Comment on "Constructing quantum games from nonfactorizable joint probabilities". Commenting on the paper [Phys. ...], we prove that the scheme proposed there does not generalize the games studied in the commented paper. Moreover, it allows the players to obtain nonclassical results even if the factorizable joint probabilities are used.

Joint genome-wide prediction in several populations accounting for randomness of genotypes: a hierarchical Bayes approach. I: Multivariate Gaussian priors for marker effects and derivation of the joint probability mass function of genotypes. It is important to consider heterogeneity of marker effects and allelic frequencies in across-population genome-wide prediction studies. Moreover, all regression models used in genome-wide prediction overlook randomness of genotypes.
In this study, a family of hierarchical Bayesian models to perform across-population genome-wide prediction, modeling genotypes as random variables and allowing population-specific effects for each marker, was developed. The models shared a common structure and differed in the priors used and in the assumption about residual variances (homogeneous or heterogeneous). Randomness of genotypes was accounted for by deriving the joint probability mass function of marker genotypes conditional on allelic frequencies and pedigree information. As a consequence, these models incorporated kinship and genotypic information, which not only permitted accounting for heterogeneity of allelic frequencies, but also the inclusion of individuals with missing genotypes at some or all loci without the need for imputation. This was possible because the non-observed fraction of the design matrix was treated as an unknown model parameter. For each model, a simpler version ignoring population structure, but still accounting for randomness of genotypes, was proposed. Implementation of these models and computation of some criteria for model comparison were illustrated using two simulated datasets. Theoretical and computational issues, along with possible applications, extensions and refinements, were discussed. Some features of the models developed in this study make them promising for genome-wide prediction; the use of information contained in the probability distribution of genotypes is perhaps the most appealing. Further studies to assess the performance of the models proposed here, and also to compare them with conventional models used in genome-wide prediction, are needed. All rights reserved.

Idealized models of the joint probability distribution of wind speeds. The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power-law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structures or more than two speed variables.
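The bivariate Weibull construction just described (a power-law transformation of speeds built from Gaussian, isotropic, mean-zero components) can be illustrated with a small simulation. The sketch below is ours, not code from the study, and all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian, isotropic, mean-zero velocity components at two locations;
# rho controls the dependence between the two sites (assumed value).
rho, n = 0.6, 100_000
cov = [[1, rho], [rho, 1]]
u1, u2 = rng.multivariate_normal([0, 0], cov, n).T  # zonal components
v1, v2 = rng.multivariate_normal([0, 0], cov, n).T  # meridional components

# Wind speeds: Rayleigh-distributed margins (the zero-mean special case
# of the Rice construction described in the abstract).
s1 = np.hypot(u1, v1)
s2 = np.hypot(u2, v2)

# A power-law transformation s -> s**(2/k) turns the Rayleigh margins
# (Weibull with shape 2) into Weibull margins with shape k (k = 1.6 here).
k = 1.6
w1, w2 = s1 ** (2 / k), s2 ** (2 / k)

print("sample correlation of transformed speeds:", np.corrcoef(w1, w2)[0, 1])
```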
The effects of one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of a random field with a joint Gaussian probability density function for the exchange interactions and random fields. The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of one-step replica symmetry breaking. The thermodynamic properties, the three different phase diagrams and the system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low-temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.

Psychophysics of the probability weighting function. A probability weighting function w(p) for an objective probability p in decision under risk plays a pivotal role in Kahneman-Tversky prospect theory. Although recent studies in econophysics and neuroeconomics have widely utilized probability weighting functions, the psychophysical foundations of probability weighting functions have been unknown. The present study utilizes psychophysical theory to derive Prelec's probability weighting function from psychophysical laws of perceived waiting time in probabilistic choices. Also, the relations between the parameters in the probability weighting function and the probability discounting function in behavioral psychology are derived. Future directions in the application of the psychophysical theory of the probability weighting function in econophysics and neuroeconomics are discussed.

Computation of the complex probability function. The complex probability function is important in many areas of physics, and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the n-th degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements of Gauss-Hermite quadrature for the complex probability function.
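As an illustration of the Gauss-Hermite approach mentioned in this abstract (our own sketch, not the document's code): for Im z > 0 the complex probability function can be written w(z) = (i/π) ∫ e^(−t²)/(z − t) dt, so an n-point Gauss-Hermite rule gives w(z) ≈ (i/π) Σ_k w_k/(z − x_k). The scipy routine wofz serves as a reference implementation:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import wofz  # reference implementation of w(z)

def w_gauss_hermite(z, n=40):
    """Approximate the complex probability function
    w(z) = (i/pi) * integral exp(-t**2) / (z - t) dt   (valid for Im z > 0)
    with n-point Gauss-Hermite quadrature. Accuracy improves with n and
    degrades as z approaches the real axis."""
    x, wts = hermgauss(n)  # roots/weights of the n-th Hermite polynomial
    return 1j / np.pi * np.sum(wts / (z - x))

z = 1.0 + 1.0j
print(w_gauss_hermite(z))  # quadrature approximation
print(wofz(z))             # reference value
```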
Joint probability of statistical success of multiple phase III trials. The probability of statistical success (PoSS) of phase III trials based on data from earlier studies is an important factor in the decision-making process. Instead of statistical power, the predictive power of a phase III trial, which takes into account the uncertainty in the estimation of the treatment effect from earlier studies, has been proposed to evaluate the PoSS of a single trial. However, regulatory authorities generally require statistical significance in two or more trials for marketing licensure. We show that the predictive statistics of two future trials are statistically correlated through the use of the common observed data from earlier studies. Thus, the joint predictive power should not be evaluated as a simplistic product of the predictive powers of the individual trials. We develop the relevant formulae for the appropriate evaluation of the joint predictive power and provide numerical examples.

Investigation of estimators of probability density functions.

Probability distribution functions in turbulent convection. Results of an extensive investigation of probability distribution functions (pdfs) for Rayleigh-Benard convection, in the hard turbulence regime, are presented. It is shown that the pdfs exhibit a high degree of internal universality. In certain cases this universality is established within two Kolmogorov scales of a boundary.
'Time value of money' is central to the concept of finance. It recognizes that the value of money is different at different points of time. Since money can be put to productive use, its value differs depending upon when it is received or paid. In simpler terms, a certain amount of money today is more valuable than the same amount tomorrow. This is not because of the uncertainty involved with time, but purely on account of timing. The difference in the value of money today and tomorrow is referred to as the time value of money.

1. Meaning of Time Value of Money

The time value of money is one of the basic theories of financial management. It states that 'the value of money you have now is greater than a reliable promise to receive the same amount of money at a future date'. The time value of money (TVM) is the idea that money available at the present time is worth more than the same amount in the future due to its potential earning capacity. This core principle of finance holds that, provided money can earn interest, any amount of money is worth more the sooner it is received. The time value of money is the greater benefit of receiving money now rather than later; it is founded on time preference. The principle of the time value of money also explains why interest is paid or earned: interest, whether on a bank deposit or a debt, compensates the depositor or lender for the time value of money.

2. Concept of Time Value of Money

Important terms or concepts used in computing the time value of money are: (1) Cash flow (2) Cash inflow (3) Cash outflow (4) Discounted cash flow (5) Even cash flows/annuity cash flows (6) Uneven/mixed streams of cash flows (7) Single cash flows (8) Multiple cash flows (9) Future value (10) Present value (13) Effective interest rate/time preference rate (14) Risks and types of risks (15) Uncertainty, and (16) Doubling period.

The above concepts are briefly explained below:

(1) Cash Flow: Cash flow is either a single sum or a series of receipts or payments occurring over a specified period of time. Cash flows are of two types, namely cash inflow and cash outflow, and a cash flow may be of many varieties: single cash flow, mixed cash flow streams, even cash flows or uneven cash flows.

(2) Cash Inflow: Cash inflows refer to the receipts of cash, for the investment made in the asset/project, which come into the hands of an individual or into the business organisation's account at one or more points of time. A cash inflow may be a single sum or a series of sums (even or uneven/mixed) over a period of time.

(3) Cash Outflow: Cash outflow is just the opposite of cash inflow. It is the original investment made in the project or the asset, which results in the payment(s) made towards the acquisition of the asset or the execution of the project over a period of time.

(4) Discounted Cash Flow - The Mechanics of Time Value: The present value of a future cash flow (inflow or outflow) is the amount of current cash that is of equivalent value to the decision maker today. The process of determining the present value of a future payment (or receipt) or a series of future payments (or receipts) is called discounting. The compound interest rate used for discounting cash flows is called the discount rate.

(5) Even Cash Flows / Annuity Cash Flows: Even cash flows, also known as annuities, are equal/even/fixed streams of cash flows (inflows or outflows) occurring at regular intervals over a specified period of time, starting from the beginning of the year.
Annuities are also defined as 'a series of uniform receipts or payments occurring over a number of years, which results from an initial deposit.' In simple words, constant periodic sums are called annuities. It is essential to discuss some of the aspects related to annuities, which are listed and then described below:

1. Annuitant
2. Status
3. Perpetuity
4. Various types of annuity-
i. Annuity certain
ii. Annuity contingent
iii. Immediate or ordinary annuity
iv. Annuity due
v. Perpetual annuity
vi. Deferred annuity
5. Annuity factor-
(i) Present value annuity factor, and
(ii) Compound value annuity factor.

A brief description of each of the above aspects is as follows:

i. An annuitant is a person or an institution who receives the annuity.

ii. Status refers to the period for which the annuity is payable or receivable.

iii. Perpetuity is an infinite or indefinite period for which the amount exists.

iv. Types of annuity:
a. Annuity certain refers to an annuity which is payable or receivable for a fixed number of years.
b. Annuity contingent refers to the payment/receipt of an annuity until the happening of a certain event/incident.
c. Immediate annuities are those receipts or payments which are made at the end of each period.
d. A series of cash flows (i.e., receipts or payments) starting at the beginning of each period for a specified number of periods is called an annuity due. This implies that the first cash flow has occurred today.
e. Perpetual annuities are annuities whose payments are made forever, i.e., for an indefinite or infinite period.
f. Deferred annuities are those receipts or payments which start after a certain number of years.

v. (a) The present value annuity factor is the sum of the present values of Re. 1 for the given period of time at the given rate of interest; (b) the compound value/future value annuity factor is the sum of the future values of Re. 1 for the given period of time at the given rate of interest. This is the reciprocal of the present value annuity discount factor.

Note - When the interest rate rises, the present value of a lump sum or an annuity declines. The present value factor declines with a higher interest rate, other things remaining the same.

vi. A sinking fund is a fund created out of fixed payments each period (annuities) so as to accumulate to a future sum after a specified period. The compound value of an annuity can be used to calculate the annuity to be deposited into a sinking fund for 'n' periods at 'i' rate of interest to accumulate to a given sum.

(6) Uneven/Mixed Streams of Cash Flows: Uneven cash flows, as the concept itself states, are unequal or mixed streams of cash inflows emanating from the investment made in the asset or the project.

(7) Single Cash Inflows: A single cash inflow is a single sum of cash received from the project during the given period, for which the present value is ascertained by multiplying the cash inflow by the discount factor.

(8) Multiple Cash Inflows: Multiple cash inflows (even or mixed cash inflows) are series of cash flows, which may be annuities or mixed streams of cash inflows, generated from the project over the entire life of the asset.

(9) Future Value/Compound Value [FV/CV]: The future value concept states how much the value of a current cash flow or stream of cash flows will be at the end of specified time periods at a given discount or interest rate.
Future value refers to the worth of the current sum or series of cash flows invested or lent at a specified rate of return or rate of interest at the end of a specified period. In simple terms, future value refers to the value of a cash flow or series of cash flows at some specified future time, at a specified time preference rate for money. The process of determining the future value of present money is called compounding. In other words, compounding is the process of investing money, reinvesting the interest earned, and finding the value at the end of the specified period; in simple words, the calculation of the maturity value of an investment from the amount invested is called compounding. Under the compounding technique, the interest earned on the initial principal becomes part of the principal at the end of each compounding period. Since interest goes on earning interest over the life of the asset, this technique of time value of money is known as 'compounding'. The simple formulas to calculate the compound value over different interest time periods are:

(a) If interest is added at the end of each year (compounded annually):

FV or CV = PV (1 + i)^n

where FV or CV = future value or compound value, PV = present value, and (1 + i)^n = the compound value factor of Re. 1 at the given interest rate i for n years.

(b) If interest is added/computed semi-annually, or over other compounding periods (multi-compounding), the annual rate i is spread over the m compounding periods per year:

FV or CV = PV (1 + i/m)^(m x n)

For example: (i) when compounding is made semi-annually, m = 2 (because there are two half-years in one year); (ii) when compounding is made quarterly, m = 4 (because there are 4 quarter-years in one year); (iii) when compounding is made monthly, m = 12 (because there are 12 months in one year).

(11) Present Value: The present value is just the opposite of the future value. Present value refers to the present worth of a future sum of money or stream of cash flows at a specified interest rate or rate of return; it is also called the discounted value. In simple terms it refers to the current value of a future cash flow or series of cash flows. The inverse of the compounding process is the discounting technique: the process of determining the present value of future cash flows is called discounting. Discounting, or the present value technique, is more popular than the compounding technique, since every individual or organisation prefers to hold present sums rather than to receive some amount of money after some time, because of the time preference for money.

(13) Effective Interest Rate / Time Preference Rate: The time preference rate is used to translate the different amounts received at different time periods into amounts equivalent in value to the firm/individual in the present, at a common reference point. This time preference rate is normally expressed in 'percent' to find out the value of money at present or in future.

In business, the finance manager is supposed to take a number of decisions under different situations. In all such decisions, there is an element of risk and uncertainty. Risk is the 'variability of returns' or the 'chance of financial losses' associated with the given asset. Assets that have higher chances of loss, or a higher rate of variability in returns, are viewed as 'risky assets', and vice versa. Hence care should be taken to recognize and measure the extent of risk associated with an asset before taking the decision to invest in such risky assets.
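A short Python sketch (our own illustration, with assumed figures) of the compounding formulas just described:

```python
def future_value(pv, i, n, m=1):
    """Compound value of a present sum pv at annual rate i for n years,
    with m compounding periods per year (m=1 gives FV = PV * (1 + i)**n)."""
    return pv * (1 + i / m) ** (m * n)

pv, i, n = 10_000, 0.10, 5
for m, label in [(1, "annual"), (2, "semi-annual"), (4, "quarterly"), (12, "monthly")]:
    print(f"{label:12s}: Rs. {future_value(pv, i, n, m):,.2f}")
# annual compounding gives Rs. 16,105.10; more frequent compounding gives slightly more
```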
3. Importance of Time Value of Money

The consideration of time is important in financial decision making, and its adjustment is equally important and inevitable. Most financial decisions, such as the procurement of funds, purchase of assets, maintenance of liquidity and distribution of profits, affect the firm's cash flows (the movement of cash in and out of the organization) in different time periods. Cash flows occurring in different time periods are not directly comparable, so they must be properly measured: the cash flows have to be adjusted for their differences in timing and risk, and their value calculated at a common point in time. To maximize the owner's equity, it is extremely vital to consider the timing and risk of cash flows. The choice of the risk-adjusted discount rate (interest rate) is important for calculating the present value of cash flows. For instance, if the time preference rate is 10 percent, it implies that an investor can accept receiving Rs.1000 now if he is offered Rs.1100 after one year: Rs.1100 is the future value of Rs.1000 today at a 10% interest rate. Thus, the individual is indifferent between Rs.1000 today and Rs.1100 a year from now, as he/she considers these two amounts equivalent in value. You can also say that Rs.1000 today is the present value of Rs.1100 after a year at a 10% interest rate.

Time value adjustment is important for both short-term and long-term decisions. If the amounts involved are very large, time value adjustment even for a short period will have significant implications. However, other things being the same, the adjustment of time is relatively more important for financial decisions with long-range implications than for those with short-range implications. The present value of sums far in the future will be less than the present value of sums in the near future.

The concept of time value of money is of immense use in all financial decisions. The time value concept is used:
1. To compare investment alternatives and to judge the feasibility of proposals.
2. In choosing the best investment proposals, i.e., accepting or rejecting a proposal for investment.
3. In determining interest rates, thereby solving problems involving loans, mortgages, leases, savings and annuities.
4. To find the feasible time period to get back the original investment, or to earn the expected rate of return.
5. In wage and price fixation.

4. Reasons for Time Preference of Money / Reasons for Time Value of Money

There are three primary reasons for the time value of money: reinvestment opportunities, uncertainty and risk, and preference for current consumption. These reasons are explained below:

1. Reinvestment Opportunities: The main fundamental reason for the time value of money is reinvestment opportunities. Funds which are received early can be reinvested in order to earn a return on them. The basic premise here is that money received today can be deposited in a bank account so as to earn some income. In India the savings bank rate is about 4%, while the fixed deposit rate is about 7% for a one-year deposit in public sector banks. Therefore, even if a person does not have any other profitable investment opportunity, he can simply put his money in a savings bank account and earn interest income on it. Let us assume that Mr. X receives Rs.100000 in cash today. He can deposit this Rs.100000 in a fixed deposit account and earn 7% interest p.a.
Therefore, at the end of one year his Rs.100000 grows to Rs.107000 without any effort on the part of Mr. X. If he deposits Rs.100000 in a two-year fixed deposit providing an interest rate of 7% p.a., then at the end of the second year his money will grow to Rs.114490 (i.e., Rs.107000 + 7% of Rs.107000). Here we assume that interest is compounded annually, i.e., we do not have a simple interest rate but a compounded interest rate of 7%. Thus the time value of money is the compensation for time.

2. Uncertainty and Risk: Another reason for the time value of money is that funds received early resolve the uncertainty and risk surrounding future cash flows. All of us know that the future is uncertain and unpredictable. At best we can make informed guesses about the future, with some probabilities assigned to the expected outcomes. Therefore, given a choice between Rs.100 to be received today and Rs.100 to be received one year later, every rational person will opt for Rs.100 today, because the future is uncertain. It is better to get money as early as possible rather than keep waiting for it. The underlying principle is "a bird in hand is better than two in the bush."

It must be noted that there is a difference between risk and uncertainty. In a risky situation we can assign probabilities to the expected outcomes. Probability is the chance of occurrence of an event or outcome. For example, I may get Rs.100 with 90% probability in the future; therefore there is a 10% probability of not getting it at all. In a risky situation, outcomes are predictable with probabilities. In an uncertain situation it is not possible to assign probabilities to the expected outcomes; the outcomes are simply not predictable.

3. Preference for Current Consumption: The third fundamental reason for the time value of money is the preference for current consumption. Everybody prefers to spend money today on necessities or luxuries rather than in the future, unless he is sure that in the future he will get more money to spend. Let us take an example. Your father gives you two options: to get a Wagon R today on your 20th birthday, or to get a Wagon R on your 21st birthday, one year later. Which one would you choose? Obviously you would prefer the Wagon R today rather than one year later; every rational person has a preference for current consumption. Those who save for the future do so to get more money, and hence higher consumption, in the future. In the above example of a car, if your father says that he can give you a bigger car, say a Honda City, on your 21st birthday, then you may opt for this option if you think it is better to wait and get a bigger car next year rather than settling for a small car this year.

Thus we can say that an amount of money received early (today) carries more value than the same amount received later (in the future). This is the time value of money.

5. Valuation Concepts

There are the following two valuation concepts:
1) Compound value concept (future value or compounding)
2) Present value concept (discounting)

1) Compound Value Concept: The compound value concept is used to find out the future value (FV) of present money. The future value concept means that a given quantity of money today is worth more than the same quantity received at some point of time in the future. It is the same as the concept of compound interest, wherein the interest earned in a preceding year is reinvested at the prevailing rate of interest for the remaining period.
Thus, the accumulated amount (principal + interest) at the end of a period becomes the principal amount for calculating the interest for the next period. The compounding technique to find out the FV of present money can be explained with reference to:
i) the FV of a single present cash flow,
ii) the FV of a series of equal cash flows, and
iii) the FV of multiple flows.

i) FV of a Single Present Cash Flow: The future value of a single cash flow is defined, in terms of an equation, as follows:

FV = PV (1 + i)^n

Mr. A makes a deposit of Rs. 10,000 in a bank which pays 10% interest compounded annually for 5 years. You are required to find out the amount to be received by him after 5 years.

ii) Future Value of a Series of Equal Cash Flows, or Annuity of Cash Flows: Quite often a decision may result in the occurrence of cash flows of the same amount every year for a number of consecutive years, instead of a single cash flow. For example, a deposit of Rs. 1,000 each is to be made at the end of each of the next 3 years from today. This may be referred to as an annuity of deposits of Rs. 1,000 for 3 years. An annuity is thus a finite series of equal cash flows made at regular intervals. In general terms, the future value of an annuity of A per period for n periods at rate i is given as:

FVA = A x [((1 + i)^n - 1) / i]

It is evident from the above that the future value of an annuity depends upon three variables: A, i and n. The future value will vary if any of these three variables changes. For computation purposes, tables or calculators can be made use of.

Mr. A is required to pay five equal annual payments of Rs. 10,000 each into his deposit account, which pays 10% interest per year. Find out the future value of the annuity at the end of five years.

iii) Future Value of Multiple Flows: Suppose the investment is Rs. 1,000 now (beginning of year 1), Rs. 2,000 at the beginning of year 2, and Rs. 3,000 at the beginning of year 3; how much will these flows accumulate to at the end of year 3 at a rate of interest of 12 percent per annum? To determine the accumulated sum at the end of year 3, add the future compounded values of Rs. 1,000, Rs. 2,000 and Rs. 3,000, respectively.

2) Present Value Concept: Present values allow us to place all the figures on a current footing so that comparisons may be made in terms of today's rupees. The present value concept is the reverse of the compounding technique and is known as the discounting technique. Just as there are FVs of sums invested now, calculated as per the compounding technique, there are also present values of cash flows scheduled to occur in the future. The present value is calculated by the discounting technique, applying the following equation:

PV = FV / (1 + i)^n

The discounting technique to find out the PV can be explained as follows.

i) Present Value of a Future Sum: The present value of a future sum will be worth less than the future sum, because one forgoes the opportunity to invest, and thus forgoes the opportunity to earn interest, during that period. In order to find out the PV of future money, this opportunity cost of the money is to be deducted from the future money. The present value of a single cash flow can be computed with the help of the following formula:

PV = FV x [1 / (1 + i)^n]

Find out the present value of Rs. 3,000 received 10 years hence, if the discount rate is 10%.

Mr. A makes a deposit of Rs. 5,000 in a bank which pays 10% interest compounded annually. You are required to find out the amount to be received after 5 years.
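The worked examples above can be verified with a short Python sketch of our own (the five-payment annuity is valued assuming end-of-year payments, consistent with an ordinary annuity):

```python
def fv(pv, i, n):      # future value of a single sum: FV = PV * (1 + i)**n
    return pv * (1 + i) ** n

def pv(fv_amt, i, n):  # present value of a single sum: PV = FV / (1 + i)**n
    return fv_amt / (1 + i) ** n

# Mr. A: Rs. 10,000 deposited at 10% compounded annually for 5 years
print(f"FV of 10,000 @10% for 5 yrs : Rs. {fv(10_000, 0.10, 5):,.2f}")  # 16,105.10

# Annuity: five end-of-year payments of Rs. 10,000 at 10%, valued at year 5
i, A = 0.10, 10_000
fva = A * ((1 + i) ** 5 - 1) / i
print(f"FV of 5-payment annuity     : Rs. {fva:,.2f}")                  # 61,051.00

# Multiple flows: 1,000 / 2,000 / 3,000 at the beginning of years 1-3, @12%,
# accumulated to the end of year 3
flows = [1_000, 2_000, 3_000]
total = sum(fv(c, 0.12, 3 - t) for t, c in enumerate(flows))
print(f"FV of multiple flows @12%   : Rs. {total:,.2f}")                # 7,273.73

# PV of Rs. 3,000 receivable after 10 years at 10%
print(f"PV of 3,000 after 10 yrs    : Rs. {pv(3_000, 0.10, 10):,.2f}")  # 1,156.63
```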
ii) PV of a Series of Equal Future Cash Flows (Annuity): A decision taken today may result in a series of future cash flows of the same amount over a number of years. For example, a service agency offers the following options for a 3-year contract:
a) pay only Rs. 2,500 now, with no more payments during the next 3 years; or
b) pay Rs. 900 each at the end of the first, second and third year from now.

A client with a rate of interest of 10% p.a. can choose an option on the basis of the present values of both options, as follows. The payment of Rs. 2,500 now is already in terms of present value and therefore does not require any adjustment. Under option B the customer pays an annuity of Rs. 900 for 3 years; in order to find out the PV of this series of payments, the PVs of the different amounts accruing at different times are calculated and then added. For the above example, the total PV is Rs. 2,238. In this case, the client should select option B, as he is paying a lower amount of Rs. 2,238 in real terms as against Rs. 2,500 payable in option A.

The present value of an annuity of A per period for n periods at rate i may be expressed as follows:

PVA = A x [(1 - (1 + i)^-n) / i]

Find out the present value of a 5-year annuity of Rs. 50,000 discounted at 8%.

6. Techniques Used to Understand the Concept of Time Value of Money

Basically, two techniques are used to find the time value of money:
1. Compounding technique, or future value technique.
2. Discounting technique, or present value technique.

1. Compounding Technique: The compounding technique is just the reverse of the discounting technique: a present sum of money is converted into a future sum of money by multiplying the present value by the compound value factor for the required rate of interest and period. Hence the future value or compound value is the product of the present value of a given sum of money and this factor. The simple formulas used to calculate the compound value of a single sum are:

(a) If interest is compounded annually:

FV = PV (1 + i)^n = PV x CVF(n,i)

Note - (1 + i)^n is the future value or compound value factor, and CVF(n,i) is the compound value factor for the given number of years n at the required rate of interest i.

(b) If interest is added semi-annually, or over other compounding periods:

FV = PV (1 + i/m)^(m x n)

where m is the number of compounding periods per year.

2. Discounting Technique or Present Value Technique: The discounting technique, or present value technique, is the process of converting future cash flows into present cash flows by using an interest rate/time preference rate/discount rate. The simple formula used to calculate the present value of a single sum is:

P = A / (1 + i)^n = A x PVF(n,i)

where P = present value, PVF = present value factor of Re. 1, DF = discount factor of Re. 1, A = future value or compound value, i = interest rate, n = number of years or time period (for 1 to n years), and (1 + i)^n = the compound value factor. From the above formula it is very clear that the present value of future cash flows is the product of the 'future sum of money and the discount factor', or the quotient of the 'future sum of money and the compound value factor (1 + i)^n'.

Note - The present value can be computed for all types of cash flows: single sums/multiple sums, even/annuity sums and mixed/uneven sums. Alternatively, the PVF/DF and CVF of a rupee, and also the present value annuity discount factor and the compound value annuity factor (CVAF), at the given rate of interest for the expected period, can be obtained from tables.
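The annuity comparison above can be checked with a few lines of Python (our own sketch):

```python
def pv_annuity(A, i, n):
    """Present value of an ordinary annuity of A per period for n periods:
    PVA = A * (1 - (1 + i)**-n) / i"""
    return A * (1 - (1 + i) ** -n) / i

# Service agency contract: Rs. 2,500 now vs Rs. 900 at the end of each of 3 years, @10%
print(f"PV of option B: Rs. {pv_annuity(900, 0.10, 3):,.2f}")     # ~2,238 < 2,500

# 5-year annuity of Rs. 50,000 discounted at 8%
print(f"PV of annuity : Rs. {pv_annuity(50_000, 0.08, 5):,.2f}")  # ~199,635.50
```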
7. Present Value Technique or Discounting Technique

Discounting is the process of computing the present value of a cash flow (or a series of cash flows) that is to be received in the future. Since money in hand has the capacity to earn interest, a rupee is worth more today than it would be worth tomorrow. Discounting is one of the core principles of finance and is the primary factor used in pricing a stream of future receipts. As a method, discounting is used to determine how much these future receipts are worth today. It is just the opposite of compounding, where compound interest rates are used in determining the future value corresponding to a present value. For example, Rs. 1,000 compounded at an annual interest rate of 10% becomes Rs. 1,771.56 in six years. Conversely, the present value of Rs. 1,771.56 realized after six years of investment is Rs. 1,000 when discounted at an annual rate of 10%. This present value is computed by multiplying the future value by a discount factor, which is the reciprocal of the corresponding compound value factor.

Present value calculations determine what the value of a cash flow received in the future would be worth today (that is, at time zero). The process of finding a present value is called discounting; the discounted value of a rupee to be received in the future gets smaller the further into the future it lies. The interest rate used to discount cash flows is generally called the discount rate. How much would Rs. 100 received five years from now be worth today if the current interest rate is 10%? Let us draw a timeline: the arrow represents the flow of money, and the numbers under the timeline represent the time periods. It may be noted that time period zero is today, corresponding to which the value is called the present value.

I. Ascertaining the Present Value (PV): The discounting technique that facilitates the ascertainment of the present value of a future cash flow may be applied in the following specific situations.

(a) Present Value of a Single Future Cash Flow: The present value of a single future cash flow may be ascertained by inverting the usual compound interest formula for the future value of a single amount compounded annually:

PV = FV / (1 + i)^n

Let us understand the computation of present value with the help of an example: Mr. Aman shall receive Rs. 25,000 after 4 years. What is the present value of this future receipt, if the rate of interest is 12% p.a.?

(b) Present Value of a Series of Equal Cash Flows (Annuity): An annuity is a series of equal cash flows that occur at regular intervals for a finite period of time. These are essentially a series of constant cash flows that are received at a specified frequency over the course of a fixed time period. The most common payment frequencies are yearly, semi-annually, quarterly and monthly. There are two types of annuities: ordinary annuities and annuities due. Ordinary annuities are payments (or receipts) that are made at the end of each period; issuers of coupon bonds, for example, usually pay interest at the end of every six months until the maturity date. Annuities due are payments (or receipts) that are made at the beginning of each period; payments of rent, lease, etc. are examples of annuities due. Since the present and future value calculations for ordinary annuities and annuities due are slightly different, we will first discuss the present value calculation for ordinary annuities. The formula for calculating the present value of a single future cash flow may be extended to compute the present value of a series of equal cash flows as given below:

PVA = A x [(1 - (1 + i)^-n) / i]
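The single-sum discounting examples above can be verified with this short sketch of our own:

```python
def present_value(future_sum, i, n):
    # PV = FV / (1 + i)**n -- discounting, the reverse of compounding
    return future_sum / (1 + i) ** n

print(f"Rs. 1,771.56 after 6 yrs @10% : Rs. {present_value(1771.56, 0.10, 6):,.2f}")  # ~1,000
print(f"Rs. 100 after 5 yrs @10%      : Rs. {present_value(100, 0.10, 5):,.2f}")      # ~62.09
print(f"Mr. Aman, 25,000 in 4 yrs @12%: Rs. {present_value(25_000, 0.12, 4):,.2f}")   # ~15,888
```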
An LED TV can be purchased by paying Rs. 50,000 now, or Rs. 20,000 each at the end of the first, second and third year respectively. To pay cash now, the buyer would have to withdraw the money from an investment earning interest at 10% p.a. compounded annually. Which option is better, and by how much, in present value terms? Let paying Rs. 50,000 now be option I, and payment in three equal installments of Rs. 20,000 each be option II; the present value of the cash outflows of option II is computed by discounting each installment at 10% and adding the results.

(c) Present Value of a Series of Unequal Cash Flows: The formula for computing the present value of an annuity is based on the assumption that the cash flows in each time period are equal. However, quite often cash flows are unequal, because the profits of a firm, for instance, which culminate in cash flows, are not constant year after year. The formula for calculating the present value of a single future cash flow may be extended to compute the present value of a series of unequal cash flows as given below:

PV = C1/(1 + i)^1 + C2/(1 + i)^2 + ... + Cn/(1 + i)^n

Ms. Ameeta shall receive Rs. 30,000, Rs. 20,000, Rs. 12,000 and Rs. 6,000 at the end of the first, second, third and fourth year from an investment proposal. Calculate the present value of her future cash flows from this proposal, given that the rate of interest is 12% p.a. If Ms. Ameeta lends Rs. 55,086 @ 12% p.a., the borrower may settle the loan by paying Rs. 30,000, Rs. 20,000, Rs. 12,000 and Rs. 6,000 at the end of the first, second, third and fourth year.

(d) Present Value of Perpetuity: A perpetuity is a stream of equal cash flows that occur and last forever; an annuity that occurs for an infinite period of time is thus a perpetuity. Although it may seem a bit illogical, an infinite series of cash flows can have a finite present value.

Examples of perpetuity:
(i) Local governments set aside funds so that certain cultural activities are carried on on a regular basis.
(ii) A fund is set up to provide scholarships to meritorious needy students on a regular basis.
(iii) A charity club sets up a fund to provide a flow of regular payments forever to needy children.

The present value of a perpetuity of A per period at rate i is computed as:

PV = A / i

A philanthropist wishes to institute a scholarship of Rs. 25,000 p.a., payable to a meritorious student in an educational institution. How much should he invest @ 8% p.a. so that the required amount of scholarship becomes available as the yield of the investment in perpetuity?
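A Python sketch of our own, working the three examples above:

```python
def present_value(c, i, n):
    return c / (1 + i) ** n

i = 0.10
# LED TV: Rs. 50,000 now vs Rs. 20,000 at the end of each of years 1-3
option_2 = sum(present_value(20_000, i, t) for t in (1, 2, 3))
print(f"PV of three instalments: Rs. {option_2:,.2f}")  # ~49,737 < 50,000, so instalments win

# Ms. Ameeta's unequal flows @12%: 30,000; 20,000; 12,000; 6,000 in years 1-4
flows = [30_000, 20_000, 12_000, 6_000]
total = sum(present_value(c, 0.12, t) for t, c in enumerate(flows, start=1))
print(f"PV of unequal flows    : Rs. {total:,.2f}")     # ~55,084, i.e. Rs. 55,086 up to rounding

# Perpetuity: scholarship of Rs. 25,000 p.a. @8% -> PV = A / i
print(f"Perpetuity corpus      : Rs. {25_000 / 0.08:,.2f}")  # 312,500.00
```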
8. Valuation of Preference Shares

Preference shares have preference over ordinary shares in terms of payment of dividend and repayment of capital if the company is wound up. They may be issued with or without a maturity period. Preference shares, unlike bonds, have an investment value, as they resemble both bonds and common stock: they are a hybrid between the bond and the equity stock. They resemble bonds in that they carry a prior claim on the assets of the firm at the time of liquidation; like common stock, preference shareholders receive dividends, and the shares have similar features to common stock with regard to claims at the time of liquidation of a firm.

Types of preference shares:
a. Redeemable preference shares are shares with maturity.
b. Irredeemable preference shares are shares without any maturity.

Features of preference shares: The dividend rate is fixed in the case of preference shares. Preference shareholders have a claim on assets and income prior to ordinary shareholders. Redeemable preference shares have a maturity date, while irredeemable preference shares are perpetual. A company can issue convertible preference shares, which can be converted as per the prescribed norms.

Valuation of Equity Shares

Equity shares are also referred to as common stock. Unlike bonds, equity shares are instruments that do not assure a fixed return. Equity is fundamentally different from debt; debt is commonly issued as a security known as a bond or debenture. Financial markets deal with the transfer of these securities from one person to another, and the price at which such a transfer takes place is determined by market forces.

Features of equity shares: (a) ownership and management, (b) entitlement to residual cash flows, (c) limited liability, (d) infinite life, (e) a substantially different risk profile.

Challenges in the Valuation of Equity: The valuation of equity shares is relatively more difficult. The difficulty arises because of two factors: (i) the rate of dividend on equity shares is not known, and (ii) estimates of the amount and timing of the cash flows expected by equity shareholders are more uncertain.

9. Risk and Return Analysis

What is risk? Risk is the variability that is likely to occur in the future between the expected returns and the actual returns; risk may therefore also be considered as a chance of variation, or a chance of loss.

Types of risk - risk can be classified into the following two parts:

1. Systematic Risk or Market Risk: Systematic risk is that part of total risk which cannot be eliminated by diversification (diversification means investing in different types of securities). No investor can avoid or eliminate this risk, whatever precautions or diversification may be resorted to; it is therefore also called non-diversifiable risk, or market risk. This part of the risk arises because every security has a built-in tendency to move in line with the fluctuations in the market. The systematic risk arises due to general factors in the market, such as money supply, inflation, economic recession, industrial policy, interest rate policy of the government, credit policy and tax policy: factors which affect almost every firm.

2. Unsystematic Risk: The unsystematic risk is the part which can be eliminated by diversification. This risk represents the fluctuation in returns of a security due to factors specific to the particular firm only, and not to the market as a whole. These factors may include workers' unrest, strikes, changes in market demand and changes in consumer preference. This risk is also called diversifiable risk and can be reduced by diversification. Diversification is the act of holding many securities in order to lessen the risk.

The effect of diversification on the risk of a portfolio can be represented graphically: the systematic risk remains the same, constant irrespective of the number of securities in the portfolio (shown as OA for one security as well as for 20 securities), while the unsystematic risk is reduced as more and more securities are added to the portfolio (decreasing from D towards C).

10. Methods of Risk Management

Risk is inherent in business, and hence there is no escape from risk for a businessman. However, he may face this problem with greater confidence if he adopts a scientific approach to dealing with risk. Risk management may, therefore, be defined as the adoption of a scientific approach to the problem of dealing with the risk faced by a business firm or an individual.
Broadly, there are five methods of risk management in general:

i) Avoidance of Risk: A business firm can avoid risk by not accepting any assignment or any transaction which involves any type of risk whatsoever. This naturally means a very low volume of business activity and the loss of too many profitable opportunities.

ii) Prevention of Risk: Under this method, the business avoids risk by taking appropriate steps for the prevention of business risk or loss. Such steps include the adoption of safety programmes, the employment of night security guards, arranging for medical care, the disposal of waste material, etc.

iii) Retention of Risk: Under this method, the organization voluntarily accepts the risk, since either the risk is insignificant or its acceptance is cheaper than avoiding it.

iv) Transfer of Risk: Under this method, risk is transferred to some other person or organization; in other words, a person who is subject to risk may induce another person to assume it. Some of the techniques used for the transfer of risk are hedging, sub-contracting, obtaining surety bonds, entering into indemnity contracts, etc.

v) Insurance of Risk: Under this method, risk is shifted to an insurer. This is done by creating a common fund out of the contributions (known as premiums) from several persons who are equally exposed to the same loss. The fund so created is used for compensating the persons who might have suffered financial loss on account of the risks insured against.

11. Types of Investors

There are three types of investors, which may be classified as follows:

a) Risk Averse: Under this category fall those investors who avoid taking risk and prefer only investments with zero or relatively low risk, even at the cost of return. Generally, risk-averse investors are retired persons, the elderly, and pensioners.

b) Risk Seekers: Under this category fall those investors who are ready to take risk if the return is sufficient (according to their expectations). These investors may be ready to take income risk, capital risk, or both.

c) Risk Neutral: Under this category lie those investors who do not care much about risk; their investment decisions are based on considerations other than risk and return.

What is return? Return is the amount received by the investor from an investment over and above the amount invested. Every investor who invests, or wants to invest, in any type of project first expects some return, and it is this expectation that encourages him to take risk.

Risk and Return Trade-Off: This is the principle that potential return rises with an increase in risk. Low levels of uncertainty (low risk) are associated with low potential returns, whereas high levels of uncertainty (high risk) are associated with high potential returns. According to the risk-return trade-off, invested money can render higher profits only if it is subject to the possibility of being lost. Because of the risk-return trade-off, you must be aware of your personal risk tolerance when choosing investments for your portfolio. Taking on some risk is the price of achieving returns; if you want to make money, you cannot cut out all risk. The goal instead is to find an appropriate balance: one that generates some profit but still allows you to sleep at night. Risk and return analysis emphasizes the following characteristics: (i) Risk and return have a parallel relationship. (ii) Return is closely associated with risk.
(iii) The concepts of risk and return are basic to an understanding of the valuation of assets and securities. (iv) The expected rate of return is an average of the possible rates of return, and actual outcomes may deviate from this average.
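To illustrate point (iv), the short sketch below computes an expected (average) rate of return and its standard deviation from an assumed probability distribution of outcomes; the probabilities and returns are invented for illustration:

```python
# Possible one-year returns and their assumed probabilities
outcomes = [(-0.10, 0.2), (0.08, 0.5), (0.25, 0.3)]  # (return, probability)

expected = sum(r * p for r, p in outcomes)
variance = sum(p * (r - expected) ** 2 for r, p in outcomes)

print(f"expected return      = {expected:.1%}")        # 9.5%
print(f"std dev (risk proxy) = {variance ** 0.5:.1%}") # about 12.2%
```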
Recently, Iwamoto, Kimura, and Ueno proposed dynamic dualization to derive dual problems for unconstrained optimization problems whose objective function is a sum of squares. The aim of this paper is to show that dynamic dualization works well for unconstrained problems whose objective function is a sum of convex functions. Further, we give another way to obtain dual problems, based on the infimal convolution. In both approaches we make clear the assumptions needed for duality to hold.

This paper deals with a refinement of the Laplace-Carson transform (LCT) approach to option pricing, with a special emphasis on valuing defaultable and non-callable convertible bonds (CBs), though not limited to them. What we are actually aiming at is refining the plain LCT approach to handle reasonably general American-style derivatives. The setup is a standard Black-Scholes-Merton framework where the underlying firm value evolves according to a geometric Brownian motion. The valuation of CBs can be formulated as an optimal stopping problem, due to the possibility of voluntary conversion prior to maturity. We begin with the plain LCT approach, which generates a complex solution with little prospect of further analysis. To improve this solution, we introduce the notion of premium decomposition, which separates the CB value into the associated European CB value and an early conversion premium. By the LCT approach combined with the premium decomposition, we obtain a much simpler, closed-form solution for the CB value and an optimal conversion boundary. By virtue of the simplified solution, we can easily characterize asymptotic properties of the early conversion boundary. Finally, we show that our refined LCT approach is broadly applicable to a more general class of claims with optimal stopping structure.

We consider fuzzy sets on a metric, vector, or normed space. It is not assumed that the fuzzy sets have compact supports. In the present paper, a fuzzy distance and a fuzzy norm are proposed in order to measure the difference between two fuzzy sets, and their fundamental properties are investigated. Their definitions are based on Zadeh's extension principle. Although they differ from the classical notions based on the Hausdorff metric, they are suitable for data containing uncertainty or vagueness. The obtained results can be expected to be useful for analyzing such data when the data are represented as fuzzy sets.

It is shown that the Fibonacci sequence is optimal for two quadratic programming problems (maximization and minimization) under semi-Fibonacci constraints. The two conditional (primal) problems have their unconditional (dual) problems. The optimal solution is characterized by the Fibonacci numbers. Both pairs of primal and dual problems are mutually derived through three methods: dynamic, plus-minus, and inequality.

Nowadays, a seating position can often be selected when a plane or bullet-train ticket is reserved. Especially for theaters and stadiums, it is important to decide how to assign reservations to seats. This paper proposes a dynamic model in which seat resources are located along a single line, taking into account the positions of seats that have already been assigned.
An analysis has been conducted, and the results show that: 1) the optimal policy for an arriving request is to allocate it to one of the edges of an adjacent run of vacant seats; 2) if all of the resources are vacant at the beginning of the booking period, then the model corresponds to a single-leg model with multiple seat bookings and a single fare class as in Lee and Hersh (1993); 3) it is not necessarily optimal to allocate a request to the vacancy with fewer adjacent seats. Finally, this paper proposes an algorithm to compute the optimal policy using the above results and conducts numerical examples.

The particle survival model, which was originally proposed to analyze the dynamics of species' coexistence, has surprisingly been found to be related to a non-homogeneous Poisson process. It is also well known that successive record values of independent and identically distributed sequences have the spatial distribution of such processes. In this paper, we show that the particle survival model and the record value process are indeed equivalent. Further, we study their application to determining the optimal strategy for placing selling orders on stock exchange limit order books. Our approach considers the limit orders as particles and assumes that the other traders have zero intelligence.

The purpose of this study is to consider the problem of finding a guaranteed way of winning a certain two-player combinatorial game of perfect knowledge from the standpoint of mutually dependent decision processes (MDDPs). Our MDDP model comprises two one-stage deterministic decision processes, each decision process expressing one turn of a player. We analyze an MDDP problem in which the number of turns taken by a player is minimized while still guaranteeing a win regardless of the decisions made by his opponent. The model provides a formulation for finding the shortest guaranteed strategy. Although the computational complexity remains an issue, the concept introduced in this paper can also be applied to other two-player combinatorial games of perfect knowledge.

Any positive semi-definite function defined on Z (resp. R) can be represented as the Fourier transform of a positive Radon measure on T (resp. R). We give a proof of this celebrated result, due to Herglotz and Bochner, from the viewpoint of Schwartz's theory of distributions.

This paper studies the relation between a given nondeterministic discrete decision process (nd-ddp) and a nondeterministic sequential decision process (nd-sdp), which is a finite nondeterministic automaton with a cost function, and its subclasses (nd-msdp, nd-pmsdp, nd-smsdp). We show super-strong representation theorems for the nd-sdp and its subclasses, for which the functional equations of nondeterministic dynamic programming are obtainable. The super-strong representation theorems provide necessary and sufficient conditions for the existence of an nd-sdp (and its subclasses) with the same set of feasible policies and the same cost value for every feasible policy as the given nd-ddp.

This paper deals with security games of a kind that can be found around our daily lives. In a facility represented by a network, several types of invaders/attackers conflict with security guards/defenders, who likewise have several security teams. The attacker chooses an invasion path to move along. He incurs some attrition through conflict on the arcs, but surviving attackers inflict damage on the facility along the invasion route, while the defender tries to minimize the damage by intercepting the attacker with a limited number of guards.
The defender adopts a randomized plan with respect to the choice of each security team and the deployment of guards. Since the attacker knows the defender's randomized plan before making his decision, the security problem is modeled as a Stackelberg game in which the attacker has the advantage over the defender in information acquisition. There has been no previous research on a security game with multiple types of players modeled on a network which explicitly takes into account attrition among players. Through some numerical examples, we investigate the best configuration of staff numbers in the security teams and some characteristics of the optimal defense for mitigating the damage caused by the attackers.

In this paper, we consider the pricing decision of a retailer who experiences peak demand for a product during a given time interval and wishes to stabilize the demand by adjusting the sales price. The stabilization of demand brings about desirable outcomes such as a reduction in the need for capacity investment and improved production efficiency in the supply chain. We establish a continuous-time model to analyze the effect of dynamic pricing on peak demand. We find a closed-form optimal pricing policy that minimizes the difference between the actual demand and a target level. It is shown that dynamic pricing not only reduces peak demand but also mitigates fluctuations in the peak demand. Using electricity consumption data as a case study, we show that the proposed pricing policy is effective in reducing the mean peak demand compared to a constant pricing policy.

A project risk is an uncertain event that causes positive or negative effects on the project objectives in relation to the cost, time, quality, and so on required to complete the project. Project risk management is the set of processes of identifying, analyzing, and responding to project risks; for example, it includes the processes for eliminating project risks so that the activities in the project can be completed by the specified day. Much research has been done in terms of not only risk but also time; in particular, as for time, there is a large body of research on CPM and PERT, which use mathematical techniques. However, few researchers discuss the effectiveness of project risk responses in dealing with project risks so as to make the project a success. In this paper, we propose a new mathematical model of project risk responses. With the proposed model, we show how to calculate the effectiveness of project risk responses quantitatively; moreover, based on the results of this calculation, we can decide quantitatively which project risk response should be executed.

We provide a sufficient condition for the existence of a Markov perfect equilibrium in pure strategies in a class of Markov games where each stage has strategic complementarities. We assume that both the sets of actions for all players and the set of states are finite and that the horizon is also finite, whereas past studies examined Markov games with infinite horizons where the sets of actions and states are assumed to be infinite. We give an elementary proof of the existence and apply the result to a game of Bertrand oligopoly with investment.
RS Aggarwal Class 12 Chapter 28 Solutions (The Plane)

RS Aggarwal Solutions for Class 12 Chapter 28 'The Plane' will help you solve all the problems given in this chapter. You will learn concepts related to planes such as the coplanarity of two lines, the angle between two planes, and the distance of a point from a plane. The chapter contains 181 questions and 11 exercises, along with more than 100 solved examples for your practice. RS Aggarwal Solutions for Chapter 28 'The Plane' are intended to provide precise and accurate answers from the CBSE examination point of view. The solutions are written in simple language for each question of every exercise. The answers by Instasolv are reliable and hence will help you score substantially better in your exams. The exercise solutions constitute a thorough discussion of all the concepts covered in the syllabus. We at Instasolv are committed to providing the best reference resource for you: a one-stop solution for all the doubts that might arise while solving the chapter. The subject matter experts at Instasolv continuously upgrade themselves to keep pace with the changing trends of the exam pattern. With RS Aggarwal Class 12 Solutions, you will find thorough coverage of the topics in your syllabus for the boards and entrance examinations like JEE and NEET.

Topics Covered in RS Aggarwal Solutions for Class 12 Chapter 28 The Plane

Introduction to the Plane: A plane is determined uniquely if any one of the following conditions is fulfilled:
- The distance of the plane from the origin and the direction of the normal to the plane are given (the normal form of the equation of a plane).
- The plane passes through a point and is perpendicular to a given vector.
- It passes through 3 given non-collinear points.

Equation of a Plane in Normal Form:
- Vector Form of the Equation: Let n be the unit normal vector along the normal from the origin to the plane, and let d be the perpendicular distance of the plane from the origin. If P is an arbitrary point on the plane with position vector r, then the vector form of the equation of the plane is r . n = d.
- Cartesian Form of the Equation: If l, m, and n are the direction cosines of the unit normal, then the Cartesian equation of the plane at distance d from the origin is lx + my + nz = d.

Equation of a Plane that Passes through a Point and is Perpendicular to a Given Vector:
- Vector Form of the Equation of the Plane: Suppose the plane passes through a point A with position vector a and is normal to the vector N.
If P is an arbitrary point on the plane with position vector r, then the equation of the plane can be represented as (r - a) . N = 0.
- Cartesian Form from the Vector Equation: If the coordinates of the points P and A are (x, y, z) and (x1, y1, z1), and the direction ratios of N are A, B and C, then the Cartesian equation of such a plane is A(x - x1) + B(y - y1) + C(z - z1) = 0.

Equation of a Plane Passing Through 3 Non-Collinear Points: If R, S, and T are three non-collinear points with position vectors a, b, and c respectively, then the vector equation of the plane is (r - a) . [(b - a) x (c - a)] = 0.

Important Highlights of the Chapter:
- Intercept Form of the Equation of the Plane: If the plane intercepts the x, y and z axes at a, b and c respectively, then the intercept form of the equation of the plane is x/a + y/b + z/c = 1.
- Plane Passing through the Intersection of Two Planes: if r . n1 = d1 and r . n2 = d2 are two planes, any plane through their line of intersection can be written as r . (n1 + k n2) = d1 + k d2 for some constant k.

Exercise Wise Discussion of RS Aggarwal Solutions for Class 12 Chapter 28 The Plane
- The first exercise of the chapter focuses on finding the equation of a plane based on the coordinates of points given in the questions. There are 9 questions in this exercise.
- Exercise 2 has 30 questions on finding the vector and Cartesian equations of a plane along with the unit normal vector.
- Exercise 3 has 13 questions in which you have to find the distance of a point from the given plane.
- Exercise 4 consists of a diverse set of problems related to the intercept form of the equation of the plane.
- Exercise 5 discusses the concept of the plane passing through the intersection of two planes and the application of its equations.
- Exercise 6 is based on finding the angle between two planes. There are 18 questions in this exercise.
- Exercise 7 has 15 questions where you need to find the equation of a plane through a given point that is perpendicular or parallel to a given line.
- In exercise 8, there are 5 questions on vector and Cartesian equations of a plane passing through a point and parallel to a line.
- In exercise 9, there are just 9 questions where you need to prove that given lines are coplanar.
- Exercise 10 has 26 very-short-answer questions based on the topics of the chapter for your quick revision.
- Then there are 30 objective-type questions as a separate exercise for you.

Benefits of RS Aggarwal Solutions for Class 12 Maths Chapter 28 by Instasolv
- At Instasolv, you will be able to resolve all the issues related to the syllabus and will get an opportunity for robust practice before you sit for your CBSE exam.
- This is a highly recommended reference source for you if you are appearing in the CBSE board exam or are preparing for competitive entrance exams like NEET, JEE Main or JEE Advanced.
- At Instasolv, alongside the detailed solutions, you will get ample chance to brush up your analytical abilities.
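As a supplement to the formulas above, here is a small Python sketch (using numpy; the three points and the test point are arbitrary illustrative choices) that builds the equation of a plane through three non-collinear points and computes the distance of a point from it:

```python
import numpy as np

def plane_through_points(a, b, c):
    """Plane through 3 non-collinear points, returned as (N, d) with N . r = d."""
    n = np.cross(b - a, c - a)        # normal vector N = (b - a) x (c - a)
    return n, float(np.dot(n, a))

def distance_from_plane(p, n, d):
    """Perpendicular distance of point p from the plane N . r = d."""
    return abs(np.dot(n, p) - d) / np.linalg.norm(n)

a, b, c = (np.array(v) for v in ([1, 0, 0], [0, 1, 0], [0, 0, 1]))
n, d = plane_through_points(a, b, c)
print(n, d)                                    # [1 1 1] 1.0, i.e. x + y + z = 1
print(distance_from_plane(np.zeros(3), n, d))  # 1/sqrt(3), about 0.577
```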
- Is it possible to place 2 counters on the 3 by 3 grid so that there is an even number of counters in every row and every column? How about if you have 3 counters or 4 counters or...? (A brute-force check of this one appears after these lists.)
- This problem is based on a code using two different prime numbers less than 10. You'll need to multiply them together and shift the alphabet forwards by the result. Can you decipher the code?
- Can you put the numbers 1-5 in the V shape so that both 'arms' have the same total?
- Place the numbers 1 to 10 in the circles so that each number is the difference between the two numbers just below it.
- Have a go at this well-known challenge. Can you swap the frogs and toads in as few slides and jumps as possible?
- Can you put the numbers 1 to 8 into the circles so that the four calculations are correct?
- Find the sum and difference between a pair of two-digit numbers. Now find the sum and difference between the sum and difference! What happens?
- This challenge focuses on finding the sum and difference of pairs of two-digit numbers.
- This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and come out with a total of 15!
- How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this?
- Investigate the smallest number of moves it takes to turn these mats upside-down if you can only turn exactly three at a time.
- There is a clock-face where the numbers have become all mixed up. Can you find out where all the numbers have got to from these ten statements?
- Can you fill in this table square? The numbers 2-12 were used to generate it, with just one number used twice.
- Using the statements, can you work out how many of each type of rabbit there are in these pens?
- Suppose we allow ourselves to use three numbers less than 10 and multiply them together. How many different products can you find? How do you know you've got them all?
- Lolla bought a balloon at the circus. She gave the clown six coins to pay for it. What could Lolla have paid for the balloon?
- What do the numbers shaded in blue on this hundred square have in common? What do you notice about the pink numbers? How about the shaded numbers in the other squares?
- What do the digits in the number fifteen add up to? How many other numbers have digits with the same total but no zeros?
- There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules?
- This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
- Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it?
- Can you put the 25 coloured tiles into the 5 x 5 square so that no column, no row and no diagonal line have tiles of the same colour?
- Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread?
- A game for 2 people. Take turns placing a counter on the star. You win when you have completed a line of 3 in your colour.
- Exactly 195 digits have been used to number the pages in a book. How many pages does the book have?
- Ben has five coins in his pocket. How much money might he have?
- Add the sum of the squares of four numbers between 10 and 20 to the sum of the squares of three numbers less than 6 to make the square of another, larger, number.
- An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore.
- Arrange the four number cards on the grid, according to the rules, to make a diagonal, vertical or horizontal line.
- Ten cards are put into five envelopes so that there are two cards in each envelope. The sum of the numbers on the cards inside is written on each envelope. What numbers could be inside the envelopes?
- Can you put the numbers from 1 to 15 on the circles so that no consecutive numbers lie anywhere along a continuous straight line?
- There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2 litres. Find a way to pour 9 litres of drink from one jug to another until you are left with exactly 3 litres in three of the jugs.
- Look carefully at the numbers. What do you notice? Can you make another square using the numbers 1 to 16 that displays the same properties?
- What can you say about these shapes? This problem challenges you to create shapes with different areas and perimeters.
- Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores.
- Can you order the digits from 1-6 to make a number which is divisible by 6, so that when the last digit is removed it becomes a 5-figure number divisible by 5, and so on?
- Can you find which shapes you need to put into the grid to make the totals at the end of each row and the bottom of each column?
- Your challenge is to find the longest way through the network following this rule. You can start and finish anywhere, and with any shape, as long as you follow the correct order.
- Try out the lottery that is played in a far-away land. What is the chance of winning?
- How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six...?
- In this matching game, you have to decide how long different events take.
- This dice train has been made using specific rules. How many different trains can you make?
- In this game for two players, you throw two dice and find the product. How many shapes can you draw on the grid which have that area or perimeter?
- Sweets are given out to party-goers in a particular way. Investigate the total number of sweets received by people sitting in different positions.
- What happens when you add three numbers together? Will your answer be odd or even? How do you know?
- What do you notice about the date 03.06.09? Or 08.01.09? This challenge invites you to investigate some interesting dates.
- There are 78 prisoners in a square cell block of twelve cells. The clever prison warder arranged them so there were 25 along each wall of the prison block. How did he do it?
- Zumf makes spectacles for the residents of the planet Zargon, who have either 3 eyes or 4 eyes. How many lenses will Zumf need to make all the different orders for 9 families?
- Arrange eight of the numbers between 1 and 9 in the Polo Square below so that each side adds to the same total.
- You have 4 red and 5 blue counters. How many ways can they be placed on a 3 by 3 grid so that all the rows, columns and diagonals have an even number of red counters?
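The first challenge in the lists above (counters with an even number in every row and column) is easy to check exhaustively; the brute-force sketch below confirms that 2 counters cannot work, while 4 counters can:

```python
from itertools import combinations

def even_placements(k):
    """Count placements of k counters on a 3x3 grid such that every
    row and every column contains an even number of counters."""
    cells = [(r, c) for r in range(3) for c in range(3)]
    good = 0
    for chosen in combinations(cells, k):
        rows = [sum(1 for r, _ in chosen if r == i) for i in range(3)]
        cols = [sum(1 for _, c in chosen if c == j) for j in range(3)]
        if all(x % 2 == 0 for x in rows + cols):
            good += 1
    return good

for k in range(5):
    print(k, even_placements(k))   # 2 counters: 0 ways; 4 counters: 9 ways
```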
In this section, we present the technique known as finite differences. The use of numerical methods for simulating heat transfer in thermal process calculations continues to grow. In the finite difference method, the derivatives in the governing differential equation are replaced by difference quotients on a grid of nodes; a continuous partial differential problem, together with its initial and boundary conditions, is thereby replaced by a system of algebraic equations. This is usually done by dividing the domain into a uniform grid, and conditions such as a prescribed temperature are imposed directly at the domain boundaries. A representative problem is the numerical formulation and solution of two-dimensional steady heat conduction in rectangular coordinates; the formulation extends to two- or three-dimensional problems simply by replacing each second derivative with a difference quotient in the corresponding direction. In the energy balance formulation of the finite difference method, it is recommended that all heat transfer at the boundaries of a volume element be assumed to be into the element, even for steady conduction; the finite difference formulation of steady two-dimensional conduction in a region with heat generation and constant thermal conductivity, for example, follows from such balances. In MATLAB implementations, the key is matrix indexing rather than traditional linear indexing.

For transient problems, one first discretizes in space, converting the partial differential equation into a set of ordinary differential equations in time; this is sometimes called the method of lines. Marching schemes include the forward-time centered-space (FTCS) explicit scheme and the backward-time centered-space implicit scheme. Explicit schemes are simple: the temperature distribution in an insulated rod, for instance, can be computed with the explicit central-difference discretization of the one-dimensional heat equation, and the same scheme can even be implemented in an Excel spreadsheet using finite difference approximations. They are, however, only conditionally stable. Stability can be analyzed through the Courant-Friedrichs-Lewy (CFL) condition and, more generally, through a matrix approach. High-order finite difference methods for constant coefficients can degenerate to first or second order near boundaries unless the boundary treatment is careful; the SBP-SAT method, based on differentiation operators with summation-by-parts properties, is a stable and accurate technique for discretizing and imposing boundary conditions of a well-posed partial differential equation using high-order finite differences. Standard references include Ozisik's Finite Difference Methods in Heat Transfer (second edition), which provides a step-by-step treatment of finite difference methods for engineering problems governed by ordinary and partial differential equations with emphasis on heat transfer applications, serving graduate students as well as researchers, and LeVeque's draft lecture notes on finite difference methods for differential equations (AMath 585/586, University of Washington, 2005).

The same machinery supports a wide range of applications. A heat transfer model for grinding has been developed based on the finite difference method and can solve transient heat transfer problems in grinding. A meshless finite difference method has been developed for conjugate heat transfer in complex geometries, motivated by the fact that traditional finite difference methods are restricted to orthogonal or body-fitted distributions of points. Finite difference schemes have been constructed for two-dimensional heat transfer equations with moving boundaries, and for two-dimensional convection-diffusion problems such as the flow and heat transfer in a Trombe wall system, where the mathematical model is solved by means of the finite difference method. Numerical solutions of one- and two-dimensional phase change (melting and solidification) problems can be obtained with a simple algorithm incorporating the equivalent heat capacity model, in which the latent heat of fusion is accounted for by a linear interpolation of the nodal temperatures. Nonstandard finite difference methods have been applied to nonlinear heat transfer problems, for example convective cooling characterized by a heat transfer coefficient h and a cooling area a. Finite difference algorithms have also been developed for the heat transfer analysis of fins with different geometries. Finally, the finite difference method belongs to a family of discretization techniques, alongside the finite element method, whose flow formulations can encompass non-Newtonian inelastic and viscoelastic fluids, and the finite volume method; all three divide the domain into nodes, control volumes, or subdomains and are used for the numerical solution of the partial differential equations describing mass, momentum, and heat transfer.
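Several of the sources above walk through the explicit (FTCS) discretization of the one-dimensional heat equation, so a minimal self-contained sketch may be useful; the diffusivity, rod length, and boundary temperatures below are arbitrary illustrative choices, and the time step is chosen to respect the well-known stability bound r = alpha*dt/dx^2 <= 1/2 of the explicit scheme:

```python
import numpy as np

# 1D transient conduction, dT/dt = alpha * d2T/dx2, FTCS explicit scheme
alpha = 1e-4                 # thermal diffusivity [m^2/s] (assumed)
L, nx = 1.0, 51              # rod length [m] and number of nodes
dx = L / (nx - 1)
r = 0.4                      # mesh Fourier number; stability needs r <= 0.5
dt = r * dx ** 2 / alpha

T = np.zeros(nx)             # initial rod temperature
T[0], T[-1] = 100.0, 0.0     # fixed-temperature (Dirichlet) boundary conditions

for _ in range(20_000):      # march forward in time
    # interior nodes: T_i <- T_i + r * (T_{i+1} - 2*T_i + T_{i-1})
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(T[::10].round(1))      # tends to the linear steady-state profile 100 -> 0
```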
- In the Context of Mathematics, What Do “Position” and “Movement” Mean?
- What Exactly Is a Coordinate?
- What Exactly Do the Terms “Rotation,” “Translation,” and “Symmetry” Mean?
- What Is the Definition of a Position in Terms of a Vector?
- Vectors and Scalars Are Two Different Types of Mathematical Quantities
- Illustration of a Position by Vector
- The Formula for Positioning Using a Vector
- A Numerical Example of Position

Position | Definition & Meaning

In math, position refers to the property of an object that describes its location, often relative to some other thing. It may be qualitative (behind, above, etc.), but in most cases it is quantified as part of a coordinate system. For example, if object A is positioned at Cartesian coordinates (5, 6) and object B at (1, 2), then object A's position relative to B is (4, 4). An object's position can thus be described as the place it occupies in relation to other things. To describe the location of one thing in relation to another, we can use phrases such as "in front," "behind," "left," "right," "above," "below," "top," and "bottom," among others. In the following figure, the position of the parallelogram is to the right of the circle.

In the Context of Mathematics, What Do "Position" and "Movement" Mean?

In mathematics, "position" refers to determining and recording where something is situated, typically on a grid or a map. In most cases, your youngster will complete this task using coordinates. The ideas of rotation, translation, and symmetry are included under the umbrella term "movement." Geometry, sometimes known as the study of shapes, covers both movement and position.

What Exactly Is a Coordinate?

The location of a point on a grid is described by coordinates. A coordinate is a pair of numbers separated by a comma and enclosed in brackets; for example, a coordinate looks like this: (7, 8). The numbers in a coordinate tell you how far to travel along each axis of the grid to locate the point. The phrase "along the corridor and up the stairs" helps you remember the order: first move along the bottom (x) axis by the first number of the coordinate, and then move up the (y) axis by the second number. To locate the point with coordinates (4, 2), move four spaces along the horizontal axis at the bottom of the grid, and then move two spaces up in the vertical direction.

What Exactly Do the Terms "Rotation," "Translation," and "Symmetry" Mean?

Spinning or turning a shape is referred to as rotation. A shape may be rotated through any angle up to 360 degrees while maintaining its appearance and size. Moving a shape along a grid up, down, left, or right is called translation; the shape does not rotate or alter its outward appearance or size, it merely moves in one or more directions. Reflection is the key to understanding symmetry: if you can draw a line across the middle of a shape such that each half is a mirror image of the other, the shape is said to have symmetry.

What Is the Definition of a Position in Terms of a Vector?
A position vector is a directed straight line segment used to define the location of a moving point in relation to a body: one end of the line is attached to the body, and the other end to the moving point. When the point moves, its position vector changes in length, in direction, or in both, depending on the nature of the movement. Equivalently, a position vector represents the position or location of a particular point with respect to an arbitrary reference point such as the origin; it points from the reference point towards the location in question.

Vectors and Scalars Are Two Different Types of Mathematical Quantities

When specifying position, direction is essential. When you say that you are located at positive 6 meters in the x-direction, you are indicating that you are 6 meters to the right of the y-axis; this points in a certain direction. A quantity that takes direction into account is referred to as a vector. A scalar is a quantity that has no direction attached to it. Temperature, for instance, is a scalar: it may be 70 degrees Fahrenheit outside, but it is never "70 degrees Fahrenheit to the west." Direction simply does not apply to temperature.

Because direction matters, position is represented as a vector. Distance, however, is a scalar: it measures the length of your journey. If you run around your room, you might cover a fairly long way, up to a total distance of fifty meters, yet your final position could be just minus 3 meters on the x-axis and plus 4 meters on the y-axis. The distance you traveled is fifty meters regardless of the directions in which you ran, so direction is irrelevant when considering distance.

Illustration of a Position by Vector

In most situations, the position vector of an object is measured starting from the origin. Imagine that an object is placed in space as in the figure (not reproduced here).

The Formula for Positioning Using a Vector

Before we can calculate the position vector of any point in the xy-plane, we need the point's coordinates. Consider points A and B with coordinates (w1, x1) and (w2, x2), respectively. To calculate the position vector from A to B, we subtract the components of A from the corresponding components of B:

AB = (w2 - w1)*i + (x2 - x1)*j

Point A serves as the starting point of the position vector AB, which ends at point B.

A Numerical Example of Position

Which of these figures lies beneath the circle? From the accompanying figure and the above discussion of position, the triangle is beneath the circle. All mathematical drawings and images were created with GeoGebra.
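Here is a two-line sketch of the AB formula just given (the coordinates reuse the (1, 2) and (5, 6) example from the definition at the top of this article):

```python
import math

def position_vector(start, end):
    """AB = (w2 - w1)*i + (x2 - x1)*j, the vector pointing from start to end."""
    return tuple(e - s for s, e in zip(start, end))

B, A = (1, 2), (5, 6)
BA = position_vector(B, A)     # A's position relative to B
print(BA)                      # (4, 4)
print(math.hypot(*BA))         # its magnitude, a scalar distance: ~5.66
```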
The Moduli Space Metric for Well-Separated Non-Abelian Vortices

The moduli space metric and its Kähler potential for well-separated non-Abelian vortices are obtained in gauge theories with Higgs fields in the fundamental representation. Solitons, namely smooth localized solutions of nonlinear partial differential equations, have a long history in the mathematical and physical sciences and can now be considered a subject in their own right. The number of physically relevant applications of soliton theory is huge and ranges from nonlinear optics to astrophysics. While in the mathematical literature the term soliton is mostly associated with integrable systems, in the framework of modern Lorentz-invariant field theories it refers to smooth localized solutions of field equations that in general do not exhibit integrability. A particularly interesting class of solitons is represented by those solutions of the field equations which saturate the Bogomolnyi-Prasad-Sommerfield (BPS) bound, a lower bound for the energy functional. They are topologically stable and can be shown to satisfy first order, instead of second order, partial differential equations that do not involve time derivatives (the BPS equations). Abrikosov-Nielsen-Olesen (ANO) vortices at critical coupling, 't Hooft-Polyakov monopoles with massless Higgs field, and instantons in Euclidean Yang-Mills theory are prominent examples. One characteristic feature of BPS solitons is that there are no static forces among them. Therefore a large number of soliton configurations are allowed with degenerate energy, and consequently generic solutions contain moduli parameters (collective coordinates). The space of solutions of a given set of BPS equations is called the moduli space and is parameterized by those moduli parameters. The complete characterization of a soliton moduli space is not only mathematically attractive but has deep physical implications. In fact, while the dynamics of solitons in the full field theory is usually inaccessible (sometimes even numerically), following the idea of Manton one can argue that at sufficiently low energies the time evolution is constrained by the potential energy to keep the field configuration close to the moduli space, which is in general finite dimensional. The problem is then reduced to analyzing the motion on the moduli space, which is actually geodesic motion in the metric induced by the kinetic term of the field theory Lagrangian. However, although a number of mathematical structures have been found in various cases, the explicit determination of the moduli space metric can be very difficult in practice. For example, the moduli space of monopoles carries a hyper-Kähler structure, which is quite restrictive, but the metric is explicitly known only in the two-monopole case, namely the Atiyah-Hitchin metric [6, 7]. Even in this case the geodesic motion is not integrable, except in a situation where one is allowed to use the asymptotic form of the metric. The asymptotic metric for well-separated BPS monopoles was constructed by Gibbons and Manton. For other gauge groups the Weinberg-Lee-Yi metric is well known. For BPS Abelian vortices (ANO vortices) in flat space, the n-vortex moduli space was shown to be Kähler and a symmetric product of n copies of the plane, obtained by symmetrization. For the metric on it, a major step was made in the work of Samols, where a general formula for the metric was given in terms of local data of the solutions of the BPS equations.
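Since Samols' formula is the starting point for what follows, we recall it here in the schematic form commonly quoted in the Abelian vortex literature; this is background material rather than the authors' own normalization, and signs and the placement of conjugates vary between references. Writing h = log|phi|^2 and expanding around the r-th vortex position z_r:

```latex
% Local expansion of h = \log|\phi|^2 around the r-th vortex:
%   h(z) = 2\log|z - z_r| + a_r
%        + \tfrac{1}{2}\,\bar{b}_r\,(z - z_r)
%        + \tfrac{1}{2}\,b_r\,(\bar{z} - \bar{z}_r) + \cdots
% Samols' metric on the n-vortex moduli space (schematic normalization):
ds^2 \;=\; \pi \sum_{r,s=1}^{n} \left( \delta_{rs}
       + 2\,\frac{\partial b_s}{\partial z_r} \right) dz_r \, d\bar{z}_s .
```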
A Kähler potential for such a metric could then be found easily (a direct approach to the calculation of the Kähler potential with different arguments also exists). Subsequently Manton and Speight calculated the local data for well-separated vortices and, making use of Samols' formula, explicitly wrote down the asymptotic expression of the moduli space metric for vortices. Recently the moduli space metric was given for vortices on a hyperbolic space, in which case the system is integrable. BPS non-Abelian vortices in more general Higgs models with non-Abelian gauge symmetry were introduced in [16, 17] (for a review see [18, 19]). Such configurations are parametrized not only by position moduli but also by orientational moduli, which appear due to the presence of a non-trivial internal color-flavor space; it was found that the single vortex moduli space is the product of the plane of vortex positions with a complex projective space of orientations for gauge theory with Higgs fields in the fundamental representation. The Kähler class on the orientational space was determined in terms of the gauge coupling constant. The analysis of the moduli space has gone through many developments, especially after the introduction of the moduli matrix formalism (a review of the method is available). The moduli matrix is a matrix whose components are holomorphic polynomials of the complex coordinate on the plane transverse to the vortices, and it contains all the moduli parameters in its coefficients. The moduli space of multiple vortices at arbitrary positions with arbitrary orientations in the internal space was constructed, and a general formula for the Kähler potential on the moduli space was obtained. For separated (not necessarily well-separated) non-Abelian vortices, the moduli space can be written as the symmetric product of copies of the single vortex moduli space (1.1), as in (1.2), where the arrow denotes the resolution of singularities; the space on the right-hand side contains orbifold singularities which correspond to coincident vortices, while the full moduli space on the left-hand side should be regular. By evaluating the Kähler potential of the moduli space at linear order, it was explicitly shown that the metric is actually regular everywhere, even in the coincident limit of two vortices. The head-on collision of two vortices was also studied. The purpose of the present paper is to give the metric and its Kähler potential on the moduli space (1.2) for well-separated non-Abelian vortices. Our main results are the generalization of Samols' formula to the non-Abelian case and, starting from that and from the asymptotics of non-Abelian vortex solutions, the derivation of the explicit metric and its Kähler potential. The final form of the metric exhibits an evident interplay between spatial (position) and orientational moduli, opening up a rich variety of possibly interesting dynamics that will be the object of further investigation. In this paper we concentrate on local vortices, namely vortices in gauge theories with as many fundamental Higgs fields as colors (while semi-local vortices exist in theories with more fundamental Higgs fields [33, 29]). We also restrict ourselves to a unitary gauge group, although non-Abelian vortices for gauge groups built on arbitrary simple groups have recently been constructed. We leave generalizations to those cases as future works. The paper is organized as follows. In Section 2 we define the model and review the construction of non-Abelian vortices in the moduli matrix formalism.
In Section 3 we find the non-Abelian extension of Samols' formula for the metric on the moduli space, and in Section 4 we show how it can be made explicit in the case of well-separated vortices, that is, we find the asymptotic metric and its Kähler potential. In Section 5 we obtain the latter result by means of a more physical method, namely the point-particle approximation. Some details of the calculation are given in Appendix A.

2 Review of non-Abelian local vortices

2.1 Lagrangian and BPS equations

Let us consider a gauge theory with U(1) and SU(N) gauge fields and Higgs fields in the fundamental representation of the gauge group. The Lagrangian of the theory takes the form of Eq. (2.1), where the Fayet-Iliopoulos parameter enters together with the gauge coupling constants for U(1) and SU(N), respectively; the matrices appearing there are the generators of U(1) and SU(N), suitably normalized. As is well known, the Lagrangian Eq. (2.1) can be embedded into a supersymmetric theory with eight supercharges. The Higgs fields can also be expressed as a square matrix on which the gauge transformations act from the left and the flavor symmetry acts from the right. Using this matrix notation for the Higgs fields, the vacuum condition can be written compactly. The vacuum of this model is in a color-flavor locking phase, with definite vacuum expectation values (VEVs) of the Higgs fields. In this vacuum, the mass spectrum is classified according to representations of the color-flavor symmetry, into singlet and adjoint fields. Considering a static configuration, one obtains the BPS bound for the energy. The bound is saturated if the BPS equations are satisfied, written in terms of a complex coordinate on the transverse plane. To solve the BPS equations, it is convenient to rewrite the gauge fields in terms of matrices; then the first BPS equation can be solved in terms of positive-definite Hermitian matrices, leading to Eq. (2.13). We call Eq. (2.13) the "master equation" for non-Abelian vortices. By using these matrices, the BPS equations can be solved by the following procedure. Taking an arbitrary holomorphic matrix, we solve the master equation (2.13) with boundary conditions such that the vacuum equation (2.5) is satisfied at spatial infinity; explicitly, they are given by Eq. (2.15). From the positive-definite Hermitian matrices, the remaining matrices can be determined uniquely up to gauge transformations. Then the physical fields can be obtained via the relations Eq. (2.10), Eq. (2.11) and Eq. (2.12), up to the transformation (2.16) by an arbitrary non-singular matrix holomorphic in the complex coordinate. Since the physical fields are invariant under such transformations, (2.16) defines an equivalence relation on the set of holomorphic matrices. There exists a one-to-one correspondence between the equivalence classes and points on the moduli space of the BPS vortices [24, 21, 19, 27]. In this sense we call this holomorphic matrix the "moduli matrix", and the parameters contained in it are identified with the moduli parameters of the BPS configurations. For example, the vortex positions for a given moduli matrix can be determined as follows. Since part of the gauge symmetry is restored inside the vortex core, the vortex positions can be defined as those points on the complex plane at which the rank of the moduli matrix drops; namely, they can be determined as the zeros of the holomorphic polynomial given by its determinant.

2.2 Single vortex configurations

As an example, let us consider configurations of a single vortex located at a given point.
Since zeros of the polynomial correspond to the vortex position, we consider the set of moduli matrices whose determinant is . For example, in the case of , any moduli matrix with is -equivalent to a moduli matrix of the form . In addition to the translational moduli parameter , there exists one parameter that can be viewed as an inhomogeneous coordinate of . This internal degree of freedom, called the orientation, corresponds to the Nambu-Goldstone zero mode of the symmetry broken by the vortex. The homogeneous coordinate of can also be extracted from the moduli matrix as follows. Since the rank of drops at , there exists an eigenvector of with null eigenvalue at ; in other words, there exists a constant -vector such that . The vector is called the orientational vector and corresponds to the homogeneous coordinates of . In the case of general , a single vortex configuration breaks down to , so that the orientational moduli space is . In this case, the generic moduli matrix with is equivalent to , where the parameters are the inhomogeneous coordinates parameterizing the internal orientation . As in the case of , the orientational vector can also be defined by , corresponding to the homogeneous coordinates of , where the matrices and are given by , and the -by- matrix is defined by . It is convenient to use the following ansatz for and , where and are smooth real functions and the matrix is given by . Then, the master equation (2.13) reduces to the following two equations for the functions and . The boundary conditions for and can be read from Eq. (2.15) as . By using the relation Eq. (2.14) and choosing an appropriate gauge, we obtain the matrices and in terms of the functions and as , where we have defined the matrix by . Now let us look at the asymptotic forms of the single vortex solution. The functions and behave near the vortex core as , where and are constants. On the other hand, the asymptotic forms of the functions and for large are given by , where is the modified Bessel function of the second kind. (The functions used here are related to those in Ref. by , and ; consequently, the coefficient .) The constants and depend on the ratio and . In the case of , the term proportional to is actually not dominant in Eq. (2.38) compared to the contribution of order . We therefore concentrate on the case with , so that the above form is the proper approximation. As we will see, however, Eq. (2.38) and Eq. (2.39) are sufficient to determine the metric to leading order even for the case . 3 Effective Lagrangian for non-Abelian local vortices In this section, we derive a formula for the metric on the moduli space of non-Abelian local vortices which generalizes the celebrated Samols' formula for Abelian vortices . 3.1 Formula for the metric on the moduli space The moduli space of the BPS vortices is a Kähler manifold whose holomorphic coordinates are identified with the complex parameters contained in . The effective low-energy dynamics of the BPS vortices is described by an effective Lagrangian of the form , where are the holomorphic coordinates, is the metric of the moduli space of non-Abelian vortices, and is the Kähler potential. By using the moduli matrix and the solution of the master equation (2.13), the Kähler potential of the moduli space can formally (to make it finite, one must add counter terms, which can be regarded as a Kähler transformation) be
written as , where is a quantity whose explicit form is given by . As a consequence, by varying the Kähler potential with respect to and , we can show that is minimized by the solution of the master equation (2.13). Therefore, the derivatives of the Kähler potential with respect to the moduli parameters are given by . Note that is anti-holomorphic in . From this property of the Kähler potential, we obtain a simple form of the effective Lagrangian , where the differential operators and are defined by . Now we rewrite the effective Lagrangian in terms of local data in the neighborhood of each vortex. Let us assume that has zeros at , namely . Let be the disk of radius centered at . It is convenient to decompose the domain of integration into the disks and their complement . Since the integrand in (3.5) does not have any singularity, the integral over the disk vanishes in the zero-radius limit . Therefore, the effective Lagrangian can be evaluated by integrating over and then taking the limit . Using the master equation (2.13), the integrand can be put in the form of a total derivative as , where we used the fact that is holomorphic with respect to and the moduli parameters on . By Stokes' theorem, the integral of Eq. (3.11) over can be replaced by an integral along the infinitely large circle and the boundaries of the disks . Since the integrand falls off exponentially at spatial infinity, the contribution from vanishes and the effective Lagrangian becomes . Since each integral picks up the terms which behave as , it can be evaluated by expanding the integrand around . First let us consider the case where all the zeros of the polynomial are isolated. In this case, the matrix has a first-order pole at . Since the remaining part of the integrand is non-singular, it can be expanded around as , where the matrices and are defined by . Then, the effective Lagrangian can be written as . From the master equation (2.13), we evaluate : . Therefore, the effective Lagrangian reduces to , where we have used and . Note the strong similarity between Eq. (3.19) and Samols' formula (although the analogy between the definitions of our quantities and the corresponding ones of Samols is not complete, as is made explicit in Section 4, Eq. (4.13)). 3.2 Example: single vortex As an example, let us consider the single vortex configuration. For the moduli matrix Eq. (2.21), the matrix can be calculated as . On the other hand, the matrix can be calculated by using the ansatz Eq. (2.26) as . Substituting and into (3.19), we obtain the following effective Lagrangian for a single non-Abelian vortex . 3.3 Coincident case Next, let us consider the case of coincident vortices. If has a -th order zero at , the matrix has the following Laurent series expansion . On the other hand, the remaining part of the integrand is non-singular and can be expanded as . Then, the effective Lagrangian can be written in terms of the coefficients and as . 4 Asymptotic metric for well-separated vortices In this section, we consider the asymptotic form of the metric on the moduli space for well-separated non-Abelian vortices by generalizing the results for Abelian vortices . 4.1 Asymptotic metric for Abelian vortices First, let us rederive the effective Lagrangian for well-separated vortices in the (Abelian) theory . Our approach here has essentially the same spirit as Section 2 of ; however, our use of complex notation makes more transparent some properties of the solutions of the linearized vortex equation, which are crucial for the derivation of the result.
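For later comparison, it is useful to recall the shape of Samols' result in the Abelian case (we quote it from the standard literature in one common normalization; factors involving the FI parameter differ between references, so the overall constant here should be taken as an assumption). Writing psi for the solution of the Abelian master equation and expanding it around the r-th vortex position z_r,

\[
\psi = 2\log|z - z_r| + a_r + \tfrac{1}{2}\, b_r (z - z_r) + \tfrac{1}{2}\, \bar b_r (\bar z - \bar z_r) + \cdots,
\]

the moduli space metric is determined by the local coefficients b_r alone:

\[
ds^2 = \pi \sum_{r,s} \Big( \delta_{rs} + 2\, \frac{\partial \bar b_s}{\partial z_r} \Big)\, dz_r \, d\bar z_s .
\]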
In this case, the moduli matrix is a holomorphic polynomial of and can be written as , where is the number of vortices and are the positions of the vortices. To calculate the asymptotic metric for well-separated vortices, it is convenient to define the function by . Then, the master equation (2.13) can be rewritten in terms of as , where the boundary condition for large is . Let us consider the linearized equation for a small fluctuation around the background solution . Let and be linearly independent solutions of the linearized equation. Then, the "current" defined by (note that and are not complex conjugates in general) satisfies the following "conservation law" except at the vortex positions: . Therefore, the contour integrals
Money becomes superneutral in the growth-rate sense and also in the velocity sense because the equilibrium real rate of return on capital remains constant (keywords: velocity of money, indeterminacy, endogenous growth, cash-in-advance constraint). Suppose the velocity of money does not change over time; then, all else constant, a decrease in the price level raises the value of money. Because the velocity of money V increases during a hyperinflation while production Q stays constant or declines, the ratio Q/V becomes very small; monetary authorities may also look at the real value of the money supply relative to the level of output. The velocity of money is related to people's behavior and the structure of the financial system; although there are discernible patterns, it is not constant even over the short run. Friedman assumed the velocity of money was constant, and it was from about 1950 until 1978, when he was doing his seminal work, but then things changed. In a first look at the equation of exchange, some remarkable conclusions would hold if velocity were constant: a given percentage change in the money supply M would produce an equal percentage change in nominal GDP, and no change in nominal GDP could occur without an equal percentage change in M. We have learned, however, that velocity is not so well behaved. The simple quantity theory implies that the velocity of money is constant and that the price level is proportional to the money supply. If employment and velocity are constant, then the transactions demand for money depends on the price level; monetarists accept the variability of velocity but believe that MV = PQ can still be a good tool for analysis. One argument holds that velocity in the US stays roughly constant because one of the Fed's missions is to grow the money supply at the rate of economic growth; this, however, is not what the quantity theory itself says. If the quantity of real money balances is kY, where k is a constant, then velocity is 1/k. If velocity returns to normal in the long run, the extra money supply generates inflation, and the AD curve ends up to the right of its initial position before the velocity shock (one might argue that the gains from policy intervention in this case are not worth the side effects, but that is a different issue). The velocity of money peaked around 1980, a period when interest rates hit 20%, and it has been declining ever since. Given a constant money supply, the velocity of money must increase to fund additional purchases; conversely, when the money supply shifts due to Fed policy, velocity can change so that the value of money and the price level remain constant. If a lender desires to earn a real return of 4 percent on a loan and the anticipated rate of inflation is 1 percent, the lender should charge a nominal rate of 5 percent.
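Two standard relations organize these statements; both are textbook identities, and treating the growth-rate decomposition and the Fisher relation as exact is the usual simplification:

\[
MV = PY \quad\Longrightarrow\quad \%\Delta M + \%\Delta V = \%\Delta P + \%\Delta Y, \qquad i \approx r + \pi,
\]

where M is the money stock, V its velocity, P the price level, Y real output, i the nominal interest rate, r the real rate, and pi the inflation rate.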
The velocity of money played an important role in monetarist thought; for example, monetarists argued that there exists a stable demand for money as a function of aggregate income and interest rates. Prizing price stability and assuming constant velocity, monetarists concluded that a slow, steady increase in the money supply, just enough to accommodate real growth, would keep prices stable. The income velocity of money will be smaller than the transactions velocity of money if the quantity of transactions is greater than income. If the income velocity of money is not constant, the relationship between inflation and the money supply is not as strong: if the growth rate of velocity is not constant, inflation will vary even if the growth rate of the money supply is constant. When estimating the effect of changes in the money supply on nominal GDP, it is common to assume that the velocity of money is constant. A standard exercise: in the country of Wiknam, the velocity of money is constant, real GDP grows by 5 percent per year, the money stock grows by 14 percent per year, and the nominal interest rate is 11 percent.
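With constant velocity (%ΔV = 0), the stated numbers pin down everything else (a worked reading of the exercise; the approximate Fisher relation is treated as exact):

\[
\pi = \%\Delta M - \%\Delta Y = 14\% - 5\% = 9\%, \qquad r = i - \pi = 11\% - 9\% = 2\%.
\]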
How to find beam reactions for an overhanging beam (GATE 2017 examination). Several software tools address this kind of problem. In the unregistered version, some options of BeamLoad have fixed values. SkyCiv Beam is focused on giving users a fast and accurate analysis of beam structures: a simplified analysis of a beam member, including reactions, shear force, bending moment, deflection, and stresses, in a matter of seconds; it is fully integrated with steel (AISC, AS, CSA, EN) and concrete (ACI) member design checks. Beam Deflection Calculator for Windows is an easy-to-use application created to serve as a helper for those working in the field of civil engineering; download links are directly from mirrors or the publisher's website, and it is recommended to check the files before installation. Such calculators handle either simple-span beams or beams overhanging a support at one end, and some also size hips and valleys with a uniformly increasing load plus a normal uniform load. If the situation consists of several loads on the beam, remember the superposition principle. A deflection equation for a simply supported overhanging beam with two supports and a point load at the end of the cantilever can be looked up or derived. When a beam is subjected to uneven load, the beam undergoes bending, producing a bending moment which in turn produces stresses within the beam; this is of interest to civil, structural, and mechanical engineers. In a typical calculator, the maximum deflection of a beam supported at both ends with the load at the center is computed directly. For an overhanging beam with a point load between the supports at any position, standard formulas give the slope, deflection, shear, and bending moment; a cantilever beam with a concentrated load at the free end is another standard case. The deflection of a beam depends on its length, its cross-sectional area and shape, the material, where the deflecting force is applied, and how the beam is supported. Castigliano's theorem (Illinois Institute of Technology lecture notes) provides a general method.
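For reference, the standard Euler-Bernoulli relations that all of these calculators implement can be written as follows (one common sign convention; textbooks differ, so take the signs as an assumption):

\[
\frac{dV}{dx} = -w(x), \qquad \frac{dM}{dx} = V(x), \qquad EI\,\frac{d^4 v}{dx^4} = w(x), \qquad \sigma = \frac{M\,c}{I},
\]

where w is the distributed load, V the internal shear force, M the bending moment, v the deflection, EI the flexural rigidity, and c the distance from the neutral axis to the extreme fiber.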
The outputs of such tools are the bending moment diagram (BMD), shear force diagram (SFD), and axial force diagram. A continuous beam is one which rests on three or more supports. When choosing a released structure, a simple support of the original beam is usually a good choice, but sometimes another point is more convenient. A beam is a horizontal member of a structure. One calculator provides the results for bending moment and shear force at a section of an overhanging beam subjected to a point load on the span; a free online version computes the values of bending moment and shear force at any point of an overhanging beam carrying a point load or a uniformly distributed load (UDL) anywhere on the span. Excel spreadsheet calculators exist for stresses in three dimensions and for beam stress. In the notation of one such calculator, M2 is the bending moment over the support adjacent to the overhang, and complicated polynomial equations describe the deflection, internal shear, and moment distribution in a beam with overhang (a beam pinned at one end and roller-supported at a point between). A fixed end has a vertical reaction and a moment reaction, and zero deflection. Use a beam span calculator to determine the reactions at the supports, draw the shear and moment diagrams for the beam, and calculate the deflection of a steel or wood beam. In a laboratory version of the problem, the purpose of the experiment is to record the deflection of the beam experimentally and then compare it with the theoretical value: one records the length and the dimensions of the cross-sectional area, and the value of the mass and its location. It is therefore necessary to calculate the bending moment and shear stress developing within the beam when it is subjected to any type of load. Loads acting downward are taken as negative, whereas upward loads are taken as positive. The tools are typically fully functional for beams, frames, and trusses; up to a dozen live and total loads may be entered for a single beam, including six point loads, five partial uniform loads, and a uniform load. The properties of the beam and section are specified by typing directly into the input fields. Free online beam calculators generate the reactions, calculate the deflection of a steel or wood beam, and draw the shear and moment diagrams; a common user question is how to determine the reactions at the supports while having trouble finding the deflection between the supports. For example, one worked case considers a beam of length 3000 mm with a given load, moment of inertia 586 mm4, and a stated modulus of elasticity in N/mm2. A simply supported beam, or simple beam for short, has the standard pinned-roller boundary conditions. A beam having an overhang on both sides is called a double overhang beam. Enter your values as required and press solve, and the results will be displayed. Beam deflection calculators also exist specifically for solid rectangular beams, and for the maximum deflection of a beam with the load at center.
Structural beam deflection and stress formulas cover many standard cases. One calculator checks a uniformly loaded beam resting on two supports and overhanging one of them; it is based on Euler-Bernoulli beam theory. Another simple beam calculator handles dimensional lumber with a drop-down species list. Overhang beam calculations are covered in McGraw-Hill Education Access. Such a calculator can also be used to find the ordinates of an influence line diagram for structures. In the laboratory experiment, the dial indicator is zeroed before applying the load; the apparatus consists of a model of the beam, weights, a deflection gauge, and weight hangers. Standard tabulated cases include a beam with both ends overhanging the supports and two equal loads applied at symmetrical locations, a beam with fixed ends under tapering loading, and a beam with both ends overhanging the supports with a load at any point between them. A beam with one end fixed and the other end simply supported is a propped cantilever; a beam having an overhang on both sides is called a double overhang beam; a beam with one end fixed and the other end free is a cantilever. One calculator computes the end slopes, support reactions, maximum deflection, and maximum stress in a cantilever beam with a point load at its free end, using standard formulae for slope and deflection; M1 denotes the bending moment at midspan between the supports. The plots of the shear and moment diagrams, as well as the displayed tabulation of shear, moment, slope, and deflection, are based on the beam (or each individual span) being divided into fifty (50) equal segments. In the sign convention of the inverted moment diagram (BMD), the moment is positive when there is tension at the bottom of the beam. The calculator also gives the maximum bending moment value and the position where it occurs, and the calculated values can be used to draw the diagrams of shear force and bending moment (see also Design Aid 6, beam design formulas with shear and moment). The deflection under combined loading at midspan for pin-ended members can be estimated closely by a standard formula in which the plus sign is chosen if the axial load is tension and the minus sign if the axial load is compression, with d the midspan deflection under combined loading. The deformation of a beam under load is measured by the deflection of the beam before and after the load. Castigliano's theorem applied to bending can be used to solve for the deflection at the point where the load or moment is applied. For a cantilever, at the wall (x = 0) the moment felt is the maximum moment, PL, but at the free end of the beam the moment is zero, because loads at that location contribute no moment there.
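The end-loaded cantilever just mentioned is worth writing out explicitly (standard textbook results, with x measured from the fixed end and a tip load P on a beam of length L):

\[
|M(x)| = P\,(L - x), \qquad |M|_{\max} = P L \ \text{(at the wall)}, \qquad \delta_{\mathrm{tip}} = \frac{P L^3}{3 E I}, \qquad \theta_{\mathrm{tip}} = \frac{P L^2}{2 E I}.
\]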
A cantilever example illustrates beam deflection by integration. Problem 867: for the beam in Figure P-867, compute the value of P that will cause zero deflection under P. Another calculator finds the slope and deflection at a section of a simply supported beam subjected to a uniformly distributed load (UDL) on the left-side portion of the span. For superposition, draw a BMD for each loading, including the support reactions of the original beam. Modern tools easily model shear, moment, and deflection, with unlimited supports and interactive input; a typical textbook setup is a cantilevered beam with a fixed support at the right end and a load P applied at the left end. An overhanging beam is a beam whose end extends beyond its support. Typical beam software combines free beam design, bending moment diagrams, shear force diagrams, and deflection calculations for simply supported and overhanging steel beams; such calculators are provided with educational purposes in mind and should be used accordingly. Some beam calculators also allow cantilever spans at each end, free apps exist for analyzing a cantilever beam with a uniform distributed load, and some use a powerful finite element engine (e.g., ClearCalcs). The Euler-Bernoulli equation describes the relationship between beam deflection and the applied external forces. The license type of some downloadable software is trial, meaning certain restrictions on functionality or only an evaluation period. One Excel workbook has two tabs involving the allowable stress design of a simply supported beam carrying a uniformly distributed load: in the first tab, the input consists of span length, elastic modulus, live load, dead load, allowable bending stress, deflection limit for live load, and deflection limit for live plus dead load (serviceability requirements). Free online beam calculators generate shear force diagrams, bending moment diagrams, deflection curves, and slope curves for simply supported and cantilevered beams, for both statically determinate and indeterminate cases.
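As a concrete illustration of what such a calculator does internally, here is a minimal sketch for two textbook cases; the function names and numerical inputs are hypothetical, and the formulas are the standard simply supported results quoted above.

import math  # not strictly needed here, kept for extensions

def reactions_point_load(L, a, P):
    """Support reactions for a simply supported beam of span L
    with a downward point load P at distance a from the left support."""
    Rb = P * a / L          # right reaction, from moment balance about the left support
    Ra = P - Rb             # left reaction, from vertical equilibrium
    return Ra, Rb

def max_deflection_central_load(L, P, E, I):
    """Midspan deflection for a central point load: delta = P L^3 / (48 E I)."""
    return P * L**3 / (48 * E * I)

if __name__ == "__main__":
    # Hypothetical numbers for illustration only; units must be consistent.
    L, P = 3000.0, 10_000.0          # mm, N
    E, I = 210_000.0, 5.0e6          # N/mm^2 (steel), mm^4
    Ra, Rb = reactions_point_load(L, L / 2, P)
    print(f"Ra = {Ra:.0f} N, Rb = {Rb:.0f} N")
    print(f"max deflection = {max_deflection_central_load(L, P, E, I):.2f} mm")

For a central load the two reactions come out equal (5000 N each), and the midspan deflection evaluates to about 5.4 mm for these inputs.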
Construction of New Delay-Tolerant Perfect Space-Time Codes. Perfect Space-Time Codes (STC), in their original construction, are optimal codes for Multiple Input Multiple Output (MIMO) systems. Based on Cyclic Division Algebras (CDA), they are full-rate, full-diversity codes, have Non-Vanishing Determinants (NVD), and hence achieve the Diversity-Multiplexing Tradeoff (DMT). In addition, these codes have led to optimal distributed space-time codes when applied in cooperative networks under the assumption of perfect synchronization between relays. However, they lose their diversity when delays are introduced and thus are not delay-tolerant. In this paper, using the cyclic division algebras of perfect codes, we construct new codes that maintain the same properties as perfect codes in the synchronous case. Moreover, these codes preserve their full diversity in asynchronous transmission. I Introduction and Problem Statement During the past decade, MIMO techniques have attracted great interest in wireless communication systems. Using multiple antennas at the transmitter and the receiver provides high data rates and exploits spatial diversity in order to combat channel fading and hence improve link reliability. Lately, cooperative diversity has emerged as a new form of spatial diversity via the cooperation of multiple users in the wireless system . While preserving the same MIMO benefits, it obviates the need to incorporate many antennas into a single terminal, especially in cellular systems and ad-hoc sensor networks, where it can be impractical for a mobile unit to carry multiple antennas due to its size, power, and cost limitations. In cooperative networks, users communicate cooperatively to transmit their information by using distributed antennas belonging to other independent terminals. This way, a virtual MIMO scheme is created, where a transmitter also acts as a relay terminal, with or without some processing, assisting another transmitter in conveying its messages to a destination. Cooperative schemes have been widely investigated by analyzing their performance through different cooperative protocols [1, 2, 3]. These protocols fall essentially into two families: Amplify-and-Forward (AF) and Decode-and-Forward (DF). In order to achieve cooperative diversity, space-time coding techniques from MIMO systems have also been applied, yielding many designs of distributed space-time codes under the assumption of synchronized relay terminals [2, 3, 4]. However, this a priori condition on synchronization can be quite costly in terms of signaling and even hard to maintain in relay networks [5, 6]. Unlike a conventional MIMO transmitter, equipped with one antenna array driven by one local oscillator, distributed antennas are dispersed over different terminals, each with its own local oscillator. Thus, they do not share the same timing reference, resulting in asynchronous cooperative transmission. On the other hand, in synchronous transmission, distributed STCs are constructed basically according to the rank and determinant criteria and hence aim at achieving full diversity. Note that the rows of the codeword matrix represent the different relay terminals (antennas). So, when asynchronism arises, delays are introduced between the symbols transmitted from the different distributed antennas, shifting the matrix rows. This matrix misalignment can cause rank deficiency of the space-time code, and thus performance degradation.
Therefore, the codes previously designed are no longer effective unless they tolerate asynchronism. Furthermore, an efficient code design should provide the full diversity order for any delay profile. The aim is to guarantee a full-rank codeword distance matrix, i.e., rank equal to the number of relays involved, leading to the so-called delay-tolerant distributed space-time codes . II Delay-Tolerant Distributed Space-Time Codes The first designs of such codes were presented by Li and Xia as full-diversity binary Space-Time Trellis Codes (STTC) based on the Hammons-El Gamal stacking construction, its generalization to Lu-Kumar multilevel space-time codes, and the extension of the latter codes to more diverse AM-PSK constellations [8, 9]. A systematic construction including the shortest STTC with minimum constraint length was also proposed in , as well as some delay-tolerant short binary Space-Time Block Codes (STBC) . Recently, Damen and Hammons extended the Threaded Algebraic Space-Time (TAST) codes to asynchronous transmission . The delay-tolerant TAST codes are based on three different thread structures in which the threads are separated by using different algebraic or transcendental numbers that guarantee a non-zero determinant of the codeword distance matrix. An extension of this TAST framework to minimum-delay-length codes was considered in . Meanwhile, perfect space-time block codes, the optimal codes originally constructed for MIMO systems [14, 15, 16, 17], were also investigated for wireless relay networks. In [18, 19], the authors provided coding schemes that are optimal in the sense of the DMT tradeoff , based on cyclic division algebras, for any number of users and for different cooperative strategies. Nevertheless, all these schemes assumed perfect synchronization between users. Then, in , Elia and Kumar discussed the delay-tolerant version of the optimal perfect code variants for asynchronous transmission. They stated that delay-tolerant diagonally-restricted CDA codes and delay-tolerant full-rate CDA codes can be obtained from previous designs by multiplying the codeword matrix by a random unitary matrix. This matrix can be taken specifically from an infinite set of unitary matrices that do not have elements in the code field. In this paper, we construct delay-tolerant distributed codes based on the perfect code algebras from a different point of view. The new construction is obtained from the tensor product of two number fields, one of them being the field used for the perfect code. The codes are designed in such a way as to maintain the same properties as their corresponding perfect codes in synchronous transmission, namely full rate, full diversity, and non-vanishing minimum determinant. In addition, unlike the perfect codes, the new codes preserve full diversity in asynchronous transmission. Before addressing the STC construction, we dedicate this section to a brief review of the remarkable properties of the perfect codes as analyzed in [14, 15, 16, 17]. Then, following the framework of , we present the cooperative communication model of interest. III-A Perfect Space-Time Block Codes The concept of a Perfect Code was originally proposed in [14, 15] for transmit antennas to describe a square linear dispersion STC . The perfect codes are constructed from cyclic division algebras of degree defined by , where and are number fields with the corresponding rings of integers.
is called the base field and is taken to be or , since the ST code transmits -QAM or -HEX information symbols for or , respectively. Thus, the constellations can be seen as finite subsets of the ring of Gaussian integers or Eisenstein integers , respectively. is a cyclic Galois extension of of degree , with or a field extension appropriately chosen in order to obtain an existing lattice and a division algebra, and an algebraic number. is the generator of the Galois group , . For an element , the conjugates of are , so the norm and the trace are defined respectively as . denotes the set of non-zero elements of . is a non-norm element suitable for the cyclic extension . The cyclic division algebra is then expressed as a right -space . The perfect codes satisfy the following criteria. Full rate: the code transmits symbols drawn from a QAM or HEX constellation and thus has a rate of symbols per channel use (spcu). Full diversity: according to the rank criterion , the determinant of the codeword distance matrix is non-zero for any two distinct codewords ; by code linearity, this reduces to . Non-vanishing minimum determinant: the minimum determinant of any codeword distance matrix, prior to SNR normalization, is lower bounded by a constant that is independent of the constellation size . Cubic shaping: the QAM or HEX constellations are normalized according to the power at the transmitter so that the real vectorized codewords are isomorphic to the cubic lattices or ; in other words, the rotation matrix encoding the information symbols into each layer is required to be unitary, to guarantee the energy efficiency of the codes. The shaping constraint thus leads to two further properties: uniform average transmitted energy per antenna, and information losslessness, since a unitary linear dispersion matrix preserves the mutual information of the MIMO channel. Thanks to prominent results on the diversity-multiplexing tradeoff , the perfect codes also enjoy two further, equivalent, properties. DMT optimality: in , Elia et al. proved that full-rate STCs from cyclic division algebras having the NVD property achieve the optimal DMT over the Rayleigh fading channel. Approximate universality: being CDA-based codes with the NVD property, the perfect codes are approximately universal and achieve the DMT for an arbitrary channel fading distribution. Satisfying all these criteria, the perfect codes were shown to improve the error-probability performance upon the best previously known codes. III-B Cooperative System Model In the sequel, we consider a cooperative system in which a source communicates with a destination via relays in two phases, as in Figure 1, without direct links between the source and the destination. In the first phase, the source broadcasts its message to the potential relays. In the second phase, the relays use the DF protocol to detect the source message and then, if it is successfully detected, transmit it to the destination. We assume that all the relays are able to achieve error-free decoding, which can be made possible by selecting the source-relay links and considering only the links that are not in outage. Note that it is also possible that not all the relays successfully decode the original message, so the number of transmitting relays is usually modeled as a random variable. Since the relays' transmissions overlap in time and frequency, they can cooperatively implement a distributed space-time code.
Considering only the second phase of transmission, the system is equivalent to a MIMO scheme in which the distributed perfect space-time code is used by the relays, with transmit antennas (one per relay) and receive antennas at the destination. In every time slot , the relays send the column vector of the codeword and the destination receives , where is the additive white Gaussian noise with i.i.d. complex Gaussian entries of zero mean and variance , with the noise variance per real dimension. represents the complex channel matrix, modeled with i.i.d. Gaussian random variables of zero mean and unit variance . The channel is assumed quasi-static, with constant fading during a transmitted codeword and independent fading between subsequent codewords. Dealing with square STCs , the codeword matrix contains information symbols carved from two-dimensional QAM or HEX finite constellations denoted by . III-C Asynchronous Cooperative Diversity The above expression (6) is valid only when the relays are synchronized. In the presence of asynchronism, the codeword transmission spans more than symbol intervals due to the delays. Although symbol synchronization is not required, we assume that the relays are synchronized at the frame (codeword) level, which can be achieved by means of network feedback signaling from the destination. Therefore, the start and the end of each codeword are aligned across the different relays by transmitting zero symbols, and hence there is no interference between codeword transmissions. We further assume that the timing errors between different relays are integer multiples of the symbol duration, the fractional timing errors being absorbed in the channel dispersion. In the codeword matrix, these delays are likewise filled with zeros; they are known at the receiver but not at the transmitting relays . Denoting a delay profile by , a delay corresponds to the relative delay of the received signal from the -th relay with respect to the earliest received relay signal. Let denote the maximum of the relative delays; then, from the receiver's perspective, the codeword matrix was sent instead of the space-time code. III-D Motivation of the Code Construction The diversity order of a space-time code is given by the minimum rank of the codeword distance matrix over all pairs of distinct codewords . The distributed perfect codes are full-rate and full-diversity for synchronous transmission between the relays and the destination. Note that, in general, a transmission between a source, half-duplex relays, and a destination results in a rate loss. When asynchronism is introduced, the code is no longer full-rate, since it spans time instants. Moreover, certain delay profiles can result in linearly dependent rows, so the code loses its full-diversity property. Let us illustrate this with the following example. Example: the Golden code. We consider the distributed Golden code transmitting information QAM symbols from two synchronized relays with the codeword matrix . The Golden code is designed on a cyclic field extension of degree over the base field . Using the generator matrix of the corresponding complex -dimensional lattice, the codeword elements are lattice points obtained by linear combinations of pairs of symbols. Now, let the first relay be delayed by one symbol period with respect to the second , so that the new asynchronous codeword matrix is . Suppose we have two distinct codewords and with and the other symbols equal, i.e., .
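Since the displayed matrices did not survive extraction, it may help to recall the standard Golden code codeword from the literature (our reconstruction, in the usual normalization; all symbols below are ours):

\[
X=\frac{1}{\sqrt5}\begin{pmatrix}\alpha(s_1+s_2\theta) & \alpha(s_3+s_4\theta)\\ i\,\bar\alpha(s_3+s_4\bar\theta) & \bar\alpha(s_1+s_2\bar\theta)\end{pmatrix},\qquad
\theta=\frac{1+\sqrt5}{2},\ \ \bar\theta=1-\theta,\ \ \alpha=1+i(1-\theta),\ \ \bar\alpha=1+i(1-\bar\theta).
\]

A one-symbol delay of the first row then yields

\[
X_{\mathrm{async}}=\frac{1}{\sqrt5}\begin{pmatrix}0 & \alpha(s_1+s_2\theta) & \alpha(s_3+s_4\theta)\\ i\,\bar\alpha(s_3+s_4\bar\theta) & \bar\alpha(s_1+s_2\bar\theta) & 0\end{pmatrix},
\]

so if two codewords differ only in the pair (s_1, s_2), their difference in the asynchronous case has a single non-zero column and hence rank one, matching the discussion below.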
The difference between the codeword matrices, in the synchronous and asynchronous cases respectively, is . It can be seen that is a full-rank matrix, whereas has rank one, so the Golden code is not a delay-tolerant code. In fact, it can be seen from the asynchronous codeword matrix that some symbols are aligned at the same instant due to the delays, thus losing diversity. In order to resolve this problem of rank deficiency, our solution consists in transmitting from each antenna (relay), at each transmission time, a different combination of all the information symbols. This way, in the presence of delays, we ensure that any combined symbol sent from the relays arrives at the destination in at least different instants, hence guaranteeing the full diversity order of the space-time code. The new STC will then have the shifted codeword matrix . Now, to obtain these linear combinations of the symbols, we need a higher-dimensional lattice compared to the -dimensional lattice used for the Golden code. We therefore propose to obtain the corresponding lattice generator matrix from the tensor product of two field extensions of , one of them being the field extension of the Golden code. Following this idea, we aim at constructing, in general, new codes based on the CDAs of the perfect codes, such that they maintain the same optimal properties as the perfect codes in the synchronous case, while also preserving their full diversity in asynchronous transmission, thus being delay-tolerant for an arbitrary delay profile. IV Construction of Delay-Tolerant Distributed Codes Based on Perfect Code Algebras IV-A General Construction The approach consists in constructing a division algebra isomorphic to the tensor product (also called the Kronecker or cross-product) of two number fields of lower degrees. Other constructions based on crossed-product algebras have been investigated in [22, 23], either for prime or for coprime degrees of the composite algebras. In those constructions, the space-time code was built on the cyclic product algebra; in the present construction, however, the higher-degree algebra is only used to derive the space-time code appropriately. Since we intend to construct a full-rate space-time code based on the CDA of the full-rate perfect code, the first algebra to be considered is the cyclic division algebra of the perfect code of degree over the base field . For the sake of simplicity, we analyze in the sequel the case of the Gaussian field to explain the construction. Indeed, we consider the cyclic field extension of degree over , with an algebraic number. The principal ideal is generated by an element , and its integral basis is (or, if unitary, it is given by ). The basis of the complex algebraic lattice is obtained by applying the canonical embedding to . Consequently, the generator matrix corresponds to the rotation matrix in , where is a normalization factor used to guarantee the unitarity of the matrix. Now, we consider another Galois extension over of the same degree such that its discriminant is coprime to that of , i.e., . Let , with an algebraic number. The Galois group is generated by , . The principal ideal of the algebra is such that , and thus its integral basis is given by . The canonical embedding of gives another complex rotated lattice of , generated by the unitary matrix with normalization factor . The tensor product of the two field extensions allows one to build a rotated lattice in higher dimension, corresponding to the complex unitary matrix based on the previous constructions.
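The mechanism can be summarized compactly (a sketch of the standard algebraic fact being used; the notation below is ours, since the symbols above were stripped): if M_1 and M_2 are the unitary generator matrices of the lattices of two extensions K_1 and K_2 of Q(i) with coprime discriminants, then the compositum K_1 K_2 admits the generator matrix

\[
M = M_1 \otimes M_2, \qquad M M^{\dagger} = (M_1 M_1^{\dagger}) \otimes (M_2 M_2^{\dagger}) = \mathbf{1},
\]

which is again unitary, so cubic shaping is inherited automatically from the two factors.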
According to , : let be the compositum of the above Galois extensions, of order over , as presented in Figure 2. Since and have coprime discriminants, the corresponding lattice generator matrix can be obtained as the tensor product of the previous unitary generator matrices. : let be the order of the extensions; then the discriminant of is . The minimum product distance of the lattice is derived from the discriminant of as . Using the matrix , the space-time coded components are given by the linear combination , where is the information symbol vector carved from a -QAM constellation . Then, the space-time codeword matrix is defined by distributing the components with appropriate constant factors ; it can be represented as a Hadamard product . The key idea in the code construction is to determine the coefficients that allow one to preserve the same properties as the corresponding perfect codes in synchronous transmission (Section III-A). On the one hand, it can be seen that the new code transmits information symbols and is thus full-rate, with spcu for a relays-destination transmission phase. On the other hand, we need to find the factors that satisfy the rank criterion (4) in order to obtain full-diversity codes. Moreover, the perfect codes have non-vanishing minimum determinants; we are therefore interested in deriving ST codes whose determinants are not only non-zero, but also do not vanish as the constellation size increases. In order to guarantee a uniform energy distribution in the codeword, we require that satisfy . Choosing further the coefficients yields better determinants, as obtained for the non-norm elements of the perfect codes ; this restricts the values of to . It can also be noticed that the new code satisfies the cubic shaping property, since the generator matrix of the -dimensional lattice is unitary, and hence the code is information lossless. In addition, when asynchronism between relays is involved, the rank criterion must also be verified for the shifted matrix , and a further criterion, the non-zero product distance of the codeword matrix, will be analyzed in order to prove that the new codes are delay-tolerant and thus keep their full diversity in asynchronous transmission. V New Delay-Tolerant Codes from -Dimensional Perfect Codes Based on the previous approach, we consider the perfect codes proposed in [14, 15] for dimensions to construct the new delay-tolerant codes. Then, in the next section, we apply this construction to the perfect codes presented for any number of antennas in [Elia2:2005]. V-A Code Based on the Golden Code The Golden code was constructed in using the cyclic division algebra of degree over . is a Galois extension of degree ; it is a -dimensional vector space over with basis , where is the Golden number. Its Galois group is generated by . In order to obtain a rotated lattice of , the principal ideal generated by was found. Its basis is and its unitary generator matrix is given by , with and the respective conjugates of and . Let be the cyclotomic extension of degree over , with the primitive root of unity. Its discriminant is , coprime to that of since . The Galois group is generated by , and the integral basis of is . The corresponding unitary generator matrix is . Therefore, is the compositum of Galois extensions of degree each, with coprime discriminants. The unitary matrix is obtained as the tensor product of the previous matrices, , and the codeword matrix is defined by , where are the components of the vector , whose entries are -QAM symbols.
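For reference, the standard data entering the Golden factor of this construction are well known (a reconstruction of the stripped displays, following the original Golden code literature; the concrete form of the second, cyclotomic factor in this paper cannot be read off the extraction, so only its generic shape is indicated):

\[
\theta=\frac{1+\sqrt5}{2},\quad \sigma(\theta)=1-\theta,\quad \alpha=1+i(1-\theta),\quad
M_1=\frac{1}{\sqrt5}\begin{pmatrix}\alpha & \alpha\theta\\ \sigma(\alpha) & \sigma(\alpha)\sigma(\theta)\end{pmatrix},
\]

which one can check is unitary; the cyclotomic factor contributes a 2-by-2 unitary matrix of the analogous shape built from the integral basis {1, zeta} and its conjugate embedding, and the full generator is the tensor product of the two.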
We now propose to determine the coefficients that satisfy the non-vanishing determinant criterion. V-A1 Non-Vanishing Minimum Determinant The determinant of this codeword matrix equals . By developing and , we obtain . It is interesting to note that the Golden codeword given by matrix (7) has determinant . Therefore, by choosing and , the determinant of the new code is equal to the Golden code determinant and does not vanish when the size of the QAM constellation carved from increases. Hence, the new code achieves the diversity-multiplexing tradeoff [20, 16]. It can also be noticed that the coefficients can equivalently be taken as the coefficients of the Fourier matrix , where is the primitive root of unity; for dimension , we have . Furthermore, we have found fixed unitary matrices and such that for all values of , with . In the distributed setup, each row of the code matrix is transmitted by a different relay (Section III-B). In practical scenarios, the two relays do not share a common timing reference, and therefore the arrival of the packets is not synchronous. As we assume synchronization at the symbol level, the distributed code can still achieve full diversity if the differences between codeword matrices are full-rank even when the different rows are arbitrarily shifted. In what follows, we prove that the new code satisfies this condition. Consider the shifted codeword matrix of ; we need to guarantee that it is full-rank when , i.e., for any from the constellation . This reduces to showing that the submatrix is full-rank, i.e., that its determinant when . More generally, for delay profiles or , the problem reduces to proving that the product distance of the rotated constellation associated with the matrix of is non-zero over , so that any component product is non-zero. This product distance is evaluated as , with for . As a direct consequence of the tensor product construction, Equation (14) gives . Thus, the minimum product distance is non-zero. It can also be verified in by setting : is non-zero unless , and consequently the submatrix is full-rank, since unless . Therefore, the new code, unlike the Golden code, keeps its full diversity in the case of asynchronous relays. However, we cannot guarantee the non-vanishing determinant property in the asynchronous case, because the expression of can be interpreted as a Diophantine approximation of by rational numbers, which can be made tighter by using a larger constellation size. V-B Code Based on the Perfect Code In order to construct the delay-tolerant code, we consider the base field and we use -HEX symbols. Let , with the root of unity. The perfect code was constructed using the cyclic division algebra of order , where is the relative extension and the generator of the cyclic extension, with . The integral basis is given by , and the complex lattice is a rotated version of , generated by . The relative discriminant of is . Another extension of of degree with discriminant coprime to that of is the cyclotomic extension , with the primitive root of unity and . Its Galois group is generated by . The integral basis of is and the lattice generator matrix is . The compositum of the two extensions is of order over . The corresponding -dimensional complex lattice is then generated by the unitary matrix , and the space-time code is defined by the matrix , where are the components of the vector , being the information symbol vector carved from the -HEX constellation.
V-B1 Non-Vanishing Minimum Determinant Proceeding as previously, we need to determine the coefficients that guarantee the non-vanishing minimum determinant. In order to obtain , so that a uniform average energy is transmitted per antenna, and to obtain better values of the determinant, we limit the choice of to . By developing the code determinant using symbolic computation in Mathematica, we find that it has the same expression as the perfect code determinant by choosing as the Fourier matrix coefficients in . Therefore, the infinite code has non-vanishing minimum determinant equal to . On the other hand, to prove the delay tolerance of this code, we must guarantee that the corresponding shifted codeword matrices are full-rank. It suffices to verify that for each asynchronous matrix there exists a square submatrix that is full-rank, i.e., whose determinant is non-zero. In fact, enumerating all the delay profiles, one notices that the problem of guaranteeing full-rank shifted matrices reduces to guaranteeing that: (i) all component products are non-zero, which always holds, since the product distance over satisfies as ; and (ii) all minors of are non-zero, which is equivalent to verifying that the entries of the cofactor matrix of are non-zero. In order to prove the second condition, we find two unitary matrices and such that the codeword matrix can be written as for all , with the perfect code matrix, where and are defined by . Let us denote the cofactor matrix of the perfect code by . Since is a finite subset of the cyclic division algebra , is also a subset of , taken from the lattice with , where is the ring of integers of . Hence, the cofactor matrix can be represented as a codeword matrix. For simplicity, we denote by and the conjugates of an entry of the codeword matrix. The cofactor codeword matrix is then defined by , where each diagonal . Since , we denote by its cofactor matrix; it is given by and satisfies . Developing the cofactor matrix , we get
Recommended for: capital accumulation; savers and investors 10-20 years from retirement. The Moderate Risk Portfolio is appropriate for an investor with a medium risk tolerance and a time horizon longer than five years. Moderate investors are willing to accept periods of moderate market volatility in exchange for the possibility of receiving returns that outpace inflation by a significant margin. To be compatible with most retirement plans, this Portfolio does not include our Maximum Yield Strategy or our leveraged Universal Investment Strategy. If you are using a more flexible account, you can choose from our unconstrained portfolios in the Portfolio Library. We also offer a version for 401k plans which do not allow individual stocks; see details here. Total return, when measuring performance, is the actual rate of return of an investment or a pool of investments over a given evaluation period. It includes interest, capital gains, dividends, and distributions realized over that period, and thus accounts for two categories of return: income (interest paid by fixed-income investments, distributions, or dividends) and capital appreciation (the change in the market price of an asset). The compound annual growth rate (CAGR) is not a true return rate but a representational figure: it is essentially the rate at which an investment would have grown if it had grown at the same rate every year, with profits reinvested at the end of each year. In reality, this sort of performance is unlikely; however, CAGR can be used to smooth returns so that they may be more easily understood when compared to alternative investments. Volatility is the rate at which the price of a security increases or decreases for a given set of returns, measured by calculating the standard deviation of the annualized returns over a given period of time. It shows the range in which the price of a security may increase or decrease, and thus measures the risk of a security; it is used in option pricing formulas to gauge the fluctuations in the returns of the underlying assets, and it helps estimate the fluctuations that may happen over a short period of time. The downside volatility is similar to the volatility, or standard deviation, but takes only losing (negative) periods into account. The Sharpe ratio is the measure of the risk-adjusted return of a financial portfolio: the excess portfolio return over the risk-free rate, relative to its standard deviation. Normally, the 90-day Treasury bill rate is taken as the proxy for the risk-free rate. A portfolio with a higher Sharpe ratio is considered superior relative to its peers. The measure is named after William F. Sharpe, a Nobel laureate and professor of finance, emeritus, at Stanford University. The Sortino ratio improves upon the Sharpe ratio by isolating downside volatility from total volatility, dividing excess return by the downside deviation.
The Sortino ratio is thus a variation of the Sharpe ratio that differentiates harmful volatility from total overall volatility by using the asset's standard deviation of negative returns, called the downside deviation: it takes the asset's return, subtracts the risk-free rate, and divides that amount by the asset's downside deviation. The ratio is named after Frank A. Sortino. The ulcer index is a stock market risk measure, or technical analysis indicator, devised by Peter Martin in 1987 and published by him and Byron McCann in their 1989 book The Investor's Guide to Fidelity Funds. It is designed as a measure of volatility, but only volatility in the downward direction, i.e., the amount of drawdown or retracement occurring over a period. Other volatility measures, such as standard deviation, treat up and down movement equally, but a trader does not mind upward movement; it is the downside that causes the stress and stomach ulcers the index's name suggests. Maximum drawdown measures the loss in any losing period during a fund's investment record. It is defined as the percent retrenchment from a fund's peak value to the fund's valley value, and it is in effect from the time the fund's retrenchment begins until a new fund high is reached. The maximum drawdown encompasses both the period from the fund's peak to the fund's valley (length) and the time from the fund's valley to a new fund high (recovery), and it measures the largest percentage drawdown that has occurred in the fund's data record. The drawdown duration is the length of any peak-to-peak period, i.e., the time between new equity highs. The maximum drawdown duration is the worst (longest) amount of time an investment has seen between peaks. Many assume the maximum drawdown duration is the length of time between new highs during which the maximum drawdown (magnitude) occurred, but that is not always the case: the maximum drawdown duration is the longest time between peaks, period. It could coincide with the program's biggest peak-to-valley loss (and usually does, because a program needs a long time to recover from its largest loss), but it does not have to. The average drawdown duration, by contrast, is the average amount of time an investment has seen between equity highs, i.e., the average time under water across all drawdowns; unlike the maximum duration, it does not measure a single drawdown event but averages over all of them.
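Since the per-asset example values did not survive extraction, here is a minimal sketch showing how the statistics defined above are computed from a return series. The monthly returns and the 2% risk-free rate are hypothetical, and the x12 / sqrt(12) annualization is one common convention among several.

import math

# Hypothetical monthly returns for illustration only.
returns = [0.021, -0.013, 0.007, 0.030, -0.024, 0.011,
           0.004, -0.008, 0.016, 0.009, -0.005, 0.012]
rf_monthly = 0.02 / 12  # assumed constant risk-free rate (2% annual)

n = len(returns)

# CAGR: the constant rate that would produce the same cumulative growth.
growth = 1.0
for r in returns:
    growth *= 1 + r
cagr = growth ** (12 / n) - 1

# Volatility: annualized standard deviation of the return series.
mean = sum(returns) / n
var = sum((r - mean) ** 2 for r in returns) / (n - 1)
vol_annual = math.sqrt(var) * math.sqrt(12)

# Downside deviation: only returns below the risk-free rate count.
downside = [min(r - rf_monthly, 0.0) for r in returns]
dd_dev = math.sqrt(sum(d * d for d in downside) / n) * math.sqrt(12)

# Sharpe and Sortino: excess return over total vs. downside risk.
excess_annual = (mean - rf_monthly) * 12
sharpe = excess_annual / vol_annual
sortino = excess_annual / dd_dev if dd_dev else float("inf")

# Maximum drawdown from the implied equity curve.
equity, peak, max_dd = 1.0, 1.0, 0.0
for r in returns:
    equity *= 1 + r
    peak = max(peak, equity)
    max_dd = max(max_dd, 1 - equity / peak)

print(f"CAGR {cagr:.2%}  vol {vol_annual:.2%}  Sharpe {sharpe:.2f}  "
      f"Sortino {sortino:.2f}  maxDD {max_dd:.2%}")

Note the design choice mirrored from the definitions: the Sharpe ratio divides excess return by total volatility, while the Sortino ratio divides the same excess return by the downside deviation only, so an asset with frequent small gains and rare losses scores relatively better on Sortino.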
- Boris Brimkov, Illya V. Hicks: Complexity and computation of connected zero forcing. Discrete Applied Mathematics 229: 31-45 (2017).
- Boris Brimkov, Jennifer Edmond, Robert Lazar, Bernard Lidicky, Kacy Messerschmidt, Shanise Walker: Injective choosability of subcubic planar graphs with girth 6. Discrete Mathematics 340: 2538-2549 (2017).
- Aida Abiad, Boris Brimkov, Aysel Erey, Lorinda Leshock, Xavier Martinez-Rivera, Suil O, Sung-Yell Song, Jason Williford: On the Wiener index, distance cospectrality and transmission-regular graphs. Discrete Applied Mathematics (2017).
- Boris Brimkov, Illya V. Hicks: Chromatic and flow polynomials of generalized vertex join graphs and outerplanar graphs. Discrete Applied Mathematics 204: 13-21 (2015).
- Boris Brimkov, Illya V. Hicks: Memory efficient algorithms for cactus graphs and block graphs. Discrete Applied Mathematics 216: 393-407 (2015).
- Boris Brimkov: Geometric approach to string analysis for biosequence classification. Journal of Integrative Bioinformatics 11(3) (2014).
- Valentin E. Brimkov, Reneta P. Barneva, Boris Brimkov: Connected distance-based rasterization of objects in arbitrary dimension. Graphical Models 73(6): 323-334 (2011).
- Boris Brimkov: On Sets of Line Segments Featuring a Cactus Structure. IWCIA 2017: 30-39.
- Boris Brimkov, Valentin E. Brimkov: Geometric Approach to Biosequence Analysis. PACBB 2014: 97-104.
- Boris Brimkov: Memory Efficient Shortest Path Algorithms for Cactus Graphs. ISVC 2013: 476-485.
- Boris Brimkov, Jae-Hun Jung, Jim Kotary, Xinwei Liu, Jing Zheng: A spectral and radial basis function hybrid method for visualizing vascular flows. CompIMAGE 2012: 205-208.
- Kamen Kanev, Nikolay Mirenkov, Boris Brimkov, Kanio Dimitrov: Semantic Surfaces for Business Applications. S3T 2009: 36-43.
- Valentin E. Brimkov, Reneta P. Barneva, Boris Brimkov: Minimal Offsets That Guarantee Maximal or Minimal Connectivity of Digital Curves in nD. DGCI 2009: 337-349.
- Valentin E. Brimkov, Reneta P. Barneva, Boris Brimkov, Francois de Vieilleville: Offset Approach to Defining 3D Digital Lines. ISVC 2008: 678-687.
- R. P. Barneva, K. Kanev, B. Kapralos, M. Jenkin, B. Brimkov: Integrating technology-enhanced collaborative surfaces and gamification for the next generation classroom. Journal of Educational Technology Systems 45(3): 309-325 (2017).
- B. Brimkov: Emphasizing space efficiency in a computer science curriculum. Journal of Computing Sciences in Colleges 31(6): 55-57 (2016).
- R. P. Barneva, B. Brimkov: How computer science develops mathematical skills. Journal of Computing Sciences in Colleges 26(6): 170-172 (2011).
- B. Brimkov, C. C. Fast, I. V. Hicks: Computational Approaches for Zero Forcing and Related Problems. arXiv:1704.02065 (2017).
- B. Brimkov, C. C. Fast, I. V. Hicks: Graphs with Extremal Connected Forcing Numbers. arXiv:1701.08500 (2017).
- D. Amos, J. Asplund, B. Brimkov, R. Davila: The sub-k-domination number of a graph with applications to k-domination. arXiv:1611.02379 (2016).
- B. Brimkov, R. Davila: Characterizations of the connected forcing number of a graph. arXiv:1604.00740 (2016).
- B. Brimkov: On the logspace shortest path problem. Electronic Colloquium on Computational Complexity (ECCC) 23:3 (2016).
- B. Brimkov: A note on the clique number of complete k-partite graphs. arXiv:1507.01613 (2015).
- B. Brimkov: A reduction of the logspace shortest path problem to biconnected graphs. arXiv:1511.07100 (2015).
- B. Brimkov, V. E. Brimkov: Geometric approach to string analysis: deviation from linearity and its use for biosequence classification. arXiv:1308.2885 (2013).
- Boris Brimkov: Efficient computation of graph polynomials. Master's Thesis, Rice University, May 2015.
- Redistricting with optimal minority representation.
- Chromatic polynomials of some grid graphs.
- A matching-based heuristic for the traveling salesman problem.
- On the connected forcing number of a graph.
- "Connected Zero Forcing of a Graph", INFORMS Annual Meeting, Nashville, Nov 2016.
- "Structural and Extremal Results on Connected Zero Forcing", AMS Sectional Meeting, Denver, Oct 2016.
- "Space Efficient Algorithms for Large Scale Graph Problems", ACM Richard Tapia Celebration of Diversity in Computing Conference, Austin, Sept 2016.
- "Characterizing Chromatic and Flow Polynomials of Graphs", Graduate Research Workshop in Combinatorics, Laramie, July 2016.
- "Emphasizing Space Efficiency in a Computer Science Curriculum" (poster), Consortium for Computing Sciences in Colleges, Hamilton College, Clinton, April 2016.
- "Logspace shortest path algorithms", Computer Science Graduate Seminar, Rice University, Houston, Nov 2015.
- "Efficient Computation of Chromatic and Flow Polynomials", INFORMS Annual Meeting, Philadelphia, Nov 2015.
- "A space-efficient shortest path algorithm for block graphs", Computational and Applied Mathematics Graduate Seminar, Rice University, Houston, Sept 2015.
- "Efficient computations of certain graph polynomials", Seminar of the Algebra Department, University of Sofia St. Kliment Ohridski, Sofia, Bulgaria, June 2015.
- "Chromatic and flow polynomials of generalized vertex join graphs and outerplanar graphs", Computational and Applied Mathematics Graduate Seminar, Rice University, Houston, March 2015.
- "Geometric approach to string analysis", Computational and Applied Mathematics Graduate Seminar, Rice University, Houston, Oct 2013.
- "2D Modality Classification" (won third prize), Summer School on Image Processing, Vienna, Austria, July 2012.
- "A Novel Hybrid Method and Library Interpolation for Rapid CFD of Vascular Flows" (poster), Council on Undergraduate Research Posters on the Hill, Washington DC, April 2012.
- "Optimizing the Performance of a Hybrid Method for Numerically Solving and Visualizing Vascular Flows" (poster), Laurier Centennial Conference: Applied Mathematics, Modeling, and Computer Science, Waterloo, Canada, July 2011.
- "Proposing a Hybrid Method to Simulate Irregular Vascular Flows", Mathematical Developments and Applications: Radial Basis Functions, UMass Dartmouth, Dartmouth, June 2011.
- "Offset approach to defining 3D digital lines", International Symposium on Visual Computing, Las Vegas, Dec 2008.
Sanjoy posted on Wednesday, April 13, 2005 - 5:52 pm
Dear Professors Muthen: Before asking the main questions related to "difftest", let me clarify a couple of things. Please correct me if I'm wrong.
1. I have Mplus version 3.12, which I believe is the most recent. I could not find example 12.12 (page 278) in the User's Guide example folder; in fact, there is not a single file from Chapter 12 of the Mplus User's Guide in that folder on the Mplus CD.
2. As an alternative, I therefore replicated your code (page 278) and tried to run it. It failed, saying: "*** FATAL ERROR VARIABLE Y7 CAUSES A SINGULAR WEIGHT MATRIX PART. THIS MAY BE DUE TO THE VARIABLE BEING DICHOTOMOUS BUT DECLARED AS CONTINUOUS. RESPECIFY THE VARIABLE AS CATEGORICAL." The same happens for the variables Y8 and Y9. Next, I declared y7-y9 as categorical in the command file and tried again, and then it worked well. I used the same data set that you mention on page 278, though you have not declared y7-y9 as categorical. I just need to make sure I have not messed something up; kindly correct me if I have.
Now, coming to the DIFFTEST issues:
Q1. Usually, though not always, under the null hypothesis (H0) we assume the less restrictive model, and under the alternative hypothesis (H1) we put the restrictions on the model parameters (e.g. the Chow test). It looks like in the case of "difftest" we reverse the usual practice. Why is that?
Q2. Can you suggest an article written on the DIFFTEST procedure?
Q3. How should we use the DIFFTEST result? From the second step I got this: "Chi-Square Test for Difference Testing: Value 2.968, Degrees of Freedom 3, P-Value 0.3953". Usually a p-value close to zero (0.05 is a typical threshold) signals that our null hypothesis is false and we reject the null, while a large p-value (like the 0.3953 above) implies that there is no detectable difference for the sample size used, and therefore we fail to reject the null. However, in this "difftest" case, would it be the reverse?
Thanks and regards

BMuthen posted on Wednesday, April 13, 2005 - 11:24 pm
1. The examples from Chapter 12 are not included with the Mplus CD.
2. I would have to see the full model to answer this question.
A p-value greater than .05 says that the restrictions cannot be rejected, that is, the restrictions do not worsen the fit of the model. There is currently no article written on DIFFTEST.

Sanjoy posted on Thursday, April 14, 2005 - 9:37 am
Thank you, Professor. I will mail you the full model.

Sanjoy posted on Thursday, April 14, 2005 - 10:47 am
Dear Professor: Why is it that for WLSMV the conventional approach of taking the difference between the chi-square values and the difference in the degrees of freedom is not appropriate? I mean:
Q1. How can we show that the standard chi-square difference is NOT distributed as chi-square?
Q2. How do we ensure that DIFFTEST is doing the correct thing?
Thanks and regards

BMuthen posted on Friday, April 15, 2005 - 1:29 am
You may want to look at the literature by Satorra and Bentler on robust chi-square difference testing with continuous non-normal outcomes. The issues are the same. You can do a simulation study to see how well DIFFTEST performs. There will be a forthcoming paper on the DIFFTEST theory.

1. Is it the article "A scaled difference chi-square test statistic for moment structure analysis" (Psychometrika, 66, 507-514, 2001) by A. Satorra and P. M. Bentler that you are referring me to, or something else?
2. I'm severely time constrained; nonetheless I will try the simulations. In the meantime, if you could kindly send me an electronic copy of the forthcoming paper on the DIFFTEST theory, that would be a tremendous help. If the authors prohibit quoting it, it goes without saying that we will stick to that; reading the article would simply help me understand the nuances of DIFFTEST more thoroughly.
Thanks and regards

BMuthen posted on Saturday, April 16, 2005 - 4:25 am
2. The paper is not ready to be sent at this time.

Dr. Muthen, I am trying to follow example 12.12 in the Mplus version 4 manual to use the chi-square difference test in models involving the WLSMV estimator. I am receiving the following error message: THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL IS NOT NESTED IN THE H1 MODEL. My second-step model involves constraining two regression coefficients to be equal: y1 ON x1 (1); y1 ON x2 (1); These coefficients were freely estimated in the model that I am using in the first step, as indicated on p. 314. My interest is in testing whether constraining these regression coefficients in the second step deteriorates the model fit. Is this possible following example 12.12, or am I off base in using the chi-square difference test for such a purpose? Thank you.

It sounds like what you are doing is possible. You would need to send your input, data, output, and license number to firstname.lastname@example.org for us to say any more.

D C posted on Friday, September 24, 2010 - 2:48 pm
I am doing a multiple group analysis of a factor structure defined by categorical-ordinal indicators. I am using the WLSMV estimator, and hence I use DIFFTEST to judge whether various restrictions imposed on the model significantly worsen the fit. However, my data has a relatively large sample size (N = 3650, with 2100 in one group and 1550 in the other). My questions are:
1. Is DIFFTEST sensitive to large sample sizes, as chi-square tests are?
2. If so, I would like to use differences in CFI values (Meade et al., 2008) to help judge the difference in model fit between restricted and less restricted models. So, in a multiple group analysis (when the GROUPING statement is used) where various restrictions are imposed on a series of models, are the CFI values estimated anew each time? That is, is it advisable to take differences of CFI values between the restricted and less restricted models to judge model fit?

Dear Dr. Muthen, I am running a structural equation model with categorical latent variables (both IV and DV) and I am attempting a chi-square difference test for my multiple group analysis. I have been running into problems with the two-step chi-square test of model fit required when using the WLSMV estimator. After saving my derivatives in step one, with every pathway constrained, I go on to unconstrain one pathway for one group. Here is where I run into problems. I keep getting the error message: THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL IS NOT NESTED IN THE H1 MODEL. The chi-square in the second model (the one with one pathway unconstrained) is larger, and there are a greater number of degrees of freedom, compared to the baseline, fully constrained model. Is there something I'm not doing properly that makes my H0 not nested in the H1 model? Any guidance you can give me would be much appreciated. I can send you my input and data if that is helpful. Thank you!
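For readers following along, the two-step procedure being discussed has this general shape. The following skeleton uses only the DIFFTEST and SAVEDATA options quoted in these posts, with placeholder variable and file names rather than any poster's actual input:

    ! Step 1: estimate the less restrictive (H1) model and save its derivatives
    ANALYSIS:  ESTIMATOR = WLSMV;
    MODEL:     y1 ON x1;
               y1 ON x2;
    SAVEDATA:  DIFFTEST IS deriv.dat;

    ! Step 2: estimate the restricted (H0) model against the saved derivatives
    ANALYSIS:  ESTIMATOR = WLSMV;
               DIFFTEST = deriv.dat;
    MODEL:     y1 ON x1 (1);
               y1 ON x2 (1);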
Dear Dr. Muthen, I am fitting a cross-lagged model and comparing group differences among white, Hispanic, and black respondents, using categorical indicators for my latent factors. The DIFFTEST comparing the invariance and non-invariance models gives: Chi-Square Test for Difference Testing: Value 103.284, Degrees of Freedom 38, P-Value 0.0000. But the RMSEA (0.04), TLI (0.982), and CFI (0.98) of the more restricted model are better than the RMSEA (0.051), TLI (0.978), and CFI (0.966) of the less restricted model. Shouldn't it be the other way around? I mean, if the DIFFTEST is significant, shouldn't I expect the goodness-of-fit indices of the less restricted model to be better than those of the more restricted model? Thank you, Fernando

I don't know why the RMSEA and CFI are so good for the more restrictive model, but I assume that the chi-2 is bad also for the less restrictive model. In such cases, these fit indices don't always come out in the expected order of magnitude. I would rely more on the chi-2 DIFFTEST.

My question concerns the interpretation of the chi-square difference test under the WLSMV estimator. In your post above (April 13, 2005 at 11:24 pm) you suggested that "A p-value greater than .05 says that the restrictions cannot be rejected, that is, the restrictions do not worsen the fit of the model". However, on the UCLA website (precisely here: http://www.ats.ucla.edu/stat/mplus/faq/difftest.htm) the opposite interpretation of the p-value seems to be followed. Am I missing something? I cannot see how the two interpretations could match. I would also be grateful if you could suggest some key references.

I think they are saying the same thing in a slightly different way. See DIFFTEST under Technical Appendices on the website.

Suhaer Yunus posted on Thursday, November 28, 2013 - 12:27 pm
The independent variables in my study are binary and I have run an EFA (using Mplus version 7.1). The EFA results show that there are four correlated first-order factors, and the CFA results confirm that. Now I want to test whether the four-correlated-factor model is better, or whether there should be one higher-order factor representing the four first-order factors, or a single factor measuring all the items that form the four factors. I understand that the models are not really nested, so the DIFFTEST option may not be appropriate. I have estimated the three models separately, but how can I compare the results to choose the best one? Can I report the change in chi-square and change in df for these results? The results of the models are:
Base model - distinct first-order factors: Chi-sq = 1160.660* (df = 48), RMSEA = 0.034, CFI = 0.935, TLI = 0.911.
Model A - second-order model: Chi-sq = 1106.889* (df = 50), RMSEA = 0.032, CFI = 0.938, TLI = 0.919. But it suggests a 0.000 correlation between one first-order factor and the higher-order factor.
Model B - single factor: Chi-sq = 9593.979* (df = 54), RMSEA = 0.093, CFI = 0.444, TLI = 0.321.
I have also run DIFFTEST. The four-correlated-factor model is the least restrictive, the second-order model is more restrictive, and the single-factor model is the most restrictive. Comparing the single factor to the second-order factor, with ESTIMATOR = WLSMV and PARAMETRIZATION = DELTA I get the following results:
Base model - first order: Chi-sq = 1160.660* (df = 48), RMSEA = 0.034, CFI = 0.935, TLI = 0.911.
Second-order model: Chi-sq = 1520.644* (df = 50). Chi-square test for difference testing: Value = 223.535, df = 2, p-value = 0.000.
Values 1-3 don't work and I get the same error message. In the output I have noticed that there are two values for the convergence criterion:
one convergence criterion for H0 and another for H1. Which value needs reducing? If I set CONVERGENCE = 0.15 or a value higher than 0.15 it gives me the DIFFTEST results, but I am not sure whether it is OK to use CONVERGENCE = 0.15 or above.

I'm trying to compare a model with two latent factors to a model with one latent factor. I did that with DIFFTEST, since I am relying on either MLMV, ULSMV, or WLSMV. Mplus does not report a DIFFTEST result when I fix the correlation between the two factors at 1. At the same time it reports a warning: NO CONVERGENCE. SERIOUS PROBLEMS IN ITERATIONS. ESTIMATED COVARIANCE MATRIX NON-INVERTIBLE. CHECK YOUR STARTING VALUES. This warning also occurs when I run the two-factor model with the correlation fixed at 1 without the DIFFTEST option. All models work perfectly when the correlation is not fixed or when there is only one latent variable. Could there be a specific reason for the problem? My sample size is <200. Or did I misspecify the model:
ANALYSIS: type = general; estimator = MLMV;
MODEL: OD BY cb_16_m4 cb_12_m4 cb_11_m4 cb_1_m4 cb_17_m4 cb_6_m4;
ID BY cb_10_m4 cb_14_m4 cb_15_m4 cb_5_m4;
OD WITH ID@1;
[OD@0]; [ID@0];

Okay, thank you for the advice. I will have to try that. And is there any way to compute the chi-square test for nested models when one model has two factors and the restricted model fixes the correlation between the two factors to 1? (The problem being that there seem to be convergence problems for the model in which the factors correlate perfectly.)

Hello, I am working on a mediation model which includes latent and observed variables. My measurement model runs fine, and the SEM model works well too. However, I cannot get the DIFFTEST comparing the two models to work. I keep getting the warning that the H0 model is not nested in the H1 model. I just cannot figure out where I went wrong. Please help.
Measurement model:
categorical are mn35a mn35b mn35c mn35e mn35d anemia;
analysis: type = complex; parameterization = theta; estimator = wlsmv;
model: CHW by mn35a mn35b mn35c mn35e mn35d;
MN35D WITH MN35B;
know by vit diare Nution ebf;
chw with anemia;
savedata: difftest is first.out;
SEM model:
analysis: type = complex; parameterization = theta; estimator = wlsmv; difftest = first.out;
model: CHW by mn35a mn35b mn35c mn35e mn35d;
MN35D WITH MN35B;
know by vit diare Nution ebf;
anemia on know chw;
know on chw;
model indirect: anemia ind chw;

I am doing a multiple group analysis and want to test whether some paths in my structural model differ significantly between the groups. I did this by constraining all the paths except the one I'm interested in (H1) and comparing this model with a fully constrained model (H0). Since I'm using WLSMV as the estimator, I use the DIFFTEST option to get the chi-square difference test. However, I get the following warning: THE MODEL ESTIMATION TERMINATED NORMALLY. THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL MAY NOT BE NESTED IN THE H1 MODEL. DECREASING THE CONVERGENCE OPTION MAY RESOLVE THIS PROBLEM. Could you explain what this message means and how I can fix it?

I compared nine models to a full CLPM using the DIFFTEST function. The full CLPM had the best fit; however, many estimates in this model are not statistically significant. I find this difficult to interpret, as I would have expected an alternative model in which these paths were constrained to 0 to have had a better fit.
I have 5 waves of data, and at first I thought it might be because only one of the 4 lagged effects (a1->b2, a2->b3, etc.) was significant, but since I also have lagged associations between two variables that are not significant at any of the time intervals, I still do not understand why the full model had a better fit based on the DIFFTEST. I was wondering what your thoughts are on this topic.

If two estimates are each insignificant, it can still happen that a test of both of them being zero rejects. This is because the estimates are correlated. You can check this with a Wald test using the Mplus Model Test feature, where you can include several parameter tests.

Thank you for your response; I hope you can give an additional comment. What do you mean by "the estimates are correlated"? That they are equal in size (is that what I would test in Model Test: path1 = path2?)? Or can I also use a WITH statement in Model Test?
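For reference, the joint Wald test suggested in the reply above is specified by labeling the parameters in the MODEL command and testing the labels in MODEL TEST. This is a minimal sketch with placeholder labels, not a complete input:

    MODEL:      b2 ON a1 (p1);
                b3 ON a2 (p2);
    MODEL TEST: p1 = 0;
                p2 = 0;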
Plus articles go far beyond the explicit maths taught at school, while still being accessible to someone doing A level maths. They put maths in context by explaining the bigger picture — they explore applications in the real world, find maths in unusual places, and delve into mathematical history and philosophy. We hope that this collection will provide an ideal resource for anyone – including students and teachers – wanting to explore the world of maths.

One thing that will never change is the fact that the world is constantly changing. Mathematically, rates of change are described by derivatives. If you try and use maths to describe the world around you — say the growth of a plant, the fluctuations of the stock market, the spread of diseases, or physical forces acting on an object — you soon find yourself dealing with derivatives of functions. The way they inter-relate and depend on other mathematical parameters is described by differential equations. These equations are at the heart of nearly all modern applications of mathematics to natural phenomena. The applications are almost unlimited, and they play a vital role in much of modern technology.

The Plus articles listed below all deal with differential equations. In some cases the equations are introduced explicitly, while others focus on a broader context, giving a feel for why the equations hold the key to describing particular situations. None of the articles require more than a basic understanding of calculus, which you can get from reading our easy introductions.

- Getting started — a quick recap on calculus and some articles introducing modelling with differential equations;
- Bigger picture — examples of differential equations at work in the real world;
- Mathematical frontiers — mathematical developments, and the people behind them, that have contributed to the area of differential equations.

Maths in five minutes: Calculus — Need to know about calculus? Find out in our quick introduction!

Maths in a minute: Differential equations — Change is the only constant in our lives; find out why differential equations are so useful.

101 uses of a quadratic equation: Part II — The quadratic equation is one of the mightiest beasts in maths. This article describes how several real-life problems give rise to differential equations in the shape of quadratics, and solves them too.

Natural frequencies in music — It takes vibrations to make sound, and differential equations to understand vibrations. The article uses Newton's second law of motion to model the behaviour of a mass vibrating on a string.

Have we caught your interest? — Those who understand compound interest are destined to collect it. Those who don't are doomed to pay it. If you want to earn rather than lose, you need to understand the differential equations that are introduced explicitly in this article. (A worked version of the simplest such equation appears below.)

More applications of differential equations ...in medicine and nature

Saving lives: the mathematics of tomography — Not so long ago, if you had a medical complaint, doctors had to open you up to see what it was. These days they have a range of sophisticated imaging techniques at their disposal, saving you the risk and pain of an operation. This article looks at the maths that isn't only responsible for these medical techniques, but also for much of the digital revolution.

Modelling cell suicide — This article sheds light on suicidal cells and a mathematical model that could help fight cancer.
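As a worked illustration of the simplest kind of equation in this collection (referenced in the compound interest entry above), consider a quantity $P(t)$ that grows at a rate proportional to its current value. This is a standard calculus example, not a reproduction of any one article's derivation:

\[
\frac{dP}{dt} = rP, \qquad P(0) = P_0 \quad\Rightarrow\quad P(t) = P_0 e^{rt}.
\]

Separating variables gives $\int dP/P = \int r\,dt$, so $\ln P = rt + C$; exponentiating and fixing the constant with $P(0) = P_0$ yields the exponential solution. For instance, $P_0 = 100$ and $r = 0.05$ per year give $P(10) = 100\,e^{0.5} \approx 164.87$.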
Eat, drink and be merry: making sure it's safe — What can maths tell us about the safest way to cook food?

Eat, drink and be merry: making it go down well — This article takes a dive into the rather smelly business of digesting food, and how a crazy application of chaos theory shows the best way to digest a medicinal drug.

Maths and Climate Change: the Melting Arctic — The Arctic ice cap is melting fast and the consequences are grim. Mathematical modelling is key to predicting how much longer the ice will be around and assessing the impact of an ice-free Arctic on the rest of the planet.

Tsunami — The tsunami of December 26th 2004 focused the world's attention on this terrifying consequence of an underwater earthquake. Michael McIntyre explores the underlying wave mathematics.

Uncoiling the Spiral: Maths and Hallucinations — Think drug-induced hallucinations, and the whirly, spirally, tunnel-vision-like patterns of psychedelic imagery immediately spring to mind. So what can these patterns tell us about the structure of our brains?

The mathematics of diseases — Over the past one hundred years, mathematics has been used to understand and predict the spread of diseases, relating important public-health questions to basic infection parameters. This article describes some of the mathematical developments that have improved our understanding and predictive ability, and introduces the differential equations involved.

Maths for the broken-hearted — You take care of yourself: you eat right, don't smoke, drink in moderation and keep fit. But have you considered differential equations as a secret weapon in keeping you and your heart healthy?

Chaos in the brain — Saying that someone is a chaotic thinker might seem like an insult, but it could be that the mathematical phenomenon of chaos is a crucial part of what makes our brains work. Chaos is all about unpredictable change, and this can be described using differential equations.

How the leopard got its spots — How does the uniform ball of cells that makes up an embryo differentiate to create the dramatic patterns of a zebra or leopard? How come there are spotty animals with stripy tails, but no stripy animals with spotty tails? Get to know the equations that explain all this and more.

Going with the flow — This article describes what happens when two fluids of different densities meet, for example when volcanoes erupt and pour hot ash-laden air into the atmosphere. The article explains Newton's second law of motion as a differential equation and its relation to fluid mechanics.

How plants halt sands — Plants can stop the desert from relentlessly invading fertile territory. But just how and where should they be planted? A model involving differential equations gives the answers.

Fluid mechanics researcher — Trying to solve differential equations can give you a stomach ache sometimes, but the equations can also help to prevent one. André Léger uses fluid dynamics to understand how food sloshes around the intestines.

Meteorologist — If one thing is sure to change, it's the weather. Helen Hewson explains how she helps to predict it at the Met Office.

Universal pictures — Partial differential equations explored through images: from the maths of turbulence to modelling human interaction.

...in physics and technology...
Unjamming traffic — Why traffic jams occur for seemingly no reason.

Supersonic Bloodhound — Differential equations help you achieve supersonic speeds.

The dynamic Sun — The Sun emits light from all across the electromagnetic spectrum, and understanding its emission is essential in understanding solar dynamics. The article introduces the wave equation.

Light attenuation and exponential laws — Many natural processes adhere to exponential laws. The attenuation of light — the way it decays in brightness as it passes through a thin medium — is one of them. The article explores the attenuation law of light transmission in its differential form.

Computer games developer — In the real world, balls bounce and water splashes because of the laws of physics. In computer games, a physics engine ensures the virtual world behaves realistically. Nick Grey explains that to make the games, you need to understand the physics, and that requires differential equations.

Spaghetti breakthrough — Differential equations model the breaking behaviour of pasta.

Schrödinger's equation — Last but not least in this section, here's something for the more advanced. Schrödinger's equation is the central equation in quantum mechanics, which predicts such quantum weirdnesses as entanglement and superposition. It's a differential equation which in simple examples isn't all that hard to solve. This three-part series of articles introduces the equation, looks at a simple example and tries to understand what it tells us about the real world.

...in sport and art

Restoring Profanity — How do a 14th century painting, heat flow and a differential equation relate to one another?

If you can't bend it, model it! — David Beckham and his fellow players may intuitively know how to bend a football's flight as they wish, but the rest of us have to resort to the differential equations describing the aerodynamics of footballs.

Aerodynamicist — The smallest alteration in the shape of a Formula One car can make the difference between winning and losing. It's the air flow that does it, so, as Christine Hogan explains, any Formula One team needs an aerodynamicist.

Formulaic football — Mathematicians build a mathematical model of a football match.

Risky Business: How to Price Derivatives — In the light of recent events, it may appear that attempting to model the behaviour of financial markets is an impossible task. However, there are mathematical models of financial processes that, when applied correctly, have proved remarkably effective.

Financial modelling — David Spaughton and Anton Merlushkin work for Credit Suisse First Boston, where they provide traders in the hectic dealing room with software based on complicated mathematical models of the financial markets. They explain how changing markets need the maths of change.

Financial maths course director — Riaz Ahmad's mathematical career has led him from the complexities of blood flow to the risks of the financial markets via underwater acoustics — differential equations help to understand all of these.

Project Finance Consultant — Nick Crawley set up his own financial consultancy firm in Sydney, Australia, offering advice on large-scale financing deals. Understanding the risks of investments means understanding the fluctuations of markets, and that requires differential equations.

The calculus of the complex — Calculus has long been key to describing the world. Now fractional calculus is providing new ways of describing complex systems.
A differential story — Peter D Lax wins the 2005 Abel Prize for his work on differential equations.

Count-abel even if not solve-able — The 2004 Abel Prize goes to Sir Michael Atiyah and Isadore Singer for their work on how to solve systems of equations.
By the end of this section, you will be able to: - Determine the resultant waveform when two waves act in superposition relative to each other. - Explain standing waves. - Describe the mathematical representation of overtones and beat frequency. Most waves do not look very simple. They look more like the waves in Figure 16.35 than like the simple water wave considered in Waves. (Simple waves may be created by a simple harmonic oscillation, and thus have a sinusoidal shape). Complex waves are more interesting, even beautiful, but they look formidable. Most waves appear complex because they result from several simple waves adding together. Luckily, the rules for adding waves are quite simple. When two or more waves arrive at the same point, they superimpose themselves on one another. More specifically, the disturbances of waves are superimposed when they come together—a phenomenon called superposition. Each disturbance corresponds to a force, and forces add. If the disturbances are along the same line, then the resulting wave is a simple addition of the disturbances of the individual waves—that is, their amplitudes add. Figure 16.36 and Figure 16.37 illustrate superposition in two special cases, both of which produce simple results. Figure 16.36 shows two identical waves that arrive at the same point exactly in phase. The crests of the two waves are precisely aligned, as are the troughs. This superposition produces pure constructive interference. Because the disturbances add, pure constructive interference produces a wave that has twice the amplitude of the individual waves, but has the same wavelength. Figure 16.37 shows two identical waves that arrive exactly out of phase—that is, precisely aligned crest to trough—producing pure destructive interference. Because the disturbances are in the opposite direction for this superposition, the resulting amplitude is zero for pure destructive interference—the waves completely cancel. While pure constructive and pure destructive interference do occur, they require precisely aligned identical waves. The superposition of most waves produces a combination of constructive and destructive interference and can vary from place to place and time to time. Sound from a stereo, for example, can be loud in one spot and quiet in another. Varying loudness means the sound waves add partially constructively and partially destructively at different locations. A stereo has at least two speakers creating sound waves, and waves can reflect from walls. All these waves superimpose. An example of sounds that vary over time from constructive to destructive is found in the combined whine of airplane jets heard by a stationary passenger. The combined sound can fluctuate up and down in volume as the sound from the two engines varies in time from constructive to destructive. These examples are of waves that are similar. An example of the superposition of two dissimilar waves is shown in Figure 16.38. Here again, the disturbances add and subtract, producing a more complicated looking wave. Sometimes waves do not seem to move; rather, they just vibrate in place. Unmoving waves can be seen on the surface of a glass of milk in a refrigerator, for example. Vibrations from the refrigerator motor create waves on the milk that oscillate up and down but do not seem to move across the surface. These waves are formed by the superposition of two or more moving waves, such as illustrated in Figure 16.39 for two identical waves moving in opposite directions. 
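For two identical waves travelling in opposite directions, the standing-wave pattern can be written out explicitly. This is a standard supplementary step (the symbols $A$, $k$, and $\omega$ are the usual amplitude, wave number, and angular frequency, not notation introduced by this text):

\[
y(x,t) = A\sin(kx - \omega t) + A\sin(kx + \omega t) = 2A\sin(kx)\cos(\omega t).
\]

The factor $\sin(kx)$ is fixed in space: points where $\sin(kx) = 0$ never move, while points where $|\sin(kx)| = 1$ oscillate with maximum amplitude. The factor $\cos(\omega t)$ makes every point simply vibrate in place, which is why the pattern does not appear to travel.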
The waves move through each other with their disturbances adding as they go by. If the two waves have the same amplitude and wavelength, then they alternate between constructive and destructive interference. The resultant looks like a wave standing in place and, thus, is called a standing wave. Waves on the glass of milk are one example of standing waves. There are other standing waves, such as on guitar strings and in organ pipes. With the glass of milk, the two waves that produce standing waves may come from reflections from the side of the glass.

A closer look at earthquakes provides evidence for conditions appropriate for resonance, standing waves, and constructive and destructive interference. A building may be vibrated for several seconds with a driving frequency matching that of the natural frequency of vibration of the building—producing a resonance resulting in one building collapsing while neighboring buildings do not. Often buildings of a certain height are devastated while other taller buildings remain intact. The building height matches the condition for setting up a standing wave for that particular height. As the earthquake waves travel along the surface of Earth and reflect off denser rocks, constructive interference occurs at certain points. Often areas closer to the epicenter are not damaged while areas farther away are damaged.

Standing waves are also found on the strings of musical instruments and are due to reflections of waves from the ends of the string. Figure 16.40 and Figure 16.41 show three standing waves that can be created on a string that is fixed at both ends. Nodes are the points where the string does not move; more generally, nodes are where the wave disturbance is zero in a standing wave. The fixed ends of strings must be nodes, too, because the string cannot move there. The word antinode is used to denote the location of maximum amplitude in standing waves. Standing waves on strings have a frequency that is related to the propagation speed $v_w$ of the disturbance on the string. The wavelength $\lambda$ is determined by the distance between the points where the string is fixed in place. The lowest frequency, called the fundamental frequency, is thus for the longest wavelength, which is seen to be $\lambda_1 = 2L$. Therefore, the fundamental frequency is $f_1 = v_w/\lambda_1 = v_w/2L$. In this case, the overtones or harmonics are multiples of the fundamental frequency. As seen in Figure 16.41, the first harmonic can easily be calculated since $\lambda_2 = L$. Thus, $f_2 = v_w/\lambda_2 = 2f_1$. Similarly, $f_3 = 3f_1$, and so on. All of these frequencies can be changed by adjusting the tension in the string. The greater the tension, the greater $v_w$ is and the higher the frequencies. This observation is familiar to anyone who has ever observed a string instrument being tuned. We will see in later chapters that standing waves are crucial to many resonance phenomena, such as in sounding boxes on string instruments.

Striking two adjacent keys on a piano produces a warbling combination usually considered to be unpleasant. The superposition of two waves of similar but not identical frequencies is the culprit. Another example is often noticeable in jet aircraft, particularly the two-engine variety, while taxiing. The combined sound of the engines goes up and down in loudness. This varying loudness happens because the sound waves have similar but not identical frequencies. The discordant warbling of the piano and the fluctuating loudness of the jet engine noise are both due to alternately constructive and destructive interference as the two waves go in and out of phase.
Figure 16.42 illustrates this graphically. The wave resulting from the superposition of two similar-frequency waves has a frequency that is the average of the two. This wave fluctuates in amplitude, or beats, with a frequency called the beat frequency. We can determine the beat frequency by adding two waves together mathematically. Note that a wave can be represented at one point in space as

\[
x = X\cos\left(\frac{2\pi t}{T}\right) = X\cos(2\pi f t),
\]

where $f = 1/T$ is the frequency of the wave. Adding two waves that have different frequencies but identical amplitudes produces a resultant

\[
x = x_1 + x_2.
\]

Using a trigonometric identity, it can be shown that

\[
x = 2X\cos(\pi f_B t)\cos(2\pi f_{\mathrm{ave}} t),
\]

where $f_B = |f_1 - f_2|$ is the beat frequency, and $f_{\mathrm{ave}}$ is the average of $f_1$ and $f_2$. These results mean that the resultant wave has twice the amplitude and the average frequency of the two superimposed waves, but it also fluctuates in overall amplitude at the beat frequency $f_B$. The first cosine term in the expression effectively causes the amplitude to go up and down. The second cosine term is the wave with frequency $f_{\mathrm{ave}}$. This result is valid for all types of waves. However, if it is a sound wave, providing the two frequencies are similar, then what we hear is an average frequency that gets louder and softer (or warbles) at the beat frequency.

The MIT physics demo entitled "Tuning Forks: Resonance and Beat Frequency" provides a qualitative picture of how wave interference produces beats. Description: Two identical forks and sounding boxes are placed next to each other. Striking one tuning fork will cause the other to resonate at the same frequency. When a weight is attached to one tuning fork, they are no longer identical. Thus, one will not cause the other to resonate. When two different forks are struck at the same time, the interference of their pitches produces beats.

This is a fun activity with which to learn about interference and superposition. Take a jump rope and hold it at the two ends with one of your friends. While each of you is holding the rope, snap your hands to produce a wave from each side. Record your observations and see if they match with the following:
- One wave starts from the right end and travels to the left end of the rope.
- Another wave starts at the left end and travels to the right end of the rope.
- The waves travel at the same speed.
- The shape of the waves depends on the way the person snaps his or her hands.
- There is a region of overlap.
- The shapes of the waves are identical to their original shapes after they overlap.

Now, snap the rope up and down and ask your friend to snap his or her end of the rope sideways. The resultant that one sees here is the vector sum of two individual displacements. This activity illustrates superposition and interference. When two or more waves interact with each other at a point, the disturbance at that point is given by the sum of the disturbances each wave would produce in the absence of the other. This is the principle of superposition. Interference is a result of the superposition of two or more waves to form a resultant wave of greater or lower amplitude. While beats may sometimes be annoying in audible sounds, we will find that beats have many applications. Observing beats is a very useful way to compare similar frequencies. There are applications of beats as apparently disparate as in ultrasonic imaging and radar speed traps.

Imagine you are holding one end of a jump rope, and your friend holds the other. If your friend holds her end still, you can move your end up and down, creating a transverse wave.
If your friend then begins to move her end up and down, generating a wave in the opposite direction, what resultant wave forms would you expect to see in the jump rope?

The rope would alternate between having waves with amplitudes two times the original amplitude and reaching equilibrium with no amplitude at all. The wavelengths will result in both constructive and destructive interference.

Define nodes and antinodes.

Nodes are areas of wave interference where there is no motion. Antinodes are areas of wave interference where the motion is at its maximum point.

You hook up a stereo system. When you test the system, you notice that in one corner of the room, the sounds seem dull. In another area, the sounds seem excessively loud. Describe how the sound moving about the room could result in these effects.

With multiple speakers putting out sounds into the room, and these sounds bouncing off walls, there is bound to be some wave interference. In the dull areas, the interference is probably mostly destructive. In the louder areas, the interference is probably mostly constructive.

Make waves with a dripping faucet, audio speaker, or laser! Add a second source or a pair of slits to create an interference pattern.
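The beat relation derived above is easy to check numerically. Below is a minimal Python sketch (the 440 Hz and 444 Hz values are illustrative choices, not from this text) that superposes two equal-amplitude cosines and verifies the sum-to-product form:

    import numpy as np

    f1, f2 = 440.0, 444.0              # two similar frequencies (Hz)
    f_B = abs(f1 - f2)                 # beat frequency: 4 beats per second
    f_ave = (f1 + f2) / 2.0            # the average frequency that is heard

    t = np.linspace(0.0, 1.0, 44100)   # one second of samples
    x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

    # Sum-to-product identity: the superposition equals the beat form exactly
    x_beats = 2 * np.cos(np.pi * f_B * t) * np.cos(2 * np.pi * f_ave * t)
    print(np.allclose(x, x_beats))     # True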
Theoretical results on the topological properties of the limited penetrable horizontal visibility graph family

The limited penetrable horizontal visibility graph algorithm was recently introduced to map time series into complex networks. We extend this visibility graph and create a directed limited penetrable horizontal visibility graph and an image limited penetrable horizontal visibility graph. We define the two algorithms and provide theoretical results on the topological properties of these graphs associated with different types of real-value series (or matrices). We perform several numerical simulations to further check the accuracy of our theoretical results. Finally we present an application of the directed limited penetrable horizontal visibility graph for measuring the irreversibility of real-value time series, and an application of the image limited penetrable horizontal visibility graph for discriminating noise from chaos. The empirical results show the effectiveness of our proposed algorithms.

PACS: 05.45.Tp, 89.75.Hc, 05.45.-a

The complex network analysis of univariate (or multivariate) time series has recently attracted the attention of researchers working in a wide range of fields [1]. Over the past decade several methodologies have been proposed for mapping a univariate or multivariate time series into a complex network [2-9]. These include constructing a complex network from a pseudoperiodic time series [2], using a visibility graph (VG) algorithm [3], a recurrence network (RN) method [4], a stochastic processes method [5], a coarse geometry theory [6], a nonlinear mutual information method [7], event synchronization [8], and a phase-space coarse-graining method [9]. These methods have been widely used to solve problems in a variety of research fields [10-20].

Among all these time series complex network analysis algorithms, visibility algorithms [3, 21, 22] are the most efficient for constructing a complex network from a time series. Visibility algorithms are a family of rules for mapping a real-value time series onto graphs, and they come in several variants. In all cases each time series datum is assigned to a node, but the connection criterion differs. For example, in the natural visibility graph (NVG) two nodes $i$ and $j$ are connected if the geometrical criterion $x_n < x_j + (x_i - x_j)\frac{j-n}{j-i}$ for all $i < n < j$ is fulfilled within the time series [3]. In the parametric natural visibility graph (PNVG) case there are three steps when using this algorithm to map a time series to a complex network: (i) build an NVG as described above using the common NVG criterion in the mapping, (ii) assign a direction and an angle to every link of the NVG, and (iii) use a view angle parameter rule to select links from the directed and weighted graph [21]. In the horizontal visibility graph (HVG) case, the algorithm is similar to the NVG algorithm but has a modified mapping criterion: two nodes $i$ and $j$ are connected if $x_i, x_j > x_n$ for all $i < n < j$ [22]. These visibility algorithms have been successfully implemented in a variety of fields [23-25]. Recently a limited penetrable visibility graph (LPVG) [26, 27] and a multiscale limited penetrable horizontal visibility graph (MLPHVG) [28] were developed from the visibility graph (VG) and the horizontal visibility graph (HVG) to analyze nonlinear time series.
The LPVG and MLPHVG have been successfully used to analyze a variety of real signals across different fields, e.g., experimental flow signals [26, 27], EEG signals [28, 29], and electromechanical signals [30]. Research has shown that the LPVG and MLPHVG inherit the merits of the VG but also successfully screen out noise, which makes them particularly useful when analyzing signals polluted by unavoidable noise [26-30].

Abundant empirical results have already been obtained using the VG algorithm and its extensions, e.g., the PNVG [21], the HVG [22], the LPVG [26], and the MLPHVG [28]. Thus far, however, there has been little research focusing on rigorous theoretical results. Recently Lacasa et al. presented topological properties of the horizontal visibility graph associated with random time series [22], periodic series [31], and other stochastic and chaotic processes [32]. They extended the family of visibility algorithms to map scalar fields of arbitrary dimension onto graphs and provided analytical results on the topological properties of the graphs associated with different types of real-value matrices [33]. Wang et al. [34] focused on a class of general horizontal visibility algorithms, the limited penetrable horizontal visibility graph (LPHVG), and presented exact results on the topological properties of the limited penetrable horizontal visibility graph associated with a random series. Here we build on these previous works [22, 31-34], focus our attention on the limited penetrable horizontal visibility graph, and present some analytical properties.

This paper is organized as follows. In Section II we introduce the limited penetrable horizontal visibility graph family. In Section III we derive the analytical properties of the different versions of the limited penetrable horizontal visibility graph associated with a generic random time series (or a random matrix) and present several numerical simulations to check their accuracy. In Section IV we show some applications of the directed limited penetrable horizontal visibility graph and the image limited penetrable horizontal visibility graph. In Section V we present our conclusions.

II. Limited penetrable horizontal visibility graph family

The LPHVG algorithm [28, 34] and its extensions are called the LPHVG family. We here present three versions of the LPHVG algorithm: the limited penetrable horizontal visibility graph, LPHVG($\rho$); the directed limited penetrable horizontal visibility graph, DLPHVG($\rho$); and the image limited penetrable horizontal visibility graph of order $n$, ILPHVG$_n$($\rho$).

II.1 Limited penetrable horizontal visibility graph [LPHVG($\rho$)]

The limited penetrable horizontal visibility graph [LPHVG($\rho$)] [34] is a geometrically simpler and analytically solvable version of the VG [3], the LPVG [30], and the MLPHVG [28]. To define it we let $\{x_i\}_{i=1,\dots,N}$ be a time series of real numbers. We set the limited penetrable distance to $\rho$; LPHVG($\rho$) maps the time series on a graph with $N$ nodes and an adjacency matrix $A$. Nodes $i$ and $j$ are connected through an undirected edge ($a_{ij} = a_{ji} = 1$) if $x_i$ and $x_j$ have limited penetrable horizontal visibility (see Fig. 1), i.e., if the intermediate data satisfy

\[
x_n < \min(x_i, x_j) \quad \text{for all but at most } \rho \text{ of the } n \text{ with } i < n < j, \tag{1}
\]

where $\rho$ is the number of penetrations allowed. The graph spanned by this mapping is the limited penetrable horizontal visibility graph [LPHVG($\rho$)]. When we set the limited penetrable distance to 0, LPHVG(0) degenerates into an HVG [22], i.e., LPHVG(0) = HVG. When $\rho > 0$ there are more connections between any two LPHVG nodes than in the HVG.
Fig. 1(b) shows the newly established connections (red lines) when we infer LPHVG(1) from the HVG. Note that the LPHVG($\rho$) of a time series retains all the properties of its corresponding HVG, e.g., it is connected and is invariant under affine transformations of the series data [22].

II.2 Directed limited penetrable horizontal visibility graph [DLPHVG($\rho$)]

The limited penetrable horizontal visibility graph [LPHVG($\rho$)] is undirected, because penetrable visibility does not have a predefined temporal arrow. Directionality can be added by using directed networks. Here we address the directed version and define a directed limited penetrable horizontal visibility graph [DLPHVG($\rho$)], where the degree $k(t)$ of the node $t$ is split between an ingoing degree $k_{in}(t)$ and an outgoing degree $k_{out}(t)$ such that $k(t) = k_{in}(t) + k_{out}(t)$. We define the ingoing degree $k_{in}(t)$ to be the number of links of node $t$ with past nodes associated with data in the series, i.e., nodes with $t' < t$. Conversely, we define the outgoing degree $k_{out}(t)$ to be the number of links with future nodes, i.e., nodes with $t' > t$. Thus DLPHVG($\rho$) maps the time series into a graph with $N$ nodes and an adjacency matrix $A = A_{in} + A_{out}$, where $A_{in}$ is a lower triangular matrix and $A_{out}$ is an upper triangular matrix. Nodes $i$ and $j$ with $i < j$ are connected through a directed edge from $i$ to $j$ if they satisfy Eq. (1). Fig. 2 shows a graphical representation of the definition. As with the degree distribution $P(k)$, we use the ingoing and outgoing degree distributions of a DLPHVG($\rho$) to define the probability distributions of $k_{in}$ and $k_{out}$ on the graph, which are $P_{in}(k)$ and $P_{out}(k)$, respectively. We see the asymmetry of the resulting graph, in a first approximation, through the behaviour of the outgoing (or ingoing) degree series under time reversal.

II.3 Image limited penetrable horizontal visibility graph of order $n$ [ILPHVG$_n$($\rho$)]

The one-dimensional versions of the limited penetrable horizontal visibility graph [LPHVG($\rho$)] and the directed limited penetrable horizontal visibility graph [DLPHVG($\rho$)] are used to map landscapes (time series) on complex networks. As in the definition of the IVG [33], the definition of LPHVG($\rho$) can also be extended and applied to two-dimensional manifolds by extending the LPHVG($\rho$) criterion of Eq. (1) along one-dimensional sections of the manifold. To define the image limited penetrable horizontal visibility graph of order $n$ [ILPHVG$_n$($\rho$)] we let $X$ be an $N \times N$ matrix with arbitrary entries $x_{ij}$ and partition the plane into $n$ angular directions through each entry. The image limited penetrable visibility graph of order $n$, ILPHVG$_n$($\rho$), has $N^2$ nodes, each of which is labeled using a duple $(i, j)$ associated with the entry indices of $x_{ij}$, such that two nodes, $(i, j)$ and $(i', j')$, are linked when (i) $(i', j')$ belongs to one of the angular partition lines through $(i, j)$, and (ii) $x_{ij}$ and $x_{i'j'}$ are linked in the LPHVG($\rho$) defined over the ordered sequence that includes $x_{ij}$ and $x_{i'j'}$. For example, in ILPHVG$_4$($\rho$) the penetrable visibility between two entries of the same row requires that at most $\rho$ of the intermediate entries along that row exceed the smaller of the two. Fig. 3(a) shows a sample matrix with its central entry highlighted, illustrating the ILPHVG$_4$(1) algorithm evaluated along the vertical and horizontal directions, and Fig. 3(b) shows the connectivity pattern associated with that entry under ILPHVG$_4$(1). Fig. 3(c) shows the ILPHVG$_8$(1) algorithm evaluated along the vertical, horizontal, and diagonal directions, and Fig. 3(d) shows the connectivity pattern associated with the central entry under ILPHVG$_8$(1).
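To make the mapping concrete, here is a minimal brute-force Python sketch of the LPHVG($\rho$) criterion of Eq. (1). It is an O(N^2) illustration, not the authors' implementation, and the final loop checks the empirical mean degree against the $4(\rho+1)$ value derived in Section III:

    import numpy as np

    def lphvg_edges(x, rho=1):
        # Nodes i < j are linked when at most rho intermediate data x_n
        # satisfy x_n >= min(x_i, x_j); rho = 0 recovers the ordinary HVG.
        x = np.asarray(x, dtype=float)
        edges = []
        for i in range(len(x) - 1):
            for j in range(i + 1, len(x)):
                penetrated = int(np.sum(x[i + 1:j] >= min(x[i], x[j])))
                if penetrated <= rho:
                    edges.append((i, j))
        return edges

    rng = np.random.default_rng(0)
    series = rng.uniform(size=500)
    for rho in (0, 1, 2):
        k_mean = 2 * len(lphvg_edges(series, rho)) / len(series)
        print(rho, k_mean)   # approaches 4*(rho+1) = 4, 8, 12 for long series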
III. Theoretical results on the topological properties

Theorem 1 [34]. If we let $\{x_t\}$ be a bi-infinite sequence of independent and identically distributed random variables with probability density $f(x)$, then the degree distribution of its associated LPHVG($\rho$) is $P(k) = \frac{1}{2\rho+3}\left(\frac{2\rho+2}{2\rho+3}\right)^{k-2(\rho+1)}$ for $k \ge 2(\rho+1)$, and the mean degree is $\bar{k} = 4(\rho+1)$.

Reference [34] (Wang et al., 2017) provides a lengthy proof of this theorem. We here propose an alternative, shorter proof.

Proof. Let $x$ be an arbitrary datum of the random time series. Its limited penetrable horizontal visibility is interrupted by two bounding data, one on its left and one on its right, and there can be up to $\rho$ penetrable data larger than $x$ between the two bounding data, some on the left and some on the right of $x$. These data are independent of $x$. We define the cumulative probability distribution function of the density to be

\[
F(x) = \int_{-\infty}^{x} f(t)\,dt,
\]

and rewrite the interruption probability of Eq. (4) in terms of $F$. The probability that the datum penetrates no more than $\rho$ times while seeing $k$ data can then be expressed through the probability that the datum penetrates no more than $\rho$ times while seeing at least $k$ data, which can be calculated recurrently. Solving the recurrence and integrating over $x$, we finally obtain

\[
P(k) = \frac{1}{2\rho+3}\left(\frac{2\rho+2}{2\rho+3}\right)^{k-2(\rho+1)}, \qquad k \ge 2(\rho+1). \tag{10}
\]

Then the mean degree of the limited penetrable horizontal visibility graph associated with an uncorrelated random process is

\[
\bar{k} = \sum_{k \ge 2(\rho+1)} k\,P(k) = 4(\rho+1). \tag{11}
\]

Theorem 1 gives the exact degree distribution for LPHVG($\rho$): the degree distribution of an LPHVG($\rho$) associated with a random time series has a unified exponential form, independent of the probability distribution from which the series was generated.

Theorem 2. We let $\{x_t\}$ be a bi-infinite sequence of independent and identically distributed random variables with probability density $f(x)$ and cumulative distribution $F(x)$, and consider the limited penetrable horizontal visibility graph associated with it. Let $\bar{k}(x)$ denote the mean degree of a node associated with a datum of height $x$; its closed form is given in Eq. (14).

Proof. We define $P(k \mid x)$ to be the conditional probability that a given node has degree $k$ when its height is $x$. Using the constructive proof process of Ref. [34] (Wang et al., 2017), we calculate $P(k \mid x)$; summing $\bar{k}(x) = \sum_k k\,P(k \mid x)$, we deduce Eq. (14).

Theorem 2 shows the relation between data height and the mean degree of the nodes associated with data of height $x$. The result indicates that $\bar{k}(x)$ is a monotonically increasing function of $x$; we therefore conclude that the hubs of an LPHVG($\rho$) are the data with the largest values.

We check the accuracy of this result for finite series. Fig. 4(a) plots the numerical values of $\bar{k}(x)$ for the LPHVG($\rho$) associated with a random series of 1000 data extracted from a uniform distribution. The theoretical results (red lines) show perfect agreement [Eq. (14)]. To check the finite size effect, Fig. 4(b) plots the numerical values of $\bar{k}(x)$ of LPHVG(2) associated with random series of 500, 1000, 1500, and 2000 data. We use the root mean square error (RMSE) to measure the agreement between the numerical and theoretical results, and find that as the size of the time series increases the RMSE between the numerical and theoretical results decreases, indicating increasing agreement.

Theorem 3. We let $\{x_t\}$ be an infinite periodic series of period $T$ with no repeated values within a period. The normalized mean distance of the LPHVG($\rho$) associated with $\{x_t\}$ is given by Eq. (16), a linear function of the mean degree.

Proof. To calculate it we consider an infinite periodic series of period $T$ with no repeated values in a period and denote it $\{x_t\}$. We let $\{x_1, \dots, x_{T+1}\}$ be a subseries spanning one period and, without losing generality, assume that $x_1$ corresponds to the largest value of the subseries. Thus we construct the LPHVG($\rho$) associated with the subseries. We assume that this LPHVG($\rho$) has $E$ links and let $x_{\min}$ be the smallest datum of the subseries.
Because no data repetitions are allowed within a period, x_min attains the minimum possible degree when the LPHVG is constructed. We now remove the node x_min and its links from the LPHVG; the resulting graph has one node fewer and correspondingly fewer links. We iterate this operation until only the bounding nodes of the period remain, and counting the total number of deleted links gives the number of links per period. Thus the mean degree of a limited penetrable horizontal visibility graph associated with an infinite periodic series of period T is given by Eq. (15). Let d be the mean distance of the LPHVG, N the number of nodes, and define the normalized mean distance D = d/N. For HVGs associated with periodic orbits, D depends linearly on the mean degree, and the same argument applies to LPHVG; using the mean degree of Eq. (15), we finally obtain Eq. (16). This result holds for every periodic series, independent of the deterministic process that generates it, because the only constraint in its derivation is that data within a period not be repeated. Note that one consequence of Eq. (15) is that every time series has an associated LPHVG whose mean degree is bounded, the maximum being attained in the aperiodic limit T → ∞, which agrees with the previous result in Eq. (11). In Eq. (16) the limiting solution holds for all aperiodic, chaotic, and random series. To check the accuracy of the analytical result, we generate four periodic time series of different periods (including T = 100, 200, and 250) with 2000 data points each, the data in each period being drawn from the fully chaotic logistic map. We construct the limited penetrable horizontal visibility graphs with penetrable distance ρ associated with these periodic time series. Fig. 5(a) plots the mean degree of the resulting LPHVGs for the different periods, indicating good agreement with the theoretical result in Eq. (15). Fig. 5(b) shows the normalized mean distance of the LPHVGs with ρ = 0, 1, and 2 associated with the periodic time series; the numerical values of the normalized mean distance as a function of the mean degree agree with the theoretical linear relation of Eq. (16).

Theorem 4 [34]. Let {x_t} be a real-valued bi-infinite time series of i.i.d. random variables with probability distribution f(x), and examine its associated LPHVG. The distribution of the local clustering coefficient then has a closed form.

Theorem 5. Let {x_t} be a bi-infinite sequence of i.i.d. random variables extracted from a continuous probability density f(x). Then the probability that two data separated by n intermediate data are connected nodes in the LPHVG has an explicit closed form.

Theorem 4 characterizes the distributions of the minimum and the maximum local clustering coefficients of the nodes in an LPHVG. Theorem 5 indicates that the limited penetrable visibility probability introduces shortcuts into the LPHVG; with these shortcuts, the limited penetrable horizontal visibility graph exhibits small-world phenomena [34].

Theorem 6. Let {x_t} be a bi-infinite sequence of i.i.d. random variables with probability density f(x). Then both the in- and the out-degree distributions of the associated DLPHVG have the exponential form of Eq. (21).

Proof. Examining the out-distribution, let x be an arbitrary datum of the random time series. Its limited penetrable horizontal visibility to the right is interrupted by one bounding datum, and between x and the bounding datum there are at most ρ penetrable data; these data are independent of x. The probability that the datum x penetrates no more than ρ times while seeing k data to the right is expressed through the probability that x penetrates no more than ρ times while seeing at least k data; the latter can be calculated recurrently, and solving the recurrence we deduce the closed form. Thus we finally obtain Eq. (21); the in-distribution follows symmetrically. To further check the accuracy of Eq. (21), we perform several numerical simulations.
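One way such a simulation might look, reusing the `lphvg_edges` sketch above and splitting each undirected link by temporal order (series length and names are ours; the naive construction is slow, so the series is kept short):

```python
import numpy as np
from collections import Counter

def dlphvg_in_out_degrees(x, rho=1):
    """In/out degree sequences of the DLPHVG: a link (i, j) with i < j
    contributes one outgoing link to node i and one ingoing link to j."""
    n = len(x)
    k_in, k_out = [0] * n, [0] * n
    for i, j in lphvg_edges(x, rho):
        k_out[i] += 1
        k_in[j] += 1
    return k_in, k_out

def empirical_distribution(ks):
    """Empirical probability distribution {degree: frequency}."""
    counts = Counter(ks)
    total = sum(counts.values())
    return {k: v / total for k, v in sorted(counts.items())}

x = np.random.uniform(size=500)               # uncorrelated random series
k_in, k_out = dlphvg_in_out_degrees(x, rho=1)
# On a semi-log plot both distributions should fall on straight lines,
# i.e. decay exponentially, as Theorem 6 asserts.
print(empirical_distribution(k_in))
print(empirical_distribution(k_out))
```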
We generate random series of 3000 data from uniform, Gaussian, and power-law distributions and construct their associated DLPHVGs. Fig. 6 plots the in- and out-degree distributions for penetrable distances ρ = 0, 1, and 2; circles indicate P_in(k), diamonds P_out(k), and the solid lines the theoretical result of Eq. (21). We find that, finite-size effects aside, the theoretical results agree with the numerics. As with the degree distribution of LPHVG [34], the deviations between the tails of the in- and out-degree distributions of a DLPHVG associated with random series are caused solely by finite-size effects.

Theorem 7. Let X be an N × N matrix with entries x_ij, where x_ij is a random variable sampled from a distribution f(x). Then, for orders n = 4 and n = 8 and in the limit N → ∞, the degree distribution of the associated ILPHVG_n converges to the form of Eq. (26).

Proof. To derive general results, we consider the two special cases n = 4 and n = 8. In the case n = 4, let x be an arbitrary datum in X. Its image limited penetrable horizontal visibility is interrupted by four bounding data, i.e., one on its right, one above it, one on its left, and one below it, and between x and the four bounding data there are at most a limited number of penetrable data, independent of x. This yields Eq. (23), the probability that the node has penetrable visibility of exactly k nodes. Similarly, when n = 8, from Eq. (22) we obtain the corresponding probability, Eq. (25), that the node has penetrable visibility of exactly k nodes. From Eqs. (23) and (25) we deduce a generic expression, which yields Eq. (26). Note that in the appropriate limit this result reduces to that in Eq. (10). To check the accuracy of Eq. (26), we estimate the degree distribution of the ILPHVG associated with random matrices whose entries are uniform random variables. To illustrate the finite-size effects, we also define a cutoff value k_c: below k_c the numerical degree distributions track the theoretical result of Eq. (26), while beyond it they fall under the theoretical curve. Figs. 7(a) and 7(c) show semi-log plots of the finite-size degree distributions of ILPHVG_4 and ILPHVG_8; the distributions agree with Eq. (26) for k < k_c. To assess the convergence speed of Eq. (26) for finite N, we estimate the cutoff value for different system sizes [see Figs. 7(b) and 7(d)]. The location of the cutoff scales logarithmically with the system size N, i.e., finite-size effects affect only the tail of the distribution, which converges quickly (logarithmically in N).

IV Applications of DLPHVG and ILPHVG

The analytical results of LPHVG can be used to distinguish between random and chaotic signals [34] and to describe the global evolution of crude oil futures. Here we describe applications of DLPHVG and ILPHVG.

Measuring real-valued time series irreversibility with DLPHVG. Time series irreversibility is an important topic in basic and applied science [35]. Over the past decade several methods of measuring time irreversibility have been proposed [36-38]; a recent proposal uses the directed horizontal visibility algorithm [39], in which the Kullback-Leibler divergence (KLD) between the out- and in-degree distributions, Eq. (27), measures the irreversibility of real-valued stationary stochastic series. We here explore the applicability of DLPHVG: we first select an appropriate parameter ρ, map a time series to a directed limited penetrable horizontal visibility graph, and then use Eq. (27) to estimate the degree of irreversibility of the series. Using Theorem 6 and Eq. (27), we find that the KLD between the in- and out-degree distributions associated with an infinite random series is equal to zero.
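Assuming Eq. (27) is the usual KLD between the empirical out- and in-degree distributions (its display was lost in extraction), the estimator is a short function on top of the sketches above:

```python
import math

def kld_out_in(k_out, k_in):
    """Empirical Kullback-Leibler divergence D(P_out || P_in) between the
    out- and in-degree distributions of a DLPHVG. Degrees absent from the
    in-distribution are skipped, the usual convention for empirical KLD."""
    p_out = empirical_distribution(k_out)
    p_in = empirical_distribution(k_in)
    return sum(p * math.log(p / p_in[k])
               for k, p in p_out.items() if k in p_in)
```

For a statistically reversible series the two distributions coincide asymptotically and the estimate tends to zero; for irreversible dynamics, such as chaotic dissipative maps, it converges to a positive value.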
Using our analysis of finite-size effects, we infer that the KLD between the in- and out-degree distributions associated with a random finite series of size N tends asymptotically to zero. We set ρ = 0, 1, and 2, and calculate the numerical value of the KLD for random series of 3000 data from uniform, Gaussian, and power-law distributions (see the upper section of Table 1). All numerical values of the KLD are approximately 0, which indicates that these series are reversible. We next examine the chaotic logistic and Hénon map series. Figures 8(a) and 8(b) plot the in- and out-degree distributions of DLPHVG(ρ) associated with the fully chaotic logistic map and the Hénon map, each with 3000 data points. Note that in each case there is a clear distinction between the in- and out-degree distributions, in contrast to the random-series case [see Fig. 6(b)]. We calculate the values of the KLD for each case (bottom section of Table 1) and find that they are positive and much larger than those of the random series. Figs. 8(c) and 8(d) show a finite-size analysis of the chaotic maps: the KLD values associated with the chaotic maps converge with series size to an asymptotic nonzero value, which indicates that the chaotic maps are irreversible.

Table 1. Numerical values of the KLD for ρ = 0, 1, and 2.

| Series | ρ = 0 | ρ = 1 | ρ = 2 |
| --- | --- | --- | --- |
| Power-law distribution | 0.000226 | 0.004257 | 0.005267 |
| Logistic map | 0.342985 | 0.090773 | 0.081985 |
| Hénon map | 0.158358 | 0.125637 | 0.140270 |

Thus, by selecting an appropriate parameter ρ, the KLD based on DLPHVG captures the irreversibility of the time series.

Discriminating between noise and chaos using ILPHVG. Although chaotic processes display irregular and unpredictable behavior that is frequently perceived to be random, chaos is a deterministic process that often hides patterns which can be extracted with appropriate techniques. In recent decades, research efforts to distinguish between noise and chaos have been widespread [40], and applications have been developed in all scientific disciplines involving complex, irregular empirical signals. Lacasa et al. [33] used visibility graphs to distinguish spatiotemporal chaos from simple randomness. We here also examine spatially extended structures and explore whether ILPHVG can distinguish spatiotemporal chaos from simple randomness. We consider a two-dimensional square lattice of diffusively coupled chaotic maps evolving in time. In each vertex of this coupled map lattice (CML) we allocate a fully chaotic logistic map, and the system is then spatially coupled with coupling strength ε, where the coupling sum extends over the Von Neumann neighborhood of each vertex (its four adjacent neighbors). The update is parallel, and we use periodic boundary conditions. Fig. 9(a) shows a semi-log plot of the degree distribution of ILPHVG_4 associated with a two-dimensional uncorrelated random field of uniform random variables (stars), and with two-dimensional coupled map lattices of diffusively coupled fully chaotic logistic maps at two small values of the coupling constant (squares and diamonds). Figure 9(b) plots the degree distribution of ILPHVG_4 associated with the two-dimensional coupled map lattices of diffusively coupled fully chaotic logistic maps at a larger coupling constant, together with the theoretical curves of Eq. (26) (green, red, and pink lines). Figs. 9(a) and 9(b) show that the degree distributions of the ILPHVG associated with the uncoupled (ε = 0) and weakly coupled cases are indistinguishable from the degree distribution associated with the random field, while Fig. 9(b) shows that the degree distribution deviates from the theoretical result in Eq.
(26) only in the strongly coupled case. When spatial correlations settle in, the coupled map lattices of Eq. (28) produce ILPHVG degree distributions that are statistically different from the theoretical result in Eq. (26), whereas the degree distributions of the ILPHVG associated with the random field, the uncoupled case (ε = 0), and the weakly coupled case are each well approximated by Eq. (26). The deviations that do appear at large degrees are caused by finite-size effects (see Fig. 7). To quantify potential deviations of the uncoupled and weakly coupled cases from Eq. (26), we compute the statistic of Eq. (29), where P(k) is the numerically obtained degree distribution and the reference is the theoretical result from Eq. (26). We consider 30 realizations of the random field, of the uncoupled map lattices, and of the weakly coupled map lattices, and in each case we compute the statistic measuring the deviation between the empirical degree distribution and the theoretical result. Fig. 9(c) shows the calculated results in a two-dimensional phase space with a time delay. Note that there are clear distinctions between the uncorrelated random field, the uncoupled map lattices, and the weakly coupled map lattices, although as the coupling grows the distinction is no longer clear. We can thus select an appropriate parameter ρ and use the degree distribution of the ILPHVG to distinguish noise from chaos. Note that as we increase the coupling constant ε, the spatiotemporal dynamics of the coupled map lattice traces a rich phase diagram, and the degree distribution of the ILPHVG can track this rich spatiotemporal dynamics. For each ε we compute the degree distribution of the associated ILPHVG and then the distance Δ(ε) between this distribution and the corresponding uncorrelated result of Eq. (26); Δ(ε) is a scalar order parameter that describes the spatial configuration of the CML. Figure 9(d) shows that as ε evolves from 0 to 1, Δ(ε) undergoes sharp changes that mark the different phases: fully developed turbulence with weak spatial correlations (I), periodic structure (II), spatially coherent structure (III), and a mixed structure (IV) between the periodic and spatially coherent ones [33]. Thus the degree distribution of the ILPHVG can capture the rich spatial structure.

We have introduced a directed limited penetrable horizontal visibility graph (DLPHVG) and an image limited penetrable horizontal visibility graph (ILPHVG), both inspired by the limited penetrable horizontal visibility graph LPHVG [34]. These two algorithms are extensions of the limited penetrable horizontal visibility algorithm. We first derived theoretical results on the topological properties of LPHVG: the degree distribution, the mean degree, the relation between the height of a datum and the mean degree λ(x) of the nodes associated with data of that height, the normalized mean distance, the local clustering coefficient distribution, and the probability of long-distance visibility. We then deduced the in- and out-degree distributions of DLPHVG and the degree distribution of ILPHVG, and we performed several numerical simulations to check the accuracy of our analytical results.
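The two comparison statistics used above can be sketched as follows; since their exact displays did not survive extraction, both forms below are assumptions: a chi-square-style deviation standing in for Eq. (29), and a total-variation-style distance standing in for Δ(ε).

```python
def deviation(p_emp, p_theo):
    """Chi-square-style deviation between an empirical degree distribution
    and a theoretical one, both given as {degree: probability} dicts."""
    return sum((p_emp.get(k, 0.0) - p) ** 2 / p
               for k, p in p_theo.items() if p > 0)

def order_parameter(p_emp, p_theo):
    """Scalar distance between the lattice's empirical ILPHVG degree
    distribution and the uncorrelated-field prediction."""
    support = set(p_emp) | set(p_theo)
    return sum(abs(p_emp.get(k, 0.0) - p_theo.get(k, 0.0))
               for k in support)
```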
We then presented applications of the directed limited penetrable horizontal visibility graph and of the image limited penetrable horizontal visibility graph, namely measuring the irreversibility of real-valued time series and discriminating between noise and chaos, and the empirical results testify to the efficiency of our methods. Our theoretical results on topological properties extend previous findings [22, 32, 33, 34]. Within the limited penetrable horizontal visibility graph family, the limited penetrable parameter ρ is important and affects the structure of the associated graphs. For suitable parameter values, the exact results for the associated graphs reveal essential characteristics of the system: e.g., for appropriate ρ and order n, the degree distribution of the ILPHVG distinguishes uncorrelated from weakly coupled systems, although at stronger coupling the distinction is no longer clear [see Fig. 9(c)]. Open problems for future research include how to use real data to select an optimal limited penetrable parameter ρ, and how to further apply the limited penetrable horizontal visibility graph family.

This research was supported by the National Natural Science Foundation of China (71503132, 71690242, 91546118, 11731014, 71403105, 61403171), the Qing Lan Project of Jiangsu Province (2017), the University Natural Science Foundation of Jiangsu Province (14KJA110001), the Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, CNPq, CAPES, FACEPE, and UPE.
Approximate controllability and optimal controls of fractional evolution systems in abstract spaces

Advances in Difference Equations, volume 2014, Article number: 322 (2014)

In this paper, under the assumption that the corresponding linear system is approximately controllable, we obtain the approximate controllability of semilinear fractional evolution systems in Hilbert spaces. The approximate controllability results are proved by means of the Hölder inequality, the Banach contraction mapping principle, and the Schauder fixed point theorem. We also discuss the existence of optimal controls for semilinear fractional controlled systems. Finally, an example is given to illustrate the applications of the main results. MSC: 26A33, 49J15, 49K27, 93B05, 93C25.

During the past few decades, fractional differential equations have proved to be valuable tools in the modeling of many phenomena in viscoelasticity, electrochemistry, control, porous media, and electromagnetism. Owing to their tremendous scope and applications, several monographs have been devoted to the study of fractional differential equations; see [1-5]. Controllability is a fundamental mathematical problem. Since approximately controllable systems are considered to be more prevalent, and since approximate controllability is very often completely adequate in applications, considerable interest has been shown in the approximate controllability of control systems consisting of a linear and a nonlinear part [6-10]. In addition, the problems associated with optimal controls for fractional systems in abstract spaces have been widely discussed [10-22]. Wang and Wei obtained the existence and uniqueness of the PC-mild solution for first-order nonlinear integro-differential impulsive differential equations with nonlocal conditions. Bragdi established exact controllability results for a class of nonlocal quasilinear differential inclusions of fractional order in a Banach space. Machado et al. considered the exact controllability of first-order abstract impulsive mixed point-type functional integro-differential equations with finite delay in a Banach space. Approximate controllability of first-order nonlinear evolution equations with monotone operators has also been obtained. By the well-known monotone iterative technique, Mu and Li obtained existence and uniqueness results for fractional evolution equations without mixed-type operators in the nonlinearity. Wang and Zhou studied a class of fractional evolution equations involving the Caputo fractional derivative of order q, where −A is the infinitesimal generator of a compact analytic semigroup of uniformly bounded linear operators. A suitable α-mild solution of the semilinear fractional evolution equations was given, and the existence and uniqueness of α-mild solutions were proved; then, by introducing a control term, the existence of an optimal pair for systems governed by a class of fractional evolution equations was presented. Mahmudov and Zorlu considered a semilinear fractional evolution system involving the Caputo fractional derivative of order q, in which the state variable x takes values in a Hilbert space, A is the infinitesimal generator of a C_0-semigroup of bounded operators on that Hilbert space, the control function u takes values in a Hilbert space U, B is a bounded linear operator from U into the state space, and H is a Volterra integral operator.
They studied the approximate controllability of this controlled system, described by a semilinear fractional integro-differential evolution equation, by means of the Schauder fixed point theorem. Very recently, Wang et al. studied nonlocal problems for fractional integro-differential equations via fractional operators and optimal controls, obtaining the existence of mild solutions and the existence of optimal pairs for systems governed by fractional integro-differential equations with nonlocal conditions. Subsequently, Ganesh et al. presented approximate controllability results for the fractional integro-differential equations studied there. In this paper, we are concerned with a fractional semilinear integro-differential evolution equation with nonlocal initial conditions, denoted (1.1), in which the Caputo derivative of order q appears, the state variable x takes values in a Hilbert space with its norm, and the linear part is the infinitesimal generator of a C_0-semigroup of uniformly bounded linear operators; that is, there exists a constant M ≥ 1 bounding the operator norm of the semigroup for all times. We also work in the Hilbert space determined by a fractional power of the generator, equipped with the corresponding norm, which is equivalent to the graph norm of that fractional power. The control function u takes values in a Hilbert space U of admissible controls, and B is a bounded linear operator from U into the state space. The Volterra integral operator H is defined through a kernel acting on the state history. The nonlinear term f and the nonlocal term g will be specified later. It should be emphasized that the approximate controllability, and further the existence of optimal controls, for the fractional evolution system (1.1) in a Hilbert space has not previously been investigated, and this is the main motivation of this paper. The main objective is to derive sufficient conditions for approximate controllability and for the existence of optimal controls for the abstract fractional equation (1.1). The system (1.1) considered here is of a more general form, with a coefficient function in front of the nonlinear term. Finally, an example is given to illustrate the applications of the theory. The previously reported results in [6, 15, 20] are special cases of our research. The rest of this paper is organized as follows. In Section 2, we present some necessary preliminaries and lemmas. In Section 3, we prove the approximate controllability of the system (1.1). In Section 4, we study the existence of optimal controls for the Bolza problem. Finally, in Section 5, an example is given to demonstrate the effectiveness of the main results.

2 Preliminaries and lemmas

Unless otherwise specified, the norm is that of the state space, and the solution space is the Banach space of continuous functions equipped with the sup-norm. Fix a point of the resolvent set of A and define the associated fractional powers of A. Each of these is an injective continuous endomorphism, so its inverse can be defined; it is a closed bijective linear operator with dense domain, and the embeddings between the corresponding fractional power spaces are continuous. Moreover, the fractional powers have the following basic properties.

Lemma 2.1 The fractional powers of A and the semigroup generated by −A satisfy the following properties: the semigroup maps the state space into the domain of every fractional power; the fractional powers commute with the semigroup on their domains; for every positive time the composition of a fractional power with the semigroup is bounded, with the explicit bound (2.2); and for each positive time this composition is a bounded linear operator.

Definition 2.1 The fractional integral of order q with the lower limit zero for a function f is defined by the Riemann-Liouville integral, provided that the right-hand side is pointwise defined on [0, ∞), where Γ is the gamma function.
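The displayed formulas of Definition 2.1, and of Definitions 2.2 and 2.3 below, were lost in extraction; the standard forms they refer to are (for n − 1 < q < n, n ∈ ℕ, and Γ the gamma function):

```latex
% Riemann--Liouville fractional integral of order q > 0, lower limit zero
I^{q}f(t) = \frac{1}{\Gamma(q)}\int_{0}^{t}(t-s)^{q-1}f(s)\,ds, \qquad t>0,
% Riemann--Liouville fractional derivative of order q, n-1 < q < n
{}^{L}D^{q}f(t) = \frac{1}{\Gamma(n-q)}\,\frac{d^{n}}{dt^{n}}
  \int_{0}^{t}(t-s)^{n-q-1}f(s)\,ds,
% Caputo fractional derivative of order q, n-1 < q < n
{}^{C}D^{q}f(t) = \frac{1}{\Gamma(n-q)}
  \int_{0}^{t}(t-s)^{n-q-1}f^{(n)}(s)\,ds.
```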
Definition 2.2 The Riemann-Liouville derivative of order q with the lower limit zero for a function f can be written in the standard form recalled above.

Definition 2.3 The Caputo derivative of order q for a function f can be written accordingly; if f is sufficiently smooth, then (2.6) relates the Caputo and Riemann-Liouville derivatives, and the Caputo derivative of a constant equals zero. If f is an abstract function with values in a Banach space, then the integrals appearing in Definitions 2.1, 2.2, and 2.3 are taken in Bochner's sense.

Definition 2.4 By a mild solution of the system (1.1) we mean a function for which, for any admissible control u, the integral equation built from the characteristic solution operators holds, where the kernel appearing in those operators is a probability density function defined on (0, ∞).

Definition 2.5 The system (1.1) is said to be approximately controllable on [0, b] if the closure of its reachable set is the whole state space; that is, given an arbitrary ε > 0, it is possible to steer the system from the point x_0 to within a distance ε of any point in the state space at time b. Here the reachable set of the system (1.1) at terminal time b consists of the state values at time b corresponding to the admissible controls and the initial value x_0, and its closure is taken in the state space. We also introduce the associated controllability operator, where B* denotes the adjoint of B and the adjoint of the relevant solution operator appears; this operator is linear and bounded. We then define the corresponding linear fractional control system (2.14).

Lemma 2.2 The linear fractional control system (2.14) is approximately controllable on [0, b] if and only if the associated resolvent-type operator tends to zero in the strong operator topology as the penalization parameter tends to zero.

Lemma 2.3 The two characteristic solution operators have the following properties: for fixed t ≥ 0 they are linear and bounded operators, with the norm bounds (2.15); they are strongly continuous; for every t > 0 they are compact if the semigroup is compact; the identities (2.16) and (2.17) hold; and they are uniformly continuous in the sense of (2.18), i.e., for each fixed time and tolerance there is a modulus of continuity.

Lemma 2.4 For the relevant exponents, the corresponding fractional-power norm estimate holds.

Lemma 2.5 (Schauder's fixed point theorem) If B is a closed, bounded, and convex subset of a Banach space and Q: B → B is completely continuous, then Q has a fixed point in B.

3 Approximate controllability

In this section, we impose the following assumptions: (H1) f is continuous, and there exist constants such that the stated Lipschitz-type bounds hold for all admissible arguments and almost all t. (H2) There exists a function in the appropriate Lebesgue space bounding the relevant coefficient for each admissible argument. (H3) g is continuous, and there exists a constant such that the stated Lipschitz bound holds for any pair of states. (H4) The constant built from the data of the problem satisfies the smallness condition required for the contraction argument.

Theorem 3.1 Assume that conditions (H1)-(H4) hold, that the functions f and g are bounded, and that the linear system (2.14) is approximately controllable on [0, b]. Then the fractional system (1.1) is approximately controllable on [0, b].

Proof For arbitrary ε > 0, define a control function in feedback form and define the associated solution operator; this operator is clearly well defined. By (H1)-(H3), Lemma 2.3, and the Hölder inequality, we obtain the required estimates, from which we deduce that the operator is a contraction. From (H4) and the contraction mapping principle, we conclude that the operator has a fixed point. Since f and g are bounded, for definiteness and without loss of generality let x_ε be a fixed point in the ball of radius r. From the boundedness of the corresponding sequence, there is a subsequence that converges weakly as ε → 0⁺. Any fixed point is a mild solution of (1.1) under the corresponding control; therefore, by assumptions (H1)-(H3), the terminal state converges to the target as ε → 0⁺. This proves the approximate controllability of (1.1). □

To obtain approximate controllability results by the Schauder fixed point theorem, we pose the following conditions: (H5) The semigroup is compact and analytic. (H6) There exist constants such that the growth bounds below hold and f satisfies the following: for each fixed pair of state arguments, f(·, x, y) is measurable; for each fixed t, f(t, ·, ·) is continuous.
For any r > 0, there exist functions in the appropriate Lebesgue space such that the growth bound (3.18) holds, and there exists a constant controlling the associated limit, where the exponent has been specified in assumption (H2). (H7) g is completely continuous, and for any r > 0 there exist constants bounding g on the ball of radius r, together with a constant controlling the corresponding limit. (H8) The following smallness inequality holds, with the constants specified in the next theorem.

Theorem 3.2 Assume that conditions (H3) and (H5)-(H8) are satisfied, and that the linear system (2.14) is approximately controllable on [0, b]. Then the fractional system (1.1) is approximately controllable on [0, b].

Proof For r > 0, let B_r denote the closed ball of radius r. For arbitrary ε > 0, define the control function in feedback form and define the operator Q accordingly. We divide the proof into five steps.

Step 1: Q maps bounded sets into bounded sets; that is, for arbitrary r > 0 there is a positive constant such that Q(B_r) is contained in a ball. Using (2.12), (2.13), and (3.23), we obtain the required estimate. If this were not the case, then for each r > 0 there would exist x_r ∈ B_r whose image has norm greater than r; dividing both sides of the resulting inequality by r and taking the limit inferior as r → ∞ gives a contradiction to (H8). Thus Q maps bounded sets into bounded sets.

Step 2: Q is continuous. Let x_n → x in B_r as n → ∞. From assumptions (H6)-(H7), the integrands converge for each t, and by the Lebesgue dominated convergence theorem the images converge in norm, which implies that Q is continuous.

Step 3: for each t ∈ [0, b], the set of values {(Qx)(t) : x ∈ B_r} is relatively compact. The case t = 0 is trivial, since the corresponding set is compact by (H7). So let t > 0 be fixed, and let h be a given real number satisfying 0 < h < t. Define a truncated operator Q_h through the semigroup at time h. Since the semigroup is compact at positive times and the relevant set is bounded, the set of values of Q_h at time t is relatively compact. On the other hand, the values of Q and of Q_h at time t can be made arbitrarily close, uniformly in x ∈ B_r, as h → 0. This implies that there are relatively compact sets arbitrarily close to the set of values of Q at each t > 0; hence that set is relatively compact for all t > 0 and, by compactness at t = 0, for all t ∈ [0, b].

Step 4: Q(B_r) is an equicontinuous family of functions on [0, b]. From the Hölder inequality, Lemmas 2.1 and 2.3, and assumption (H6), we estimate the increments of Qx; by Lemma 2.4 and (3.25) the resulting terms are controlled, and an estimate similar to (3.39) handles the remaining term. For t = 0 and small increments the claim is immediate, and for t > 0, when the truncation parameter is small enough, assumption (H5) and the continuity of the semigroup in the uniform operator topology imply that all terms tend to zero independently of x ∈ B_r as the increment vanishes. Then Q(B_r) is equicontinuous and bounded, and by the Ascoli-Arzela theorem it is relatively compact. Hence Q is a completely continuous operator, and by the Schauder fixed point theorem Q has a fixed point; that is, the fractional control system (1.1) has a mild solution on [0, b].

Step 5: similarly to the proof of Theorem 3.1, one shows that the semilinear fractional system (1.1) is approximately controllable on [0, b]. Since the nonlinear term f is bounded on the relevant set, the family of fixed points is bounded, and there is a subsequence converging weakly; by the compactness of the relevant solution operator and (H7), the terminal states converge to the target as ε → 0⁺. This proves the approximate controllability of (1.1). The proof is completed. □

4 Optimal controls

In this section, we assume that Y is another separable reflexive Banach space from which the controls u take their values. We define the admissible control set through a measurable multifunction whose values are nonempty, closed, and convex subsets of Y contained in a bounded set E. We consider the controlled system (4.1), whose coefficients satisfy the standing assumptions for all t, and we let x^u denote a mild solution of the system (4.1) corresponding to a control u.
Here we consider the Bolza problem (P): find an optimal pair that minimizes the cost functional over the admissible pairs. We list some suitable hypotheses on the operator C, on ϕ, and on l as follows: the integrand is Borel measurable; it is sequentially lower semicontinuous in the state for almost all t; it is convex in the control for each state and almost all t; it admits an integrable lower bound; the terminal cost ϕ is continuous and nonnegative; and the inequality (4.4) holds.

Theorem 4.1 Assume that assumptions (H3), (H5)-(H7), and (HL) are satisfied. Then the Bolza problem (P) admits at least one optimal pair, provided the stated smallness condition holds.

Proof Firstly, we show that the system (4.1) has a mild solution corresponding to u, given by the integral equation built from the characteristic solution operators. From Lemmas 2.1 and 2.3 and (3.11), we obtain the required estimates, where the norm involved is that of the control space. Since assumptions (H5)-(H7) and (HL) are satisfied, an argument similar to the proof of Theorem 3.2 verifies that the system (4.1) has a mild solution corresponding to u.

Secondly, we discuss the existence of optimal controls. If the infimum of the cost is infinite, there is nothing to prove, so assume it is finite. Using condition (HL), the cost is bounded below by an integrable quantity, so by the definition of an infimum there exists a minimizing sequence of feasible pairs whose costs converge to the infimum. Since the minimizing control sequence is bounded in the reflexive control space, there exists a subsequence, relabeled, converging weakly; since the admissible set is closed and convex, by the Mazur lemma the weak limit is admissible. Let x_n be the mild solution of the system (4.1) corresponding to the n-th control; each x_n satisfies the corresponding integral equation. Noting that the control-to-forcing map is bounded and continuous from I into the state space, the forcing terms are bounded, and there is a further subsequence, relabeled, converging weakly. We denote the two relevant integral operators; since the solution sequence is bounded, an argument similar to the proof of Theorem 3.2 shows that these operators are compact and that the image sequences are equicontinuous, so by the Ascoli-Arzela theorem they are relatively compact. The operators, being linear and continuous, are then strongly continuous, and we obtain convergence of the corresponding terms; similarly, the second operator is strongly continuous and its terms converge. We now turn to the controlled system driven by the limit control; by Theorem 3.2, this system has a mild solution, and from Lemma 2.3 and (H3) we obtain estimates comparing it with x_n. By (4.15), (4.16), and the Lebesgue dominated convergence theorem, we deduce convergence of the solutions as n → ∞. For each t, passing to the limit in the integral equation and noting (4.5), we infer, using the uniqueness of the limit, that the limit function is a mild solution of the system (4.1) corresponding to the limit control. Since the integrand is convex and lower semicontinuous, using assumption (HL) and the Balder theorem we conclude that the cost of the limit pair does not exceed the limit of the costs, which implies that J attains its minimum at the limit pair. □

Example 5.1 Consider optimal controls for the fractional controlled system (5.1), with its associated cost function. Let the state space be an L² space over the spatial domain, and let the operator A be the second spatial derivative on its natural domain; then A generates a compact, analytic semigroup of uniformly bounded linear operators. Clearly, assumption (H5) is satisfied. Moreover, the eigenvalues of A and the corresponding normalized eigenvectors are known explicitly. Define the control functions as measurable maps into the control Hilbert space, and restrict the admissible controls to those with norm bounded by a fixed constant. Let the space of continuous functions with the sup-norm be the Banach space in which the nonlinearity is measured, and define f and the nonlocal term g accordingly, so that the system (5.1) can be transformed into the abstract form (1.1) with the cost function in the required form; one can then verify that (HL)(1)-(5) are satisfied. It is also not difficult to see that the required growth estimates hold, so there exist functions and constants such that (3.19) holds, and condition (H6) is satisfied.
Meanwhile, it follows from the cited example that g is a completely continuous operator and that there exist constants such that the required bounds hold. Choosing the remaining parameters accordingly, it is easy to verify that (H3) and (H7) hold. Moreover, condition (H8) and condition (HL)(6) are satisfied automatically. By Theorem 4.1, we conclude that the system (5.1) has at least one optimal pair, provided the stated condition holds.

References

1. Podlubny I: Fractional Differential Equations. Academic Press, San Diego; 1999.
2. Miller KS, Ross B: An Introduction to the Fractional Calculus and Fractional Differential Equations. Wiley, New York; 1993.
3. Lakshmikantham V, Leela S, Vasundhara Devi J: Theory of Fractional Dynamic Systems. Cambridge Scientific Publishers, Cambridge; 2009.
4. Tarasov VE: Fractional Dynamics: Applications of Fractional Calculus to Dynamics of Particles, Fields and Media. Springer, New York; 2010.
5. Diethelm K: The Analysis of Fractional Differential Equations. Lecture Notes in Mathematics. Springer, New York; 2010.
6. Mahmudov NI, Zorlu S: On the approximate controllability of fractional evolution equations with compact analytic semigroup. J. Comput. Appl. Math. 2014, 259: 194-204.
7. Ganesh R, Sakthivel R, Mahmudov NI, Anthoni SM: Approximate controllability of fractional integrodifferential evolution equations. J. Appl. Math. 2013, Article ID 291816.
8. Mahmudov NI: Approximate controllability of fractional Sobolev-type evolution equations in Banach spaces. Abstr. Appl. Anal. 2013, Article ID 502839.
9. Mahmudov NI: Approximate controllability of evolution systems with nonlocal conditions. Nonlinear Anal. 2008, 68: 536-546. 10.1016/j.na.2006.11.018
10. Sakthivel R, Ren Y, Mahmudov NI: On the approximate controllability of semilinear fractional differential systems. Comput. Math. Appl. 2011, 62: 1451-1459. 10.1016/j.camwa.2011.04.040
11. Wang J, Xiang X, Wei W: A class of nonlinear integrodifferential impulsive periodic systems of mixed type and optimal controls on Banach spaces. J. Appl. Math. Comput. 2010, 34: 465-484. 10.1007/s12190-009-0332-8
12. Peng Y, Xiang X: Necessary conditions of optimality for second order nonlinear impulsive integro-differential equations on Banach spaces. Nonlinear Anal., Real World Appl. 2010, 11: 3121-3130. 10.1016/j.nonrwa.2009.11.007
13. Peng Y, Xiang X, Wei W: Second order nonlinear impulsive integrodifferential equations of mixed type with time-varying generating operators and optimal controls on Banach spaces. Comput. Math. Appl. 2009, 57: 42-53. 10.1016/j.camwa.2008.09.029
14. Liu G, Xiang X, Peng Y: Nonlinear integrodifferential equations and optimal control problems on time scales. Comput. Math. Appl. 2011, 61: 155-169. 10.1016/j.camwa.2010.10.013
15. Wang J, Zhou Y, Wei W, Xu H: Nonlocal problems for fractional integrodifferential equations via fractional operators and optimal controls. Comput. Math. Appl. 2011, 62: 1427-1441. 10.1016/j.camwa.2011.02.040
16. Wang J, Zhou Y, Medved' M: On the solvability and optimal controls of fractional integrodifferential evolution systems with infinite delay. J. Optim. Theory Appl. 2012, 152: 31-50. 10.1007/s10957-011-9892-5
17. Zhang N, Zhang L: Optimal control problem of positive solutions to second order impulsive integrodifferential equations. J. Comput. Inf. Syst. 2012, 8(20): 8311-8318.
18. Wang J, Zhou Y, Wei W: Optimal feedback control for semilinear fractional evolution equations in Banach spaces. Syst. Control Lett. 2012, 61: 472-476. 10.1016/j.sysconle.2011.12.009
19. Liu X, Liu Z, Han J: The solvability and optimal controls for some fractional impulsive equation. Abstr. Appl. Anal.
2013, Article ID 914592.
20. Wang J, Zhou Y: A class of fractional evolution equations and optimal controls. Nonlinear Anal., Real World Appl. 2011, 12: 262-272. 10.1016/j.nonrwa.2010.06.013
21. Pazy A: Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, New York; 1983.
Mathematical modeling of a vertical shaft impact crusher using the Whiten model. Several mathematical models for the VSI crusher have been proposed in the last two decades or so. The Whiten crusher model, originally developed for cone crushers, has served as the basis of several approaches to modeling VSI crushers. The Andersen/Awachie/Whiten model of the cone crusher has been applied to modeling the performance of a 264 kW vertical shaft impact crusher producing manufactured sand; the model is supplemented by equations describing the variation of the parameter K3 in the classification function and of the T10 parameter of the breakage function, both expressed as functions of the operating conditions.

In two-mass dynamic models of cone crushers, effective stiffness coefficients describe the springs that connect the body with the cone and the body with the base, while an approximate crushing term models the comminution force itself; the form of that term is destined for further discussion. The Whiten equation is the widely accepted model of crushers used to predict the size distribution of products in different types of crushers. While considering these aspects of a crusher model, it is important to remember that the size reduction process in commercial operations is continuous over long periods of time.

The development of a gyratory crusher model was achieved in three main stages: mathematical representation and coding of the crushing process; building an amperage constant model to derive an energy-scaling formula; and modifying the amperage constant model to represent a full-scale machine. The availability of relevant mathematical models for impact crushers is likewise important for the successful simulation of crushing plants (Nikolov, 2002).

Factors affecting jaw crusher capacity: the mathematical model of the working process of the crusher should take into account many factors. Operating experience and investigations have shown that the performance of jaw crushers depends significantly on their design and on the motion law of the functional element, the moving jaw [5, 6].

Mathematical modeling also guides decisions far beyond mineral processing. Since the emergence of coronavirus disease 2019 (COVID-19) as a global pandemic, many policymakers have relied on mathematical models to guide highly consequential decisions about mitigation strategy, balancing the goals of protecting health while limiting economic and social disruption.
Mathematical models have likewise offered new insights into the spread of epidemics, for example by modeling contagions and superspreading events through higher-order networks. More generally, a mathematical model is an abstract model that uses mathematical language to describe the behaviour of a system. In Mathematical Models with Applications, students use a mathematical modeling cycle to analyze problems, understand them better, and improve decisions; in a basic modeling cycle, the student first represents the problem mathematically.

Let us note that the well-known mathematical models used to analyze the dynamics of such crushers include a number of assumptions about the nature of their motion and about the interaction of their elements with the processed medium, which, on the one hand, greatly simplify the analysis and, on the other, can lead to the loss of descriptions of some phenomena. The mode of a two-mass cone crusher at the approaches to one of the Sommerfeld thresholds has been considered: a frequency close to this threshold from the low side provides the most efficient energy transfer from the unbalance to the crusher body, but a stochastic model of crushing shows the possibility of breakdown of such a mode even after a very long stay on it.

Mathematical model-based analysis has proven its potential as a critical tool in the battle against COVID-19 by enabling a better understanding of the disease transmission dynamics, a deeper analysis of the cost-effectiveness of various scenarios, and a more accurate forecast of the trends with and without interventions.

A straw crusher device has been designed based on a genetic optimization method. First, the mathematical model of the straw crusher device was built; then, taking the cutter shaft as the optimized object, the shaft's working-section diameter, internal diameter, and their ratio were set as variables, and the lightest cutter shaft mass was taken as the objective function. Magdalinovic et al. (1990) proposed a mathematical model for the determination of an optimal crusher product size, and related work has addressed mathematical models for the prediction of pressure in crushing circuits with recycling of part of the fine particles.

Mathematical models for maximizing aggregate plant production: an aggregate production plant mainly consists of conveyors, crushers, and screens. How to set the closed-side openings of the crushers and the openings of the screens is an important issue for the plant manager in order to maximize the plant's production and to produce aggregate of the required specification. Solving such a model, the maximum production rate and the optimum settings of the plant can be obtained directly; because of the difficulty of solving the nonlinear model, a linear mathematical model is obtained by fixing the settings of screens and crushers at specific values.
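The linearized plant model just described lends itself to a small linear program. The sketch below is illustrative only: the two-crusher, one-screen structure and all capacities are invented, not taken from the cited study.

```python
from scipy.optimize import linprog

# Maximize total screened product x1 + x2 subject to machine capacities.
# linprog minimizes, so we negate the objective.
c = [-1.0, -1.0]                  # objective: -(x1 + x2)
A_ub = [[1.0, 0.0],               # crusher 1 capacity: x1 <= 120 t/h
        [0.0, 1.0],               # crusher 2 capacity: x2 <= 80 t/h
        [1.0, 1.0]]               # screen throughput:  x1 + x2 <= 150 t/h
b_ub = [120.0, 80.0, 150.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)            # optimal feed split and production rate
```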
Keywords: crusher, aggregates, impact loading, mathematical modeling, material selection, detail design.

I. INTRODUCTION. In industry, crushers are machines which use a metal surface to break or compress materials into smaller, denser masses. The first-generation crusher models were based on empirical expressions to predict capacity [9, 10] and energy consumption [10-12]. Later, more robust mathematical models were proposed, based on the population balance model, to represent more details of the machine performance, including the full product size distribution [13].

The N4SID method, which belongs to the subspace identification techniques, has been applied in its deterministic case to a prototype crusher, this unit operation being part of the mineral processing chain; a discrete-time linear state-space model is obtained, and the results indicate that the estimation of the model is satisfactory.

A mathematical model of the working elements of a feed grain vibration crusher, taking into account its design features and the interaction of the working elements with the process environment, has also been obtained. Numerical experiments show the possibility of achieving the required dynamic behaviour of such a crusher with sufficient vibration amplitude of its working elements; the vibrations are synchronous and antiphase.

Mathematical model for the inertia cone crusher with bonded particles: the multibody dynamics (MBD) of an inertia cone crusher interacting with bonded particles can be formulated as shown in Figure 2. The cone crusher is an indispensable piece of equipment in complex ore mineral processing, and a variant of the cone crusher is the inertia cone crusher. A real-time dynamic model based on multibody dynamics and the discrete element method has been established to analyze the performance of the inertia cone crusher; this model gives an accurate description of the mechanical motions and of the nonlinear contacts.

More than 90% of crushers in the mining industry are jaw and cone crushers, while the most common type of mill is the SAG or ball mill. Vector control based on a mathematical model of the motor ensures smooth starting of these mechanisms even at full load, with rated starting currents and without mechanical stress. Related work includes the dynamic modeling and analysis of a novel 6-DOF robotic crusher based on movement characteristics (Guoguang Li, Boqiang Shi, and Ruiyue Liu). The results obtained are used to present the energy-size relationship and to compare the crushing amenability of the rocks tested; a suggestion for further study is to derive the mathematical model of the breakage curves and to calculate the breakage energy.

The beginnings of mathematical epidemiology offer a classical precedent for such modeling: Daniel Bernoulli formulated and solved a model for smallpox in 1760 and, using his model, evaluated the effectiveness of inoculating healthy people against the smallpox virus.
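Returning to the Whiten model cited throughout: it admits a compact matrix statement. With feed vector f, diagonal classification matrix C (the probability that each size class re-enters the crushing zone), and lower-triangular breakage matrix B, the internal load satisfies x = f + BCx, so the product is p = (I − C)(I − BC)⁻¹f. A minimal sketch, with a size-class discretization and example numbers of our own invention:

```python
import numpy as np

def whiten_product(f, C, B):
    """Steady-state product of the Whiten crusher model.

    f : feed mass per size class (coarsest first)
    C : diagonal classification matrix; C[i, i] is the probability that
        size class i is selected for (re)breakage
    B : lower-triangular breakage matrix; B[j, i] is the fraction of
        broken class i that reports to class j
    """
    n = len(f)
    I = np.eye(n)
    x = np.linalg.solve(I - B @ C, f)   # internal load: x = f + B C x
    return (I - C) @ x                  # what is not reselected leaves

# Three size classes, coarsest first (illustrative numbers only).
f = np.array([60.0, 30.0, 10.0])
C = np.diag([0.9, 0.5, 0.0])            # fines are never reselected
B = np.array([[0.0, 0.0, 0.0],
              [0.6, 0.0, 0.0],
              [0.4, 1.0, 0.0]])         # columns sum to 1: mass conserved
print(whiten_product(f, C, B))          # product size distribution
```

Because the columns of B sum to one, total product mass equals total feed mass, a quick sanity check on any choice of example matrices.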
This lesson is meant to serve as a final review before students take a unit exam on linear momentum. The lesson starts with an AP practice problem (SP5) before students solve problems involving impulse and conservation of momentum (HS-PS2-2). Students need this final opportunity to review, so this process of problem solving, explaining, and sharing helps them practice and apply their linear momentum knowledge. Specifically, after students are done working through an AP practice problem, they get into small groups to collaboratively solve one quantitative linear momentum problem. Once they are done solving, students come to the front of the room and share their solutions with the rest of the class. Because the AP Physics 1 exam has five free response questions that students must work through in ninety minutes, this practice problem serves as a warm-up activity for today's review. I choose this specific problem because it forces students to use principles of momentum conservation and energy in collisions. When students enter the room I hand each student a copy of the problem. My classroom has a routine, so students know that when they are handed a practice AP problem, they must pull out their equation sheets, calculators, and pencils. The warm-up officially starts when I remind students of several free response suggestions. Then, students work individually through the practice problem as I walk around to informally assess how much progress they are making. Students get precisely fifteen minutes to work on the problem, since that's about the amount of time they are given on the actual AP exam. Once time has expired, I share the grading rubric with the students by displaying it on the front board with a document camera. I go through and explain how the points are distributed while students check their own work against the solution. The students can ask general questions during this time, but I encourage them to see me after class if they have a specific question about their own work. This entire process takes about five minutes, and I encourage students to be as honest as possible in their self-grading, since one goal of this activity is to see how well they'd do if this question were actually on the AP Physics 1 exam. Today's class uses collaborative problem solving as a way for students to review linear momentum. Students are each given a different problem from a review problem set. These problems are taken from our textbook and are the ones that I feel are most representative of the test questions. Students stay at their lab tables for this activity so we don't lose time while they get into groups. I give each group one of the problems along with its final answer. Including the final answer with the problem is an important part of this activity: I want students to focus not so much on getting the right answer as on being able to explain their reasoning and justify their decisions. The problem that each group receives is random; I literally walk down the center aisle and give each group whichever problem is on top of my pile. All of the students should be able to answer any of these questions. My expectation is that students take about ten minutes and actively work together to write down the solution on their papers. Without giving the students too many details, I tell them to be prepared not only to show their solutions but also to explain them. Also, I encourage students to use pen as they work.
Using pen keeps them from erasing, so even if they make a mistake or change their thinking, I can see evidence of their entire thought and solution process. As students are working at their lab tables, I walk around and ensure that everyone is engaged in the discussion and thinking critically about their assigned problem. I am willing to give students hints as I observe, but my feeling is that by this point in the linear momentum unit students should be able to work through these problems independently. After the collaborative work time is over, I tell students that they will be presenting their solutions to the rest of the class. Each group comes forward, puts their problem and solution under the document camera, and explains that solution to the class. The goal of this activity is to show students the variety of problems that may appear on the unit exam. Because my students are sometimes shy, I ask if any group volunteers to go first. There is always at least one group that wants to get the presentation out of the way, so I choose them and applaud them for being so willing. This first group of students walks to the front of the room and places the problem and solution under the document camera. One person from the group must read the problem aloud so the entire class becomes familiar with it. A second student explains any diagrams that were drawn and the list of given information. Finally, a third student from the group verbalizes the solution that has been written on the paper. Once the solution has been presented, I ask the class if they have any questions for the group. If someone from the class asks a clarifying question, or if I need to ask one during the solution sharing, I expect the presenting group members who haven't participated yet to answer. Once the first group is finished, they pick which group they'd like to see share next. The chosen group comes forward with their problem and solution, shares, and answers questions as the last group did. The process repeats until all the problems have been shared. This solution-sharing activity is our closure to the review lesson; if students took good notes throughout, they now have a great study resource of new questions with solutions.
Out of the money call options

Options expiration, assignment, and exercise: if your option is out-of-the-money on expiration Friday, it expires worthless. For cash-settled index options, the amount of cash delivered equals the difference between the exercise price of the option and the value of the index. A drop in the underlying will decrease the price of a call you have sold, so if you choose to close your short position prior to expiration it will be less expensive to buy it back; you would rather be buying back an out-of-the-money call than an in-the-money call.

Out-of-the-money (OTM): for call options, this means the stock price is below the strike price; for put options, it means the stock price is above the strike price. The maximum loss the buyer of a stock put option can suffer is the put premium, and likewise the maximum loss the buyer of a stock call option can suffer is the call premium; if the option expires worthless, the buyer merely loses the option premium. Asian options differ from American and European options in that their payoff is based on the average price of the underlying asset.

Selling out-of-the-money calls: as long as the stock price is at or below strike A at expiration, you make your maximum profit. Selling the call obligates you to sell stock at strike price A if the option is assigned, and you might anticipate assignment on any in-the-money option at expiration. You are free to close out a long call or put before expiration by selling it if it still has value. You may wish to consider ensuring that strike A is around one standard deviation out-of-the-money at initiation, and after the strategy is established, you want implied volatility to decrease. The out-of-the-money naked call strategy involves writing out-of-the-money call options without owning the underlying stock; it has low profit potential if the stock remains below strike A at expiration, but unlimited potential risk if the stock goes up. A related position, the out-of-the-money butterfly, combines calls at higher and lower strike prices around the target price.

Why does an at-the-money option have higher theta than an out-of-the-money option? Time decay is largest where time value is largest, which is at the money; an out-of-the-money option carries less time value to decay.
Call options are the most important type of option,. money by taking out a term,.B (When an index option is exercised the writer of the option pays cash to the option holder.The reason some traders run this strategy is that there is a high probability for success when selling very out-of-the-money options.Op het moment dat dit bedrag onvoldoende is gaat men over tot een zogenaamde margin call,.Buying Out-of-the-Money Call Options. traders often have when buying out-of-the-money (OTM) call options. Option Trading Mistakes.D The lower bound on the market price of a convertible bond is A. its straight bond value. B. its crooked bond value. C. its conversion value. D. A and C. E. none of the above B The potential loss for a writer of a naked call option on a stock is A. limited B. unlimited C. larger the lower the stock price. D. equal to the call premium. E. none of the above. Options 101: In the Money | ProfitableTrading Options- Series 7 Flashcards | Quizlet If you own (bought) a call,. all out-of-the-money options at the close.Out-of-the-Money Option. 1. A call option with a strike price more than the value of the underlying asset. 2. A put option with a strike price less than the value of. Difference between In-the-money (ITM), out-of-the-moneyFor a call option being in the money means that the market price of the underlying stock. WWWFinance - Option ContractsYour strategy is known as A. a vertical spread. B. a straddle. C. a horizontal spread. D. a collar. E. none of the above. Option Greeks Price Changes to the Stock Time to Expiration A put option is a contact. the call option is out of the money and if the. What is In The Money? definition and meaningOptions Expiration Explained. you will have to call your broker.The seller of a put option is committed to selling the stock at the exercise price.Please consult a tax professional prior to implementing these strategies.The outlay is low therefore, in terms of money at stake, risk is low.TradeKing Group, Inc. is a wholly owned subsidiary of Ally Financial Inc.Definition of out of the money: A call option whose strike price is higher than the market price of the underlying security, or a put option whose. A call option is in-the-money if the current market value of.THE DISRUPTIVE DISCOVERIES JOURNAL1 OF 3 Weekly analysis of how disruption in commodities, geopolitics, and macroeconomics converge to create opportunities.Strike price selection is a critical concept needed to master covered call writing.B The intrinsic value of an at-the-money call option is equal to A. the call premium. B. zero. C. the stock price plus the exercise price. D. the striking price. E. none of the above.
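The moneyness and intrinsic-value rules above reduce to a couple of comparisons. Here is a minimal sketch in Python; the function names and the spot/strike figures are invented for illustration:

```python
def intrinsic_value(kind: str, spot: float, strike: float) -> float:
    """Intrinsic value: what exercising right now would be worth, floored at zero."""
    if kind == "call":
        return max(0.0, spot - strike)  # a call pays spot minus strike
    return max(0.0, strike - spot)      # a put pays strike minus spot

def moneyness(kind: str, spot: float, strike: float) -> str:
    """Classify an option as in, at, or out of the money."""
    if intrinsic_value(kind, spot, strike) > 0:
        return "in the money"
    return "at the money" if spot == strike else "out of the money"

# Hypothetical example: stock trading at 95, call struck at 100.
print(moneyness("call", spot=95, strike=100))        # out of the money
print(intrinsic_value("call", spot=95, strike=100))  # 0.0: OTM calls have zero intrinsic value
```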
Author Guidelines

As a general mathematics journal, Acta et Commentationes Universitatis Tartuensis de Mathematica publishes original research papers in pure and applied mathematics. Submission of a paper acknowledges that the paper has not been published nor submitted for publication elsewhere.

The guidelines of the other "Acta" journals follow the same pattern, with journal-specific details:

- Acta Mathematica Hungarica: proceed with the online submission only if it is an entirely new submission and you have read the submission guidelines. Coauthors should be listed alphabetically by their last or family name, regardless of their contribution to the manuscript.
- Acta Arithmetica: submission of a manuscript implies that the work described has not been published before (except in the form of a preprint), that it is not under consideration for publication elsewhere, and that it will not be submitted elsewhere unless it has been rejected by the editors.
- Acta Mathematica Sinica: authors must use LaTeX (preferably the journal's template) and submit through the journal website, www.ActaMath.com. The editorial office is at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, P. R. China.
- Acta Mathematica (ISSN print 0001-5962, ISSN online 1871-2509; four issues per year): manuscripts are submitted as PDF files through EditFlow; current submission information is maintained by the Institut Mittag-Leffler on the journal's "Submission of Manuscripts" page. Normally only manuscripts prepared using LaTeX (preferred) or TeX are accepted, in the article document style, and the allowable languages are English, French, and German. The decision to accept or reject a manuscript is taken by unanimous vote of the editorial board. Submissions are strictly refereed and only papers of the highest quality are accepted; ever since its start in 1882 it has been one of the most prestigious mathematics journals in the world. By special arrangement with the Institut Mittag-Leffler, International Press provides fully open online access to the entire content of Acta Mathematica, from its first issue of 1882 to the most recent; since 2017 the full texts of all old and new articles have been available online.
- Acta Materialia (published on behalf of Acta Materialia, Inc., with Data in Brief co-submission available): provides a forum for full-length, original papers and commissioned overviews that advance the in-depth understanding of the relationship between the processing, the structure, and the properties of inorganic materials.
- Acta Scientiarum Mathematicarum: now archived and no longer receiving submissions with this publisher.
How Far Is 20km?

20km is a distance, not a speed. The kilometer (km) is a unit of length in the SI (International System of Units), equal to 1,000 meters or about 0.621 miles. The mile is a unit of length in the imperial and US customary systems, equal to 63,360 inches or 5,280 feet.

The distance calculator is designed to help you calculate the distance between any two cities or locations on a map. It uses the great-circle distance (air distance) and also draws a rhumb line so that you can measure the shortest possible distance between any two locations. The calculator has many features and can be used for different calculations: for example, you can use it to find the shortest distance between cities, the best routes for a road trip, or simply as a general travel calculator. The distance can be calculated in miles, kilometers, or meters, and you can input the name of a destination city or place.

A kilometer is one thousand meters, or about 0.621 miles. A mile is equivalent to 5,280 feet or 1.609344 kilometers. Another common unit of measure is the meter (m), the SI base unit of length, approximately 3.28 feet, which is widely used to measure the distance between places or objects.

If you're unsure how to convert from m to km, our online meter-to-km converter is a free tool that displays conversion values in a fraction of a second. You can also use our speed, distance, and time calculator to work out the duration of your trip. It accepts speed in mph or km/h, as well as time in minutes, hours, or seconds. This is very useful if you compare travel distances between different countries or need to determine the maximum allowable length of a route for grant support. For example, if you're applying for a travel grant, you can enter the number of kilometers or miles needed to reach the final destination, and the calculator will show that distance in km or mi. The calculator will also display the total travel time, which you'll need in order to decide whether you're eligible for a grant.

Using the calculator to find the distance between two points is one thing, but calculating how long it takes to do something can be tricky. There are several methods, some very simple and others more complex. The best way to determine how long it takes to travel from point A to point B is to measure the distance between the two points; this can be done in various ways, but for most people a measuring tape, or a map with a scale, is the easiest. Once the distance has been determined, you can calculate the travel time by dividing the distance by the rate of speed in kilometers per hour (km/h). The resulting time should be close to the estimated time required for the trip, but be aware that it may not be a true measure of your actual travel time.

Aside from the obvious, there are a few other notable things about the time calculator. It can convert between several units of measurement, including feet, meters, and inches, and it can display other standard length units such as miles and kilometers. It is a great tool for calculating the time between two points and is well worth a look.
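The arithmetic behind such a calculator is just time = distance / speed plus a unit factor. A minimal sketch in Python, assuming the standard factor of 0.621371 miles per kilometer (the function names are illustrative):

```python
MILES_PER_KM = 0.621371  # standard conversion factor: 1 km = 0.621371 mi

def km_to_miles(km: float) -> float:
    """Convert a distance in kilometers to miles."""
    return km * MILES_PER_KM

def travel_time_minutes(distance_km: float, speed_kmh: float) -> float:
    """Time = distance / speed, converted from hours to minutes."""
    return distance_km / speed_kmh * 60

print(round(km_to_miles(20), 4))    # 12.4274 miles
print(travel_time_minutes(20, 50))  # 24.0 minutes at 50 km/h
```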
The other notable feature of this calculator is that it displays the most important data in a user-friendly, intuitive way, ensuring the best results for all users.

A speed calculator is a tool that lets you calculate the average speed of a moving object. This can be useful for determining running, walking, or cycling speed over a certain time and distance. To use the speed calculator, enter your journey's total distance in kilometers and the time in hours, minutes, or seconds. The calculator will then return your answer in the most common units, including meters per second, kilometers per hour, and miles per hour. You can also input your training pace to calculate the average speed for a certain distance or race, which is a helpful way to track progress and identify areas for improvement.

If you're using the speed calculator to determine the average speed of a run, enter the total distance in kilometers and the time taken in hours, then click "Calculate average speed" to get your answer in mph or km/h. Another way to calculate speed is with the formula s = d/t (speed equals distance divided by time). This is usually the easiest approach, since it is based on a fixed pair of values, distance and time, and doesn't require any further changes. The speed equation is often used in math exams to answer distance, time, or rate questions, so knowing the formula inside out is important for solving these problems correctly and quickly. Similarly, you can use the speed equation to determine whether an object travels at uniform or non-uniform speed, or to find its average speed; this matters for the speed-distance-time formula, since uniform speed means the object covers equal distances in equal intervals of time.

If you're using the speed calculator to estimate a distance or time in one unit and you need it in another, you can easily convert between units with the help of conversion factors. This is especially helpful when comparing two sets of units, where the same conversion factor must be applied consistently. Unit conversion is the process of multiplying or dividing one set of units by another using a conversion factor. Many people use different units of measurement in their everyday lives, such as weight and temperature, and it is important to know how to convert these numbers to their proper form. This is especially true when a person needs to measure length or distance but the measurement is in a non-standard unit; this can cause confusion or misunderstanding, so it is best to know how to convert a given number of units before attempting to measure something.

Many books provide conversion factors and algorithms for unit conversion. Some cover a wide range of units, while others focus more on accuracy and methodology. One common method represents the units as a directed acyclic graph: each node represents a unit, and an arc between nodes is labeled with a conversion factor. The user traverses the graph, multiplying the conversion factors encountered (or dividing them when moving against the arrows).
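The graph-based conversion method just described can be sketched in a few lines. In this toy example the unit graph holds a handful of length units and a breadth-first search multiplies the factors along whichever path it finds first; the factor table is illustrative, not exhaustive:

```python
from collections import deque

# Directed graph of units: FACTORS[(a, b)] converts a quantity in unit a to unit b.
FACTORS = {
    ("km", "m"): 1000.0,     ("m", "km"): 0.001,
    ("mi", "ft"): 5280.0,    ("ft", "mi"): 1 / 5280.0,
    ("km", "mi"): 0.621371,  ("mi", "km"): 1.609344,
    ("m", "ft"): 3.28084,    ("ft", "m"): 1 / 3.28084,
}

def convert(value: float, src: str, dst: str) -> float:
    """Walk the unit graph breadth-first, multiplying conversion factors."""
    queue = deque([(src, value)])
    seen = {src}
    while queue:
        unit, amount = queue.popleft()
        if unit == dst:
            return amount
        for (a, b), factor in FACTORS.items():
            if a == unit and b not in seen:
                seen.add(b)
                queue.append((b, amount * factor))
    raise ValueError(f"no conversion path from {src} to {dst}")

print(convert(20, "km", "ft"))  # about 65616.8 ft, found via km -> m -> ft
```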
A kilometer (SI symbol: km) is a unit of length in the International System of Units, equal to one thousand meters. It is commonly used for measuring long distances and speeds, and it can be converted to the mile. The kilometer is used to measure distances and speeds in most countries of the world. However, in some parts of the world, such as the United States and the United Kingdom, the mile is more widely used for long distances. The mile has a long history dating back to the Romans, while the kilometer arrived with the metric system, so it is useful to have a calculator that can convert between the two types of measurement. This meter-to-kilometer calculator makes the task quick and easy, with conversion values displayed in a fraction of a second. Moreover, the tool is free, and you can use it on any device with an internet connection.

Twenty kilometers (20 km) is a distance measurement commonly used in many parts of the world. It is equivalent to 12.4274 miles in the United States and other countries that use the imperial measurement system. To give you a better sense of how far 20 kilometers is, let's take a closer look at some common ways to visualize this distance:

- Driving distance: If you were to drive 20 kilometers, it would take about 24 minutes at a speed of 50 km/h (31 mph). This assumes no traffic jams or other delays on the road.
- Walking distance: If you were to walk 20 kilometers, it would take you approximately 4 hours at an average speed of 5 km/h (3.1 mph). Of course, your actual walking time may vary depending on your level of fitness, the terrain, and the weather conditions.
- Cycling distance: If you were to cycle 20 kilometers, it would take you about 40 minutes at a speed of 30 km/h (18.6 mph). This assumes you are cycling on a flat, smooth road without traffic or other obstacles.
- Running distance: If you were to run 20 kilometers, it would take you approximately 2 hours at an average speed of 10 km/h (6.2 mph). Again, your actual running time may vary depending on your fitness level, the terrain, and other factors.
- Airline distance: Finally, it's worth noting that 20 kilometers is a relatively short distance when it comes to air travel. For example, flying from New York to Los Angeles, a distance of approximately 4,000 kilometers (2,500 miles), covers the same distance as 200 trips of 20 kilometers.

In conclusion, 20 kilometers is a moderate distance that can be covered by car, bike, or foot. It is related to benchmark long-distance running events such as the marathon, which covers 42.195 kilometers (26.219 miles); 20 km is just under half of that. Whether you're driving, walking, cycling, or running, 20 kilometers is a distance that requires some effort but is still manageable for most people with a bit of training and preparation.

How long does a 20-kilometer journey take? Travel time over 20 kilometers depends on both speed and mode of transportation. For instance, driving 20 kilometers could take anywhere from 15 minutes to an hour depending on traffic, whereas walking 20 kilometers would take approximately 4 hours.

How does 20 kilometers compare with a marathon? A marathon is a race over a long distance, about 42.195 kilometers or 26.219 miles. A 20-kilometer run is therefore approximately 12.4 miles, or just under half of a full marathon.

How does 20 kilometers compare with other frequently used distances? 20 km is roughly equal to 12.4 miles, just under half of a full marathon. It also roughly equates to a walk of four hours or a drive of forty minutes.

Is 20 km a significant distance to travel? Depending on the context, a 20-kilometer distance can be categorized as either long or short.
For instance, 20 kilometers can feel long on foot but is a relatively short hop by car or train.

For a 20-kilometer trip, what is the best mode of transportation? The best mode of transportation depends on individual preferences and circumstances. If you are in a hurry, driving or taking public transportation may be the quickest option; alternatively, you can get some exercise and take in the scenery by cycling or walking.

What typical landmarks or locations can be found 20 kilometers away? The starting point and the surrounding geography determine the specific landmarks or locations found 20 kilometers away. However, nearby towns or cities, parks or nature reserves, and other well-known tourist attractions are a few examples of locations that might lie 20 kilometers away.
On David Hilbert's "On the Infinite" ("Über das Unendliche")

Page last updated 27 Feb 2023

In a 1925 lecture the mathematician David Hilbert set out his philosophy of the infinite. (Footnote: David Hilbert, Über das Unendliche, Mathematische Annalen 95.1, 1926, 161-190, translation by Erna Putnam and Gerald J. Massey, viewable online at Philosophy of mathematics: On the infinite by David Hilbert.) While a few quotations from this work are regularly rehashed out of context, the details of Hilbert's overall viewpoint espoused therein are not widely known, and so they are worth examining here in some detail.

At the outset, Hilbert acknowledges that the meaning of the infinite in mathematics has still never been completely resolved:

"… in spite of the foundation Weierstrass has provided for the infinitesimal calculus, disputes about the foundations of analysis still go on. These disputes have not terminated because the meaning of the infinite, as that concept is used in mathematics, has never been completely clarified."

He proceeds to set out what he considers to be the real nature of the infinite in mathematics. He first examines the question of whether the infinite occurs in nature, and comes to a conclusion which is precisely the opposite of that of Cantor, who asserted that the infinite, and even the transfinite, occurs in nature (see, for example, Cantor's belief in the reality of the infinite). Hilbert decides that the infinite does not occur at all in nature:

"… the infinite is nowhere to be found in reality, no matter what experiences, observations, and knowledge are appealed to."

"We have established that the universe is finite in two respects, i.e., as regards the infinitely small and the infinitely large."

"Our principal result is that the infinite is nowhere to be found in reality. It neither exists in nature nor provides a legitimate basis for rational thought …"

Today it is generally accepted that Hilbert was right and Cantor was wrong; we have found no instances of an infinity of any kind in nature, never mind any "bigger" infinities, nor is there any indication that we might discover any sort of infinity in the future. But although Hilbert thought it worthwhile to clarify his ideas on this matter, he seems to have turned a blind eye to the possibility that Cantor's beliefs in that respect involved fundamental difficulties that led to an inherently problematic theory of the infinite.

How to determine if a new concept is mathematically acceptable?

He then considers how to decide whether a new concept may be deemed acceptable as a new part of mathematics, and asserts that "success" is to be the final arbiter:

"If, apart from proving consistency, the question of the justification of a measure is to have any meaning, it can consist only in ascertaining whether the measure is accompanied by commensurate success. Such success is in fact essential, for in mathematics as elsewhere success is the supreme court to whose decisions everyone submits."

However, this simply begs the question: what is success in mathematics? Is it simply a vote by mathematicians that it is successful, and hence a purely subjective appraisal? That is not acceptable; mathematics, of all subjects, must strive for as much objectivity as possible. Of course, the success of those parts of mathematics that are used in the advancement of science and technology can be subjected to a reasonably objective appraisal - but what are we to do with those parts of mathematics that have never had any successful real-world application?
Are we to simply take the word of the mathematicians who study those parts of mathematics when they claim that their mathematics is successful? We wouldn't accept that in any other sphere of human activity, so why should we do so for mathematical concepts that have never had any useful application? When people approach politicians for funding in any other area, their claims are not accepted unquestioningly.

Praise for Cantor's set theory: Potential and Actual Infinity

Hilbert then pauses to lavish praise upon Cantor's set theory, and asserts that Cantor's notion that there are two different types of infinity is correct:

"In this paper, we are interested only in that unique and original part of set theory which forms the central core of Cantor's doctrine, viz., the theory of transfinite numbers. This theory is, I think, the finest product of mathematical genius and one of the supreme achievements of purely intellectual human activity. What, then, is this theory? Someone who wished to characterize briefly the new conception of the infinite which Cantor introduced might say that in analysis we deal with the infinitely large and the infinitely small only as limiting concepts, as something becoming, happening, i.e., with the potential infinite. But this is not the true infinite. We meet the true infinite when we regard the totality of numbers 1, 2, 3, 4, … itself as a completed unity, or when we regard the points of an interval as a totality of things which exists all at once. This kind of infinity is known as actual infinity."

This is arrant nonsense. To talk about the "infinitely large" as a limiting concept is a blatant contradiction, since to be infinitely large is to be nothing other than without any limitation of size. And to lump the "infinitely large" and the "infinitely small" together as though their properties are comparable is, if not intentionally dishonest, completely misleading, since the notion of "infinitely small" does imply a limit (zero), whereas the notion of "infinitely large" does not.

Furthermore, the very phrase potential infinite ("potentiellen Unendlichen") is also nonsensical, being an obvious oxymoron. Either a thing is without any limit, or it is restricted by some sort of limit. To say that a thing is potentially without a limit could only mean one of two things:

- It is inherently without any limit, in which case the inclusion of the term potential is superfluous and only serves to confuse, so the term should simply be "infinite", or
- It requires something that changes its properties, so that the thing changes from being something that has a limit to something that does not have a limit, in which case, when the term is applied to something, it means, ipso facto, that without such change the thing is not unlimited; again the term only serves to confuse, and in this case it should simply be "finite".

The notion of actual infinity ("aktual Unendlichen") can also be summarily dismissed as a similarly incoherent notion. Every number in any set of natural numbers is associated back to the natural number zero by a finite number of unitary iterations. If there could be more than a finite number of natural numbers actually together in a set, then there would have to be numbers that are associated with the natural number zero by more than a finite number of unitary iterations. But that is impossible, since such an entity, by definition, cannot be a natural number.
These are simple, straightforward refutations that should be readily accessible to anyone who expends a modicum of contemplative effort. But mathematicians like Hilbert instead refuse to see the obvious, like a lover blinded by love for the object of his desire. There is also a more detailed look at the notion of these two types of infinity at Actual, Completed and Potential Infinity.

Limitlessly large sets of different sizes

Hilbert then goes on to talk about limitlessly large sets that are "larger" than other limitlessly large sets:

"But it was Cantor who systematically developed the concept of the actual infinite. Consider the two examples of the infinite already mentioned:

(1) 1, 2, 3, 4, …
(2) The points of the interval 0 to 1 or, what comes to the same thing, the totality of real numbers between 0 and 1.

It is quite natural to treat these examples from the point of view of their size. But such a treatment reveals amazing results with which every mathematician today is familiar. For when we consider the set of all rational numbers, i.e., the fractions 1⁄2, 1⁄3, 2⁄3, 1⁄4, … 3⁄7, …, we notice that - from the sole standpoint of its size - this set is no larger than the set of integers. … Surprisingly enough, the set of all the points of a square or cube is no larger than the set of points of the interval 0 to 1. … On learning these facts for the first time, you might think that from the point of view of size there is only one unique infinite. No, indeed! … the set (2) cannot be enumerated, for it is larger than the set (1)."

Here Hilbert falls into the same fallacy as so many others, simply assuming that the absence of any one-to-one correspondence between two limitlessly large sets necessarily implies that one of them has fewer elements than the other - that one set is larger than the other. This is not only an intuitive assumption that has absolutely no logical basis - it is actually self-contradictory. It is truly astonishing that so many mathematicians and logicians have a blind spot in this respect and refuse to see the obvious. For an in-depth analysis of this matter, see One-to-one Correspondences and Properties. (Footnote: Also see Proof of more Real numbers than Natural numbers? and Why do people believe weird things?)
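The enumerability of the rationals that Hilbert cites can be made concrete. The sketch below is an illustrative Python rendering of the standard zig-zag enumeration (it is not anything from Hilbert's text); it lists every positive fraction exactly once, pairing each with a position 1, 2, 3, …:

```python
from math import gcd

def rationals():
    """Enumerate the positive rationals p/q without repetition by
    walking the diagonals p + q = 2, 3, 4, ... of the grid."""
    total = 2
    while True:
        for p in range(1, total):
            q = total - p
            if gcd(p, q) == 1:  # skip duplicates such as 2/4 = 1/2
                yield p, q
        total += 1

gen = rationals()
for n in range(1, 8):
    p, q = next(gen)
    print(f"{n} <-> {p}/{q}")
# 1 <-> 1/1, 2 <-> 1/2, 3 <-> 2/1, 4 <-> 1/3, 5 <-> 3/1, 6 <-> 1/4, 7 <-> 2/3
```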
Criticism of Cantor's set theory

Having previously heaped praise on Cantor's set theory, Hilbert proceeds to point out all the contradictions that are inherent in that theory, seemingly totally oblivious to the incongruity of his stance:

"In the joy of discovering new and important results, mathematicians paid too little attention to the validity of their deductive methods. For, simply as a result of employing definitions and deductive methods which had become customary, contradictions began gradually to appear. These contradictions, the so-called paradoxes of set theory, though at first scattered, became progressively more acute and more serious. In particular, a contradiction discovered by Zermelo and Russell had a downright catastrophic effect when it became known throughout the world of mathematics. … Too many different remedies for the paradoxes were offered, and the methods proposed to clarify them were too variegated. Admittedly, the present state of affairs where we run up against the paradoxes is intolerable. Just think, the definitions and deductive methods which everyone learns, teaches, and uses in mathematics, the paragon of truth and certitude, lead to absurdities! If mathematical thinking is defective, where are we to find truth and certitude?"

And after pointing out that mathematics has immersed itself into a quagmire of self-inflicted contradictions, he offers his view of how the problem might be resolved:

"There is, however, a completely satisfactory way of avoiding the paradoxes without betraying our science. The desires and attitudes which help us find this way and show us what direction to take are these:

- Wherever there is any hope of salvage, we will carefully investigate fruitful definitions and deductive methods. We will nurse them, strengthen them, and make them useful. No one shall drive us out of the paradise which Cantor has created for us.
- We must establish throughout mathematics the same certitude for our deductions as exists in ordinary elementary number theory, which no one doubts and where contradictions and paradoxes arise only through our own carelessness."

This emotive proclamation "No one shall drive us out of the paradise …" sits very uneasily alongside his concomitant claim that every cause of any contradiction in Cantor's set theory will be rooted out without compunction. It indicates very clearly a strong desire to retain aspects of that theory that are emotionally appealing. It is not far-fetched to suggest that this emotional attachment led to the turning of a blind eye to the possibility that the notion of a number larger than any limitlessly large number might be indicative of a fundamental problem worth investigating in depth.

(Footnote: From Hilbert's 1920 Göttingen lectures, transcribed by Moses Schönfinkel and Paul Bernays, with annotations by Hilbert, translation by William Ewald, in the book "From Kant to Hilbert", Vol. 2, Oxford University Press, 1996.)

In 1908 Poincaré said of set theory:

"Unfortunately they have reached contradictory results, what are called the Cantorian antinomies, to which we shall have occasion to return. These contradictions have not discouraged them and they have tried to modify their rules so as to make those disappear which had already shown themselves, without being sure, for all that, that new ones would not manifest themselves. It is time to administer justice on these exaggerations. I do not hope to convince them; for they have lived too long in this atmosphere. Besides, when one of their demonstrations has been refuted, we are sure to see it resurrected with insignificant alterations, and some of them have already risen several times from their ashes. Such long ago was the Lernaean hydra with its famous heads which always grew again."

(Footnote: From introductory paragraphs added in 1908 to a 1905 article in the "Revue de métaphysique et de morale" under the title "Les mathématiques et la logique", translation by William Ewald, in the book "From Kant to Hilbert", Vol. 2, 1996.)

Furthermore, the flagrant fact is that in spite of the so-called Cantorian "fruitful scientific" approach, over 100 years later there has been no scientific application of any of the notions of Cantorian set theory that cannot be achieved by a simple theory where elements are distinct from sets, and sets cannot be elements; see Natural Set Theory.
(Footnote: As Nik Weaver said: "Virtually all modern mathematics outside set theory itself can be carried out in formal systems which are far weaker than Zermelo-Fraenkel set theory and which can be justified in very concrete terms without invoking any supernatural universe of sets … axiomatic set theory is not indispensable to mathematical practice, as most philosophers of mathematics have apparently assumed it to be. It is one arena in which mathematics can be formalized, but it is not the only one, nor even necessarily the best one." From: 'The Concept Of A Set', arXiv:0905.1677, 2009.)

(Footnote: As Solomon Feferman said: "I am convinced that the Platonism which underlies Cantorian set theory is utterly unsatisfactory as a philosophy of our subject … Platonism is the medieval metaphysics of mathematics; surely we can do better." From: 'Infinity in Mathematics: Is Cantor Necessary?' in "Infinity in Science", Istituto della Enciclopedia Italiana (1987), pp. 151-209, also in the book: In the Light of Logic, OUP on Demand, 1998.)

Poincaré was remarkably prescient - today, over 100 years later, the contradictions of set theory have still not been eradicated; I document many of them on this website. The initial flurry of attempts in the early 20th century to remove all contradictions from set theory has settled down into the dismal, defeatist dogma that it is impossible to remove these contradictions. So we can now ask, almost 100 years after Hilbert wrote these words: has the sought-after certitude been achieved? Have all contradictions and paradoxes been expunged from set theory? No - it is patently obvious that this is not the case. Perhaps the reason is simply that, as Hilbert himself remarked, there is no real understanding of the infinite:

"Obviously these goals can be attained only after we have fully elucidated the nature of the infinite."

Isn't it the case that, despite the constant forlorn hope over the past 100 years that set theory has been on the right track, mathematicians have still not fathomed how to deal with set theory and the infinite without generating contradictions? For more on set theory, see the overview starting at Overview of set theory: Part 1: Different types of set theories.

Logic or intuition?

Towards the end of his article, Hilbert expounds his ideas of how to create a completely consistent formal mathematical system based entirely on logical symbol manipulation. But then, astonishingly, having explained his completely logical formal system, he turns about-face and declares that logic is not sufficient and that intuition is always necessary:

"In contrast to the earlier efforts of Frege and Dedekind, we are convinced that certain intuitive concepts and insights are necessary conditions of scientific knowledge, and logic alone is not sufficient."

But intuition cannot be anything more than a launching pad for ideas which must then withstand any assault of logic. Intuition is nothing more than a temporary ladder that must be thrown away once it has been replaced by an indestructible logical staircase.

It is clear that Hilbert failed to achieve his objective of clarifying the infinite as it is used in mathematics. Several fundamental errors of logic, as indicated above, together with an emotional attachment to various concepts, eliminated any possibility that he might attain a lucid, objective assessment of the situation.
Having examined various works by Gödel, Hilbert, Cantor and others, it is evident that expertise in the manipulation of mathematical symbols bears no correlation to a capacity for profound philosophical thinking outside that specialized area. Indeed, the level of philosophical thinking these authors display outside their area of expertise has the sort of naivety that is engendered by a careful avoidance of any real engagement with the key points, in case the result might not be to one's liking - much as a child, when faced with accumulating evidence that Santa Claus might not be a real person, takes refuge in the comfort of anything positive, such as the morning absence of the food left out on Christmas Eve, and pushes any negative thoughts to the back of the mind, to be left unexamined.

Moreover, such criticism of set theory as is found in the various journals is indulgent toward the claims of conventional mathematics. Instead of continuing to hammer away relentlessly at all the deficiencies of set theory, there seems to be an unwritten gentleman's agreement that the obvious flaws will be overlooked, in a pretense that any problems are minor rather than fundamental.
Purpose of analysis

Several studies have investigated the covariates of life expectancy at birth (Kozilek et al., 2017; Mahyar, 2016). Data on life expectancy at birth and GDP per capita were drawn from 50 countries across the world. This analysis investigates the relationship between these variables in order to add to the existing literature on determinants of life expectancy.

Research question: Does GDP per capita predict life expectancy at birth?

The variables being investigated are GDP per capita as the independent variable and life expectancy at birth as the dependent variable.

Independent variable - GDP per capita: the value of all goods and services produced in a country in a given year divided by the mid-year population.

Dependent variable - Life expectancy at birth: the length of time, in years, a person is expected to live from the time of birth.

To answer the research question, a brief literature review is conducted in order to establish what work has already been done on the research problem of interest. Following this, the data to be analyzed are gathered; in this case, data on GDP per capita and life expectancy at birth from 50 countries across the world were gathered from various internet sources, including http://en.wikipedia.org/wiki/. Once the data are obtained, a choice is made about which statistical tools would be useful to answer the research question. An important consideration is the type of variable in terms of how it was measured (Frankfort-Nachmias & Leon-Guerrero, 2015, 43): numerical variables need to be analyzed with different statistical tools than categorical variables. Both the independent and dependent variables in the present research are numerical.

The research question aims at determining the relationship between the variables. Being continuous variables, a Pearson correlation analysis is used to determine whether an association exists between the two variables, as well as the strength and direction of the association (Wagner, 2016, 23). Further, the analysis aims at determining whether GDP per capita predicts life expectancy at birth. An appropriate test is linear regression analysis, which is applicable when one intends to predict the value of one variable based on the value of another or others (Warner, 2012, 160). In the present case, only one independent variable (GDP per capita) is considered, hence a simple linear regression analysis is used.

Before carrying out any statistical analysis, the data have to be checked to ensure that all the assumptions of a given test are met. For linear regression analysis the data have to meet several assumptions, namely: the variables have to be on a continuous scale; there should not be any significant outliers in the data; the errors within the data set should be normally distributed; homoscedasticity should hold, meaning that the distances between data points and the best-fit line are similar across the range; and the relationship between the variables must be linear. Although the data had a few outliers, it was expected that they would not exert undue influence on the model, given that the values of Cook's distance obtained were between 0.0 and 0.2, well below the general rule-of-thumb cutoff of 1. Given this, no data points were removed in this analysis.

In order to describe the data, means were obtained for the variables. The mean life expectancy at birth was 72.6 years, while the mean GDP per capita was 19,552 USD.
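As a sketch of how the first step of this analysis plan is executed, the snippet below computes a Pearson correlation with SciPy. The two arrays are invented stand-ins; the actual 50-country data set is not reproduced here:

```python
from scipy import stats

# Hypothetical stand-in data: GDP per capita (USD) and life expectancy (years).
gdp = [1200, 5300, 9800, 15200, 22400, 31000, 44500, 58200]
life_expectancy = [58.1, 64.3, 68.9, 71.2, 74.8, 77.5, 80.1, 82.6]

r, p = stats.pearsonr(gdp, life_expectancy)  # strength/direction and significance
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```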
To determine the strength and direction of the relationship between GDP per capita and life expectancy at birth, a Pearson correlation analysis was run. The correlation was found to be r = 0.73, p < 0.001, as shown below. This indicates a statistically significant, positive, and strong relationship between the two variables: as GDP per capita increases, so does life expectancy at birth.

Correlations (N = 50):
          GDP       LIFEEx
GDP       1         .730**
LIFEEx    .730**    1
**. Correlation is significant at the 0.01 level (2-tailed); Sig. (2-tailed) = .000.

A trend analysis involving the two variables reveals a positive linear trend, with life expectancy increasing as GDP per capita increases, as shown in the scatter chart. The best-fit line is also plotted in the chart, along with its equation. The general line equation is Y = mX + c, where m represents the slope of the line and c is the Y intercept, the point at which the best-fit line crosses the Y axis. The equation of the fitted line is:

Y = 0.00043X + 64.23

In this case the slope (gradient) of the line is 0.00043 and the Y intercept is 64.23. In context, the intercept means that a country with a GDP per capita of zero would be predicted to have a life expectancy at birth of 64.23 years. The scatter chart shows a cluster of points and a number of outliers; grouping is evident around GDP per capita between 10,000 and 20,000 USD and between 30,000 and 40,000 USD. The scatter plot also shows homoscedasticity: the distances of data points from the best-fit line are similar, despite the few outliers.

Model summary:
Model    R       R Square    Adjusted R Square    Std. Error of the Estimate
1        .730    .533        .523                 5.725
Predictors: (Constant), GDP. Dependent variable: LIFEEx.

The model summary above gives a sense of the model's fit to the data. It shows the strong correlation coefficient of 0.73 between the two variables, and an R-squared (coefficient of determination) of 0.533, which means that 53.3% of the variance in life expectancy at birth is explained by GDP per capita.

ANOVA:
Model         Sum of Squares    df    Mean Square    F         Sig.
Regression    1795.034          1     1795.034       54.770    .000
Residual      1573.146          48    32.774
Total         3368.180          49
Predictors: (Constant), GDP. Dependent variable: LIFEEx.

The ANOVA table above shows that the model is a good fit for the data and that GDP per capita significantly predicts life expectancy at birth, F(1, 48) = 54.77, p < 0.001.

Coefficients:
Model         B         Std. Error    Beta    t         Sig.
(Constant)    64.231    1.389                 46.255    .000
GDP           .00043    .000          .730    7.401     .000
Dependent variable: LIFEEx.

The regression model can be read off the table of coefficients above, using Y = mX + c:

Life expectancy at birth = 64.23 + 0.00043 × GDP per capita

From the regression model above, the prediction is that for every one-unit (1 USD) increase in GDP per capita, life expectancy at birth increases by 0.00043 years. From the analysis of the sample it can be concluded that, at the population level, life expectancy at birth will increase by 0.00043 years for every unit increase in GDP per capita. This finding is consistent with those of other researchers who found similar results (Kozilek et al., 2017, 70; Mahyar, 2016, 85). However, as the literature shows, GDP is not the only covariate of life expectancy at birth; many other determinants play a role, including infrastructure index, economy index, urbanicity, employment status, and poverty index (Mahyar, 2016, 85).
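A fitted line of this kind can be reproduced on any such data set with an ordinary least-squares routine. A minimal sketch, reusing the hypothetical arrays from the previous snippet (the coefficients it prints belong to that toy data, not to the 50-country sample):

```python
from scipy import stats

gdp = [1200, 5300, 9800, 15200, 22400, 31000, 44500, 58200]
life_expectancy = [58.1, 64.3, 68.9, 71.2, 74.8, 77.5, 80.1, 82.6]

# Simple linear regression: life expectancy = intercept + slope * GDP per capita
fit = stats.linregress(gdp, life_expectancy)
print(f"slope     = {fit.slope:.6f}")        # extra years per extra dollar of GDP
print(f"intercept = {fit.intercept:.2f}")    # predicted life expectancy at GDP = 0
print(f"R-squared = {fit.rvalue ** 2:.3f}")  # share of variance explained
```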
Because these findings are based on data obtained from various internet sites, it is not immediately clear what methods were used to obtain the figures. This may affect both the internal and external validity of the findings. Nevertheless, these findings are similar to those found by other researchers, and there is nothing unusual about them.

References

Frankfort-Nachmias, C., & Leon-Guerrero, A. (2015). Social statistics for a diverse society (7th ed.). Thousand Oaks, CA: Sage Publications.

Hansen, C. W., & Lonstrup, L. (2015). The rise in life expectancy and economic growth in the 20th century. Economic Journal, 125(584), 838-852. doi:10.1111/ecoj.12261

Kozilek, T.-W., et al. (2017). Measuring welfare in Romania: Alternative and complementary measures to Gross Domestic Product. Regional Formation & Development Studies, 22, 68-76. doi:10.15181/rfds.v22i2.1478

Mahyar, H. (2016). Economic growth and life expectancy: The case of Iran. Studies in Business & Economics, 11(1), 80-87. doi:10.1515/sbe-2016-0007

Wagner, W. E. (2016). Using IBM SPSS statistics for research methods and social science statistics (6th ed.). Thousand Oaks, CA: Sage Publications.

Warner, R. M. (2012). Applied statistics from bivariate through multivariate techniques (2nd ed.). Thousand Oaks, CA: Sage Publications.
A parametric test is used on parametric data, while non-parametric data are examined with a non-parametric test. In a non-parametric test the test statistic is arbitrary, in the sense that it does not presuppose any particular distribution, whereas a parametric test assumes essentially complete information about the population distribution. Parametric tests are usually more powerful, but non-parametric tests are roughly 95% as powerful as parametric tests even when the data really are normal, and they do a better job of highlighting the peculiarities, or "weirdness", of non-normal populations (Chin, 2008). As published tables show, the sample-size requirements for non-parametric tests are not excessively large either. (Note that the specification referred to here does not require knowledge of any specific parametric tests; all that is required is the criteria for using them.) In non-parametric tests, test values are found at the ordinal or nominal level. The appropriate choice usually depends on whether the mean or the median is the better measure of central tendency for the distribution of the data, and with a relatively small data set the distribution itself can act as the deciding factor. If the parametric assumptions are met, you use a parametric test; if they are not met, you use a non-parametric test; if they are only partially met, it is a judgement call. Because they dispense with distributional assumptions, non-parametric tests are often spoken of as "distribution-free" tests.
The most prevalent parametric tests for examining differences between discrete groups are the independent-samples t-test and the analysis of variance; the t-test rests on the underlying assumption that the variable is normally distributed (Student's t-test is the one regularly used in this setting). The non-parametric test, by contrast, does not require the population to follow any distribution described by distinct parameters. Generally, parametric tests are considered more powerful than non-parametric tests, and parametric methods are often more efficient than the corresponding non-parametric methods. This difference in efficiency is typically not much of an issue, though there are instances where we do need to consider which method is more efficient, and because power varies from situation to situation it is impossible to state a constant power difference by test.

Bootstrap procedures can likewise be parametric, non-parametric, or semiparametric, depending on how you estimate the distribution of the values to be bootstrapped and the distribution of the statistics.

A statistical test used in the case of non-metric (nominal or ordinal) independent variables is called a non-parametric test: the assumptions made about the process generating the data are much weaker than in parametric statistics and may be minimal. In statistics, generalizations about the mean of the original population are what parametric tests provide. Most non-parametric methods are rank methods in some form; tests based on ranks work well for ordinal data (data that have a defined order, but for which averages may not make sense). Non-parametric tests make fewer assumptions about the data set: in general, the measure of central tendency in a parametric test is the mean, while in a non-parametric test it is the median. Parametric and non-parametric tests are terms used frequently by statisticians when doing analysis, and the two families can be summarized as follows:

Parametric: analysis to test group means; information about the population is completely known; specific assumptions are made regarding the population; applicable to (interval or ratio) variables only; samples are independent.

Non-parametric: analysis to test group medians; no requirement of complete knowledge about the population; minimal assumptions; applicable to both variables and attributes.
A parametric test assumes that your data follow a specific distribution, whereas a non-parametric test, also known as a distribution-free test, does not; non-parametric procedures are one possible solution for handling non-normal data. A parametric distribution is essentially a distribution that can be fully described in terms of a set of parameters: a normal distribution with mean = 3 and standard deviation = 2 is one example using two parameters, and knowing only the mean and SD we can completely and fully characterize that normal probability distribution. All you need to know for predicting a future data value from the current state of a parametric model is its parameters. In non-parametric methods, the set of parameters is no longer fixed, and neither is the distribution that we use. Parametric statistics thus make more assumptions than non-parametric statistics: the value of the mean is known, or is assumed or taken to be known; the population is estimated with the help of an interval scale and the hypotheses concern population parameters; and several conditions of validity must be met so that the result of a parametric test is reliable. In the non-parametric test there is no requirement for any particular population distribution; the test depends on the value of the median and is based on differences in the median. (Every continuous probability distribution has a median, which may be estimated by the sample median or by the Hodges-Lehmann-Sen estimator, which has good properties when the data arise from simple random sampling.) Non-parametric tests are used when the data fail to satisfy the conditions that parametric statistical tests require, and if you doubt the data distribution, it helps to review previous studies about the particular variable you are interested in.

The logic behind the testing is the same in both cases; only the information set is different. This carries over to resampling: with non-parametric resampling we cannot generate samples beyond the empirical distribution, whereas with parametric resampling the data can be generated beyond what we have seen so far.
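The resampling contrast just described is easy to see in code. A minimal sketch, with an invented eight-point sample: the non-parametric version can only recombine observed values, while the parametric version draws from a fitted normal distribution:

```python
import random
import statistics

random.seed(1)
sample = [4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.4, 5.0]  # hypothetical observations

def nonparametric_boot(data, reps=1000):
    """Resample the data with replacement: nothing beyond the
    empirical distribution can ever be generated."""
    return [statistics.mean(random.choices(data, k=len(data))) for _ in range(reps)]

def parametric_boot(data, reps=1000):
    """Fit a normal distribution, then sample from it: values beyond
    the observed data can appear."""
    mu, sd = statistics.mean(data), statistics.stdev(data)
    return [statistics.mean([random.gauss(mu, sd) for _ in data]) for _ in range(reps)]

print(statistics.stdev(nonparametric_boot(sample)))  # bootstrap SE of the mean
print(statistics.stdev(parametric_boot(sample)))     # similar, but not identical
```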
Parametric and nonparametric hypothesis tests come in linked pairs, and such pairs are usually presented in a table: for each parametric test there is a nonparametric counterpart that drops the distributional assumption. The t-test, for instance, hangs on the underlying assumption that the variable is normally distributed; designs such as one-way repeated-measures analysis of variance, or a factor with a blocking variable (factorial DOE), likewise have rank-based counterparts. This pairing can be useful when the assumptions of a parametric test are violated, because you can choose the nonparametric alternative as a backup analysis. Simple nonparametric estimates of a distribution also exist: a histogram is a simple nonparametric estimate of a probability distribution, and kernel density estimation is a smoother nonparametric alternative.

Failing a normality check is not always a critical issue, because parametric tests cope with non-normal distributions for many data sets. However, if you have a small sample and must fall back on a less powerful nonparametric analysis, that doubly lowers the chances of detecting an effect.
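A minimal sketch of one linked pair, assuming two illustrative samples: the parametric independent-samples (Welch) t-test next to its nonparametric counterpart, the Mann-Whitney U test.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    group_a = rng.normal(10.0, 2.0, size=30)   # synthetic samples
    group_b = rng.normal(11.0, 2.0, size=30)

    # Parametric member of the pair: Welch's two-sample t-test (means).
    t, p_t = stats.ttest_ind(group_a, group_b, equal_var=False)

    # Nonparametric counterpart: Mann-Whitney U test (ranks).
    u, p_u = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

    print(f"t-test p = {p_t:.4f}, Mann-Whitney p = {p_u:.4f}")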
A nonparametric test is also a kind of hypothesis test, but one that is not based on an underlying hypothesis about distribution parameters; it is based instead on differences of the median, and the variables of interest are measured on a nominal or ordinal scale. Table 3 shows the nonparametric equivalent of a number of parametric tests. Nonparametric tests are used when the data are not normal, and they are also very useful for a variety of hydrogeological problems and for survival analysis. One way to think about survival analysis is as non-negative regression and density estimation for a single random variable (the first event time) in the presence of censoring, and the most common nonparametric technique for modeling the survival function is the Kaplan-Meier estimate.
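Since the Kaplan-Meier estimate is just a product over event times, a self-contained sketch is easy to write; the durations and censoring flags below are invented for illustration.

    import numpy as np

    # Invented durations with a censoring flag (1 = event, 0 = censored).
    t = np.array([2.0, 3.0, 3.0, 5.0, 6.0, 8.0, 9.0, 12.0])
    e = np.array([1, 1, 0, 1, 0, 1, 1, 0])

    def kaplan_meier(t, e):
        # Product-limit estimate of S(t) at each distinct event time.
        s = 1.0
        times, surv = [], []
        for ti in np.unique(t[e == 1]):
            at_risk = np.sum(t >= ti)              # still under observation
            deaths = np.sum((t == ti) & (e == 1))  # events at this time
            s *= 1.0 - deaths / at_risk            # product-limit update
            times.append(ti)
            surv.append(s)
        return np.array(times), np.array(surv)

    times, surv = kaplan_meier(t, e)
    print(np.column_stack([times, surv]))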
Indeed, inferential statistical procedures generally fall into two possible categorizations: parametric and nonparametric. As a general rule of thumb, when the dependent variable's level of measurement is nominal (categorical) or ordinal, a nonparametric test should be selected; parametric tests assume measurement on an interval or ratio scale, with populations drawn from normal distributions and independent samples. In a nonparametric test there is no information about the population, whereas a parametric model captures all of its information about the data within its parameters. This is what makes the original synthetic control implementation ('synth') of Abadie, A., Diamond, A., and J. Hainmueller parametric, and the 'npsynth' variant nonparametric.

The parametric/nonparametric distinction also appears in 3D modelling, where the criteria for comparison include ease of use, ability to edit, and modelling abilities. Parametric modelling works within defined parameters: you plan ahead and plug in the constraints used to build the 3D model. Organizations often turn to parametric modelling when making families of products that include slight variations on a core design, because the designer will need to create design intent between dimensions, parts, and assemblies. Nonparametric (direct) modelling is different: you model your ideas directly, without working with pre-set constraints, so you are not required to start with a 2D draft and produce a 3D model by adding different entities.
If you have a continuous outcome such as BMI, blood pressure, a survey score, or gene expression and you want to perform some sort of statistical test, an important consideration is whether to use a standard parametric test like the t-test or ANOVA versus a nonparametric test. ANOVA is a statistical approach to compare means of an outcome variable of interest across different groups, and it is the focus of this tutorial. Parametric tests usually have more statistical power than their nonparametric equivalents and require a smaller sample size, so with them one is more likely to detect significant differences when they truly exist. A parametric test is typically chosen when the mean is a meaningful central value and the data set is comparatively large; a nonparametric test may be preferred, regardless of the size of the data set, when the median represents the data better than the mean, when your sample is not large enough to satisfy the parametric requirements, or when you are not sure that your data follow the normal distribution.

The same contrast carries over to regression. Nonparametric regression differs from parametric regression in that the shapes of the functional relationships between the response (dependent) and explanatory (independent) variables are not predetermined, but can be adjusted to capture unusual or unexpected features of the data; when the relationship between the response and explanatory variables is known, parametric regression is appropriate.
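For the multi-group case just discussed, the corresponding linked pair is one-way ANOVA and the Kruskal-Wallis test; a short sketch with synthetic groups (names and data are ours):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    g1 = rng.normal(5.0, 1.0, size=25)   # three synthetic groups
    g2 = rng.normal(5.5, 1.0, size=25)
    g3 = rng.normal(6.0, 1.0, size=25)

    # Parametric: one-way ANOVA compares the group means.
    f, p_f = stats.f_oneway(g1, g2, g3)

    # Nonparametric counterpart: Kruskal-Wallis compares rank distributions.
    h, p_h = stats.kruskal(g1, g2, g3)

    print(f"ANOVA p = {p_f:.4f}, Kruskal-Wallis p = {p_h:.4f}")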
- Choose any two numbers; call them a and b. Work out the arithmetic mean and the geometric mean. Which is bigger? Repeat for other pairs of numbers. What do you notice? A little bit of algebra explains this "magic". (A quick numerical check appears after this list.)
- Ask a friend to pick 3 consecutive numbers and to tell you a multiple of 3. Then ask them to add the four numbers and multiply by 67, and to tell you the result.
- An article which gives an account of some properties of magic squares.
- The Tower of Hanoi is an ancient mathematical challenge. Working on the building blocks may help you to explain the patterns you notice.
- When number pyramids have a sequence on the bottom layer, some interesting patterns emerge...
- Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
- Janine noticed, while studying some cube numbers, that if you take three consecutive whole numbers, multiply them together and then add the middle number of the three, you get the cube of the middle number.
- Can you find the area of a parallelogram defined by two vectors?
- ABC and DEF are equilateral triangles of side 3 and 4 respectively. Construct an equilateral triangle whose area is the sum of the areas of ABC and DEF.
- What is the ratio of the area of a square inscribed in a semicircle to the area of the square inscribed in the entire circle?
- What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles?
- The sum of the numbers 4 and 1⅓ is the same as their product: 4 + 1⅓ = 4 × 1⅓. What other numbers have the sum equal to the product, and can this always be so?
- Spotting patterns can be an important first step - explaining why it is appropriate to generalise is the next step, and often the most interesting and important.
- Polygons drawn on square dotty paper have dots on their perimeter (p) and often internal ones (i) as well. Find a relationship between p, i and the area of the polygons.
- Caroline and James pick sets of five numbers. Charlie chooses three of them that add together to make a multiple of three. Can they stop him?
- Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice?
- An account of some magic squares and their properties, and how to construct them for yourself.
- Three circles have a maximum of six intersections with each other. What is the maximum number of intersections that a hundred circles could have?
- Can you explain the surprising results Jo found when she calculated the difference between square numbers?
- Can you find an efficient method to work out how many handshakes there would be if hundreds of people met?
- Many numbers can be expressed as the difference of two perfect squares. What do you notice about the numbers you CANNOT make?
- Can you find a general rule for finding the areas of equilateral triangles drawn on an isometric grid?
- Pick the number of times a week that you eat chocolate. This number must be more than one but less than ten. Multiply this number by 2. Add 5 (for Sunday). Multiply by 50... Can you explain why it works?
- Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need?
- The diagram shows a 5 by 5 geoboard with 25 pins set out in a square array. Squares are made by stretching rubber bands round specific pins. What is the total number of squares that can be made on the board?
- Charlie and Lynne put a counter on 42. They wondered if they could visit all the other numbers on their 1-100 board, moving the counter using just these two operations: x2 and -5. What do you notice?
- Jo has three numbers which she adds together in pairs. When she does this she has three different totals: 11, 17 and 22. What are the three numbers Jo had to start with?
- Can you show that you can share a square pizza equally between two people by cutting it four times using vertical, horizontal and diagonal cuts through any point inside the square?
- Can you describe this route to infinity? Where will the arrows take you next?
- Choose four consecutive whole numbers. Multiply the first and last numbers together. Multiply the middle pair together. What do you notice?
- List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it?
- Try entering different sets of numbers in the number pyramids. How does the total at the top change?
- The diagram illustrates the formula 1 + 3 + 5 + ... + (2n - 1) = n². Use the diagram to show that any odd number is the difference of two squares.
- Four bags contain a large number of 1s, 3s, 5s and 7s. Pick any ten numbers from the bags so that their total is 37.
- We can show that (x + 1)² = x² + 2x + 1 by considering the area of an (x + 1) by (x + 1) square. Show in a similar way that (x + 2)² = x² + 4x + 4.
- It's easy to work out the areas of most squares that we meet, but what if they were tilted?
- A country has decided to have just two different coins, 3z and 5z coins. Which totals can be made? Is there a largest total that cannot be made? How do you know?
- You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by certain numbers.
- Can you see how to build a harmonic triangle? Can you work out the next two rows?
- Show that for any triangle it is always possible to construct 3 touching circles with centres at the vertices. Is it possible to construct touching circles centred at the vertices of any polygon?
- Is there a relationship between the coordinates of the endpoints of a line and the number of grid squares it crosses?
- Sets of integers like 3, 4, 5 are called Pythagorean Triples, because they could be the lengths of the sides of a right-angled triangle. Can you find any more?
- Some students have been working out the number of strands needed for different sizes of cable. Can you make sense of their solutions?
- Can all unit fractions be written as the sum of two unit fractions?
- A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
- A game for 2 players. Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
- The Egyptians expressed all fractions as the sum of different unit fractions. Here is a chance to explore how they could have written them.
- Charlie has made a Magic V. Can you use his example to make some more? And how about Magic Ls, Ns and Ws?
- It starts quite simple but great opportunities for number discoveries and patterns!
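As a quick numerical check of the first activity above (comparing the arithmetic and geometric means), one can sample random pairs; this script is ours, not part of the original collection, and the algebraic "magic" is simply that (√a − √b)² ≥ 0 rearranges to (a + b)/2 ≥ √(ab).

    import random

    random.seed(4)
    for _ in range(5):
        a, b = random.randint(1, 100), random.randint(1, 100)
        am = (a + b) / 2
        gm = (a * b) ** 0.5
        # (sqrt(a) - sqrt(b))^2 >= 0 rearranges to am >= gm,
        # with equality exactly when a == b.
        print(a, b, round(am, 3), round(gm, 3), am >= gm)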
Rose-Hulman Undergraduate Mathematics Journal, Volume 15, No. 1, Spring 2014
Sponsored by Rose-Hulman Institute of Technology, Department of Mathematics, Terre Haute, IN 47803. Email: [email protected] http://www.rose-hulman.edu/mathjournal

Deflection of an elliptically loaded vortex sheet by a flat plate

Jason O. Archer (University of New Mexico)

Abstract. We investigate the behavior of vortex flows in the presence of obstacles using numerical simulations. Specifically, we simulate the evolution of an elliptically loaded vortex sheet in the presence of a stationary flat plate in its path. The plate is represented by a number of point vortices whose strength is such that they cancel the normal fluid velocity on the plate. The sheet is approximated by a number of smoothed point vortices called vortex blobs. The resulting system of ordinary differential equations is solved using the 4th-order Runge-Kutta method. In our simulations, we vary the initial distance d from the vortex sheet to the plate, the angle φ of the plate relative to the sheet, and the numerical smoothing parameter δ. We study the effects these parameters have on the vortex sheet evolution, including the positions of the vortex centers and the vortex sheet midpoint. We also compare with results derived from a simpler model using only two point vortices instead of a whole sheet. Our main conclusions regard the effect of the distance d, which reduces the total distance traveled as it is increased, and the angle φ, which significantly affects the vortex trajectory after it encounters the plate.

Acknowledgements: I thank Professor Monika Nitsche for her guidance in this research. I also thank Professor Robert Krasny for suggesting the problem studied, and Dr. Robert Niemeyer for comments on the manuscript. Finally, I thank the National Science Foundation for supporting this research through the NSF-MCTP program, Mentoring through Critical Transition Points, at the University of New Mexico, via the NSF Award # DMS-1148801.

1 Introduction

As planes fly through air, vorticity is shed from their wings and is left behind in the wake of the plane. The shed vorticity concentrates in a layer, which rolls up into two trailing vortices that travel downward. The air forced down by these vortices is often referred to as downwash, with a downward force of opposite sign and proportional in magnitude to the upward lift force acting on the plane. The strong force caused by the trailing vortices of large planes can cause smaller airplanes flying behind them to crash. Preventing such effects on following airplanes is the primary reason airports limit times between takeoffs and landings. However, crashes still occur sometimes when an aircraft flies into another aircraft's path too soon. One example is the crash of a Piper Navajo that was flying behind an Air Canada Airbus 321 in Richmond, British Columbia, on July 9, 2009.[1] Therefore much effort has been made to find mechanisms that reduce the strength of the separated vorticity; for example, several studies recommend mitigating the trailing vortices' effects by redesigning the airfoils to create opposite-signed vorticity that speeds up the viscous decay of the lead vorticity. A review of the formation, motion, and persistence of trailing vortices relevant to air travel is also available in the literature.
In this paper we study the interaction of the trailing vorticity with obstacles in its path. Following earlier work, we model the 3-dimensional shed vorticity layer by a planar 2-dimensional vortex sheet. The vortex sheet model consists of replacing the vortex layer of finite thickness by a surface of zero thickness. The fluid is assumed to be inviscid, and is irrotational away from the surface. The velocity component tangential to the surface is discontinuous across it; the velocity jump across the sheet is the vortex sheet strength. Our initial conditions consist of the elliptically loaded vortex sheet, which induces flow past a flat plate. We compute the evolution of the vortex sheet in the presence of a flat plate in its path. The plate is, in turn, modeled as a sheet whose strength is such that the normal fluid velocity is zero on the plate wall. The free vortex sheet rolls up into a pair of vortices approximating the trailing vortices. We find that the trajectory of this vortex pair is deflected by the flat plate, and we study the amount of deflection as a function of the initial distance between the sheet and the plate, and the angle of the plate relative to the initial sheet. We also compare the vortex sheet results with those using a simple model in which the trailing vortex is approximated by two point vortices.

Computing vortex sheet motion has a long history dating back to early numerical works; reviews are available in the literature. These simulations are based on approximating the sheet by point vortices and evolving these points using an approximate set of governing ordinary differential equations. However, the early results did not converge under mesh refinement, and it was not until later work that some insight was gained into the causes of the irregular point vortex motion. One analysis considered a periodic vortex sheet with analytic initial data and showed that, because of the Kelvin-Helmholtz instability of the sheet, the sheet does not remain analytic at all times but develops a singularity in finite time. At that time the vortex sheet strength and the sheet curvature become unbounded at a point on the sheet. It was further shown that before that time, high-wavenumber oscillations introduced numerically due to roundoff error grow exponentially fast because of the Kelvin-Helmholtz instability of the sheet, leading to noisy results. A Fourier filter was introduced, in which all modes at the level of machine precision are truncated at each timestep in the simulation; this filter prevents the growth of artificially introduced large wavenumbers. Numerical simulations and analytical work show that before the time of singularity formation, the results computed with the filter converge as the spatial and time discretization are refined and the filter level is decreased. However, past the time of singularity formation the filtered computations do not converge. The approach taken to compute the motion past this time is to regularize the vortex sheet motion by introducing a smoothing parameter into the governing equation. In effect, the sheet is approximated by a finite number of regularized point vortices referred to as vortex blobs. The regularized periodic sheet studied in this way rolls up into a sequence of vortices, mimicking what is observed in laboratory experiments.

[1] http://www.cbc.ca/news/canada/british-columbia/story/2009/07/10/richmond-plane-crash.html
Comparison of vortex blob simulations with viscous simulations and with laboratory experiment shows that the regularized vortex sheet simulations approximate the viscous flow well. Here, we simulate the free vortex sheet modeling the wake of a plane using the regularized vortex blob approximation. The free sheet rolls up at its edges into a double spiral, forming two counter-rotating vortices. These vortices travel downstream in the direction of the plate, and move around the plate. We study the effect of the regularization parameter, the distance between the initial sheet and the plate, and the angle of inclination of the plate. We observed the following trends. As we reduce the vortex blob parameter, the vortex spiral develops more turns and travels slightly faster; however, the trajectories of the vortex centers, and of the two-point-vortex approximation, seem unchanged. On the other hand, modifying the distance between the initial vortex sheet and the plate changes the total distance travelled by the free sheet. If the sheet starts out close to the flat plate, it travels noticeably further than in the absence of a plate; for larger initial distances between the plate and the sheet, the sheet travels less far. Changing the inclination of the plate deflects the trajectory of the vortex pair. Here, interestingly, for small values of φ the trajectory is deflected in the direction of the orientation of the plate, leaving at a small angle from the direction of approach, normal to the plate. However, for large values, the vortex trajectory is deflected in the direction opposite to the orientation of the plate, leaving in the direction parallel to the plate. In all cases studied, the vortex pair approximation of the sheet behaved qualitatively similarly to the vortex sheet trajectory. Small quantitative differences between the sheet and the point vortex pair are observed in the case of flow past an inclined plate, with small changes in the angle of deflection of the vortex motion.

The paper is organized as follows. Section 2 describes the problem considered here. Section 3 presents the governing equations. Section 4 describes the discrete approximation by a system of ordinary differential equations and the numerical method to solve it. Section 5 presents the numerical results, which are summarized in Section 6.

2 Problem Description

2.1 Shear layer separation behind airfoil

When fluid moves past walls, fluid viscosity causes particles to stick to the wall. This creates large velocity gradients and thereby introduces nonzero rotation into the flow. The fluid rotation is measured by the vorticity. This is easiest to see using a simple example of planar two-dimensional flow. First, we introduce the variables describing the fluid flow. In Cartesian coordinates x = (x, y, z), the velocity field is given by

    u(x, t) = (u(x, t), v(x, t), w(x, t))   (1)

where u, v, w are the velocity components in the x-, y-, and z-directions respectively. The fluid rotation is measured by the vorticity

    ∇ × u = (∂w/∂y − ∂v/∂z, ∂u/∂z − ∂w/∂x, ∂v/∂x − ∂u/∂y).   (2)

Specifically, at a point in the flow domain at which the vorticity vector is nonzero, the fluid rotates in a plane normal to the vorticity with angular velocity equal to half of the vorticity magnitude. Planar two-dimensional flow refers to the case when there is no velocity component in the z-direction and no changes in the z-direction: w = 0 and ∂/∂z = 0.
In that case the velocity and vorticity reduce to

    u(x, t) = (u(x, y, t), v(x, y, t), 0),   (3a)
    ∇ × u = (0, 0, ω),   (3b)

where ω = ∂v/∂x − ∂u/∂y is the scalar vorticity. That is, the vorticity is a vector pointing in the z-direction. If ω > 0, fluid rotates in the xy-plane, normal to the vorticity, in the counterclockwise direction; if ω < 0, the rotation is clockwise. For planar flows, we will not list the third, zero component of the velocity field.

Now consider the simple example of planar flow parallel to a flat wall at y = 0. In the absence of viscosity, the uniformly parallel flow (U, 0), illustrated in figure 1(a), solves the governing Euler equations. On the other hand, in the presence of viscosity, the fluid velocity must vanish at the wall. Thus a transition region forms between the wall and the far-field velocity, in which the velocity decreases in magnitude from U to zero; see the region indicated in blue in figure 1(b). Within this boundary layer the velocity gradients, in particular ∂u/∂y in this case, are large, leading to large negative vorticity. The vorticity is carried downstream with the fluid velocity and can separate at corners or regions of large curvature. The flow within a separated layer of vorticity with large velocity gradients is referred to as a shear flow.

[Figure 1: Velocity profiles in parallel flow past a wall. (a) Inviscid flow. (b) Viscous flow with boundary layer of clockwise rotating vorticity (in blue).]

[Figure 2: Sketch showing shear layer separation and rollup behind an airfoil in oncoming flow that is parallel and uniform in the far field, with magnitude U.]

Figure 2 is an idealized schematic of the three-dimensional generation and separation of vorticity in flow past an airfoil. The vorticity, shown in blue, is generated around the wing and separates as a shear layer that rolls up into a spiral along each of the wing tips' flat edges. The vorticity concentrates within the spirals and forms the trailing vortices often observed behind flying planes, also referred to as contrails. The fluid velocity is large within the vortices, which, according to Bernoulli's law, leads to small air pressure in this region. As a result, water vapor in the air condenses, forming a mini-cloud, which is what makes the contrails behind the plane visible. The two trailing vortices induce a downward motion on each other, which is felt as downwash when the plane is near the ground. For reference below, we let x = (x, y, z) denote the coordinates of a point in the Cartesian coordinate system illustrated in figure 2, with the xy-plane parallel to the span of the airfoil, the z-axis normal to it, and the origin at the center of the trailing edge of the wing. We consider a reference frame in which the airfoil is stationary, so that the oncoming fluid flow is in the direction of the z-axis.

2.2 Free vortex sheet model for separated shear layer

Following earlier work, we now model the separated shear layer by an elliptically loaded planar vortex sheet. The vortex sheet approximates the shear layer by a surface across which the tangential velocity is discontinuous. The fluid is assumed to be inviscid, and the vorticity is zero away from the sheet. The initial sheet is chosen so as to approximate the idealized shear layer behind the airfoil, illustrated in figure 2 downstream of the wing.
That is, it is chosen to be flat, with the velocity jump across the sheet prescribed such that it induces flow past the sheet but with no fluid flowing through the sheet. The resulting vorticity distribution yields the sheet referred to as "elliptically loaded". The flow moves from the bottom to the top of the sheet, mimicking the flow from bottom to top around the sides of the airfoil. This initial sheet and the induced flow are sketched in figure 3. In figure 3(a), the flow is shown in a reference frame in which the velocity at the sheet vanishes, and the flow is upward and uniform with constant value U at infinity. In our computations we use the reference frame shown in figure 3(b), in which the velocity at infinity vanishes, and the initial velocity is downward and uniform on the sheet. It is obtained from the flow in figure 3(a) by adding a potential flow −U to it. The sheet is allowed to move freely under its self-induced motion. As will be seen, as time increases the sheet rolls up at its edges as it moves downward, corresponding to a cross-section of the idealized shear layer in figure 2 at z > 0.

[Figure 3: Sketch showing the initial vortex sheet (blue), mimicking the shear layer behind the plane at z = 0. The sheet induces flow past it, from below to above (black), with no fluid flowing through the sheet. (a) Reference frame fixed on the sheet. (b) Reference frame fixed at infinity.]

The vortex sheet is defined by its position

    x(α, t) = (x(α, t), y(α, t)),   (4)

and by the distribution of vorticity along the sheet. The vorticity distribution is described by the circulation function Γ(α), where

    Γ(α) = ∫_{∂D} u·T ds = ∫_D ∇ × u · n dA = ∫_D (0, 0, ω)·(0, 0, 1) dA = ∫_D ω dA,   (5)

and ∂D is a curve enclosing the sheet between x = 0 and x = x(α, t). As applied in equation (5), Stokes' theorem shows that Γ measures the integral amount of vorticity in this portion of the sheet. One can show that Γ is related to the jump in the tangential velocity component across the sheet by

    dΓ/ds = −(u⁺ − u⁻) = σ(s, t)   (6)

where s is arclength and u± are the limiting tangential velocities from above and below the sheet, respectively. The quantity σ(s, t) is referred to as the vortex sheet strength. If the points (x(α, t), y(α, t)) move with the average of the velocities above and below the sheet, then Γ(α) is independent of time. However, the sheet strength σ at a given point α does depend on time. The elliptically loaded, initially flat sheet illustrated in figure 3, non-dimensionalized to have unit half-length and unit circulation around half the sheet, is given by

    x(α, 0) = cos(α),   (7a)
    y(α, 0) = 0,   (7b)
    Γ(α) = sin(α),   (7c)

with α ∈ [0, π]. The corresponding non-dimensionalized initial downward sheet velocity is −U = (0, −1/4). Notice that this initial sheet has singularities near the endpoints x = ±1. Since

    Γ(s) = Γ(x) = √(1 − x²),   (8)

it follows that the velocity jump,

    dΓ/ds = dΓ/dx = (dΓ/dα)·(dα/dx) = −x/√(1 − x²),   (9)

becomes unbounded as x approaches 1.

[Figure 4: (a) Evolution of the initially flat vortex sheet (blue) under its self-induced velocity, in the absence of a plate, at the indicated times. (b) Position of the plate (red) relative to the initial vortex sheet, illustrating the vertical distance d and plate angle φ.]
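For concreteness, a short sketch of how the initial data (7a-c) might be discretized; the variable names are ours and this is an illustration of the setup, not the author's code.

    import numpy as np

    # Discretization of the initial data (7a-c).
    N = 999                                  # 1000 points, as in Section 4
    alpha = np.linspace(0.0, np.pi, N + 1)
    x0 = np.cos(alpha)                       # x(alpha, 0) = cos(alpha)
    y0 = np.zeros_like(alpha)                # y(alpha, 0) = 0
    Gamma = np.sin(alpha)                    # Gamma(alpha) = sin(alpha)

    # The velocity jump (9) grows without bound near the tips x = +/- 1:
    x_in = x0[1:-1]
    jump = -x_in / np.sqrt(1.0 - x_in**2)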
2.3 Bound Vortex Sheet Model for Plate

We consider the evolution of the initially flat vortex sheet under its self-induced velocity. In the absence of an obstacle in its path, earlier computations showed that the sheet rolls up at its edges as it moves downward, as shown in figure 4(a). In this paper, we consider how the evolution is altered if a flat, rigid plate is positioned in its path, as shown in figure 4(b). The plate is positioned at distance d below the initial vortex sheet, and is inclined at an angle φ from the horizontal. In our inviscid model, the plate modifies the total fluid velocity by adding a component to it that ensures that no flow passes through it. We are interested in the effect of the vertical distance d between the plate and the sheet, and of the inclination angle φ, on the evolution of the free vortex sheet. Herein, we change reference frame and place the initial free sheet at y = d, with the plate at y = 0, as shown in figure 4. The position of the plate is described by

    x_p(β) = (cos β cos φ, cos β sin φ − |sin φ|),   β ∈ [0, π].   (10)

The plate is modelled by a fixed vortex sheet in its place whose strength σ_p is such that the normal fluid velocity on the plate vanishes. As we will see, this results in a linear system that determines the discretized vortex strength on the plate.

3 Governing Equations

3.1 Euler Equations

Vortex sheet flow is governed by the Euler equations for inviscid, incompressible flow. These are a set of partial differential equations obtained from conservation of mass and momentum, under the assumption that there are no friction forces parallel to solid walls immersed in the fluid, and no viscous diffusion. The incompressible Euler equations for fluid in a domain D bounded by solid walls are given by

    Dρ/Dt = 0 in D,   (11a)
    ρ Du/Dt = −∇p in D,   (11b)
    ∇·u = 0 in D,   (11c)
    u·n = U_wall·n on ∂D,   (11d)

where ρ = ρ(x, t) is the fluid density, u = u(x, t) = (u(x, t), v(x, t)) is the fluid velocity, p = p(x, t) is the fluid pressure, and U_wall is the wall/boundary velocity (equal to 0 if the wall is stationary). Throughout, x = (x, y). The gradient operator is ∇ = (∂/∂x, ∂/∂y), and the material derivative is D/Dt = ∂/∂t + u·∇ = ∂/∂t + u ∂/∂x + v ∂/∂y. The boundary condition (11d) states that the fluid velocity is parallel to the walls, but the parallel component is not necessarily zero.

For homogeneous incompressible planar flow, for which ρ(x, t) = ρ₀ is constant throughout the fluid, the Euler equations imply that the scalar vorticity is constant on particles moving with the flow,

    Dω/Dt = 0.   (12)

Furthermore, we know from vector calculus that for incompressible planar flow there exists a streamfunction ψ(x, y, t) whose level curves are streamlines of the flow. It is determined uniquely, up to a constant, by the fluid velocity from

    ∂ψ/∂x = −v,   ∂ψ/∂y = u.   (13)

Conventionally, the constant is set such that ψ = 0 on the walls. Thus, the streamfunction is alternatively determined by the vorticity from

    ∇²ψ = −v_x + u_y = −ω in D,   ψ = 0 on ∂D.   (14)

In the infinite domain R², with vanishing velocity at infinity, the solution to the Poisson equation (14) is known to be

    ψ(x, t) = −(1/2π) ∫ ω(x′, t) ln|x − x′| dx′.   (15)

Given the vorticity in the fluid, the velocity is recovered from equations (13).
In the presence of solid bodies in the flow, the fluid velocity is given by

    u(x, t) = (1/2π) ∫ [(−(y − y′), x − x′) / ((x − x′)² + (y − y′)²)] ω(x′, t) dx′ + ∇Φ(x, t)   (16)

where ∇Φ is the potential flow that ensures that the total velocity is parallel to the solid walls on the boundary of the domain. Given the fluid vorticity, the fluid velocity is thus recovered from (16). The velocity in turn determines the vorticity evolution through equation (12). This is the main idea of the vortex method used in this paper, described in §4. The next section describes the specific form of (16) for the vortex sheet flow past a plate.

3.2 Vortex sheet flow past a plate

For a vortex sheet, the vorticity is a delta function on the sheet and the integral in equation (16) reduces to a line integral over the sheet. For circulation distribution Γ(α), the resulting velocity induced by the sheet vorticity at a point x(α, t) = (x(α, t), y(α, t)) on the sheet is

    u(x(α, t), t) = (1/2π) ∫₀^π [(−(y − y′), x − x′) / ((x − x′)² + (y − y′)² + δ²)] (dΓ/dα)(α′) dα′ + ∇Φ(x(α, t), t),   (17)

where (x, y) = (x(α, t), y(α, t)) and (x′, y′) = (x(α′, t), y(α′, t)). Here, the parameter δ is introduced in the denominator to regularize the motion; this is necessary since otherwise the equations yield irregular particle motion. In our case, x(α, t) and Γ(α) represent the free vortex sheet simulating the separated shear layer behind the plane, initially given by equation (7c), and ∇Φ(x(α, t)) is the potential flow that vanishes at infinity and cancels the normal fluid velocity on the plate. It is induced by a second vortex sheet with position x_p(β) bound to the plate, whose vorticity distribution, given by the strength σ_p = dΓ_p/ds, is such that the normal fluid velocity on the plate is cancelled. The resulting potential flow induced by the bound sheet at a point x away from the plate is given by

    ∇Φ(x, t) = (1/2π) ∫₀^π [(−(y − y′_p), x − x′_p) / ((x − x′_p)² + (y − y′_p)²)] (dΓ_p/dβ)(β′, t) dβ′   (18)

where (x′_p, y′_p) = (x_p(β′), y_p(β′)). The position x_p(β) is given by equation (10). The circulation Γ_p(β, t) is determined from the equation

    u(x_p(β), t) · n_p = 0,   where n_p = (−sin φ, cos φ),   (19)

and φ is the plate angle shown in figure 4(b). Here, u(x_p(β), t) is the total velocity given by equations (17, 18) at a point on the plate.

Note that the line integral in (18) has not been regularized by δ. This is necessary for the following reason. As will be seen in the next section, upon discretizing, equation (19) determines a linear system for the discrete vortex sheet strength. This system is solved at each time step to determine Γ_p(β, t). The system is invertible only if the integral over the bound sheet in equation (18) is not regularized. As a result, in order to evaluate the velocity at points on the plate, the integral in (18) must be considered in the principal value sense. The Plemelj equations show that the principal value integral equals the average of the limiting velocities above and below the sheet. At a point away from the plate, the integral in (18) is proper and no principal value needs to be taken.
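A sketch of the regularized induced velocity (17) in discrete form, anticipating the sums of Section 4: each target point receives the δ-smoothed Biot-Savart contribution of every blob. The function and variable names are ours; passing delta = 0 recovers the unregularized kernel used for the bound sheet in (18).

    import numpy as np

    def blob_velocity(xt, yt, xs, ys, dG, delta):
        # Velocity induced at target points (xt, yt) by vortex blobs at
        # (xs, ys) with circulations dG, using the delta-regularized
        # kernel of (17).
        dx = xt[:, None] - xs[None, :]
        dy = yt[:, None] - ys[None, :]
        r2 = dx**2 + dy**2 + delta**2
        u = (-dy / r2) @ dG / (2.0 * np.pi)
        v = ( dx / r2) @ dG / (2.0 * np.pi)
        return u, v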
In summary, the evolution of the free vortex sheet x(α, t) in the presence of the plate is given by

    dx/dt(α, t) = (1/2π) ∫₀^π [(−(y − y′), x − x′) / ((x − x′)² + (y − y′)² + δ²)] (dΓ/dα)(α′) dα′
                + (1/2π) ∫₀^π [(−(y − y′_p), x − x′_p) / ((x − x′_p)² + (y − y′_p)²)] (dΓ_p/dβ)(β′) dβ′   (20)

with initial conditions x(α, 0) = (x(α, 0), y(α, 0)) and Γ(α) given by equation (7c). Notice that the free sheet x(α, t) is always at some distance from the plate, so the second integral in (20) is not of principal value type. The solution of this system of two ordinary differential equations is the one we are interested in here.

4 Numerical Method

4.1 Discretization

The free vortex sheet is approximated by a set of N + 1 regularized point vortices with positions x_j(t) and circulations ΔΓ_j, j = 0, …, N. Their initial positions are given by x_j(0) = x(α_j, 0), α_j = jπ/N. Their circulations are given by

    ΔΓ_j = (dΓ/dα)(α_j) Δα_j

where Δα_j = (α_{j+1} − α_{j−1})/2 for j = 1, …, N − 1, and Δα_0 = α_1 − α_0, Δα_N = α_N − α_{N−1}. These are the trapezoid-rule weights of the discretization of (20) given below. The bound vortex sheet is approximated by a set of N_p + 1 point vortices with positions x_{p,j} = x_p(β_j), β_j = jπ/N_p, and circulations ΔΓ_{p,j}, j = 0, …, N_p. These circulations are determined at each time step, as explained shortly. With this discretization, the regularized point vortices representing the free vortex sheet are evolved using an approximation of equation (20), obtained using the trapezoid rule,

    dx_j/dt = (1/2π) Σ_{k=0}^{N} [(−(y_j − y_k), x_j − x_k) / ((x_j − x_k)² + (y_j − y_k)² + δ²)] ΔΓ_k
            + (1/2π) Σ_{k=0}^{N_p} [(−(y_j − y_{p,k}), x_j − x_{p,k}) / ((x_j − x_{p,k})² + (y_j − y_{p,k})²)] ΔΓ_{p,k},   (21)

j = 0, …, N. The plate circulations ΔΓ_{p,k} are obtained at each timestep by enforcing the discretized version of equation (19) at the N_p midpoints on the plate,

    x^m_{p,j} = x_p(β^m_j),   β^m_j = (β_{j−1} + β_j)/2,   j = 1, …, N_p.   (22)

The system of discretized equations u(x^m_{p,j})·n_p = 0 is given by the following N_p linear equations in the N_p + 1 unknowns ΔΓ_{p,j}:

    Σ_{k=0}^{N_p} [sin φ (y^m_{p,j} − y_{p,k}) + cos φ (x^m_{p,j} − x_{p,k})] / [(x^m_{p,j} − x_{p,k})² + (y^m_{p,j} − y_{p,k})²] ΔΓ_{p,k}
    = − Σ_{k=0}^{N} [sin φ (y^m_{p,j} − y_k) + cos φ (x^m_{p,j} − x_k)] / [(x^m_{p,j} − x_k)² + (y^m_{p,j} − y_k)² + δ²] ΔΓ_k.   (23)

In order to uniquely solve for the N_p + 1 unknowns, this system is augmented by enforcing that the total circulation in the flow be zero,

    Σ_{k=0}^{N_p} ΔΓ_{p,k} = 0.   (24)

The linear system determining the bound vortex sheet circulations can be written as

    A · ΔΓ_p = b,   (25)

where, for j = 1, …, N_p and k = 0, …, N_p, the first N_p rows of A contain the influence coefficients

    A_{jk} = [sin φ (y^m_{p,j} − y_{p,k}) + cos φ (x^m_{p,j} − x_{p,k})] / [(x^m_{p,j} − x_{p,k})² + (y^m_{p,j} − y_{p,k})²],   (26)

the last row of A consists entirely of ones (enforcing (24)), the first N_p entries of b are the right-hand sides of (23), and the last entry of b is zero. Note that the matrix A depends only on the bound vortex sheet position, which does not change in time. This system is solved at each timestep to obtain updated values of ΔΓ_{p,k}.
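In outline, the assembly of (25)-(26) might look as follows; the names are ours, and the free-sheet influence is assumed to be passed in precomputed. Since A depends only on the fixed plate geometry, in practice one could also factor it once and reuse the factorization at every stage of every timestep.

    import numpy as np

    def solve_plate_strengths(xp, yp, xm, ym, phi, u_free_n):
        # Assemble and solve (25)-(26): Np no-through-flow conditions at
        # the plate midpoints (xm, ym) plus the zero-total-circulation
        # row (24). u_free_n[j] holds the free-sheet sum on the right of
        # (23); the common 1/(2*pi) factor cancels from both sides.
        n = len(xp)                      # n = Np + 1 bound vortices
        A = np.ones((n, n))              # last row of ones enforces (24)
        b = np.zeros(n)
        for j in range(n - 1):
            dx = xm[j] - xp
            dy = ym[j] - yp
            r2 = dx**2 + dy**2           # deliberately unregularized
            A[j, :] = (np.sin(phi) * dy + np.cos(phi) * dx) / r2
            b[j] = -u_free_n[j]
        return np.linalg.solve(A, b)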
4.2 Time Steps

The system of equations (21) is solved using the 4th-order Runge-Kutta method to step forward in time,

    k₁ = Δt ũ(x̃(t), t)
    k₂ = Δt ũ(x̃(t) + k₁/2, t + Δt/2)
    k₃ = Δt ũ(x̃(t) + k₂/2, t + Δt/2)
    k₄ = Δt ũ(x̃(t) + k₃, t + Δt)
    x̃(t + Δt) = x̃(t) + (k₁ + 2k₂ + 2k₃ + k₄)/6   (27)

where the tildes refer to the discrete approximations of x and u(x, t): x̃ = (x₁(t), …, x_N(t)), ũ = dx̃/dt. Note that the bound vortex sheet circulations ΔΓ_{p,k} are updated in each of the four Runge-Kutta stages. In all of our simulations, we set the time step to Δt = 0.05. If we set it larger than this, the sheet starts twisting and deforming in odd places; if we set it smaller than this, there is no discernible improvement in simulation quality. In all our simulations, we approximated the bound vortex sheet by 600 points (N_p = 599), and the free sheet initially by 1000 points (N = 999). As the free sheet evolves and stretches, new points are inserted to maintain resolution. This step is described next.
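A generic sketch of the Runge-Kutta step (27); the velocity callback stands for the right-hand side of (21) and is our own abstraction. It is expected to re-solve the plate system (25) internally, so that the bound circulations are refreshed in each of the four stages, as the text requires.

    import numpy as np

    def rk4_step(z, t, dt, velocity):
        # One step of (27); `z` packs the free-sheet coordinates.
        k1 = dt * velocity(z, t)
        k2 = dt * velocity(z + 0.5 * k1, t + 0.5 * dt)
        k3 = dt * velocity(z + 0.5 * k2, t + 0.5 * dt)
        k4 = dt * velocity(z + k3, t + dt)
        return z + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0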
We thus insert a point if the angle two consequtive points xk−1 , xk make with the spiral center xI is bigger than π/30. This angle is computed using the Law of Cosines A2 B2 C2 cos θ = ||xk−1 − xI ||2 = ||xk − xI ||2 = ||xk − xk−1 ||2 2 2 −C 2 ; = A +B 2AB (29) Each of the two spirals is considered separately. (3) Points are not inserted outside of the spiral roll-up, past points on the sheet at which the curvature changes sharply. Past those points, point insertion proved to be too inaccurate. Thus, in our simulations the spiral roll-up is resolved, but the vortex sheet in the far field is not fully resolved. This aspect of our computation could be improved, but we will leave this for future work. The point at which the curvature κ changes too fast was determined by trial and error to be the first point at which dκ/dα ≥ 100000. Taking the variable s to be arc length, we approximate the curvature 2 d x κ = 2 ds RHIT Undergrad. Math. J., Vol. 15, No. 1 Page 121 as follows, using formulas for finite-difference derivative approximations: ∆s1 = ||xk − xk−1 || , ∆x1 = xk − xk−1 , ∆y1 = yk − yk−1 , ∆s2 = ||x pk+1 − xk || , ∆x2 = xk+1 − xk , ∆y2 = yk+1 − yk , (∆s1 ∆x2 − ∆s2 ∆x1 )2 + (∆s1 ∆y2 − ∆s2 ∆y1 )2 κk = 2 ∆s1 ∆s2 (∆s1 + ∆s2 ) The rate of change of curvature at a point xk is approximated by dκ κk+1 − κk−1 . = dα k αk+1 − αk−1 4.4 (30) (31) Two Vortex Approximation of Shear Layer We compare results of the vortex sheet model with an even simpler model in which the vortex sheet is replaced at t = 0 by two point vortices with circulation and center of mass equal to each half of the vortex sheet, PN PN/2 x ∆Γ k=N/2 xk ∆Γk k k=0 k , Γl = 1 , Γr = −1 . (32) xl = P , xr = PN N/2 ∆Γ ∆Γ k k k=N/2 k=0 These two points are evolved in the flow equation according to the point vortex equations Np dxl 1 −(yl − yr ), (xl − xr ) 1 X −(yl − yp,k ), (xl − xp,k ) = Γr + ∆Γp,k , dt 2π (xl − xr )2 + (yl − yr )2 2π k=0 (xl − xp,k )2 + (yl − yp,k )2 Np dxr 1 −(yr − yl ), (xr − xl ) 1 X −(yr − yp,k ), (xr − xp,k ) = Γl + ∆Γp,k . dt 2π (xr − xl )2 + (yr − yl )2 2π k=0 (xr − xp,k )2 + (yr − yp,k )2 5 Results This section presents the evolution of the vortex sheet and the two-point-vortex approximation computed as described above. 5.1 Vortex sheet evolution around plate, with d = 2, φ = 0, δ = 0.2 Figure 5 compares the vortex sheet position at the indicated times, in the absence of a plate (left column) with the evolution in the presence of a plate at d = 2, with φ = 0 (right column). In both cases, the computations are performed with δ = 0.2. The left column reproduces results in . The sheet rolls up into a spiral around each of its edges as it travels downward. In the right column, a plate is positioned one sheet length below the initial sheet position. As the sheet rolls up and moves downward, it approaches the plate, and wraps around the plate around t = 20. Afterwards, the sheet continues its RHIT Undergrad. Math. J., Vol. 15, No. 1 Page 122 2 t=0 0 y y 0 −2 −4 −6 −5 0 x −6 5 y y 5 t = 20 0 −2 −4 −4 −5 0 x 2 −6 5 −5 0 5 x 2 t = 40 0 t = 40 0 y y 0 2 t = 20 −2 −2 −4 −2 −4 −5 0 x 2 5 −6 0 2 t = 60 5 t = 60 y 0 −2 −4 −6 −5 x 0 y −5 x 0 −6 −2 −4 2 −6 t=0 2 −2 −4 −5 0 x 5 −6 −5 0 5 x Figure 5: Comparison of vortex sheet evolution at the indicated times, in the absence of a plate (left column), with evolution in the presence of a plate at distance d = 2, with φ = 0 (right column). Computations are performed with δ = 0.2. RHIT Undergrad. Math. J., Vol. 15, No. 
4.4 Two Vortex Approximation of Shear Layer

We compare results of the vortex sheet model with an even simpler model, in which the vortex sheet is replaced at t = 0 by two point vortices whose circulation and center of mass equal those of each half of the vortex sheet,

    x_l = Σ_{k=0}^{N/2} x_k ΔΓ_k / Σ_{k=0}^{N/2} ΔΓ_k,   x_r = Σ_{k=N/2}^{N} x_k ΔΓ_k / Σ_{k=N/2}^{N} ΔΓ_k,   Γ_l = 1,   Γ_r = −1.   (32)

These two points are evolved in the flow according to the point vortex equations

    dx_l/dt = (Γ_r/2π) (−(y_l − y_r), x_l − x_r) / ((x_l − x_r)² + (y_l − y_r)²) + (1/2π) Σ_{k=0}^{N_p} [(−(y_l − y_{p,k}), x_l − x_{p,k}) / ((x_l − x_{p,k})² + (y_l − y_{p,k})²)] ΔΓ_{p,k},
    dx_r/dt = (Γ_l/2π) (−(y_r − y_l), x_r − x_l) / ((x_r − x_l)² + (y_r − y_l)²) + (1/2π) Σ_{k=0}^{N_p} [(−(y_r − y_{p,k}), x_r − x_{p,k}) / ((x_r − x_{p,k})² + (y_r − y_{p,k})²)] ΔΓ_{p,k}.

5 Results

This section presents the evolution of the vortex sheet and the two-point-vortex approximation computed as described above.

5.1 Vortex sheet evolution around plate, with d = 2, φ = 0, δ = 0.2

Figure 5 compares the vortex sheet position at the indicated times in the absence of a plate (left column) with the evolution in the presence of a plate at d = 2, with φ = 0 (right column). In both cases, the computations are performed with δ = 0.2. The left column reproduces earlier results: the sheet rolls up into a spiral around each of its edges as it travels downward. In the right column, a plate is positioned one sheet length below the initial sheet position. As the sheet rolls up and moves downward, it approaches the plate and wraps around it around t = 20. Afterwards, the sheet continues its roll-up and downward motion. However, comparison with the left column shows that in the presence of the plate the sheet has been slowed down; as a result it has not traveled as far at t = 60 as in the case of no plate.

[Figure 5: Comparison of vortex sheet evolution at the indicated times, in the absence of a plate (left column), with evolution in the presence of a plate at distance d = 2, with φ = 0 (right column). Computations are performed with δ = 0.2.]

We note that in the right column the sheet is shown in blue and green colors. The blue portion of the sheet is the one that is well resolved by the point insertion algorithm. The green portion is the one that is underresolved due to regions of high curvature that form as the sheet approaches the plate. This green portion does not correspond exactly to the regions of high curvature exempted from interpolation by our criterion, but is edited for visual clarity. These high-curvature regions make it difficult to resolve the flow better.

[Figure 6: Trajectory of vortex core from t = 0 to t = 60, with d = 2, δ = 0.2, φ = 0. (a) Large scale showing vortex movement around plate. (b) Closeup showing the endpoint of the sheet (green), the position of the inflection point (blue), and the position of the point vortex pair approximation (purple).]

Figure 6 shows the trajectory of the two spiral vortex centers. For comparison, the positions of the endpoints of the sheet (green) and of the inflection point near the endpoint (blue) are shown; the position of the two-point-vortex approximation of the vortex sheet is shown in purple. Figure 6(a) also plots the position of the midpoint of the sheet, corresponding to parameter α = π/2. Figure 6(a) shows that both spiral centers remain symmetric about the middle line. The centerpoint of the discretized vortex sheet travels straight downward after closely moving around the plate. The spiral centers move down and around the plate. The closeup in figure 6(b) shows that the three trajectories approximating the spiral centers follow each other closely. The endpoints and the inflection point oscillate slightly as they travel downstream, while the path of the two-point-vortex approximation is non-oscillatory. The two-point-vortex approximation travels slightly farther than the vortex sheet, but otherwise it models the vortex sheet trajectory remarkably well. The following sections discuss the dependence of the solution on the three parameters δ, d, and φ.

5.2 Dependence on δ

Figure 7 shows the dependence of the solution on δ, comparing results with δ = 0.4, 0.2 and 0.1. The left column shows a closeup of the left spiral center at the final time t = 60. As δ decreases, the number of spiral turns increases significantly. With the smallest value δ = 0.1, the number of spiral turns is so large that we could not fully resolve it within our available computing time, which is in part the reason for the irregularities that can be observed near the spiral center. The position of the center depends somewhat on δ: while the x-coordinate x_c of the center remains almost unchanged as δ decreases, the y-coordinate y_c at the final time is slightly more negative for smaller δ. For example, for δ = 0.1, y_c ≈ −3.5, while for δ = 0.4, y_c ≈ −3.2. The left column in figure 7 clearly shows the behaviour characteristic of the vortex sheet roll-up near the center: the spiral ends in the form of a small hook, formed by a change in concavity of the roll-up close to the end of the sheet. The resulting inflection point is the point x_I referred to in figure 6 and in the discussion of the point insertion method in §4.3.
The right column in figure 7 shows the trajectory of the vortex center, approximated by the vortex sheet endpoints and the inflection point, as well as the middle point on the sheet with α = π/2, and the two-point-vortex approximation. A close inspection of this figure shows that as δ decreases, the vortex center oscillates with a smaller amplitude and higher frequency. Again, the spiral center is well approximated by the two-point-vortex approximation. The midpoint veers to the left of the straight downward trajectory for the smallest value of δ. This is most likely caused by a loss in resolution.

5.3 Dependence on d

Next, we vary the distance d between the plate and the initial vortex sheet. Figure 8 plots the solution computed with d = 1, 2, 4, with δ = 0.2, φ = 0. The top row shows the vortex sheet position at the last time computed, t = 60. The bottom row shows the spiral center trajectory. The top row shows that at t = 60 with d = 4, the vortex sheet has just moved past the plate. Comparison of the sheet positions at t = 60 clearly shows that as d increases, the vortex sheet travels less far. That is, the plate slows the sheet down, more so the further it is from the sheet's initial position. This is also evident from the trajectories shown in the bottom row. As d increases, the total distance travelled by the spiral centers, as well as by the two-point approximation, is smaller. It is interesting to note that with the smallest value of d shown, d = 1, the total distance travelled is actually larger than in the absence of a plate, shown in figure 5, left column. That is, if the plate is close to the initial sheet, it speeds up the sheet's downward motion. As the plate is moved away from the initial sheet, it slows the motion down.

Figure 7: Solution for δ = 0.4, 0.2, 0.1, as indicated, with d = 2, φ = 0. Closeup of vortex sheet at t = 60 (left), and core trajectories (right). [Only the caption is recoverable from the plot data.]

Figure 8: Solution for d = 1, 2, 4, as indicated, where d is the distance of the plate from the initial sheet position, for δ = 0.2, φ = 0. The position of the sheet at t = 60 (top) and the vortex core trajectories (bottom) are shown. [Only the caption is recoverable from the plot data.]

5.4 Dependence on φ

Figure 9 shows the results when the plate is inclined away from the horizontal by an angle φ, where we consider φ = π/12, π/6, π/4, π/3, with δ = 0.2, d = 2. The sheet position at t = 60 is shown on the left; the center trajectories (blue, green) and the two-point-vortex approximation (red) are shown on the right. The top row shows the smallest inclination angle, φ = π/12. This case is thus closest to the case φ = 0 considered previously. Remember that for φ = 0, the vortices approach the plate at an angle normal to the plate and leave the plate on its other side also at an angle normal to it.
For φ = π/12 the situation is similar. The vortices approach the plate at an angle close to normal, and after encountering the plate they leave it practically normal to it. Thus the plate deflects the trajectory by π/12. As φ increases, the situation changes dramatically. In the top middle row, with φ = π/6, the vortices leave the plate not normal to it, but close to parallel to it. Thus, for this value of φ, the plate deflects the vortices in the opposite direction! With the second largest angle, φ = π/4, shown in the bottom middle row, this behaviour is even more evident: after the vortices encounter the plate, they leave it parallel to it, instead of normal to it. For the largest angle, φ = π/3, shown in the bottom row, the vortices first run parallel to the plate while the plate is between them. Then they veer away in a straight line at an even further angle from the horizontal. Thus, encountering a plate at an angle deflects the trajectory of the vortices. If the plate is inclined little from the horizontal, the trajectory is deflected little from the vertical, and remains close to normal to the plate. If the plate is inclined far from the horizontal, the vortex is deflected in the opposite direction and leaves the plate almost parallel to it, instead of normal to it. The observed behaviour is qualitatively similar in the two-point vortex approximation. However, for large angles φ, the two-point vortex approximation is deflected further than the vortex sheet.

6 Summary

The evolution of an elliptically loaded vortex sheet in the presence of a plate in its path, of the same size as the sheet, is computed using a vortex method. The sheet rolls up into a spiral at each of its edges, forming a vortex pair that travels in a linear trajectory towards the plate. The two vortices then move around the plate and continue to follow a linear trajectory as they leave on the other side of the plate. When the vortex sheet hits the plate, the sheet develops regions of high curvature and becomes difficult to resolve in those areas. The vortex sheet motion is regularized numerically by introducing a parameter δ into the governing equations. We studied the dependence of the solution on the parameter δ, on the distance d of the plate from the initial sheet position, and on the inclination angle φ of the plate relative to the oncoming vortex sheet trajectory. The following trends are observed.

Figure 9: Solution for φ = π/12, π/6, π/4, π/3, as indicated, for d = 2, δ = 0.2. The position of the sheet at t = 60 (top) and the trajectories of the vortex core (blue/green) as well as the point vortex pair approximation (pink) are shown. [Only the caption is recoverable from the plot data.]

Figure 10: Graphical comparison of angles φ and Θ, as indicated, for d = 2, δ = 0.2. Values of Θ are measured clockwise from horizontally downward. [Only the caption is recoverable from the plot data.]

As δ decreases, the vortex sheet develops more spiral turns, forming a more tightly wound spiral. The vortex pair also travels slightly faster for smaller δ.
As we increase the distance d of the plate from the initial sheet, the vortex sheet meets the plate at correspondingly later times. For small values of d, the sheet speeds up and travels faster than in the absence of any plate. As d increases, the total distance travelled decreases slightly. Besides these effects, the vortex sheet trajectory is largely unaffected by d.

The above results are obtained with the plate normal to the oncoming vortex sheet trajectory, corresponding to inclination angle φ = 0. The inclination angle affects the direction of propagation of the vortex pair after it encounters the plate. For small values of φ, the vortex pair leaves the plate normal to it, in the direction of inclination of the plate, and thus slightly displaced from the oncoming direction. However, for large values of φ, the vortex pair leaves the plate almost parallel to the plate, in the direction opposite to the direction of inclination of the plate. Therefore there seems to be a bifurcation in the direction of travel of the vortices as a function of φ. We estimate this bifurcation by a cursory examination of the vortex deflection angle Θ. Figure 10 shows that for φ = 0 and φ = π, Θ = 0; we know this beforehand. For small values of φ, Θ is deflected weakly counterclockwise. For larger values of φ < π, the angle Θ is deflected strongly clockwise.

We compared the vortex sheet trajectory with the trajectory of a two-point-vortex approximation of the sheet, with equal initial circulation and centroid in each half of the symmetry plane. We found that in all cases the two-point-vortex motion and the vortex sheet center trajectory were in very close agreement, with small angular discrepancies for φ > 0.

References

C. R. Anderson, 1986. A method of local corrections for computing the velocity field due to a distribution of vortex blobs. J. Comput. Phys., 62, 111–123.
G. R. Baker, 1979. The "Cloud in Cell" technique applied to the roll up of vortex sheets. J. Comput. Phys., 31, 76–95.
G. R. Baker, 1980. A test of the method of Fink & Soh for following vortex sheet motion. J. Fluid Mech., 10, 209–220.
R. E. Caflisch, N. Ercolani, T. Y. Hou, & Y. Landis, 1993. Multi-valued solutions and branch point singularities for nonlinear hyperbolic or elliptic systems. Comm. Pure Appl. Math., 46(4), 453–499.
A. J. Chorin & P. S. Bernard, 1973. Discretization of a vortex sheet, with an example of roll-up. J. Comput. Phys., 13, 423–429.
A. J. Chorin & J. E. Marsden, 1979. A Mathematical Introduction to Fluid Mechanics. New York: Springer-Verlag.
P. T. Fink & W. K. Soh, 1978. A new approach to roll-up calculations of vortex sheets. Proc. R. Soc. Lond. A, 362, 195–209.
J. J. L. Higdon & C. Pozrikidis, 1985. The self-induced motion of vortex sheets. J. Fluid Mech., 150, 203–231.
R. Krasny, 1986. A study of singularity formation in a vortex sheet by the point-vortex approximation. J. Fluid Mech., 167, 65–93.
R. Krasny, 1986. Desingularization of periodic vortex sheet roll-up. J. Comput. Phys., 65, 292–313.
R. Krasny, 1987. Computation of vortex sheet roll-up in the Trefftz plane. J. Fluid Mech., 184, 123–155.
A. Leonard, 1980. Review: vortex methods for flow simulation. J. Comput. Phys., 37, 289–335.
D. W. Moore, 1979. The spontaneous appearance of a singularity in the shape of an evolving vortex sheet. Proc. R. Soc. Lond. A, 365, 105–119.
N. I.
Muskhelishvili, 1953. Singular Integral Equations: Boundary Problems of Function Theory and Their Application to Mathematical Physics. Translated by J. R. M. Radok. P. Noordhoff N. V., Groningen, Holland; reprinted by Dover in 1992.
M. Nitsche & R. Krasny, 1994. A numerical study of vortex ring formation at the edge of a circular tube. J. Fluid Mech., 276, 139–161.
S. C. Rennich & S. K. Lele, 1999. Method for accelerating the destruction of aircraft wake vortices. J. Aircraft, 36(2), 398–404.
P. G. Saffman & G. R. Baker, 1979. Vortex interactions. Ann. Rev. Fluid Mech., 11, 95–121.
T. Sarpkaya, 1989. Computational methods with vortices – the 1988 Freeman Scholar Lecture. J. Fluids Eng., 111, 5–52.
M. J. Shelley, 1992. A study of singularity formation in vortex-sheet motion by a spectrally accurate vortex method. J. Fluid Mech., 244, 493–526.
J. X. Sheng, A. Ysasi, D. Kolomenskiy, E. Kanso, M. Nitsche, & K. Schneider, 2012. Simulating vortex wakes of flapping plates. IMA Volume on Natural Locomotion in Fluids and on Surfaces: Swimming, Flying, and Sliding.
P. R. Spalart, 1998. Airplane trailing vortices. Ann. Rev. Fluid Mech., 30, 107–138.
G. Tryggvason, W. J. A. Dahm & K. Sbeih, 1991. Fine structure of vortex sheet rollup by viscous and inviscid simulation. J. Fluids Eng., 113, 31–36.
Principles of Finance (FIN 3403) Class Notes, University of South Florida

Principles of Finance Review Sheet: Exam 3

The exam will include multiple choice questions and problems. If you have worked and understand the end-of-chapter problems that were assigned, you should be able to work the problems on the exam. You should understand the topical areas given in the following list; the concept questions will be primarily based on these topics.

Cost of Capital
- Understand the concept of the weighted average cost of capital (WACC): the definition, the computation, and the use. Remember that the WACC is simply the average cost of all the funds used to finance the firm's assets, based on the proportion of each type of funds used; thus it is the minimum rate of return the firm needs to earn when investing those funds. (A short computational sketch follows this exam's topic list.)
- Be able to compute each of the component costs of capital. Why is the cost of debt adjusted for taxes, whereas the costs of equity (whether preferred stock or common equity) are not? Understand why there is a cost associated with retained earnings. Why is the cost of retained earnings always less than the cost of issuing new external equity?
- Understand what makes the WACC change; that is, understand what break points are. Be able to compute break points. What is one break point that a firm always faces? (Think about retained earnings.)
- How is the WACC used to make capital budgeting decisions?

Capital Structure
- What do we mean when we say that a firm's capital structure contains 40 percent debt? How much equity must there be? What does it mean to operate at the optimal capital structure?
- How do such factors as business risk, financial flexibility and risk, tax position, and managerial attitude affect the capital structure of a firm?
- Understand how a financial manager tries to determine the optimal capital structure for his or her firm. For example, how can EBIT/EPS analysis and EPS indifference analysis help the financial manager with such analyses?
- In reality, is there an optimal capital structure? If a firm currently is financed with equity only (an all-equity firm), how would you expect its cost of capital to change as its capital structure is changed to include greater and greater proportions of debt? Adding some debt to a firm's capital structure is beneficial (that is, WACC decreases) due to its tax deductibility. But when does adding more debt become detrimental (that is, WACC increases)?
- How can the concept of leverage be used to help a financial manager make decisions about the capital structure of the firm? Be able to compute the degree of operating leverage, the degree of financial leverage, and the degree of total leverage, and understand what the resulting numbers mean.
- According to the trade-off theory of capital structure, what is the best way to finance a firm (optimal capital structure)? Why? Does the signaling theory propose that the same capital structure would be optimal? Why?
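As a concrete illustration of the WACC computation described above, here is a minimal sketch; the weights, component costs, and tax rate below are made-up numbers, not values from the course materials.

```python
def wacc(w_debt, r_debt, tax_rate, w_pref, r_pref, w_common, r_common):
    """Weighted average cost of capital: after-tax cost of debt plus the
    costs of preferred and common equity, weighted by capital proportions."""
    assert abs(w_debt + w_pref + w_common - 1.0) < 1e-9
    return w_debt * r_debt * (1 - tax_rate) + w_pref * r_pref + w_common * r_common

# Hypothetical example: 40% debt at 8%, 10% preferred at 9%, 50% common at 13%,
# 40% tax rate. Only the debt component is tax-adjusted, since interest is
# tax-deductible while dividends are not.
print(wacc(0.40, 0.08, 0.40, 0.10, 0.09, 0.50, 0.13))  # -> 0.0932 (9.32%)
```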
- Why do the capital structures of firms vary so much among firms in different industries and in different countries?
- Can decisions about the capital structure of a firm affect its capital budgeting decisions? How?

Dividend Policy
- What does it mean to have an optimal dividend policy?
- Understand how the concepts of information content (signaling), the clientele effect, and the free cash flow hypothesis affect dividend policy decisions.
- What dividend policies are followed in practice? All else equal, which dividend policy probably results in the highest value for a firm's stock if managers and investors have the same (symmetrical) information? Which policy probably results in the highest value if managers and investors have asymmetrical information? Why?
- What are the important dates associated with paying dividends? Why are the holder-of-record date and the ex-dividend date important to stockholders?
- How do such factors as restrictions in debt agreements, capital budgeting opportunities, availability of alternative sources of financing, and concern for the firm's cost of capital affect dividend policy decisions?
- What are stock splits and stock dividends, and what economic impact do they have? How do stock splits and stock dividends affect a firm's balance sheet?

Financial Planning and Control
- Understand the general process that must be followed to construct pro forma financial statements. What does AFN mean, and why is it important in the construction of pro formas? Why does the process of constructing pro formas have to be repetitive? What are some factors that affect or might complicate the process?
- Understand the concepts of operating breakeven and financial breakeven. Why is it important to conduct breakeven analyses? Be able to compute breakeven points.
- Understand the concept of leverage. What is operating leverage? Financial leverage? What information does the degree of leverage (whether operating, financial, or total) provide? Be able to compute the degree of operating leverage, the degree of financial leverage, and the degree of total leverage.
- Why is the U.S. tax system considered to be a progressive tax system?

Principles of Finance Review Sheet: Exam 2

The exam will include multiple choice questions and problems. If you have worked and understand the end-of-chapter problems that were assigned, you should be able to work the problems on the exam. You should understand the topical areas given in the following list; the concept questions will be primarily based on these topics.

Valuation Concepts
- Understand the basic concept of valuation. In simple terms, the value of any asset is the present value of the future cash flows the asset is expected to generate.
- Be able to compute the value and the yield to maturity of a bond.
- Understand what the yield to maturity (YTM) of a bond represents.
- Understand the relationship between the YTM, the coupon rate, and the market value of a bond. For example, when the market rate (YTM) is greater than the coupon rate, a bond sells at a discount; that is, its price is less than its par value. Why? What part of the YTM is attributed to the current yield, and what part is attributed to capital gains?
- Understand how bond prices change over time even if market interest rates remain constant. We know that the value of a bond must equal its face value at maturity (assuming no bankruptcy), so the value of a bond must move from its current price to its maturity value as time passes, all else equal.
- Be able to compute the value of a stock when there is (1) no growth, (2) constant growth, and (3) nonconstant growth.
- Understand the components that make up the required return earned on a stock: that is, dividend yield
and capital gains.
- What is the dividend discount model? What are some other models/techniques that investors use to value stock?
- Know the different features/characteristics of stocks and bonds.

Risk and Rates of Return
- What is risk, and how do we measure it? How does the risk of an investment held in isolation differ from the risk of the same investment held in a portfolio? What is portfolio risk? How does risk affect rates of return?
- What is the expected rate of return, and how is it measured? What does it mean to have an expected rate of return equal to 20 percent?
- Why do we generally divide total risk into two components, systematic risk and unsystematic risk? Which component is diversifiable?
- Understand the concept of beta and the Capital Asset Pricing Model (CAPM). Be able to compute expected rates of return using the CAPM.
- Understand how factors like inflation and risk aversion affect rates of return.

Capital Budgeting Techniques
- What are the various types of capital budgeting decisions? What does it mean for projects to be independent? Mutually exclusive?
- Be able to compute the payback period (both traditional and discounted), net present value (NPV), and internal rate of return (IRR) for a capital budgeting project. Understand what the result of each computation means. For example, what does it mean if you find a project has an IRR equal to 14 percent? If NPV > 0, what is the relationship between the firm's required rate of return and the project's IRR, and what is the project's discounted payback period relative to its life?
- Understand what an NPV profile is, how it is used, and how it is constructed. What does the crossover point associated with the NPV profiles of two projects mean? How is such information used to make capital budgeting decisions?
- How do capital budgeting decisions differ from general asset valuation? Are they based on the same concepts?

Capital Budgeting: Cash Flows and Risk
- Understand and be able to identify the different cash flows that are relevant for making capital budgeting decisions. For example, what would be included as part of the initial investment outlay, terminal cash flow, and so forth? How does the identification of these cash flows differ if the project is a replacement asset rather than an expansion asset?
- Why should the risk associated with a project be considered when making a capital budgeting decision? What incorrect decisions could be made if risk is not considered in capital budgeting analysis?
- Understand how we incorporate risk into capital budgeting decisions. What techniques are used?

Principles of Finance Review Sheet: Exam 1

The exam will include multiple choice questions and problems. If you have worked and understand the end-of-chapter problems that were assigned, you should be able to work the problems on the exam. You should understand the topical areas given in the following list; the concept questions will be primarily based on these topics.

Overview of Managerial Finance
- Why is it important to have some understanding of finance?
- Understand the differences among the alternative forms of business. What are the advantages and disadvantages of each? In general, how is each taxed?
- What are some of the goals that are pursued by corporations? Why should the primary goal of a financial manager be to try to maximize shareholder wealth?
- Understand what agency relationships are. What are some ways shareholders can reduce agency problems?
- Know how businesses in the United States differ, in general, from businesses in other countries.

Analysis of Financial Statements
- Understand the general concepts
associated with financial statements and ratio analysis. (Do not memorize any of the ratios.) What information does each of the five categories of ratios mentioned in the text provide to those who interpret the ratios? Who uses ratios?
- Why is the Statement of Cash Flows considered an important financial statement? In general, what would be a source of cash, and what would be a use of cash?
- Know the limitations associated with ratio analysis. Why is the interpretation of the ratios considered more important than the computation of the ratios?

The Financial Markets and the Investment Banking Process
- What is a financial market? Know how the different types of financial markets mentioned in the lectures and the text differ. For example, in what type of market would short-term securities be traded? How do the physical stock exchanges differ from the over-the-counter market?
- What is a financial intermediary? What are some of the more common financial intermediaries mentioned in the text, and what are the roles of such intermediaries in the financial markets?
- What is an investment banker? What services do investment banking houses provide?

Time Value of Money
- Understand the concept of the time value of money. Remember that the computations we conduct in this section are used to restate dollars received or paid at different time periods into equivalent dollars at one period (for example, the present period). We do this so that the dollars are stated in comparable terms. Be able to compute the future value and the present value of (1) a lump-sum amount (single payment), (2) an annuity, whether it is an annuity due or an ordinary annuity, and (3) an uneven cash flow stream. Be able to compute the same present and future values when compounding occurs more than once a year. What is a perpetuity?
- Understand the concept of amortizing a loan, and be able to compute the payoff value of a loan after some payments have been made. For example, consider a $10,000, 12 percent loan that requires quarterly payments over a five-year period. Every three months the payment would be $672.16 (n = 20, r = 3%, and PV = $10,000). What would the payoff value of the loan be after three years, that is, after 12 payments have been made? Because there are two years, or eight payments, of $672.16 remaining on the loan, we now have an 8-period annuity equal to $672.16 with interest equal to 3 percent per period. The present value of this annuity is $4,718.35 (n = 8, r = 3%, and PMT = $672.16), which is the amount that would be due if the borrower wants to repay the loan after three years. (A short computational check follows this review sheet.)
- Understand and be able to compute the effective annual rate (EAR), r_EAR. What is the difference between r_EAR and the quoted (simple) rate?

The Cost of Money
- What is the yield on an investment, and how is it computed?
- Understand how interest rates are determined in general and what factors affect interest rates. For example, if two companies are identical except one has a greater chance of defaulting on its debt, the company with the higher chance of defaulting should have to pay a higher interest rate to borrow. Why is the interest rate higher? What are the basic risks that are included in the risk premium associated with a debt instrument? That is, what are DRP, LP, and MRP?
- Understand the term structure of interest rates and the theories that have been developed to help explain the shape of the yield curve. What is a yield curve?
- Be able to compute expected interest rates. The problems on the exam should be similar to those you were assigned in Chapter 5.
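The loan figures in the amortization example above can be checked with a few lines of code; this is a minimal sketch of the standard annuity formulas, not material from the course itself.

```python
def payment(pv, r, n):
    """Level payment of a fully amortizing loan: PV = PMT * (1 - (1+r)**-n) / r."""
    return pv * r / (1 - (1 + r) ** -n)

def balance(pmt, r, n_remaining):
    """Outstanding balance = present value of the remaining payments."""
    return pmt * (1 - (1 + r) ** -n_remaining) / r

pmt = payment(10_000, 0.03, 20)          # 12%/year -> 3% per quarter, 20 quarters
print(round(pmt, 2))                     # 672.16
print(round(balance(pmt, 0.03, 8), 2))   # payoff after 12 payments: 4718.35
```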
I love card games: they're fun to play and test many different skills. One of the most important facets of winning card play is to have a plan. Whether it's poker or bridge, good players will usually think about a strategy before implementing it, while bad players are content to simply wing it, always unsure of what to do next and often making the wrong play as a result.

Game show contestants often exhibit the same tendencies when presented with "lifelines". Most players don't plan their lifeline usage in advance, and as a result they often get little or no utility out of them. For example, on "Who Wants to be a Millionaire?", often a contestant will encounter a question where she has absolutely no clue what the correct answer is. Typically, she will ask the audience, but if that option has already been exhausted, she will often take a 50/50, then re-evaluate the question. Still having no clue, she then phones a friend for help. This is abysmal usage of the lifelines. Taking the 50/50 in this spot can only be justified if the contestant intends to guess between the remaining two answers. (Even then, it's probably not the best move.) Generally, the phone-a-friend either knows the answer or doesn't. (She might have to Google/Wikipedia the answer, but that's irrelevant.) Clearly, the superior strategy is to phone a friend right away, then use the 50/50 only if the friend doesn't know the correct response. This will usually save a valuable lifeline; the 50/50 by itself is worth tens of thousands of dollars if you still have it after ten questions.

Another Millionaire quirk deals with asking the audience. This is best illustrated by a (real) example. The question asked what Elmo was searching for in a recent Sesame Street movie. The choices included three physical objects--regrettably, I don't remember which ones--as well as Big Bird. Naturally, the contestant chose to ask the audience. Naturally, most of them voted for Big Bird, as they had not seen the film but were all familiar with the giant yellow symbol of Sesame Street. However, the voting breakdown looked like this:

Big Bird: 65%
A: 30%
B and C: 5% combined

The contestant then went with a "final answer" of Big Bird. This was not her best play. The audience could be broken down into two groups: those who knew the answer and those who were just taking a guess. Naturally, those who knew would all answer the same way. The others would guess in a predictable way: they'd most likely choose Big Bird because it was a familiar answer, but they would be about evenly divided between the physical objects. What would possess them to choose one over another? If Big Bird is the correct answer, what explanation is there for the popularity of answer A? Certainly the explanation isn't good enough to justify 30% of the vote compared to 5% for B and C combined. It's much more likely that only a small portion of the audience had seen the film, all of whom voted for A. Sure enough, A was the correct answer, and the contestant went home with only $1,000.

The relatively new show "Are You Smarter Than A Fifth Grader?" may feature easy questions, but the lifelines present some interesting strategies, which the contestants almost always get wrong. For those not familiar with the show, the contestant and an actual fifth-grade student answer the questions together--with the student writing his answer down rather than speaking it--and the contestant has three single-use lifelines:

Peek: The contestant may look at the student's answer, but does not have to use it.
Copy: The contestant locks in the student's answer as her own.

Save: If the contestant answers incorrectly but the student answers correctly, the contestant is credited with a correct answer.

It should be clear to readers of this blog, if not the general public, that the Peek is a dominant strategy to the Copy, since it gives the player the added option of rejecting the student's answer. Additionally, many questions in the show have only two or three answer choices, so the Peek and Save lifelines can often be combined to virtually assure a correct response. Say you approach a question and have no clue what the answer is. Assuming you have all your lifelines intact and don't intend to quit, there are only two reasonable lines of play (a quick enumeration of the two-choice case appears at the end of this post):

- If it's a true/false or other two-choice question, use your Peek, then answer the opposite way of the student, OR
- Copy the student's answer.

However, this is not what the typical contestant does. She Peeks at the student's answer, rationalizes the answer (whether or not she actually agrees with the student) in her own mind, then uses the student's answer as her response. By doing this, she has turned the Peek into a Copy, and the only benefit is that she gets to see the student's answer 30 seconds earlier. I understand the psychological motivations for wanting to see one's answer before committing to it for a large sum of money, but it should still be obvious to the contestant that this strategy is terrible. Of course, if she knew that, she'd be smarter than a fifth grader.
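To see why Peek-then-contradict beats Copy on a two-choice question, here is a minimal enumeration sketch (my own illustration, not anything from the show): it walks through both cases of the student being right or wrong, assuming the Save is still available and the contestant herself has no idea.

```python
def peek_then_contradict(student_correct: bool) -> bool:
    """Peek at the student's answer, deliberately pick the other option
    (two-choice question), and rely on Save if we turn out to be wrong."""
    my_answer_correct = not student_correct  # two options: one of us is right
    if my_answer_correct:
        return True               # we were right outright
    return student_correct        # wrong, but Save credits the student's answer

def copy_student(student_correct: bool) -> bool:
    """Lock in the student's answer as our own."""
    return student_correct

for student_correct in (True, False):
    print(student_correct,
          peek_then_contradict(student_correct),  # always True
          copy_student(student_correct))          # True only if student is right
```

On two choices the Peek/Save combination survives every case, while the Copy survives only when the student happens to be right, which is exactly the blog's point.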
What could a financial manager look at to determine whether his company is successful or in distress? Give an example of a success or a distress in today's business world.

Reflect on your experience in this course. What are the key takeaways you have learned in this course that you can apply tomorrow or in the near future?

Profit or Loss on New Stock Issue

Security Brokers Inc. specializes in underwriting new issues by small firms. On a recent offering of Beedles Inc., the terms were as follows:

Price to public: $5 per share
Number of shares: 3 million
Proceeds to Beedles: $14,000,000

The out-of-pocket expenses incurred by Security Brokers in the design and distribution of the issue were $380,000. What profit or loss would Security Brokers incur if the issue were sold to the public at the following average prices? (Use a minus sign to enter a loss, if any.)
a. $5 per share? $
b. $6.5 per share? $
c. $3.5 per share? $

Refunding Analysis

Mullet Technologies is considering whether or not to refund a $125 million, 13% coupon, 30-year bond issue that was sold 5 years ago. It is amortizing $3 million of flotation costs on the 13% bonds over the issue's 30-year life. Mullet's investment banks have indicated that the company could sell a new 25-year issue at an interest rate of 10% in today's market. Neither they nor Mullet's management anticipate that interest rates will fall below 10% any time soon, but there is a chance that rates will increase. A call premium of 8% would be required to retire the old bonds, and flotation costs on the new issue would amount to $4 million. Mullet's marginal federal-plus-state tax rate is 40%. The new bonds would be issued 1 month before the old bonds are called, with the proceeds being invested in short-term government securities returning 4% annually during the interim period.
a. Conduct a complete bond refunding analysis. What is the bond refunding's NPV? Do not round intermediate calculations. Round your answer to the nearest cent. $
b. What factors would influence Mullet's decision to refund now rather than later?

Problem 2: Merger Bid

Hastings Corporation is interested in acquiring Vandell Corporation. Vandell has 1 million shares outstanding and a target capital structure consisting of 30% debt; its beta is 1.40 (given its target capital structure). Vandell has $11.22 million in debt that trades at par and pays a 7.1% interest rate. Vandell's free cash flow (FCF0) is $2 million per year and is expected to grow at a constant rate of 5% a year. Both Vandell and Hastings pay a 40% combined federal and state tax rate. The risk-free rate of interest is 7% and the market risk premium is 4%. Hastings Corporation estimates that if it acquires Vandell Corporation, synergies will cause Vandell's free cash flows to be $2.5 million, $2.9 million, $3.4 million, and $3.67 million at Years 1 through 4, respectively, after which the free cash flows will grow at a constant 5% rate. Hastings plans to assume Vandell's $11.22 million in debt (which has a 7.1% interest rate) and raise additional debt financing at the time of the acquisition. Hastings estimates that interest payments will be $1.6 million each year for Years 1, 2, and 3. After Year 3, a target capital structure of 30% debt will be maintained.
Interest at Year 4 will be $1.441 million, after which the interest and the tax shield will grow at 5%. Indicate the range of possible prices that Hastings could bid for each share of Vandell common stock in an acquisition. Round your answers to the nearest cent. Do not round intermediate calculations. The bid for each share should range between $ per share and $ per share.

Problem 3: Liquidation

Southwestern Wear Inc. has the following balance sheet:

Current assets  $1,875,000    Accounts payable               $375,000
Fixed assets     1,875,000    Notes payable                   750,000
                              Subordinated debentures         750,000
                              Total debt                   $1,875,000
                              Common equity                 1,875,000
Total assets    $3,750,000    Total liabilities and equity $3,750,000

Problem 4

The trustee's costs total $246,500, and the firm has no accrued taxes or wages. Southwestern has no unfunded pension liabilities. The debentures are subordinated only to the notes payable. If the firm goes bankrupt and liquidates, how much will each class of investors receive if a total of $3 million is received from sale of the assets? (A computational sketch of a simplified liquidation waterfall follows this problem.)

Distribution of proceeds on liquidation:
1. Proceeds from sale of assets $
2. First mortgage, paid from sale of assets $
3. Fees and expenses of administration of bankruptcy $
4. Wages due workers earned within 3 months prior to filing of bankruptcy petition $
5. Taxes $
6. Unfunded pension liabilities $
7. Available to general creditors $

Distribution to general creditors:

Claims of General Creditors | Claim (1) | Application of 100% Distribution (2) | After Subordination Adjustment (3) | Percentage of Original Claims Received (4)
Notes payable | $ | $ | $ | %
Accounts payable | $ | $ | $ | %
Subordinated debentures | $ | $ | $ | %
Total | $ | $ | $

The remaining $ will go to the common stockholders.
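The following is a minimal sketch of a simplified liquidation waterfall, our own illustration rather than the textbook's solution procedure. It pays priority items first and then distributes pro rata to general creditors; the full statutory priority order (mortgages, wages, taxes, pensions) and the notes-over-debentures subordination adjustment are deliberately not modeled.

```python
def liquidation_waterfall(proceeds, priority_claims, general_claims):
    """Pay priority claims in listed order, then distribute what remains
    pro rata to general creditors; any surplus goes to common equity.
    (Subordination adjustments only matter when the pro rata ratio < 100%,
    and are omitted here.)"""
    remaining = proceeds
    paid_priority = {}
    for name, amount in priority_claims:
        pay = min(remaining, amount)
        paid_priority[name] = pay
        remaining -= pay
    total_general = sum(general_claims.values())
    ratio = min(1.0, remaining / total_general) if total_general else 0.0
    paid_general = {name: amt * ratio for name, amt in general_claims.items()}
    remaining -= sum(paid_general.values())
    return paid_priority, paid_general, remaining  # remainder to stockholders

# Hypothetical run echoing the problem's stated numbers:
priority = [("Trustee costs", 246_500)]
general = {"Notes payable": 750_000,
           "Accounts payable": 375_000,
           "Subordinated debentures": 750_000}
print(liquidation_waterfall(3_000_000, priority, general))
```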
Classify two-dimensional figures into categories based on their properties. These materials have been produced by and for the teachers of the State of Utah.

A sampling of the standards covered:
- Use whole-number exponents to denote powers of 10.
- Apply properties of operations to calculate with numbers in any form; convert between forms as appropriate; and assess the reasonableness of answers using mental computation and estimation strategies. Solve equations of these forms fluently.
- Form ordered pairs consisting of corresponding terms from two patterns, and graph the ordered pairs on a coordinate plane. For example, given coordinates for two pairs of points, determine whether the line through the first pair of points intersects the line through the second pair.
- Understand that attributes belonging to a category of two-dimensional figures also belong to all subcategories of that category. For example, all rectangles have four right angles and all squares are rectangles, so all squares have four right angles.
- Understand decimal notation for fractions, and compare decimal fractions.
- Perform operations with multi-digit whole numbers and with decimals to hundredths.
- Apply and extend previous understandings of multiplication and division to multiply and divide fractions.
- Understand ratio concepts and use ratio reasoning to solve problems.
- Interpret the structure of expressions; include expressions that arise from formulas used in real-world problems.
- Analyze and solve linear equations and pairs of simultaneous linear equations.
- Solve real-world problems involving division of unit fractions by non-zero whole numbers and division of whole numbers by unit fractions, for example, by using visual fraction models and equations to represent the problem.
- Find volumes of solid figures composed of two non-overlapping right rectangular prisms by adding the volumes of the non-overlapping parts, applying this technique to solve real-world and mathematical problems.
- Solving an equation or inequality involves using properties of operations, combining like terms, and using inverse operations.
- Use a pair of perpendicular number lines, called axes, to define a coordinate system, with the intersection of the lines (the origin) arranged to coincide with the zero on each line, and a given point in the plane located by using an ordered pair of numbers, called its coordinates.
- Explain why multiplying a given number by a fraction greater than one results in a product greater than the given number (recognizing multiplication by whole numbers greater than one as a familiar case); explain why multiplying a given number by a fraction less than one results in a product smaller than the given number; and relate the principle of fraction equivalence.
- Distinguish comparisons of absolute value from statements about order.
- Example prompts include: "Between what two whole numbers does your answer lie?" and "At what rate were lawns being mowed?"
- Compare the value of the quotient on the basis of the values of the dividend and divisor.

(From the high school standards overview: the "number and quantity" category contains four domains; the "vector and matrix quantities" domain is reserved for advanced students, as are some of the standards in "the complex number system".)
Example solutions, videos, and lessons help Grade 5 students learn to write simple expressions that record calculations with numbers, and to interpret numerical expressions without evaluating them. For example, express the calculation "add 8 and 7, then multiply by 2" as 2 × (8 + 7).

5th Grade Common Core, Operations and Algebraic Thinking: students need to write and interpret numerical expressions, and to analyze patterns and relationships.

The Common Core State Standards for Mathematical Practice are practices expected to be integrated into every mathematics lesson for all students in Grades K-12. Students routinely interpret their mathematical results in the context of the situation and reflect on whether the results make sense, possibly improving the model if it has not served its purpose.
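A short worked illustration of the expressions standard above; the second example mirrors the illustration commonly quoted with CCSS 5.OA.A.2, reproduced here from memory:

\[
2 \times (8 + 7) = 2 \times 15 = 30,
\]
and a student can recognize that \(3 \times (18932 + 921)\) is three times as large as \(18932 + 921\) without computing the sum (here \(18932 + 921 = 19853\), so the product is \(59559\)).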
Degrees, minutes, and seconds are designated by the characters °, ′, ″: thus the expression 16° 6′ 15″ represents an arc, or an angle, of 16 degrees, 6 minutes, and 15 seconds.

III. The complement of an angle, or of an arc, is what remains after taking that angle or that arc from 90°. Thus the complement of 25° 40′ is equal to 90° − 25° 40′ = 64° 20′; and the complement of 12° 4′ 32″ is equal to 90° − 12° 4′ 32″ = 77° 55′ 28″. In general, A being any angle or any arc, 90° − A is the complement of that angle or arc. If any arc or angle be added to its complement, the sum will be 90°. Whence it is evident that if the angle or arc is greater than 90°, its complement will be negative. Thus, the complement of 160° 34′ 10″ is −70° 34′ 10″. In this case, the complement, taken positively, would be a quantity which, being subtracted from the given angle or arc, leaves a remainder equal to 90°. The two acute angles of a right-angled triangle are together equal to a right angle; they are, therefore, complements of each other.

IV. The supplement of an angle, or of an arc, is what remains after taking that angle or arc from 180°. Thus, A being any angle or arc, 180° − A is its supplement. In any triangle, either angle is the supplement of the sum of the two others, since the three together make 180°. If any arc or angle be added to its supplement, the sum will be 180°. Hence if an arc or angle be greater than 180°, its supplement will be negative. Thus, the supplement of 200° is −20°. The supplement of any angle of a triangle, or indeed of the sum of either two angles, is always positive.

V. The secant of an arc is the line drawn from the centre of the circle through one extremity of the arc, and limited by the tangent drawn through the other extremity. Thus CT is the secant of the arc AM, or of the angle ACM. The versed sine of an arc is the part of the diameter intercepted between one extremity of the arc and the foot of the sine. Thus, AP is the versed sine of the arc AM, or the angle ACM. These four lines MP, AT, CT, AP are dependent upon the arc AM, and are always determined by it and the radius; they are thus designated:

MP = sin AM, or sin ACM;  AT = tang AM, or tang ACM;
CT = sec AM, or sec ACM;  AP = ver-sin AM, or ver-sin ACM.

VI. Having taken the arc AD equal to a quadrant, from the points M and D draw the lines MQ, DS, perpendicular to the radius CD, the one terminated by that radius, the other terminated by the radius CM produced; the lines MQ, DS, and CS will, in like manner, be the sine, tangent, and secant of the arc MD, the complement of AM. For the sake of brevity, they are called the cosine, cotangent, and cosecant of the arc AM, and are thus designated:

MQ = cos AM, or cos ACM;  DS = cot AM, or cot ACM;  CS = cosec AM, or cosec ACM.

In general, A being any arc or angle,

cos A = sin (90° − A),  cot A = tang (90° − A),  cosec A = sec (90° − A).

The triangle MQC is, by construction, equal to the triangle CPM; consequently CP = MQ: hence in the right-angled triangle CMP, whose hypothenuse is equal to the radius, the two sides MP, CP are the sine and cosine of the arc AM: hence, the cosine of an arc is equal to that part of the radius intercepted between the centre and the foot of the sine. The triangles CAT, CDS are similar to the equal triangles CPM, CQM; hence they are similar to each other. From these principles we shall very soon deduce the different relations which exist between the lines now defined: before doing so, however, we must examine the changes which those lines undergo when the arc to which they relate increases from zero to 180°.
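A worked check of the complement and supplement arithmetic in articles III and IV (the first and last values appear in the text above; the middle one is our own added example):

\[
90^\circ - 25^\circ\,40' = 64^\circ\,20', \qquad
180^\circ - 25^\circ\,40' = 154^\circ\,20', \qquad
90^\circ - 160^\circ\,34'\,10'' = -70^\circ\,34'\,10''.
\]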
The angle ACD is called the first quadrant; the angle DCB, the second quadrant; the angle BCE, the third quadrant; and the angle ECA, the fourth quadrant.

VII. Suppose one extremity of the arc remains fixed in A, while the other extremity, marked M, runs successively throughout the whole extent of the semicircumference, from A to B in the direction ADB. When the point M is at A, or when the arc AM is zero, the three points T, M, P are confounded with the point A; whence it appears that the sine and tangent of an arc zero are zero, and the cosine and secant of this same arc are each equal to the radius. Hence if R represents the radius of the circle, we have

sin 0 = 0, tang 0 = 0, cos 0 = R, sec 0 = R.

VIII. As the point M advances towards D, the sine increases, and so likewise do the tangent and the secant; but the cosine, the cotangent, and the cosecant diminish. When the point M is at the middle of AD, or when the arc AM is 45°, in which case it is equal to its complement MD, the sine MP is equal to the cosine MQ or CP; and the triangle CMP, having become isosceles, gives the proportion

MP : CM :: 1 : √2; hence sin 45° = cos 45° = ½R√2.

In this same case, the triangle CAT becomes isosceles and equal to the triangle CDS; whence the tangent of 45° and its cotangent are each equal to the radius, and consequently we have

tang 45° = cot 45° = R.

IX. The arc AM continuing to increase, the sine increases till M arrives at D, at which point the sine is equal to the radius and the cosine is zero. Hence we have

sin 90° = R, cos 90° = 0;

and it may be observed that these values are a consequence of the values already found for the sine and cosine of the arc zero; because the complement of 90° being zero, we have sin 90° = cos 0° = R, and cos 90° = sin 0° = 0. As to the tangent, it increases very rapidly as the point M approaches D; and finally, when this point reaches D, the tangent properly exists no longer, because the lines AT, CD, being parallel, cannot meet. This is expressed by saying that the tangent of 90° is infinite; and we write

tang 90° = ∞.

The complement of 90° being zero, we have tang 0 = cot 90° and cot 0 = tang 90°. Hence cot 90° = 0, and cot 0 = ∞.

X. The point M continuing to advance from D towards B, the sines diminish and the cosines increase. Thus M′P′ is the sine of the arc AM′, and M′Q, or CP′, its cosine. But the arc M′B is the supplement of AM′, since AM′ + M′B is equal to a semicircumference; besides, if M′M is drawn parallel to AB, the arcs AM, BM′, which are included between parallels, will evidently be equal, and likewise the perpendiculars or sines MP, M′P′. Hence, the sine of an arc or of an angle is equal to the sine of the supplement of that arc or angle. The arc or angle A has for its supplement 180° − A: hence generally we have

sin A = sin (180° − A).

The same property might also be expressed by the equation sin (90° + B) = sin (90° − B), B being the arc DM or its equal DM′.

XI. The same arcs AM, AM′, which are supplements of each other and which have equal sines, have also equal cosines CP, CP′; but it must be observed that these cosines lie in different directions. The line CP, which is the cosine of the arc AM, has the origin of its value at the centre C and is estimated in the direction from C towards A; while CP′, the cosine of AM′, has also the origin of its value at C, but is estimated in a contrary direction, from C towards B.
Some notation must obviously be adopted to distinguish the one of such equal lines from the other; and that they may both be expressed analytically and in the same general formula, it is necessary to consider all lines which are estimated in one direction as positive, and those which are estimated in the contrary direction as negative. If, therefore, the cosines which are estimated from C towards A be considered as positive, those estimated from C towards B must be regarded as negative. Hence, generally, we shall have

cos A = −cos (180° − A);

that is, the cosine of an arc or angle is equal to the cosine of its supplement taken negatively. The necessity of changing the algebraic sign to correspond with the change of direction may be seen from the formula for the versed sine,

ver-sin AM = R − cos AM,

which holds for every arc; in particular, at B, where the arc is 180°, we have cos 180° = −R. For all arcs, such as ADBN′, which terminate in the third quadrant, the cosine is estimated from C towards B, and is consequently negative. At E the cosine becomes zero, and for all arcs which terminate in the fourth quadrant the cosines are estimated from C towards A, and are consequently positive.

The sines of all the arcs which terminate in the first and second quadrants are estimated above the diameter BA, while the sines of those arcs which terminate in the third and fourth quadrants are estimated below it. Hence, considering the former as positive, we must regard the latter as negative.

XII. Let us now see what sign is to be given to the tangent of an arc. The tangent of the arc AM falls above the line BA, and we have already regarded the lines estimated in the direction AT as positive: therefore the tangents of all arcs which terminate in the first quadrant will be positive. But the tangent of the arc AM′, greater than 90°, is determined by the intersection of the two lines M′C and AT. These lines, however, do not meet in the direction AT; but they meet in the opposite direction AV. But since the tangents estimated in the direction AT are positive, those estimated in the direction AV must be negative: therefore, the tangents of all arcs which terminate in the second quadrant will be negative. When the point M′ reaches the point B, the tangent AV will become equal to zero: that is, tang 180° = 0. When the point M′ passes the point B and comes into the position N′, the tangent of the arc ADN′ will be the line AT:
Galileo Galilei (1564-1642)

Galileo Galilei was born near Pisa, Italy, on February 15, 1564 (Drake). Galileo was the first child of Vincenzio Galilei, a merchant and a musician (Jaki 289). In 1574, Galileo's family moved from Pisa to Florence, where Galileo started his formal education (Jaki 289). Seven years later, in 1581, Galileo entered the University of Pisa as a medical student (Drake). In 1583, home on vacation from medical school, Galileo began to study mathematics and the physical sciences (Jaki 289). A family friend and professor at the Academy of Design, Ostilio Ricci, worked on translating some of Archimedes, which Galileo read and became interested in. This is where Galileo got his deep interest in Archimedes (Jaki 289). When he returned to medical school, it became less appealing to him; his deep interests in Archimedes and mathematics drew him in, and Galileo left without a degree in 1584 (Drake). Beginning his studies in 1585 in Aristotelian physics and cosmology, Galileo had to leave the University of Pisa before he got his degree because of financial problems (Jaki 289). Moving back to Florence, Galileo spent three unsuccessful years looking for a teaching position (Jaki 289). During this time Galileo was increasing his understanding of physics and mathematics. Also during this difficult time Galileo wrote two discourses, one about principles of balancing and the other about the center of gravity of different solid objects (Jaki 289). These writings were circulated in manuscript form only, but they made Galileo well known in the scientific community. Galileo became renowned in 1588, when he gave a lecture at the Florentine Academy on the topography of Dante's Inferno, in which he showed his extensive knowledge of mathematics and geometry (Jaki 289). In 1589, Galileo's rising reputation as a mathematician and natural philosopher (physicist) earned him a teaching place at the University of Pisa (Jaki 289). Galileo spent three years at the University of Pisa. This move changed his concepts of physics in two ways. The first way was that at the university he was exposed to the writings of Giovanni Battista Benedetti, who got his ideas from the fourteenth-century scientists Jean Buridan and Nicole Oresme at the University of Paris (Jaki 289). These writings made him break away from Aristotelian physics and begin his own course through physical theories. The second part was that when Galileo started teaching, he argued against and hated the fact that teachers had to wear academic robes while teaching. He would accept wearing ordinary clothes, but he said it would be best to be naked (Jaki 289). In 1591, Galileo's father died and he had the burden of taking care of his mother, brothers, and sisters (Jaki 289). Looking for a better position to support his family, Galileo found one at the University of Padua, part of the Venetian Republic (Jaki 289). There, according to him, he spent the happiest eighteen years of his life (Jaki 289). "He often visited Venice and made many influential friends, among them Giovanfrancesco Sagredo, whom he later immortalized in the Dialogue as the representative of judiciousness and good sense" (Jaki 289).
In 1604, Galileo publicly declared that he was a believer in the famous astronomer Copernicus (Jaki 290). "In three public lectures given in Venice, before an overflow audience, he argued that the new star which appeared earlier that year was major evidence in support of the doctrine of Copernicus. (Actually the new star merely proved that there was something seriously wrong with the Aristotelian doctrine of the heavens)" (Jaki 290). "More important was the letter Galileo wrote that year to Father Paolo Sarpi, in which he stated that 'the distances covered in natural motion are proportional to the squares of the time intervals, and therefore, the distances covered in equal times are as the odd numbers beginning from one'" (Jaki 290). What he proposed was the law of free fall, later written as s = ½gt², where s is the distance, t is the time, and g is the acceleration due to gravity at sea level (Jaki 290). In 1606, he published a small pamphlet, The Operations of the Geometrical and Military Compass (Jaki 290). He defended his move to the University of Padua and said it was for personal reasons (Jaki 290). In 1609, a Dutch spectacle maker, Hans Lippershey, combined two lenses and made a refracting telescope (Evans). Galileo, reading about Hans Lippershey's discovery, made a telescope himself, which became the first telescope used for astronomical purposes (Evans). He submitted the invention to the Venetian Senate, and the telescope was a success. This invention secured him a lifelong contract at the University of Padua (Jaki 290). With the new invention of the telescope Galileo was able to see at 40x magnification (Jaki 290). This enabled Galileo to discover the mountains on the Moon and the moons of Jupiter. For four years Galileo observed the planets, moons, and stars, and he published his work in a book called Sidereus Nuncius (Jaki 290). During this time Galileo was very selfish in publishing his work: Galileo wanted to be the sole contributor to modern physics and astronomy. Dedicated to his work, he left behind his common-law wife, Marina Gamba, and his young son, Vincenzio, and placed his two daughters in the convent of S. Matteo in Arcetri. Moving back to Florence, he published Discourse on Bodies in Water, which told of the discovery of the phases of Venus and, most important, proved the Copernican theory (Jaki 290). In 1616, as a consequence of proving Copernicus' theory, the church got angry at Galileo and called him before the Inquisition (Jaki 290). The Inquisition pardoned Galileo because he was a loyal Catholic who remained loyal throughout the entire ordeal. The cardinal who helped Galileo through the ordeal later became Pope Urban VIII (Jaki 290). Galileo spent the next six years writing the book Dialogue Concerning the Two Chief World Systems (Jaki 291). It had four parts relating to the Earth and the Moon. The first part was that the universe is not perfect; for example, the surface of the Moon is rough. Second was an explanation of celestial phenomena. Third was the argument that the Earth moves around the Sun, explained by the movement of the stars, and that the Earth moves in two ways without this affecting the Earth. And fourth, the tides prove the Earth's two kinds of motion (Jaki 291). In 1632, the Dialogue caused Galileo to be brought before the Inquisition for the second time, because of his belief in and teaching of Copernicus'
doctrine (Jaki 291). This time the Inquisition did not have the sympathy for Galileo that it had had the last time. Falsely claiming that his theories had been proved wrong, the Inquisition forced him to recant what he had said, or not only he but also his family would be harmed. Submitting to the Inquisition, Galileo was let go. While leaving the room he reportedly said "Eppur si muove" ("And yet it does move") (Jaki 291). Galileo went back to his chair at the University of Padua. There he did work that did not conflict with church doctrine, mostly dealing with physics rather than cosmology (Jaki 292). He published this work in a book titled Two New Sciences. It had four parts. The first dealt with the mechanical resistance of metals and the atomic constitution of matter. The second dealt with the mathematical properties of the lever. The third and fourth were an analysis of projectile motion. Galileo spent his last years partly blind and died on January 8, 1642 (Jaki 292). Jaki, Stanley L. "Galileo." Encyclopedia of World Biography. 1973 ed. Drake, Stillman. "Galileo Galilei." 1999 Grolier Multimedia Encyclopedia. Grolier Interactive Inc., 1998. Evans, David S. "Telescope, Optical." 1999 Grolier Multimedia Encyclopedia. Grolier Interactive Inc., 1998.
Finite Element Approximation of Variational Problems and Applications, by M. Křížek and P. Neittaanmäki. Series: Pitman Monographs and Surveys in Pure and Applied Mathematics, no. 50. 239 pages.

Starting from the variational formulation of elliptic boundary value problems, boundary integral operators and associated boundary integral equations are introduced and analyzed. By using finite and boundary elements, corresponding numerical approximation schemes are considered.

Finite element applications in solid and structural mechanics are also discussed. Comprised of 16 chapters, this book begins with an introduction to the formulation and classification of physical problems, followed by a review of field or continuum problems and their approximate solutions.

Related treatments include the finite element method for a stationary Stokes hemivariational inequality with slip boundary condition; unilateral contact problems (variational methods and existence theorems, Pure and Applied Mathematics, Chapman & Hall/CRC); and convergence analysis of discrete approximations of problems in hardening plasticity.

The Finite Element Method: Theory, Implementation, and Applications (Texts in Computational Science and Engineering) covers topics such as approximation properties of piecewise polynomial spaces and variational formulations of partial differential equations, but with a minimum level of advanced mathematical machinery from functional analysis.

This book covers finite element methods for several typical eigenvalue problems that arise in science and engineering. Both theory and implementation are covered in depth at the graduate level.

Finite element methods (FEM) are based on mathematical physics techniques, the most fundamental of which is the so-called Rayleigh-Ritz method, used for the solution of boundary value problems. Two other methods which are more appropriate for the implementation of the FEM will also be discussed: the collocation method and the Galerkin method.
First, we illustrate the connection between the optimization problem and elliptic variational inequalities; secondly, we prove the existence of the solution via augmented Lagrangian multipliers.

This textbook teaches finite element methods from a computational point of view. It focuses on how to develop flexible computer programs with Python, a programming language in which a combination of symbolic and numerical tools is used to achieve an explicit and practical derivation of finite element methods.

This book develops the basic mathematical theory of the finite element method, the most widely used technique for engineering design and analysis. The third edition contains four new sections.

Contents: list of symbols; introduction; variational formulation of second-order elliptic problems; finite element approximation; convergence of the finite element method; numerical integration; generation of the stiffness matrix.

The theory of variational inequalities and applications such as flow through porous media are presented; these form the basis for the construction and analysis of numerical methods for such problems, and for the study of the approximation of variational inequalities by finite element methods.

Mats G. Larson and Fredrik Bengzon, The Finite Element Method: Theory, Implementation, and Practice. Springer.

This paper gives an extensive documentation of applications of finite-dimensional nonlinear complementarity problems in engineering and equilibrium modeling. For most applications, we describe the problem briefly, state the defining equations of the model, and give functional expressions for the complementarity formulations.

A comprehensive guide to using energy principles and variational methods for solving problems in solid mechanics. This book provides a systematic, highly practical introduction to the use of energy principles, traditional variational methods, and the finite element method for the solution of engineering problems involving bars, beams, and torsion.

Non-standard finite element methods, in particular mixed methods, are central to many applications. In this text the authors, Boffi, Brezzi, and Fortin, present a general framework, starting with a finite-dimensional presentation, then moving on to formulation in Hilbert spaces, and finally considering approximations, including stabilized methods and eigenvalue problems.

Daniel Gabay, "A dual algorithm for the solution of nonlinear variational problems via finite element approximation," Computers & Mathematics with Applications, vol. 2, Pergamon Press (Centre National de la Recherche Scientifique, Laboratoire d'Analyse Numérique, Place Jussieu, Paris, France).

In recent years there has been significant progress in the development of stable and accurate finite element procedures for the numerical approximation of a wide range of fluid mechanics problems. This text combines theoretical aspects and practical applications and offers coverage of the latest research in several areas of the field.

Mathematics is playing an ever more important role in the physical and biological sciences, provoking a blurring of boundaries between scientific disciplines and a resurgence of interest in the modern as well as the classical techniques of applied mathematics.
This renewal of interest, both in research and teaching, has led to the establishment of the series Texts in Applied Mathematics (TAM).

Introduction with an abstract problem: a problem in weak formulation. Let us introduce Galerkin's method with an abstract problem posed as a weak formulation on a Hilbert space V, namely: find u ∈ V such that a(u, v) = f(v) for all v ∈ V. Here, a(⋅, ⋅) is a bilinear form (the exact requirements on a(⋅, ⋅) will be specified later) and f is a bounded linear functional on V. (A minimal one-dimensional example follows below.)

Contents: Preface; Variational Formulations and Finite Element Methods; Function Spaces and Finite Element Approximations; Algebraic Aspects of Saddle Point Problems; Saddle Point Problems in Hilbert Spaces.

The finite element method (FEM), or finite element analysis (FEA), is a computational technique used to obtain approximate solutions of boundary value problems in engineering. Boundary value problems are also called field problems; the field is the domain of interest.

Finite element approximation of initial boundary value problems. Energy dissipation, conservation, and stability. Analysis of finite element methods for evolution problems. Reading list: S. Brenner & R. Scott, The Mathematical Theory of Finite Element Methods, Springer-Verlag, corr. 2nd printing [Chapters 0, 1, 2, 3; Chapter 4].

() Approximation of time-dependent viscoelastic fluid flow: Crank-Nicolson finite element approximation. Numerical Methods for Partial Differential Equations. () Stability and convergence of the two-step BDF for the incompressible Navier-Stokes problem.

The present book gives a rigorous analysis of finite element approximation for a class of hemivariational inequalities of elliptic and parabolic type. Finite element models are described and their convergence properties are established. Discretized models are numerically treated as nonconvex and nonsmooth optimization problems.

Interior penalty methods for finite element approximations of the Signorini problem in elastostatics. J. T. Oden and S. J. Kim, Texas Institute for Computational Mechanics, Department of Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin, TX, USA.

Chapter 1 (draft): Introduction to the finite element method. Historical perspective: the origins of the finite element method. The finite element method constitutes a general tool for the numerical solution of partial differential equations.

Cite this chapter as: () n-Dimensional Variational Problems. In: The Mathematical Theory of Finite Element Methods. Texts in Applied Mathematics, vol.
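The abstract weak formulation above becomes concrete in the simplest possible setting. Below is a minimal sketch, not taken from any of the books quoted here, of a 1-D Galerkin finite element solver for -u'' = f on (0, 1) with u(0) = u(1) = 0, using piecewise-linear "hat" basis functions; the function name and the lumped-load quadrature are my own illustrative choices.

```python
# Minimal 1-D Galerkin finite element sketch for -u'' = f on (0, 1),
# u(0) = u(1) = 0. Here a(u, v) = integral of u'v' and f(v) = integral of f*v.
import numpy as np

def fem_poisson_1d(f, n):
    """Solve -u'' = f with homogeneous Dirichlet BCs on n equal elements."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)           # mesh nodes
    # Stiffness matrix for interior hat functions on a uniform mesh:
    # a(phi_i, phi_i) = 2/h on the diagonal, a(phi_i, phi_(i+1)) = -1/h.
    main = 2.0 / h * np.ones(n - 1)
    off = -1.0 / h * np.ones(n - 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    # Load vector via the lumped (midpoint-style) quadrature f(phi_i) ~ h*f(x_i).
    b = h * f(x[1:-1])
    u = np.zeros(n + 1)                        # boundary values stay zero
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

# Usage: f = pi^2 sin(pi x) has exact solution u = sin(pi x).
x, u = fem_poisson_1d(lambda t: np.pi**2 * np.sin(np.pi * t), 32)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error
```

Refining the mesh (larger n) shrinks the error, which is the convergence behaviour the books above analyze rigorously.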
4.1 THE PRICE ELASTICITY OF DEMAND
• price elasticity of demand (Ed): A measure of the responsiveness of the quantity demanded to changes in price; equal to the absolute value of the percentage change in quantity demanded divided by the percentage change in price.

Computing Percentage Changes and Elasticities

Price Elasticity and the Demand Curve
► FIGURE 4.1 Elasticity and Demand Curves. Figure 4.1 shows five different demand curves, each with a different elasticity. We can divide products into five types, depending on their price elasticities of demand.
• elastic demand: The price elasticity of demand is greater than one.
• inelastic demand: The price elasticity of demand is less than one.
• unit elastic demand: The price elasticity of demand is one.
• perfectly inelastic demand: The price elasticity of demand is zero.
• perfectly elastic demand: The price elasticity of demand is infinite.

Elasticity and the Availability of Substitutes
Other Determinants of the Price Elasticity of Demand

4.2 USING PRICE ELASTICITY TO PREDICT CHANGES IN QUANTITY
• If we have values for two of the three variables in the elasticity formula, we can compute the value of the third. The three variables are: the price elasticity of demand itself, the percentage change in quantity, and the percentage change in price. Specifically, we can rearrange the elasticity formula:
percentage change in quantity demanded = percentage change in price × Ed

Extra Application 8: WHY YOU'LL PAY MORE IN RENT THIS YEAR
• The current rental market is extremely hot, with average rent reaching $940 per month for the last quarter of 2005. Some areas such as New York City reported rents averaging $2,400 per month. In spite of these numbers, many analysts believe rent is still low in several regions. In some areas such as Ft. Lauderdale, Florida, rent has climbed almost 12 percent in recent months as occupancy rates approached 100 percent, and it is expected to go higher. Much of the reason for higher rents is supply related, due to many apartment units being converted to condominiums. Other regions, particularly along the Gulf Coast, can thank Hurricane Katrina for the rental-unit shortage. With vacant apartments difficult to find, many owners are using the hot market to increase rents. Rising home prices and rising interest rates also alter the picture and push a number of people back into the rental market: houses are getting too expensive for some people to own. A decrease in the supply of rental units will automatically push prices higher. Owners, looking to make a profit, tend to react very quickly to high occupancy rates. However, new units will no doubt soon be constructed to lessen the shortage.

BEER TAXES AND HIGHWAY DEATHS
APPLYING THE CONCEPTS #1: How can we use the price elasticity of demand to predict the effects of taxes?
• We can use the concept of price elasticity to predict the effects of a change in the price of beer on drinking and highway deaths among young adults.
• The price elasticity of demand for beer among young adults is about 1.30.
• If a state imposes a beer tax that increases the price of beer by 10 percent, how will the price hike affect beer consumption among young adults?
• Using the elasticity formula, we predict that beer consumption will decrease by 13 percent:
percentage change in quantity demanded = percentage change in price × Ed = 10% × 1.30 = 13%
• The number of highway deaths among young adults is roughly proportional to their beer consumption, so the number of deaths will also decrease by 13 percent. Larger taxes would decrease beer consumption and highway deaths by larger amounts.

SUBSIDIZED MEDICAL CARE IN CÔTE D'IVOIRE AND PERU
APPLYING THE CONCEPTS #2: Does the responsiveness of consumers to changes in price vary by income?
• Many developing nations subsidize medical care, charging consumers a small fraction of the cost of providing the services. If a nation were to cut its subsidies and thus increase the price of medical care for consumers, how would the higher price affect its poor and wealthy households?
• In Côte d'Ivoire in Africa, the price elasticity of demand for hospital services is 0.47 for poor households and 0.29 for wealthy households. A 10-percent increase in the price of hospital services would cause poor households to cut back their hospital care by:
percentage change in quantity demanded = 10% × 0.47 = 4.7%
• In contrast, wealthy households would cut back by:
percentage change in quantity demanded = 10% × 0.29 = 2.9%
• In Peru, the price elasticity is 0.67 for poor households but only 0.03 for wealthy households. The poor are much more sensitive to price, so when prices increase, they experience much larger reductions in medical care.

4.3 PRICE ELASTICITY AND TOTAL REVENUE
• total revenue: The money a firm generates from selling its product; total revenue = price per unit × quantity sold.

HOW TO CUT TEEN SMOKING BY 60 PERCENT
APPLYING THE CONCEPTS #3: How can we use the price elasticity of demand to predict the effects of public policies?
• Under the 1997 federal tobacco settlement, if smoking by teenagers does not decline by 60 percent by the year 2007, cigarette makers will be fined $2 billion. The settlement increased cigarette prices by about 62 cents per pack, a percentage increase of about 25 percent. Will the price hike be large enough to meet the target reduction of 60 percent?
• The demand for cigarettes by teenagers is elastic, with an elasticity of 1.3. Therefore, a 25-percent price hike will reduce teen smoking by only 32.5 percent, far short of the target reduction:
percentage change in quantity demanded = 25% × 1.30 = 32.5%
• About half of the decrease in consumption occurs because fewer teenagers will become smokers, and the other half occurs because each teenage smoker will smoke fewer cigarettes. To meet the target reduction in teenage smoking, the price of cigarettes must increase by about 46 percent:
percentage change in quantity demanded = 46% × 1.3 ≈ 60%

4.3 PRICE ELASTICITY AND TOTAL REVENUE: Elastic Versus Inelastic Demand

A BUMPER CROP IS BAD NEWS FOR FARMERS
APPLYING THE CONCEPTS #4: If demand is inelastic, how does an increase in supply affect total expenditures?
Suppose that favorable weather generates a "bumper crop" of soybeans that is 30 percent larger than last year's harvest.
• The good news is that farmers will sell more bushels of soybeans. The bad news is that the increase in supply will decrease the equilibrium price of soybeans, so they will get less money per bushel. Which will be larger, the increase in quantity or the decrease in price?
• The demand for soybeans and many other agricultural products is inelastic. To increase the quantity demanded by 30 percent to meet the higher supply, the price must decrease by more than 30 percent. Consumers need a large price reduction to buy more of the product. If the price elasticity of demand is 0.75, the price must decrease by 40 percent to increase the quantity demanded by 30 percent. To show this, we can rearrange the elasticity formula (see the code sketch at the end of this section):
percentage change in price = percentage change in quantity demanded / Ed = 30% / 0.75 = 40%

4.4 ELASTICITY AND TOTAL REVENUE FOR A LINEAR DEMAND CURVE: Price Elasticity Along a Linear Demand Curve
► FIGURE 4.2 Elasticity and Total Revenue Along a Linear Demand Curve

4.5 OTHER ELASTICITIES OF DEMAND
• income elasticity of demand: A measure of the responsiveness of demand to changes in consumer income; equal to the percentage change in the quantity demanded divided by the percentage change in income.
• cross-price elasticity of demand: A measure of the responsiveness of demand to changes in the price of another good; equal to the percentage change in the quantity demanded of one good (X) divided by the percentage change in the price of another good (Y).

Extra Application 9: HIGH FUEL PRICES IMPACT VACATION PLANS
• Memorial Day vacation travel by automobile will be up by only 0.7 percent this year, the smallest increase in several years. A 34-percent year-to-year increase in the price of gasoline is the culprit. The AAA Travel survey also indicated that many vacationers would alter their plans to take advantage of cheaper motels, closer destinations, and/or shorter stays.
• Air, bus, and train travel all expect similarly slight increases in people traveling, or stable numbers. Airplane ticket prices are up about 10 percent, and hotels have increased rates by about 5 percent.
• The substitution effect helps explain why the demand curve slopes downward to the right. As the price of gasoline increases, people alter their behavior by buying less gasoline. Part of this behavior can be explained by consumers substituting for a portion of their gasoline purchases; vacation and entertainment travel happens to be a substitutable component of gasoline consumption.

4.6 THE PRICE ELASTICITY OF SUPPLY
• price elasticity of supply: A measure of the responsiveness of the quantity supplied to changes in price; equal to the percentage change in quantity supplied divided by the percentage change in price.
▼ FIGURE 4.3 The Slope of the Supply Curve and Supply Elasticity

What Determines the Price Elasticity of Supply?
The price elasticity of supply is determined by how rapidly production costs increase as the total output of the industry increases. If the marginal cost increases rapidly, the supply curve is relatively steep and the price elasticity is relatively low.
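The two rearrangements of the elasticity formula used in this chapter fit in a few lines of code. This is a minimal sketch; the function names are mine, not the text's, and the inputs are the worked examples above.

```python
# Two rearrangements of the elasticity formula (all values as percentages,
# elasticity as the absolute value |Ed|). Function names are illustrative.

def pct_change_in_quantity(pct_change_in_price, elasticity):
    """%ΔQ = %ΔP × Ed"""
    return pct_change_in_price * elasticity

def required_pct_change_in_price(target_pct_change_in_quantity, elasticity):
    """%ΔP = %ΔQ / Ed: the price change needed to hit a quantity target."""
    return target_pct_change_in_quantity / elasticity

# Beer tax: a 10% price rise with Ed = 1.30 cuts consumption by 13%.
print(pct_change_in_quantity(10, 1.30))        # 13.0
# Bumper crop: selling 30% more soybeans with Ed = 0.75 needs a 40% price drop.
print(required_pct_change_in_price(30, 0.75))  # 40.0
```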
The Role of Time: Short-Run Versus Long-Run Supply Elasticity
• Time is an important factor in determining the price elasticity of supply for a product. The market supply curve is positively sloped because of two responses to an increase in price:
• Short run: a higher price encourages existing firms to increase their output by purchasing more materials and hiring more workers.
• Long run: new firms enter the market and existing firms expand their production facilities to produce more output.
• The short-run response is limited because of the principle of diminishing returns.

Extreme Cases: Perfectly Inelastic Supply and Perfectly Elastic Supply
▼ FIGURE 4.4 Perfectly Inelastic Supply and Perfectly Elastic Supply

Extra Application 11: THE MARKET FOR RARE GUITARS
• The price of rare guitars is escalating faster than most corporate stocks. A sunburst Les Paul Standard that originally sold for $265 in 1960 can now command $300,000 in pristine condition. Les Pauls are not alone in this market: pre-war Martin acoustic guitars and pre-1966 Fender Stratocasters have seen similar run-ups. Much of the recent price appreciation has been driven by non-musician investors attempting to cash in on the gains. However, scarcity is also to blame; for example, only approximately 1,700 sunburst Les Pauls were ever manufactured. Many experts caution "buyer beware," since the astronomical prices have stimulated a heavy trade in fakes. The current estimated count of 2,200 sunburst Les Pauls illustrates this point: where did the other 500 come from?
• The known supply of rare guitars is virtually fixed. Therefore, increasing demand by collectors pushes the price rapidly higher. Even a very small shift in demand due to an increase in the number of collectors/investors results in a substantial change in the price. The opposite is also true: if investors/collectors suddenly became concerned about this market and moved into other investments (i.e., a decrease in the number of potential buyers), the price could fall just as rapidly.

• perfectly inelastic supply: The price elasticity of supply equals zero.
• perfectly elastic supply: The price elasticity of supply is equal to infinity.

Predicting Changes in Quantity Supplied

Extra Application 10: OPEC'S OIL STRANGLEHOLD
• The Organization of Petroleum Exporting Countries (OPEC) is considering production cutbacks to halt the falling world price of oil. However, OPEC members are also cognizant of the fact that high oil prices spur development of alternative fuel sources. Many analysts point out that this fact forces OPEC to allow prices to fall periodically so that alternative fuel projects do not become viable. However, as oil demand continues to increase, OPEC may not be able to keep lowering prices periodically. Since oil has a relatively inelastic short-run demand, a cutback in supply results in only a very small reduction in the number of units sold but a substantial increase in price. Since production and distribution costs remain constant, the price increase means a significant increase in profits for oil producers.
4.7 USING ELASTICITIES TO PREDICT CHANGES IN EQUILIBRIUM PRICE
The Price Effects of a Change in Demand
► FIGURE 4.5 An Increase in Demand Increases the Equilibrium Price
• Under what conditions will an increase in demand cause a relatively small increase in price?
• A small increase in demand.
• Highly elastic demand.
• Highly elastic supply.

The Price Effects of a Change in Supply
► FIGURE 4.6 A Decrease in Supply Increases the Equilibrium Price
• Under what conditions will a decrease in supply cause a relatively small increase in price?
• A small decrease in supply.
• Highly elastic demand.
• Highly elastic supply.

AN IMPORT BAN AND SHOE PRICES
APPLYING THE CONCEPTS #7: How do import restrictions affect prices?
• We can use the supply version of the price-change formula to predict the effects of import restrictions on equilibrium prices. Consider a nation that limits shoe imports.
• Suppose the import restrictions decrease the supply of shoes by 30 percent.
• To use the price-change formula, we need the price elasticities of supply and demand. Suppose the supply elasticity is 2.3 and, as shown in Table 20.2, the demand elasticity is 0.70.
• Plugging these numbers into the price-change formula, we predict a 10-percent increase in price:
percentage change in equilibrium price = percentage change in supply / (Ed + Es) = 30% / (0.70 + 2.3) = 10%
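The price-change formula assumed above can be checked in a couple of lines. A sketch: the function name is mine, and the formula's shape (the supply shift divided by the sum of the two elasticities) is inferred from the worked numbers rather than quoted from the text.

```python
# Sketch of the price-change formula inferred above: the percentage change in
# equilibrium price equals the percentage shift in supply (or demand) divided
# by the sum of the demand and supply elasticities.

def pct_change_in_price(pct_shift, demand_elasticity, supply_elasticity):
    return pct_shift / (demand_elasticity + supply_elasticity)

# Import ban: a 30% decrease in shoe supply, Ed = 0.70, Es = 2.3.
print(pct_change_in_price(30, 0.70, 2.3))   # 10.0 -> a 10% price increase
```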
Take any whole number between 1 and 999, add the squares of the digits to get a new number. Make some conjectures about what happens in general.
Can you use the diagram to prove the AM-GM inequality?
The nth term of a sequence is given by the formula n^3 + 11n. Find the first four terms of the sequence given by this formula and the first term of the sequence which is bigger than one million...
The picture illustrates the sum 1 + 2 + 3 + 4 = (4 × 5)/2. Prove the general formula for the sum of the first n natural numbers and the formula for the sum of the cubes of the first n natural numbers...
Three frogs hopped onto the table: a red frog on the left, a green in the middle, and a blue frog on the right. Then the frogs started jumping randomly over any adjacent frog. Is it possible for them to...
When number pyramids have a sequence on the bottom layer, some interesting patterns emerge...
Here are three 'tricks' to amaze your friends. But the really clever trick is explaining to them why these 'tricks' are maths, not magic. Like all good magicians, you should practice by trying...
Can you discover whether this is a fair game?
Which set of numbers that add to 10 has the largest product?
Choose any three by three square of dates on a calendar page...
Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why?
Pick the number of times a week that you eat chocolate. This number must be more than one but less than ten. Multiply this number by 2. Add 5 (for Sunday). Multiply by 50... Can you explain why it...
Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and...
Liam's house has a staircase with 12 steps. He can go down the steps one at a time or two at a time. In how many different ways can Liam go down the 12 steps?
You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by...
Make a set of numbers that use all the digits from 1 to 9, once and once only. Add them up. The result is divisible by 9. Add each of the digits in the new number. What is their sum? Now try some...
Find the largest integer which divides every member of the following sequence: 1^5 - 1, 2^5 - 2, 3^5 - 3, ..., n^5 - n.
How many noughts are at the end of these giant numbers?
Is the mean of the squares of two numbers greater than, or less than, the square of their mean?
Prove that if a^2 + b^2 is a multiple of 3 then both a and b are multiples of 3.
Find the smallest positive integer N such that N/2 is a perfect cube, N/3 is a perfect fifth power and N/5 is a perfect seventh power.
Can you see how this picture illustrates the formula for the sum of the first six cube numbers?
Eight children enter the autumn cross-country race at school. How many possible ways could they come in at first, second and third places?
Prove that if the integer n is divisible by 4 then it can be written as the difference of two squares.
What happens to the perimeter of triangle ABC as the two smaller circles change size and roll around inside the bigger circle?
Replace each letter with a digit to make this addition correct.
Consider the equation 1/a + 1/b + 1/c = 1 where a, b and c are natural numbers and 0 < a < b < c.
Prove that there is only one set of values which satisfies this equation.
We are given a regular icosahedron having three red vertices. Show that it has a vertex that has at least two red neighbours.
Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning?
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
There are 12 identical-looking coins, one of which is a fake. The counterfeit coin is of a different weight to the rest. What is the minimum number of weighings needed to locate the fake coin?
This article invites you to get familiar with a strategic game called "sprouts". The game is simple enough for younger children to understand, and has also provided experienced mathematicians with...
Powers of numbers behave in surprising ways. Take a look at some of these and try to explain why they are true.
Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information.
Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
In this third of five articles we prove that, whatever whole number we start with for the Happy Number sequence, we will always end up with some set of numbers being repeated over and over again.
This article extends the discussions in "Whole Number Dynamics I", continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point.
In the first of five articles concentrating on whole number dynamics, ideas of general dynamical systems are introduced and seen in concrete cases.
This is the second article on right-angled triangles whose edge lengths are whole numbers.
What can you say about the angles on opposite vertices of any cyclic quadrilateral? Working on the building blocks will give you insights that may help you to explain what is special about them.
This article looks at knight's moves on a chess board and introduces you to the idea of vectors and vector addition.
This article stems from research on the teaching of proof and offers guidance on how to move learners from focussing on experimental arguments to mathematical arguments and deductive reasoning.
Explore what happens when you draw graphs of quadratic equations with coefficients based on a geometric sequence.
Start with any whole number N, write N as a multiple of 10 plus a remainder R, and produce a new whole number N'. Repeat. What happens?
Can you make sense of these three proofs of Pythagoras' Theorem?
Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number?
L-triominoes can fit together to make larger versions of themselves. Is every size possible to make in this way?
Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas.
Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem?
An introduction to how patterns can be deceiving, and what is and is not a proof.
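The digit-squares problem and the Happy Number articles above lend themselves to a quick experiment. A small sketch (function names are mine) shows the two behaviours the articles describe: homing in on the fixed point 1, or falling into a repeating loop.

```python
# Iterate "replace N by the sum of the squares of its digits" and watch
# where the orbit goes.
def digit_square_sum(n):
    return sum(int(d) ** 2 for d in str(n))

def orbit(n, steps=20):
    seen = [n]
    for _ in range(steps):
        n = digit_square_sum(n)
        seen.append(n)
    return seen

print(orbit(23))  # reaches the fixed point 1 (a "happy number")
print(orbit(85))  # falls into the loop 89, 145, 42, 20, 4, 16, 37, 58, 89, ...
```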
By now, most of us are familiar with Moore's Law, the famous maxim that the development of computing power follows an exponential curve, doubling in price-performance (that is, speed per unit cost) every 18 months or so. When it comes to applying Moore's Law to their own business strategies, however, even visionary thinkers frequently suffer from a giant "AI blind spot." I give a lot of talks to successful, strategically minded business people who can see around corners in their own industries, yet they struggle to grasp what exponential improvement really means. A lot is riding on this exponential curve, and one technology that is particularly benefiting from it is artificial intelligence.

Capturing Exponential Curves on Paper

One reason people do not grasp how rapidly artificial intelligence is developing is so simple it's almost laughable: exponential curves don't fare well when we humans try to capture them on paper. For very practical reasons, it's virtually impossible to fully depict the steep trajectory of an exponential curve in a small space such as a chart or a slide. Visually depicting the early stages of an exponential curve is easy. However, as the steeper part of the curve kicks in and the numbers rapidly get larger, things get more challenging. To solve this problem of inadequate visual space, we use a handy math trick known as a logarithm. Using what's known as a "logarithmic scale," we have learned to squish exponential curves into submission. Unfortunately, the widespread use of logarithmic scales can also cause myopia.

The way a logarithmic scale works is that each tick on the vertical y-axis corresponds not to a constant increment (as in a typical linear scale), but to a multiple, for example a factor of 100. The classic Moore's Law chart below (Chart 1) uses a logarithmic scale to depict the exponential improvement in the price-performance of computing (measured in calculations/second/dollar) over the past 120 years, from mechanical devices in 1900 to today's powerful silicon-based GPUs.

Logarithmic charts have served as a valuable form of shorthand for people who are cognizant of the visual distortion they introduce. In fact, a logarithmic scale is a handy and compact way to depict any curve that rises in a rapid and dramatic fashion over time. However, logarithmic charts carry a huge, hidden cost: they fool the human eye. By mathematically collapsing huge numbers, logarithmic charts make exponential increases appear linear. Because they squash unruly exponential growth curves into linear shapes, logarithmic charts make it easy for people to feel comfortable, even complacent, about the speed and magnitude of future exponential gains in computing power. Our logical brain understands logarithmic charts, but our subconscious brain sees a linear curve and tunes out.

So, what's an effective way to undo some of the strategic myopia caused by logarithmic charts? Part of the solution lies in going back to the original linear scale. On Chart 2 below, I used the data to fit an exponential curve and then plotted it using a linear scale on the vertical axis. Once again, the vertical axis represents the processing speed (in gigaflops) that a single dollar can purchase, and the horizontal axis represents time. However, in Chart 2, each tick on the vertical axis corresponds to a simple linear increase of just one gigaflop (rather than an increase by a factor of 100 as in Chart 1).
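To see the contrast between the two chart styles concretely, here is a small plotting sketch. The data is a stand-in exponential with an assumed two-year doubling period and an arbitrary starting level, not the actual data behind Charts 1 and 2.

```python
# Identical growth data, rendered two ways: a linear axis (past looks flat)
# and a logarithmic axis (growth looks like a straight line).
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1900, 2021)
# Stand-in for Moore's-Law-style growth: doubling every 2 years (assumed).
gflops_per_dollar = 2.0 ** ((years - 1900) / 2.0) * 1e-25

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
ax_lin.plot(years, gflops_per_dollar)
ax_lin.set_title("Linear scale: the past looks flat")
ax_log.semilogy(years, gflops_per_dollar)
ax_log.set_title("Log scale: growth looks like a straight line")
for ax in (ax_lin, ax_log):
    ax.set_xlabel("year")
    ax.set_ylabel("gigaflops per dollar")
plt.tight_layout()
plt.show()
```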
A "FLOP" is a floating-point operation; computing speed is measured in floating-point operations per second (FLOPS), hence megaFLOPS, gigaFLOPS, teraFLOPS, and so on. Chart 2 shows the actual exponential curve that characterizes Moore's Law. Drawn this way, it's easy for our human eyes to appreciate how rapidly computing price-performance has increased over the past decade.

Yet there's something terribly wrong with Chart 2. To a naïve reader of this chart, it would appear as if, over the course of the 20th century, the cost and performance of computers did not improve at all. Clearly, this is wrong. Chart 2 shows how using a linear scale to demonstrate Moore's Law over time can also be quite blinding. It can make the past appear flat, as if no progress had taken place until only very recently. In addition, the same linear-scale chart can lead people to incorrectly conclude that their current vantage point in time represents a period of unique, "almost vertical" technological progress. This point leads me to the next major cause of chart-induced AI blindness: linear-scale charts can fool people into believing they live at the height of change.

The Myopia of Living in the Present

Let's take another look at Chart 2. When viewed from the year 2018, the previous doublings of price-performance that took place every decade throughout most of the 20th century appear flat, almost inconsequential. A person looking at Chart 2 might say to themselves, "Boy, am I lucky to be living today. I remember the year 2009, when I thought my new iPhone was fast! I had no idea how slow it was. Now I've finally reached the exciting vertical part!"

I've heard people say that we have just passed the "elbow of the hockey stick." But there is no such transition point. Any exponential curve is self-similar; that is, the shape of the curve in the future looks the same as it did in the past. Below, Chart 3 again shows the exponential curve of Moore's Law on a linear scale, but this time from the perspective of the year 2028. The curve assumes that the growth we have experienced in the past 100 years will continue for at least 10 more years. This chart shows that in 2028, one dollar will buy about 200 gigaflops of computing power.

However, Chart 3 also represents a potential analytical quagmire. Look closely at where today's computing power (the year 2018) lies on the curve depicted in Chart 3. From the vantage point of a person living and working in the future year 2028, it would appear that there was virtually no improvement in computing power even over the course of the early 21st century. It looks as if computing devices used in the year 2018 were just a tiny bit more powerful than those used, say, in 1950. An observer could also conclude that the current year, 2028, represents the culmination of Moore's Law, where progress in computing power finally takes off.

Every year, I could re-create Chart 3, changing only the timespan depicted. The shape of the curve would be identical; only the ticks on the vertical scale would change. Note how the shapes of Charts 2 and 3 look identical, except for the vertical scale. On each such chart, every past point would look flat when viewed from the future, and every future point would appear to be a sharp departure from the past. Alas, such misperception would be the path to flawed business strategy, at least when it comes to artificial intelligence.

What Does This Mean?
Exponential rates of change are difficult for the human mind to comprehend and for the eye to see. Exponential curves are unique in the sense that they are mathematically self-similar at every point. What this means is that an ever-doubling curve has no flat part, no ascending part, and none of the "elbow" and "hockey stick" bends many business people are used to talking about. If you zoom in on any portion of the curve, past or future, its shape looks identical.

As Moore's Law continues to make itself felt, it's tempting to think that at this very moment we're reaching a unique period of great change in the development of artificial intelligence (or any other technology that rides on Moore's Law). However, as long as processing power continues to follow an exponential price-performance curve, each future generation will likely look back on the past as an era of relatively little progress. In turn, the converse will also remain true: each current generation will look 10 years into its future and fail to appreciate just how much advancement in artificial intelligence is yet to come.

The challenge, then, for anyone planning for a future driven by computing's exponential growth is fighting their brain's flawed interpretations. Hard as it may sound, you need to hold all three charts in your mind at once: the visual consistency of the logarithmic chart and the dramatic but deceptive scale of the linear charts. Only then can you truly appreciate the power of exponential growth. Because the past will always look flat, and the future will always look vertical.
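The self-similarity claim is easy to verify numerically: on an ever-doubling curve, the growth factor over any fixed window is the same no matter where the window sits. A sketch, with an assumed 1.5-year doubling period (a Moore's Law ballpark, not a measured figure):

```python
# Numerical check that an exponential has no special "elbow" year: the
# growth factor over any 10-year window is identical everywhere.
doubling_period = 1.5          # years per doubling (assumed)

def performance(year, base_year=1900):
    return 2.0 ** ((year - base_year) / doubling_period)

for year in (1950, 2000, 2018, 2028):
    ratio = performance(year) / performance(year - 10)
    print(year, round(ratio))  # the same 10-year growth factor every time
```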
Marx’s Economic Manuscripts of 1861-63 Capital and Profit Source: MECW, Volume 33, p. 104-145; Translated: by Ben Fowkes; Written: January 1862; Transcription: Andy Blunden, 2002. Marx first describes the tendency of the rate of profit to fall, on the basis of the decreasing proportion of capital invested in wages, in The Grundrisse, Notebook VII, written February - March 1858. The idea is completed in Volume III of Capital, put together from Marx's notes by Engels and published in 1894. We have seen (6 g)) that real profit — i.e. the current average profit and its rate — is different for the individual capital from profit, and therefore from the rate of profit, in so far as the latter consists of the surplus value really produced by the individual capital and the rate of profit therefore = the ratio of the surplus value to the total amount of the capital advanced. But it was also shown that considering the sum total of the capitals which are employed in the various particular spheres of production, the total amount of the social capital, or, and this is the same thing, the total capital of the capitalist class, the average rate of profit is nothing other than the total surplus value related to and calculated on this total capital; that it is related to the total capital exactly in the way in which profit — and therefore the rate of profit — is related to the individual capital, in so far as profit is considered only as surplus value which has been converted formally. Here, therefore, we once again stand on firm ground, where, without entering into the competition of the many capitals, we can derive the general law directly from the general nature of capital as so far developed. This law, and it is the most important law of political economy, is that the rate of profit has a tendency to fall with the progress of capitalist production. [XVI-1000] Since the general rate of profit is nothing but the ratio of the total amount of surplus value to the total amount of capital employed by the capitalist class, we are not concerned here with the different branches into which surplus value is divided, such as industrial profit, interest, rent. Since all these different forms of surplus value are only components of the total surplus value, one part may increase because the other declines. We are concerned here, however, with a fall in the rate of the total surplus value. Even the rent of land — as Adam Smith has already correctly noted — falls with the development of capitalist production, instead of rising, not in proportion to the particular area of land of which it appears to be the product, but in proportion to the capital invested in agriculture, therefore precisely in the form in which it steps forth directly as a component of surplus value. This law is confirmed by the whole of modern agronomy. (See Dombasle, Jones, etc.) So where does this tendency for the general rate of profit to fall come from? Before this question is answered, one may point out that it has caused a great deal of anxiety to bourgeois political economy. The whole of the Ricardian and Malthusian school is a cry of woe over the day of judgement this process would inevitably bring about, since capitalist production is the production of profit, hence loses its stimulus, the soul which animates it, with the fall in this profit. Other economists have brought forward grounds of consolation, which are not less characteristic. 
But apart from theory there is also the practice, the crises from superabundance of capital or, what comes to the same, the mad adventures capital enters upon in consequence of the lowering of [the] rate of profit. Hence crises — see Fullarton — acknowledged as a necessary violent means for the cure of the plethora of capital, and the restoration of a sound rate of profit.

//Fluctuations in the rate of profit, independent of organic changes in the components of capital, or of the absolute magnitude of capital, are possible if the value of the capital advanced, whether it is engaged in the form of fixed capital, or exists as raw material, finished commodities, etc., rises or falls in consequence of an increase or reduction, independent of the already existing capital, in the labour time needed for its reproduction, since the value of every commodity — hence also of the commodities of which the capital consists — is conditioned not only by the necessary labour time contained in it itself, but by the necessary — socially necessary — labour time which is required for its reproduction, and this reproduction may occur under circumstances which hinder or facilitate it, and are different from the conditions of the original production. If under the changed circumstances twice as much labour time, or, inversely, half as much, is generally required to reproduce the same capital as was needed to produce it, then, presupposing that the value of money remains permanently unchanged, that capital would now be worth 200 thalers if it was previously worth 100, or, if it was previously worth 100, it might now only be worth 50. If this increase or decline in value were to affect uniformly all sections of capital, profit too, like the capital, would now be expressed in twice as many or in half as many thalers. The rate would remain unchanged: 5 is related to 50 as 10 to 100, or as 20 to 200.

Let us assume however that the nominal value of fixed capital and raw material alone rises, and that they form 4/5 of 100, hence 80, the variable capital forming 1/5, hence 20. In this case the surplus value, hence the profit, would continue to be expressed in [XVI-1001] the same sum of money. Thus the rate of profit would have risen or fallen. In the first case the surplus value = 10 thalers, which makes 10% on 100. But the 80 are now worth 160, hence the total capital = 180; 10 on 180 = 1/18 = 100/18 per cent = 5 5/9%, instead of the previous 10%. In the second case the 80 fall to 40, so the total capital = 60, on which 10 = 1/6 = 100/6 per cent = 16 2/3%. But these fluctuations can never be general, unless they affect the commodities which enter into the worker's consumption, hence unless they affect variable capital, hence the whole of capital. In this case, however, the rate of profit remains unchanged, even though the amount of profit has changed nominally.//

The general rate of profit can never rise or fall through a rise or fall in the total value of the capital advanced. If the value of the capital advanced, expressed in money, rises, the nominal monetary expression of the surplus value rises too. The rate remains unchanged. Ditto in the case of a fall. The general rate of profit can only fall: 1) if the absolute magnitude of surplus value falls.
The latter has, inversely, a tendency to rise in the course of capitalist production, for its growth is identical with the development of the productive power of labour, which is developed by capitalist production; 2) because the ratio of variable capital to constant capital falls.

As we have seen, the rate of profit is always smaller than the rate of surplus value which is expressed in it. But the larger the ratio of constant to variable capital, the smaller it is. Or, the same rate of surplus value is expressed in a rate of profit which is the smaller, the larger the ratio of the total amount of capital advanced to the variable part of the latter, or the greater a part the constant capital forms of the total capital. Surplus value expressed as profit is S/(C+v), and the larger C is, the smaller this magnitude, and the more it diverges from S/v, the rate of surplus value. For S/(C+v) would reach its maximum when C = 0, hence S/(C+v) = S/v. But the law of development of capitalist production (see Cherbuliez, etc.) consists precisely in the continuous decline of variable capital, i.e. the part of capital laid out in wages, in return for living labour — the variable component of capital — in relation to the constant component of capital, i.e. to the part of capital which consists in fixed capital and in the circulating capital laid out for raw material and matières instrumentales. The whole development of relative surplus value, i.e. of the productive power of labour, i.e. of capital, consists, as we have seen, in the curtailment of necessary labour time, hence also the reduction of the total amount of the capital exchanged for labour, through the increase in the production of surplus labour by means of division of labour, machinery, etc., cooperation, and the expansion in the amount of value and the mass of constant capital expended which this involves, accompanied by a reduction in the capital expended for labour.

So when the ratio of variable capital to the total amount of capital alters, the rate of profit falls, i.e. the ratio of surplus value to the total capital is the smaller, [XVI-1002] the smaller the ratio of variable capital to constant capital. If, for example, in the production of India the ratio of the capital laid out as wages to the constant capital = 5:1, and in England it is 1:5, it is clear that the rate of profit in India must appear much larger, even if the surplus value actually realised is much smaller. Let us take 500. If the variable capital = 500/5 = 100, the surplus value 40, the rate of surplus value will be 40%, the rate of profit only 10%. In contrast, if the variable part is 400 and the rate of surplus value is only 20%, this would make 80 on 400, and on 500 a rate of profit of 80:500 = 8:50 = 16:100, therefore 16%. (100:16 = 500:80, or 50:8 = 250:40, or 25:4 = 125:20; 25×20 = 500 and 4×125 = 500.) So although labour would be twice as strongly exploited in Europe as in India, the rate of profit in India would be related to the rate of profit in Europe as 16:10, i.e. as 8:5, i.e. as 1:5/8, hence as 1:0.625. And indeed this is because 4/5 of the total capital is exchanged for living labour in India, and only 1/5 in Europe. If real wealth appears slight in those countries where the rate of profit is high, it is because the productive power of labour is slight, a fact which is expressed precisely in the high rate of profit.
20% is 1/5 of the labour time, hence India could only feed 1/5 of the population not directly involved in the product; whereas 40% is 2/5, hence in England twice that proportion of the population could live without working. The tendency towards a fall in the general rate of profit is therefore = the development of the productive power of capital, i.e. the rise in the ratio in which objectified labour is exchanged for living labour.

The development of productive power has a double manifestation:

[Firstly,] in the magnitude of the productive forces already produced, in the amount of value and the physical extent of the conditions of production under which new production takes place, i.e. the absolute magnitude of the productive capital already accumulated.

Secondly, in the relative smallness of the capital laid out for wages in comparison with the total capital, i.e. the relatively small amount of living labour which is required for the reproduction and exploitation of a large capital — for mass production. This implies, at the same time, the concentration of capital in large amounts at a small number of places. The same capital is large if it employs 1,000 workers united into a single labour force, small if it is divided into 500 businesses employing two workers apiece. If the ratio of the variable part of capital to the constant part, or to the total capital, is large, as in the above example, this shows that all the means towards the development of the productivity of labour have not been employed, that, in a word, the social forces of labour have not been developed, that therefore with a large quantity of labour little is produced, [XVI-1003] whereas in the opposite case a (relatively) large amount is produced with a small amount of labour. The development of fixed capital (which of itself produces a development of the circulating capital laid out in raw material and matières instrumentales (see Sismondi)) is a particular symptom of the development of capitalist production. It implies a direct reduction, relatively speaking, of the variable part of capital, i.e. a lessening in the quantity of living labour. The two are identical. This is most striking in agriculture, where the reduction is not only relative but absolute.

//Adam Smith's idea that the general rate of profit is forced down by competition — on the presupposition that capitalists and workers alone confront each other, or that the division of surplus value among different classes is not further considered — comes down to saying that profit does not fall because wages rise, but that wages rise because profit falls; hence it is — from the point of view of the result, an increase in wages corresponding to the fall of profit — the same mode of explanation as Ricardo's completely opposite one, in which profit falls because wages become more expensive, etc., or as Carey's, in which there is an increase not only in the costs of production (exchange value) but in the use value of the wage. That profit temporarily falls as a result of competition between capitals — i.e. their competition in the demand for labour — is admitted by all political economists (see Ricardo). Adam Smith's explanation, if he did not speak of industrial profits only, would raise this to a general law very contradictory to the laws of wage[s] developed by himself.//

The development of productive power has a double manifestation: in the increase of surplus labour, i.e.
the curtailment of the necessary labour time; and in the reduction of the component of capital which is exchanged with living labour, relatively to the total amount of capital, i.e. the total value of the capital which enters into production. (See Surplus Value, Capital, etc.) Or, expressed differently: it is manifested in the greater exploitation of the living labour employed (this follows from the greater quantity of use values which it produces in a given time, hinc the curtailment of the time required for the reproduction of the wage, hinc the prolongation of the labour time appropriated by the capitalist without equivalent) and in the reduction in the relative amount of living labour time which is employed in general — i.e. in its amount relatively to the capital that sets it in motion. Both movements not only go [hand in hand] but condition each other. They are only different forms and phenomena in which the same law is expressed. But they work in opposite directions, in so far as the rate of profit comes into consideration. Profit is surplus value related to the total capital, and the rate of profit is the ratio of this surplus value, calculated according to a particular measure of the capital, e.g. as a percentage.

However, surplus value — as an overall quantity — is determined firstly by its rate, but secondly by the amount of labour employed simultaneously at this rate, or, and this is the same thing, by the magnitude of the variable part of the capital. On the one hand there is a rise in the rate of surplus value, on the other hand there is a (relative) fall in the numerical factor by which this rate is multiplied. In so far as the development of productive power lessens the necessary (paid) part of the labour employed, it raises the surplus value, because it raises its rate, or it raises it when expressed as a percentage. However, in so far as it lessens the total amount of labour employed by a given capital, it reduces the numerical factor by which the rate of surplus value is multiplied, hence it reduces its amount. Surplus value is determined both by the rate, which expresses the ratio of surplus labour to necessary labour, and by the amount of working days employed. However, with the development of the productive forces, the latter — or the variable part of the capital — is reduced in relation to the capital laid out.

If C = 500, c = 100, v = 400, and S = 60, then s/v = 60/400 = 15%, so that the rate of profit = 60/500 = 12%. [XVI-1004] Furthermore, if C = 500, c = 400, v = 100, and S = 30, then s/v = 30/100 = 30%, so that the rate of profit = 30/500 = 6%. The rate of surplus value is doubled, the rate of profit is halved. The rate of surplus value exactly expresses the rate at which labour is exploited, while the rate of profit expresses the relative amount of living labour employed by capital at a given rate of exploitation, or the proportion of the capital laid out in wages, the variable capital, to the total amount of capital advanced. If C = 500, c = 400, and v = 100, then for the rate of profit to be 12%, or profit to be 60, surplus value would have to be 60, i.e. s/v = 60/100 = 60%. For the rate of profit to remain the same, the rate of surplus value (or the rate of exploitation of labour) would have to grow in the same ratio as the ratio of the total capital to the capital laid out in labour grows, i.e. in the same way as the magnitude of the variable capital falls relatively, or the magnitude of the constant capital grows relatively.
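The two cases just computed can be restated in a few lines of code; the variable names are mine, and the figures are the text's. The rate of surplus value doubles while the rate of profit halves, because the weight of constant capital grows.

```python
# Rate of surplus value s/v versus rate of profit s/(c + v) for the two
# capitals of 500 discussed above.

def rate_of_surplus_value(s, v):
    return s / v

def rate_of_profit(s, c, v):
    return s / (c + v)

# Case 1: C = 500 with c = 100, v = 400, S = 60.
print(rate_of_surplus_value(60, 400), rate_of_profit(60, 100, 400))  # 0.15 0.12
# Case 2: C = 500 with c = 400, v = 100, S = 30.
print(rate_of_surplus_value(30, 100), rate_of_profit(30, 400, 100))  # 0.30 0.06
```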
It is already strikingly apparent from one single circumstance that this is only possible within certain limits, and that it is rather the reverse, the tendency towards a fall in profit — or a relative decline in the amount of surplus value hand in hand with the growth in the rate of surplus value — which must predominate, as is also confirmed by experience. The part of the value which capital newly reproduces and produces is = to the living labour time directly absorbed by it in its product. One part of this labour time replaces the labour time objectified in wages, the other part is the unpaid excess amount, surplus labour time. But both of them together form the whole amount of the value produced, and only a part of the labour employed forms the surplus value. If the normal day = 12 hours, 2 workers who perform simple labour can never add more than 24 hours (and workers who perform higher labour can never add more than 24 hours × the factor which expresses the ratio of their working day to the simple working day), of which a definite part replaces their wages. The surplus value they produce cannot, whatever the circumstances, be more than an aliquot part of 24 hours. If, instead of 24 workers, only 2 are employed for a given quantity of capital (in proportion to a given measure of capital), or 2 workers are necessary in the new mode of production where 24 were necessary in the old one, in proportion to a given amount of capital, then if the surplus labour in the old mode of production = 1/12 of the total working day, or = 1 hour, no increase in productive power — however much it raised the rate of surplus labour time — could have the effect that the 2 workers provided the same amount of surplus value as the 24 in the old mode of production. If one considers the development of productive power and the relatively not so pronounced fall in the rate of profit, the exploitation of labour must have increased very much, and what is remarkable is not the fall in the rate of profit but that it has not fallen to a greater degree. This can be explained partly by circumstances to be considered in dealing with competition between capitals, partly by the general circumstance that so far the immense increase of productive power in some branches has been paralysed or restricted by its much slower development in other branches, with the result that the general ratio of variable to constant capital — considered from the point of view of the total capital of society — has not fallen in the proportion which strikes us so forcibly in certain outstanding spheres of production. In general, therefore: The decline in the average rate of profit expresses an increase in the productive power of labour or of capital, and, following from that, on the one hand a heightened exploitation of the living labour employed, and [on the other hand] a relatively reduced amount of living labour employed at the heightened rate of exploitation, calculated on a particular amount of capital. It does not now follow automatically from this law that the accumulation of capital declines or that the absolute amount of profit falls (hence also the absolute, not relative, amount of surplus value, which is expressed in the profit). [XVI-1005] Let us stay with the above example. If the constant capital is only 1/5 of the total capital advanced, this expresses a low level of development of productive power, a limited scale of production, small, fragmented capitals.
A capital of 500 of this kind, with surplus value at 15% (the variable capital at 400), gives a total amount of profit of 60. If we reverse the ratio, this expresses a large scale, the development of productive power, cooperation, division of labour, and large-scale employment of fixed capital. Let us therefore assume that a capital of this kind is of 20 times greater extent: 500×20 = 10,000; thus 6% profit on 10,000 (or surplus value of 30%, if the variable capital = 2,000) = 600. A capital of 10,000 therefore accumulates more quickly with 6% than a capital of 500 with 12%. The one realises a labour time of 400, the other one of 2,000, hence an absolute amount of labour time 5 times greater, although relatively to its magnitude, or to a given amount of capital, e.g. 100, it employs four times less [labour time]. (See Ricardo's example.) Here, as in the whole of our analysis, we entirely disregard use value. With the greater productivity of capital it goes without saying that the same value employed at the more productive scale represents a much greater amount of use value than it does at the less productive scale, and therefore also provides the material for a much more rapid rate of growth of the population and consequently of labour powers. (See Jones.) This fall in the rate of profit leads to an increase in the minimum amount of capital — or a rise in the level of concentration of the means of production in the hands of the capitalists — required in general to employ labour productively, both to exploit it, and to employ no more than the labour time socially required for the manufacture of a product. And there is a simultaneous growth in accumulation, i.e. concentration, since large capital accumulates more rapidly at a small rate of profit than does small capital at a large rate of profit. Once it has reached a certain level, this rising concentration in turn brings about a new fall in the rate of profit. The mass of the lesser, fragmented capitals are therefore ready to take risks. Hence crisis. The so-called plethora of capital refers only to the plethora of capital for which the fall in the rate of profit is not counterbalanced by its size. (See Fullarton.) Profit, however, is the driving agency in capitalist production, and only those things are produced which can be produced at a profit, and they are produced to the extent to which they can be produced at a profit. Hence the anxiety of the English political economists about the reduction in the rate of profit. Ricardo already noted that the increase in the amount of profit accompanying a decline in the rate of profit is not absolute, but that there may be a decline in the amount of profit itself, despite the growth of capital. Strangely enough, he did not grasp this in general, but merely gave an example. Nevertheless, the matter is very simple. 500 at 20% gives 100 profit. 50,000 at 10% gives 5,000 profit; but 5,000 at 2% would only give 100 profit, no more than 500 gives at 20%, and at 1% it would only give 50 profit, hence only half as much as 500 at 20%. In general: As long as the rate of profit falls more slowly than capital grows, there is a rise in the amount of profit and therefore in the rate of accumulation, although relative profit declines. If the profit were to fall to the same degree as the capital grew, the amount of profit would, despite the growth in capital, remain the same as it was with a higher rate of profit on a smaller capital. This would therefore also be true of the rate of accumulation.
Finally, if the rate of profit fell in a greater proportion than the growth in capital, the amount of profit and therewith the rate of accumulation would fall along with the rate of profit, and it would stand lower than in the case of a smaller capital with a higher rate of profit at a correspondingly less developed stage of production. [XVI-1006] //We do not consider use value at all, except in so far as it determines the production costs of labour capacity or the nature of capital, as with fixed capital, because we are considering capital in general, not the real movement of capitals or competition. But it may be remarked here in passing that this production on a large scale, with a higher rate of surplus value and a reduced rate of profit, presupposes an immense production, and therefore consumption, of use values, hence always leads to periodic overproduction, which is periodically solved by expanded markets. Not because of a lack of demand, but a lack of paying demand. For the same process presupposes a proletariat on an ever-increasing scale, therefore significantly and progressively restricts any demand which goes beyond the necessary means of subsistence, while it at the same time requires a constant extension of the sphere of demand. Malthus was correct to say that the demand of the worker can never suffice for the capitalist. His profit consists precisely in the excess of the worker’s supply over his demand. Every capitalist grasps this as far as his own workers are concerned, only not for the other workers, who buy his commodities. Foreign trade, luxury production, the state’s extravagance (the growth of state expenditure, etc.) — the massive expenditure on fixed capital, etc. — hinder this process. (Hence sinecures, extravagance on the part of the state and the unproductive classes, are recommended by Malthus, Chalmers, etc., as a nostrum.) It remains curious that the same political economists who admit the periodic overproduction of capital (a periodic plethora of capital is admitted by all modern political economists) deny the periodic overproduction of commodities. As if the simplest analysis did not demonstrate that both phenomena express the same antinomy, only in a different form.// That this mere possibility disturbs Ricardo (Malthus and the Ricardians similarly) shows his deep understanding of the conditions of capitalist production. The reproach that is made against him, that in examining capitalist production he is unconcerned with “human beings”, keeping in view the development of the productive forces alone — bought at the cost of whatever sacrifices — without concerning himself with distribution and therefore consumption, is precisely what is great about him. The development of the productive forces of social labour is the historic task and justification of capital. It is exactly by doing this that it unconsciously creates the material conditions for a higher mode of production. What makes Ricardo uneasy here is that profit — the stimulus of capitalist production and the condition of accumulation, as also the driving force for accumulation — is endangered by the law of development of production itself. And the quantitative relation is everything here. There is in reality a deeper basis for this, which Ricardo only suspects. 
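Ricardo's figures above, and the three cases just enumerated, all reduce to the single relation: amount of profit = capital × rate of profit. A minimal sketch (an editorial addition, not part of the manuscript; the function name is ours):

```python
# Amount of profit = capital * rate of profit (figures from Ricardo's example).
def profit(capital, rate):
    return capital * rate

print(profit(500, 0.20))      # 100
print(profit(50_000, 0.10))   # 5000: the rate halves, capital grows 100-fold,
                              # so the amount of profit still rises
print(profit(5_000, 0.02))    # 100: the rate falls as fast as capital grows;
                              # no more than 500 gave at 20%
print(profit(5_000, 0.01))    # 50: the rate falls faster than capital grows;
                              # the amount of profit itself declines
```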
What is demonstrated here, in a purely economic manner, from the standpoint of capitalist production itself, is its barrier — its relativity, the fact that it is not an absolute, but only an historical mode of production, corresponding to the material conditions of production of a certain restricted development period. To bring this important question to a decisive conclusion, the following must first be investigated: 1) Why does it happen that with the development of fixed capital, machinery, etc., the passion for overwork, prolongation of the normal working day, in short the mania for absolute surplus labour grows, along with precisely the mode of production in which relative surplus labour is created? 2) How is it that in capitalist production profit appears — from the point of view of the individual capital, etc. — as a necessary condition of production, hence as forming part of the absolute production costs of capitalist production? If we take surplus value, its rate is greater, the smaller the variable capital in proportion to it, and less, the larger the variable capital. s/v rises or falls inversely as v rises or falls. If v = 0, this [ratio s/v] would be at its maximum, for no outlay of capital for wages would be necessary, no labour would have to be paid in order to appropriate unpaid labour. Inversely: the expression s/(c+v), or the rate of profit, would be at its maximum if c = 0, that is, if the rate of profit = the rate of [XVI-1007] surplus value, i.e. if no constant capital c at all had to be laid out in order to lay out capital v in wages and thus realise it in surplus labour. The expression s/(c+v) therefore rises and falls inversely as c rises or falls; hence it also rises or falls with the relative magnitude of v. The rate of surplus value is greater, the smaller the variable capital in proportion to the surplus value. The rate of profit is greater, the greater the variable capital in proportion to the total capital, and this proportion is greater the smaller the constant capital in proportion to the total capital, hence also in the proportion to which it forms a smaller part of the total capital than the variable capital. But the variable capital for its part is smaller in proportion to the total capital, the greater the proportion of the total capital and therefore of the constant capital to the variable capital. Assume s = 50, v = 500, c = 100. Then s' = 50/500 = 5/50 = 1/10 = 10%. And p' (the rate of profit) = 50/600 = 5/60 = 1/12 = 8 1/3%. Hence s/v is greater, the smaller v is; s/(c+v) is greater, if s is given, the greater v is and the smaller c is; but s/v increases when c increases. If now s becomes 3s and c grows 3 times, the rate of profit becomes 3s/(3c+v). The rate of profit, which was originally related to the rate of surplus value as v:(v+c), is now related to it as v:(v+3c); and v/(v+c) = 1/(1+c/v), while v/(v+3c) = 1/(1+3c/v). If s grew in the same measure as the proportion of variable capital to total capital declined — hence if the rate of surplus value grew through greater employment of constant capital in that same measure — the rate of profit would remain unchanged. Originally we had s/(c+v) = p'. Now we have 3s/(3c+v) = p'. The first question is by how much s/(3c+v) [is less than] s/(c+v): s/(c+v) − s/(3c+v) = (s(3c+v) − s(c+v))/((c+v)(3c+v)) = s(3c+v−c−v)/((c+v)(3c+v)) = 2cs/((c+v)(3c+v)). [XVI-1008] Let surplus value = 120. Variable capital = 600. In this case s', or the rate of surplus value, = 120/600 = 20%. If the constant capital = 200, then p' = 120/800 = 12/80 = 3/20 = 15%.
If now the constant capital is increased threefold, from 200 to 600, and everything else remains unchanged, then s' = 20% as before, but p' now = 120/1,200 = 12/120 = 6/60 = 3/30 = 1/10 = 10%. The rate of profit would have fallen from 15 to 10 [per cent], by 1/3; the constant capital would have tripled. The variable capital was previously 600/800 = 6/8 = 3/4 of the total capital; it is now 600/1,200, only 1/2 or 2/4; it has therefore fallen to 2/3 of its former proportion. But if the surplus value increased threefold through the tripling of the constant capital, i.e. if it grew from 120 to 120×3 = 360, then s' would now = 360/600 = 36/60 = 6/10 = 3/5 = 60%, and p' would = 360/1,200 = 36/120 = 6/20 = 3/10 = 30%. But since the variable capital is now related to the total capital as 600:1,200, whereas previously it was as 600:800, it is now 1/2 of the total capital, and was previously 6/8 or 3/4, so it has fallen. [XVI-1009] s = 120, v = 600, c = 200. s' = 120/600 = 20%, p' = 120/800 = 15%. s = 120, v = 600, c = 600. s' = 120/600 = 20%, p' = 120/1,200 = 10%. 15:10 = 3:2 = 1:2/3. Hence p' has fallen by 1/3; c has risen 3 times; the total capital has grown from 800 to 1,200, by 1/2; finally, v was originally related to c as 600:200, i.e. v = 3×200 = 3c, but now c = v. Hence v has fallen 3fold against c. Finally, v was previously related to C as 600:800 = 6:8 = 3:4, = 3/4 C. Now it is related as 600:1,200 = 6:12 = 2:4, = 1/2 or 2/4 C. Hence it has fallen against C by 1/4. For the rate of profit to remain the same at 15%, the surplus value would have to rise from 120 to 180, hence by 60 (but 60:120 = 1:2), hence by a half. Furthermore, [a rise in] s' from 120/600 or 20% to 180/600 or 30%, from 20 to 30, is again [a rise] by 50%. The surplus value had to increase in the same proportion as the total capital grew from 800 to 1,200, i.e. by 50%; that is, it had to increase from 20 to 30%. Originally v was 3/4 of the total capital, now it is 2/4. But 3/4 C × 20 is as much as 2/4 C × 30, namely 60/4, = 15%. [[...] that the sum of surplus value not only does not fall, but rises [...] to the actual rate [of surplus value] depends on the number of workers employed, that with the use of machinery, due to the action of the laws inherent in machine production, the productive application [...], the better division of labour and combination of labour due to fixed capital, grows.] //It is self-evident that the variable capital may constantly grow in the absolute sense, i.e. the absolute number of workers may grow, although it is constantly falling in proportion to total capital and fixed capital. Hence the inane dispute over whether machinery reduces the number of workers. It almost always reduces the number when introduced, not in the sphere in which it has itself been introduced, but through the suppression of workers who carry on the same industry at the previous stage of production. For example the machine spinners drive out the hand spinners, the machine weavers the hand weavers, etc. But in the branch of industry which employs the machinery the number of workers may grow constantly in the absolute sense // although here men are often driven out by women and young persons // although it declines relatively. // [XVI-995] Let us first assemble the facts. C = v+c. s = surplus value. s' = rate of surplus value. p' = rate of profit. s' = s/v; p' = s/C, or s/(v+c). C = 800, c = 200, v = 600, s = 120. In this case, c = 1/4 C (800/4 = 200) and v = 3/4 C (= 3×800/4 = 600). s' = 120/600 = 20%.
If c increases from 200 to 600, by a factor of three, C will rise from 800 to 1,200, i.e. by 50%. Since c = 1/4 C, its threefold increase causes it to grow from 1/4 to 3/4 (by 2/4). The total capital is now 3/4 C + 3/4 C = 1 2/4 C. It has therefore risen by [...]. If it was originally = 3/4 C (= 600), then tripling brings it from 3/4 to 9/4, from 600 to 1,800, and it brings the total capital to 2,000 ([...] over and above the original capital 6/4 C = 1,200; 1,200 + 800 = 2,000). How far therefore the total capital [...] becomes [...] growth in c, depends on the original proportion of c to C, which presents itself entirely as a particular proportion between c and v [...] of C. So the greater the proportion of c:v, or of c:C (c+v), the more does the total amount C grow through [...], the more does the rate of profit fall, and the greater is the growth in the rate of surplus value required for the rate of profit to remain the same. [...] the growth of the total capital if the rate of surplus value is given. In the case of an increase of C from 800 to 1,200, of c from 200 to 600, the constant capital is tripled and the total capital grows by [...] by 50%. In this case the rate of surplus value or s' continues to be 20% and s = 120. But p' = 120/1,200 = 10%. Surplus value and rate of surplus value [remain the same; the rate of profit has] fallen from 15 to 10, i.e. by 1/3 or 33 1/3%. Why is there this difference, that the rate of profit falls by 33 1/3% [while the capital] grows by 50%? Because the relation of the rate of profit expresses itself as the inverse of the relation of the two capitals we have compared. [...] or 1,200. This growth is from 800:1,200 = 2:3, hence from 2:(2+1), or by 50%. The fall in the rate of profit expresses itself inversely, as a fall [...] from 120/800 to 120/1,200, or 120/800:120/1,200 = 3:2; hence as a fall of 1/3 or 33 1/3%. The fall in the rate of profit therefore depends directly on the growth in the total capital, if the variable capital remains the same; its fall expresses itself in inverse proportion to the growth of the capital. If this grows from 2 to 3, the rate of profit falls from 3 to 2. Furthermore, if the variable capital remains the same, the growth of the total capital can only derive from the growth of the constant capital. However, the proportion in which a particular increase in constant capital causes the total capital to increase depends on the original ratio between c and C. This inverse relation explains in part why the rate of profit does not fall in the same proportion as the capital increases, even if the rate of surplus [value] remains the same. If 2 increases to 4, that is a growth of 100%. If 4 falls to 2, that is a fall of 50%. b) If in the second case indicated above the rate of profit is to remain the same, the profit, hence the surplus value, will have to rise from 120 to 180, i.e. by 60 or 1/2 of 120, rise by half its original magnitude. The surplus value would therefore have directly to grow in the same proportion as the total capital, by 50%, therefore rising in a greater proportion (50%) than that in which the rate of profit falls (33 1/3%) when the surplus value remains the same. If c had risen to 1,200 instead of 600, the total capital would have risen to 1,800, for C would have risen by 1,000, hence by 125%. [For the rate of profit to] remain the same, the total amount of surplus value = the total profit would have had to rise to 270. But 270:120 must [imply] a growth of 150 [...], or 125% on top of 120. 120 on 120 is 100%, and 30 on 120 is 1/4 or 25% (4×30 = 120) [...] %.)
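Before the manuscript turns to c), the computations under a) and b) can be restated in a short sketch (an editorial addition, not part of the manuscript; the names are ours):

```python
# a) v constant at 600, s constant at 120; c rises from 200 to 600.
s, v = 120, 600
for c in (200, 600):
    C = c + v
    print(C, s / v, s / C)   # 800: s' = 0.20, p' = 0.15
                             # 1200: s' = 0.20, p' = 0.10
# The capital grows as 2:3 (by 50%); the rate of profit falls as 3:2 (by 1/3).

# b) For p' to remain 15% on the enlarged capital, s must grow with C:
for C in (1_200, 1_800):
    s_needed = 0.15 * C
    print(C, s_needed, s_needed / v)   # 1200 -> s = 180, s' = 30%
                                       # 1800 -> s = 270, s' = 45%
```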
c) How in this case (b) would s' or the surplus value have risen? It was originally 120/600 = 20%, or 1/5 of the variable capital. If the capital grows to 1,200, or c is tripled, [it becomes] 180/600 or 30% or [...]. In the third case, if the capital grows to 1,800, [the surplus value is] 270/600 = 9/20 of the variable capital, = 45%. In [this case the rate of] surplus value has risen from 20 to 30%, i.e. by 50%, to the same degree as the total capital has grown in this case; and the absolute surplus value or [...] has risen in this case from 20 to 45, i.e. by 25; but 25:20 = 1 1/4 (20 + 1/4 of 20, or 5), hence 125%. (This [depends] only on the growth of the increment, not the relation of the numbers to each other as such.) The rate of surplus value would therefore have to [grow] directly [as the] total capital grew, or in the same proportion as the absolute surplus value would have to grow, for the rate of profit to remain unaltered with a growing [...].

           Variable capital   Total capital          Constant capital
Case I:          600           800 (v = 3/4 C)         200 (= 1/4 C)
Case II:         600         1,200 (v = 2/4 C)         600 (= 2/4 C)
Case III:        600         1,800 (v = 1/3 C)       1,200 (= 2/3 C)
[Case IV:]       600         3,600 (v = 1/6 C)       3,000 (= 5/6 C)

[In Case IV] surplus value or profit had to increase to 540; the rate of surplus value = 540/600, 9/10 or 90%. 90% against 20% [is a rise] of 70. But 70 to 20 would be 350%. The increase of capital would be 3,600-800 = 2,800, similarly [350%]. In this case the rate of surplus labour = 9/10 of the necessary labour time; hence, given 10 hours of [necessary] labour, 9 hours [of surplus labour]. [...] [XVI-996] [...], although entirely corresponding to the growth of the total capital with variable capital remaining the same, now express the rate of rise and fall inversely in the same value expression as the capital [...]. If the capital rises from 2 to 4, the rate of profit falls from 4 to 2. The other rises by 100%, [...] [...] and the rate of surplus value, which is an identical relation if variable capital remains the same, does not grow as capital grows or variable capital [...] total capital. There is absolutely no rational reason why the rise of productive power should observe exactly the same numerical ratio. It [...] of relative surplus value grows and its growth is expressed in the ratio of the reduction in the variable capital [...], but not in the same ratio as this proportion declines. Productive power grows, hence surplus labour. Firstly, there lies here [...] the matter. One man may produce as much use value as 90. Never more than an average of 12 hours a day in value is [...], as this [...] surplus value never more than 12 hours − x, where x expresses the labour time necessary for his own production. The surplus value [is determined by] the labour time which he himself works, not by the working days he replaces. If 90 men worked only [half] an hour of surplus time a day, this would be [45] hours. If the one man needed only one hour of necessary labour time, he would never [produce] more than 11 hours of surplus value. The process is double. It increases the surplus labour time of the working day, but it also reduces the numerical coefficients of those working days, [...] capital. Secondly: The development of productive power is not uniform; certain branches of industry may themselves be more unproductive, but this is determined by the general productivity of capital. [...] firstly at a stage of production which remains the same, without great revolutions in productive power, in proportion to its already existing [...] only gives rise to a total capital of 2, whereas 1,000 at 10% gives 1,100. c.
1,100 prod[... Ex]ample of 800, v = 600, c = 200, and surplus value = 160, or rate of profit equal to 20%: a capital of 100,000 would give [...] instead of 3/4 only 1/6 variable (3/4 = 18/24, and 1/6 = 4/24), hence employs 14/24 or 7/12 less variable capital, relatively speaking, at [...] 50% it continues to be 5,000. His variable capital, and the living labour employed by it, would still be 16,666 2/3 in total amount, hence [...] it would still be nearly 28 times greater than the capital employed in the first case. But the rate of profit is determined, the rate of surplus value being given, by the ratio of the variable capital to the total capital. At simple interest £100,000 would grow into 200,000 in 20 years, whereas 800 at 20% would only produce an accumulation of 3,200 in 20 years (160×20). In the second 20 years 200,000 at 5% would grow to 400,000. The other capital at 20%, in contrast, would only grow to 12,800. [a] As a rule // see under surplus value for the exception: intensification of labour and therefore in fact increase of labour by machinery // machinery only creates relative surplus value through the curtailment of necessary labour time and therefore the prolongation of surplus labour time. This result is brought about by the cheapening of the commodities which enter directly or indirectly into the worker's consumption. Surplus value is formed by two factors. Firstly, the daily surplus labour of the individual worker. This determines the rate of surplus value, hence also the proportion in which variable capital is increased through the exchange with living labour. Secondly, the number of workers simultaneously exploited by capital, or the number of simultaneous working days. If the rate of surplus value is given, the magnitude of the surplus value — the surplus value itself as an independent magnitude — depends on the number of workers employed. If this [number of] simultaneous working days is given, the magnitude of the surplus value depends on its rate. [Machinery] now evidently has a tendency to affect the two factors of surplus value in opposite directions. It increases the rate [of surplus value, but] reduces the number of workers // relatively anyway; with respect to a definite measure of capital, e.g. per cent // whose labour is exploited at an increased rate. [Assume that 12 workers were employed and] each one provided 1 hour of surplus labour a day. By the employment of machinery 6 workers should each provide 2 hours of surplus labour a day [...]. In this case 6 workers provide 12 hours of surplus labour, just as previously 12 did. The time during which the 12 workers [work] every day, assuming [a norm]al working day of 12 hours, [can] be regarded as a total working day of 144 hours, of which [132 hours are necessary labour] time, 12 surplus labour time. In the second case the total working day consists of 72 hours, of which 60 are necessary labour time, [12 surplus labour time]. Since a total working day of 72 hours now contains as much surplus labour as the day of 144 hours, in the latter case [6 workers] appear [to be use]less, superfluous for the production of 12 hours of surplus value. They are therefore suppressed by the employment of machinery. [Here the same process takes place] — which lies at the basis of all growth in relative surplus value — prolongation of surplus labour time through [curtailment of necessary] labour time; however, a process which was only employed previously in regard to the working day of the individual worker is now employed [in regard to the total working day]
composed of the sum total of the working days of the workers simultaneously employed. The curtailment now takes [...]. In the first case the sum total of hours of labour remains the same. It is merely their division between necessary and surplus labour, between [...], which is altered. But now there is a change not only in the division of labour time but also in the sum total of labour time employed. [... A] total working day of 144 hours, e.g., is no longer necessary, since the employment of machinery, to [produce] 12 hours of surplus labour. Superfluous, useless labour is removed. From the capitalist standpoint all labour is useless, i.e. unproductive, which is not necessary [...], which would therefore be required for the mere reproduction of the worker himself. In the above example 72 [hours are suppressed], i.e. 6 days of labour. I.e. 6 of the 12 workers are dismissed. In the first case the magnitude remains [...] ([...] hours contained in it) the same. The division alone has changed. In the second case the magnitude changes — the total amount [...] the division of the same. In the first case, therefore, the value remains the same, while the surplus value increases. In the second case [...] at the same time the labour time objectified in the product, while the surplus [value] increases. [...] of simple cooperation and division of labour [takes] place. This is as with [...] Relatively to the product [...] the number of workers is reduced [...] workers [...] capital C [...] constant [...], [XVI-997] with machinery, an absolute reduction (with regard to a particular capital) takes place. In certain branches of industry, agriculture [...], the reduction is in fact always in advance, without being checked as in other branches of industry by the circumstance that at the new rate [...] old number of labourers may be successively absorbed, but even an absolutely greater although relatively much smaller [...] The way in which the rate of profit is altered even in the case considered above, where the rate of surplus value grows in the same (or [a greater proportion]) than the fall in the number of workers, hence the fall in one factor finds compensation in the growth of the other through more [...] — hence the magnitude of the surplus value remains unchanged or even grows — depends on the proportion in which [...] is [affected by] a change in the components of the total capital, or on the proportion in which this change proceeds. [...] The surplus value the capital makes can only derive from the number of workers it exploits, or from the number of workers who [...] society — alias the class of capitalists as a whole — is affected by the setting free of the workers he has dismissed, [...] It is now an entirely self-evident general law that with the progressive increase in the employment of machinery the magnitude [of surplus value cannot] remain [the same], but must fall; i.e. that the reduction in the number of the [workers] (in relation to a particular measure of capital) [...] reduction in the number cannot be continuously counterbalanced by a corresponding increase in the rate of surplus value [at which] the working day of the individual worker is exploited. Assume that 50 workers provide only 2 hours of surplus [labour]; in that case the surplus value created by them = 100. Assume further [...] if 10 men were replaced by 1, 5 would replace the 50. [Their total] labour time = 5×12 = 60 hours. The same for the total value of their product. The surplus [value] created by them [is] < than 60, since it is only equal to 60 − the necessary labour time.
Hence it is < than 100 by much more. There therefore takes place [a case in which] the reduction in the absolute amount of labour which is employed, brought about through the development of productive power, [is] so large [that it cannot be compensated] by an increase of equal size in the rate of surplus value — where surplus value therefore falls, despite the growth in the rate of surplus value. [...] A fall in the amount of surplus value — or the total amount of surplus labour employed — must necessarily come about with the development of machinery [...] it is [shown] here that capitalist production enters into contradiction with the development of the productive forces and is by no means their absolute [...] and final form. //If the 50 workers could all be employed at the new rate, or even only 25 perhaps, surplus value would grow, and not only its rate, as compared with the earlier case. Hence the importance of the scale on which machinery is employed, and its tendency to employ as many workers as possible at the same time, combined with the tendency to pay for as few necessary working days as possible.// (50) (150) b) Let us assume a capital of 600. Let 400 of this be laid out in labour, 200 in constant capital, instruments and raw material. Let the 400 represent 10 workers. If a machine were to be employed, which together with the raw material = 520, and if the capital laid out in labour were only to be 80 now, 10 workers would be replaced by 2, or 5 by 1. The total amount of capital laid out would remain the same, hence production costs would remain the same. The 2 workers would not produce more surplus labour time for each 12 hours than the 10 produced, for wages would have remained the same. Nevertheless, the quantities of commodities produced under the changed conditions of production might on certain presuppositions become cheaper, although it is presupposed that this quantity has not increased, or that no more commodities are produced with the same capital under the new process of production than were previously produced under the old one. Since the same quantity of raw material has been worked on as before, 150, the machinery has now risen from 50 to 370. // Namely 370 machinery, 150 raw material, 80 labour. 370 + 150 + 80 = 600. // Assume now that the machinery employed has a turnover time //reproduction time// of 10 years. Of the value employed, 37 (370/10) would enter into the annual output of commodities for the replacement, wear and tear, of the machinery. The sum total of the production costs of the commodities //disregarding profit and surplus value here, as the rate remains the same// would now be = 37 + 150 + 80 = 267. The production cost of the commodity under the old process = 600, whereby we assume that the instruments which enter into the process (estimated at 50) must be renewed every year. The price of the commodities would have been cheapened in the ratio 267:600. To the extent that the commodity enters into the worker's consumption, its cheapening would bring about a reduction in the labour necessary for his reproduction and thereby an increase in the length of surplus labour time. //But initially, as in any employment of machines, capitalist II would admittedly sell cheaper than capitalist I, but not in the same proportion as his production costs had fallen. This is in fact an anticipation of the cheapening of the production costs of labour which occurs through machinery [...]
[If] his workers receive the same wages as previously, they can admittedly buy more commodities (more of the commodities they themselves have produced) but not in the proportion in which they have become more productive. It would be the same thing if the capitalist paid them in his own commodity, as if he were to give them a quantity which was admittedly larger, but smaller in the proportion to which this quantity expressed exchange value.// Even if we disregard the relation itself, and consider the empirical form, in which the capitalist calculates interest, say 5%, on his total capital according to the part of it which has not been consumed. Then 5% on 300 (the part of the capital not consumed in the first year) = 15, or 5% profit e.g., similarly 15, therefore 30. Thus the price of the commodities would come to 280 + 30 = 310, still almost half as cheap as in the first case. In fact only 370 thalers were laid out for fixed capital, 150 capital for raw material, and 80 for labour. However, if in order to replace 5 workers by one the capital [...] the machinery had to increase from 50 to perhaps 2,000 instead of 370, the total capital therefore rising to 2,300, the wear and tear contained in the commodity annually would = 2,000/100 = 20. Production costs would = 250, with interest and profit of 150. 250 + 150 + 80 = 480. 10% on [...] So in this case by inequality [...] 2,000 again = [...] machinery made dearer. [XVI-998] [...] in two ways: [...] turnover time peculiar to fixed capital — mode of circulation — a much smaller aliquot part of it enters into the value [...] product — than is really required for production. Only its wear and tear, the part of it that is worn out in the course of a year, enters into [the value of the pro]duct, because only this part really circulates. Hence if the capital remains the same and there is only a change in the proportion of the capital [...] component of the capital laid out [in] labour, there is a cheapening of the product, the ultimate result of which is a cheapening [...] in the production costs of labour, hence an increase in the rate of surplus value, i.e. of surplus labour time. [If] capital [remains] the same, and there is also no increase in surplus time (or no original reduction in wages) [...] measure, as the turnover time (reproduction time) of the fixed capital declines in velocity. [...] the aliquot part of the old capital, which is converted into fixed capital, but the capital had rather to [...] so that the total capital might grow, the proportion of this growth, required for the number of workers [...] occur, in which the commodity produced with the machine became dearer than that produced with hand labour [...] [...] posited on the assumption that the amount of commodities produced by the smaller number of workers is not larger, [...] [than the] number produced without machinery, or on the assumption that [...] capital with machinery does not [...] than previously without it. [...] [...] workers employed produced more than the 10 without it, they thus produce perhaps as much as 20 [...] always a definite number, but perhaps a greater number than they force out. In this case 1 replaced [...] could perhaps only be employed if both were employed. In any case, the part of capital laid out in [...] would have to be doubled. I.e. the magnitude of the capital could not [remain] unaltered. [...] 
but if the slow turnover time of the capital cheapens the product even where the old capital increases again, and hence no greater amount of commodities than before is produced, then this is even more so in the other case. This belongs to the section on production costs, just as the previous comments on surplus value must be treated under the heading "Surplus Value". //The total amount of the capital advanced enters into the labour process, but only the part of the capital consumed during a particular period of the labour process enters into the valorisation process or into the value of the product. (See Malthus.) Hence the smaller value or the greater cheapness of the commodities which are e.g. produced with the same capital of 500, if 2/5 of this are fixed capital and 1/5 variable capital, than if the proportions are inverted. (Even if profit and interest are calculated on the whole of the capital, only an aliquot part of it enters into the value of the commodity, not the capital itself, as in the case in which the whole of the capital or the greatest part of it is laid out in living labour.) But the profit is calculated on the whole of the capital, including the unconsumed part of it. Although the unconsumed part of the capital does not enter into the value of the product of the individual capital considered for itself, it does enter into the average production costs of capitalist production, in the form of profit (interest), because it constitutes an element of the average profit, and an item in the calculation by means of which the capitalists divide among themselves the total surplus value of the capital.// //The rate of profit depends upon, or is nothing other than, the ratio of the surplus value (considered as an absolute magnitude) to the magnitude of the capital advanced. But the surplus value itself — i.e. its absolute magnitude — may fall even though the rate of surplus value rises, and rises considerably. The amount of surplus value or its absolute magnitude must indeed fall, despite any rise whatever in the rate of surplus value, once the [amount] of surplus value of the labour which is displaced by machinery is greater than the total amount of value, or labour, which steps into its place. Or the surplus time of the displaced worker[s] is greater than the total labour time of the workers who replace them. Thus if 50 are replaced by 5, and the surplus labour time of the 50 was 2 hours (with a normal working day of 12 hours), their surplus labour time or the surplus value created by them = 100 hours. The total labour time or the value created [by the 5] (hence the necessary labour time + surplus) = 60 hours. Assume that these 5 workers provide twice as much surplus time, or that surplus value = 4 hours every day for each of them. So that for 5 there are 20 hours. The rate of surplus value has grown by 100%; the total amount of surplus value or the surplus value itself is only 4×5 = 20 hours. The surplus value is only 1/5 of the 100 created by the 50, smaller by 80%. If now 15 workers were employed at the new rate the amount of surplus value would rise to 60, if 20 to 80, if 25 to 100. Half as many workers would have to be employed at the new rate in order to produce as much surplus value as at the old rate. But if 50 were employed, they would produce twice as much, namely 200. Not only the rate of surplus value, but also the surplus value itself would have doubled.// //Assume that the 5 only produced surplus value at the same rate as the 50, hence only 10 hours.
Then 50 workers would have to be employed just as before in order to produce the same surplus value, although they would produce 10 times as many commodities in the same time. This in the branches of industry where the product does not enter into the consumption of the workers themselves. Here the profit derives purely from the fact that the necessary labour time, over a certain average period, stands higher than the labour time needed by the capitalists who have introduced the new machinery; they therefore sell the commodity above its value. This is, however, different from sheer fraud. They sell it above the value it costs them, and below the value it costs society before the general introduction of the machinery. They sell the labour of their [...] higher labour, they buy it as yet at [...] With the [...] at the new rate. But there is also an increase in c[...] more significant [...] [XVI-1009] //In the latter case he sells the individual commodity cheaper than it can be produced given the still generally prevailing production costs; he sells it below its average value, but not cheaper in the same proportion as he himself produces it below its average value. He sells the total amount of the commodities produced in an hour, in a day — //and with the new means of production he provides a greater total amount in the same time// — above their value, above the hour or the day of labour time contained in them. If he produces 20 yards with the same production costs as the others incur in producing 10, and if he sells them 1/5 below the average price, he is selling them 3/5 above their value. If the 10 yards cost 10x and he sells the 20 at 20 × 4x/5 = 80x/5 = 16x, he is selling them at 6x over their value of 10x. 1/5 of 10 is 2, and 3/5 of 10 is 6; the 20 cost him 10, or 2 cost him 1. What now is the relation to his workers? If they continue to receive the same wages as before, they also receive commodities for their wages (i.e. in so far as the more cheaply produced commodity enters into their [XVI-1010] consumption). And let this take place for all the workers, each of whom would be able to buy more of this specific commodity with the aliquot part of their wage which is expended for it. The capitalist would make a surplus profit of 3/5 or 60%. He sells them the commodity 1/5 cheaper, but he sells the labour contained in it 3/5 dearer than the average labour, hence at a value standing 3/5 above the average labour. 3/5 of 12 hours of labour = (12 × 3)/5 = 36/5 = 7 1/5. This surplus labour, which they have provided for him through the higher potentiation of their labour, he pockets. Let us assume that necessary labour time = 10. Thus under the old conditions they would obtain 10/12 of the product. In the old situation 1 hour of labour produces 1/12 of the product of a day, hence in 10 hours 10/12 of it, = 8 thalers, for example. In the new situation 16/12 is produced in one hour of labour, = 4/3, or 1 1/3. In 3 hours 4 thalers, in 6 hours 8 thalers. Thus they work 6 hours of surplus labour. Previously it was only 2.// //Adam Smith correctly adduces in favour of an average profit — i.e. a profit purely determined by the magnitude of the capital — the example of the use of silver instead of iron, or gold instead of silver, of a more costly raw material in general, under otherwise identical conditions of production. Here the part of the capital advanced in the form of raw material may grow hundredfold, and more, ditto therefore the profit, with the same rate of average profit.
Although not the slightest change takes place in the organic relations between the different components of the capital.// //The Yankee economist Wayland is very naïve. Because relative surplus value is only produced in branches of industry directly or indirectly involved in the production of articles destined for the workers' consumption, hence it is there in particular that capital introduces cooperation, division of labour and machinery, and because this occurs to a much lesser extent in luxury production, he concludes that the capitalists work to the advantage of the poor, not the rich, and capital there develops its productivity in the interest of the former, not the latter.// Average surplus value — disregarding here absolute surplus value, and considering only relative surplus value, which arises from the curtailment of necessary labour time through the development of the productive powers of labour — is the total amount of surplus value in all specific branches of production, measured against the total capital laid out for living labour. Since the development of productive power is very uneven in the different branches of industry (which directly or indirectly produce the means of subsistence entering into the worker's consumption), uneven not only in degree but often proceeding in opposed directions, as the productivity of labour is just as much [XVI-1011] bound up with natural conditions which may lead to a decline in productivity while the productivity of labour grows // the whole of the investigation into the extent to which natural conditions influence the productivity of labour independently of the development of social productivity, and often in opposition to it, belongs in the analysis of rent// — it results from this that this average surplus value must stand very much below the level to be expected from the development of productive power in the individual branches of industry (the most prominent ones). This is in turn one of the main reasons why the rate of surplus value, although it grows, does not grow in the same proportion as the variable capital declines in its proportion to the total capital. This would only be the case (assuming that the proportion is correct in general; it is correct for the rate of surplus value, as has been shown previously, but not for surplus value) if those branches of industry in which the variable capital declines the most against fixed, etc., were to make their products enter into the consumption of the worker in the same proportion. But take here, for example, the proportion between industrial and agricultural products, where the relation is precisely the opposite. Let us now consider a particular branch of industry. If an increase of productive power occurs in it, the increase which occurs in this particular branch absolutely does not imply a direct increase in the branch of industry which provides it with its raw material (with the exception of agriculture, since its product itself provides its raw material, in seeds, and this is again a peculiarity of agriculture). The raw material branch itself at first remains completely unaffected by the increase, and may also remain unaffected subsequently. //Nevertheless, a cheaper raw material does not step in to replace it, unless the same raw material becomes cheaper, as cotton does not replace sheep's wool.// But the productivity is demonstrated by the fact that a greater quantity of raw material is needed to absorb the same quantity of labour.
Thus this part of constant capital at first grows unconditionally with the greater productivity of labour. If 5 produce as much as 50, or more, [each] will work up 10 times more raw material. The raw material must initially increase in the same proportion as the productivity of labour. Or, if we assume that 5 produce as much as 50, and 45 are dismissed, the 5 now need 10× as much capital as did the 5 previously, or as much as the 50. This part of the capital has grown 10 times, at least, measured against the capital laid out in labour. //With greater exploitation this can be restricted somewhat, if on the one hand there is a relative reduction in waste through the improved quality of the labour, and on the other hand because the waste is absolutely more massive, more concentrated, and can serve better as raw material once again for new, different production, hence in fact the same raw material stretches further, as to its value. This is an item, but an insignificant one.// However, this is not to say by any means that fixed capital, buildings, machinery (lighting, etc.) (apart from fixed capital, the matières instrumentales in general) increase in the same proportion, so that 10 times as much would now be required by the 5 as they required before. On the contrary. Although machinery of greater bulk becomes dearer absolutely, it becomes cheaper relatively. This is particularly true for the motive force, steam engines, etc., the production costs of which fall (relatively) with [the increase in] their horse power or other power. This part — hence the total constant capital — therefore by no means grows in proportion with the growth in productive power, although it does grow absolutely, to an insignificant degree. The total capital therefore does not grow [XVI-1012] proportionally in relation to the growth of productive power. If out of the 500 there were originally perhaps 300 for workers, 150 for raw material and 50 for instruments, it follows that a doubling of productive power through the application of machinery would require the employment of at least 300 for raw material, and if 30 workers produced this product of twice the size, 30 for labour; but it does not follow that the cost of machinery, etc., for these 30 workers would rise from 50 to 500, a tenfold increase. The cost of machinery would perhaps only rise to double the amount — to 100; so that the total capital would have fallen from 500 to 450. The ratio between the variable capital and the total capital would now be 30:450. 30/450 = 3/45 = 1/15. 1:15. Previously the ratio was 300:500, 300/500 = 3:5. 1/15 = 3/45; and 3/5 = 27/45. According to this, however, the total capital required to produce a certain surplus value would have fallen. Assume in the first case that the surplus value = 2 hours out of 12 = 2/12, in the second case = 4/12 or 1/3. In the first case 1/6 of 300 (if a worker = 1 thaler) = 50. And this is 10% of 500. In the second case 1/3 of 30 = 10. 450 are required for the production of these 10. If we assume that 300 workers are employed at this new rate, they would produce 100. The total capital needed to produce the 100 would rise to 450×10 = 4,500. In the previous ratio it was 1,000 to produce 100. But assume that fixed capital falls still more, not perhaps relatively in proportion to the growth of the productive forces. If the 30 workers produce as much as the 300 did previously, they will need 150 for raw material, just as before; 30 for labour (as against previously 300); but perhaps only 30 for fixed capital.
The total capital is now 210, of which variable capital is 3/21 = 1/7, [XVI-1013] previously = 3/5 (300 out of 500). If the surplus value were now to increase 5fold, the 30 would give a surplus value of 50, where the 300 gave one of 10. Thus on 300, 30, would be on 30 — 15. The total capital is 500 in the first case, 210 in the second case. 410 would now give 30, hence more than 500 previously. The growth of productive power allows more commodities to be produced in the same labour time. Therefore, it does not raise the exchange value of the commodities produced in this way, but only their quantity; it rather lessens the exchange value of the individual commodities, while the value of the total amount of commodities produced in a given time remains the same. To say that there is an increase in productivity is the same as saying that the same raw material absorbs less labour in the course of its conversion into the product, or that the same labour time requires more raw material for its absorption. For example, a pound of yarn requires exactly the same amount of cotton, whether a large or a small amount of labour is required for the conversion of the cotton into yarn. If the productivity of the spinner rises, the quantity of cotton contained in a pound of yarn absorbs less labour. The pound of yarn therefore falls in value, gets cheaper. If 20 times as many pounds of cotton as before are spun in an hour, e.g. 20 pounds instead of 1 pound, each pound of yarn falls to 1/20 in the value component the labour of spinning adds to it, in the differential value between a pound of cotton and a pound of yarn (leaving aside the value of the fixed capital present in the spun yarn). Nevertheless, the value of the product of the same time is now greater than before, not because more new value has been created, but only because more cotton has been spun, and the value of this has on our assumption remained the same. The newly created value would be the same amount for the 20 pounds as previously for the one pound alone. For 1 pound it would in the new mode of production be only 1/20 as great. Presupposing therefore that the commodities are sold at their value, the increase of productive power (with the exceptions mentioned earlier) only creates surplus value in so far as the cheapening of the commodities cheapens the production costs of labour capacity, hence shortens the necessary labour time, hence lengthens surplus labour time. The product of every particular sphere of production can therefore only create surplus value in so far as, and in the proportion in which, this specific product enters into the average consumption of the workers. But every such product — since a developed division of labour within society is a fundamental prerequisite for the development of commodities in general and even more for capitalist production — only forms an aliquot part of the worker's total consumption. The increase of productive power in every particular sphere therefore creates a surplus value by no means in proportion to the increase of productive power, but only in the much smaller proportion in which the product of this particular sphere forms an aliquot part of the worker's total consumption. If a product formed 1/10 of the worker's total consumption, a doubling of productive power would allow the production of 2/10 in the same time as 1/10 was produced previously. 1/10 of the wage would fall to 1/20, or by 50%, while the productive power would have risen by 100%. 50% on 1/10 x = 5% on 1x. E.g.
5% on 100 comes to 5; 50% on 100/10, or 10, comes likewise to 5 — the same total amount. The growth of productive power by 100% would in this case have cheapened wages by 5%. [XVI-1014] It is therefore clear why the striking growth of productive power in individual branches of industry appears to be entirely out of proportion with the fall of wages or the growth of relative surplus value. Hence capital too — to the extent that this depends on surplus value, a point we shall soon investigate more closely — is far from increasing in the same proportion as the growth in the productive power of labour. Only if productive power were to increase evenly in all branches of industry which directly or indirectly provide products for the worker's consumption could the proportional growth of surplus value correspond to the proportional increase of productive power. But this is by no means the case. Productive power increases in very different proportions in these different branches. Contrary movements often take place in these different spheres (this is due partly to the anarchy of competition and the specific nature of bourgeois production, partly to the fact that the productive power of labour is also tied to natural conditions, which often become less productive in the same proportion as productivity rises, in so far as it depends on social conditions), so that the productivity of labour rises in one sphere while it falls in another. //Think for example of the simple influence of the seasons, on which the greater part of all the raw products of industry depends; the exhaustion of forests, coal seams, mines and the like.// The growth of average total productivity is therefore always and unconditionally much less than this growth appears in a few particular spheres, and indeed in one of the main branches of industry, the products of which enter into the worker's consumption, agriculture, it is as yet far from keeping pace with the development of the productive powers in the manufacturing industry. On the other hand, in many branches of industry the development of productive power has no influence, either directly or indirectly, on the production of labour capacity, hence of relative surplus value. Quite apart from the fact that the development of productive power is not only expressed in an increase in the rate of surplus value but also in a (relative) reduction in the number of workers. Hence the growth of surplus value is by no means in proportion to the growth of productive power in particular branches of production, and, secondly, it is also always smaller than the growth of the productive power of capital in all branches of industry (hence also those branches whose products enter neither directly nor indirectly into the production of labour capacity). Hence the accumulation of capital grows — not in the same proportion as productive power increases in a particular branch, and not even in the proportion in which productive power increases in all branches, but only in the average proportion in which it increases in all the branches of industry of which the products enter directly or indirectly into the overall consumption of the workers.
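The wage calculation just made can be put in a few lines (an editorial sketch only; the figures are those of the text, the names ours):

```python
# A product forming 1/10 of the worker's consumption is cheapened by a
# doubling of productive power in its own branch.
wage = 100.0
share = 1 / 10              # part of the wage spent on this product: 10
saving = wage * share / 2   # that tenth falls to 1/20 of the wage: saves 5
print(wage - saving)        # 95.0
# A 100% rise in productive power in this one branch cheapens the wage by only 5%.
```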
The value of a commodity is determined by the total labour time, past and living, which enters into it, which is contained in it; hence not only by the labour time which is added in the final production process, from which the commodity as such emerges, but by the labour contained in the fixed capital and circulating capital, or in the conditions of production of the labour last to be added, by the labour time contained in the machinery, etc., the matières instrumentales and the raw material, in so far as their value reappears in the commodity, which is entirely the case with raw material and [XVI-1015] the matières instrumentales, whereas the value of the fixed capital only reappears partially in the product — in proportion to its wear and tear. If 1/4 of the value in a commodity consisted of constant capital and 3/4 of wages; if as a result of an increase of productive power in this particular branch the amount of living labour employed were to fall from 3/4 to 1/4, and if the number of workers employed in its production were to be reduced from 3/4 to 1/4, then, given the presupposition that the 1/4 of labour was exactly as productive as the 3/4 was previously (and not more so), the value of the new fixed and circulating capital, apart from the raw material contained in the 1/4, could rise to 2/4. Then the value of the commodity would remain unchanged, although the labour would have become more productive by 3/4 to 1/4, i.e. by 3 to 1, i.e. it would have tripled its productive power. Since the value of the raw material would have remained the same, the new fixed and circulating capital would not be able to rise as far as 2/4 of the old value of the commodity, thus permitting the commodity to become cheaper, with a real fall in its production costs. Or the difference between the new labour time and the old would have to be larger than the difference between the value of the old constant capital and the new (deducting the raw material). It is not possible to add the same amount more of past labour as a condition of labour as has been deducted of living labour. If the 1/4 of workers were to produce more than the 3/4 did previously, so that the increase in the productivity of their labour were greater than the reduction in their numbers or their total labour time, the new constant capital could grow //disregarding surplus value here and speaking only of the value of the commodity, on which after all the surplus value depends, because the cheapening of the production costs of labour capacity depends on the lessening of the value// by 2/4, and even by more than 2/4, only it would now have to grow in the same proportion as the productive power of the new labour. Secondly, however, this relation is also brought about, 1) by the fact that the fixed capital only enters in part into the value of the commodity; 2) the matières instrumentales, such as the coal consumed, the heating, lighting, etc., are proportionally economised by labour on a large scale, although their total value increases, and therefore a smaller value component of the same enters into the individual commodity. But the condition remains the same, that the value component of the machinery which enters into the individual commodity as wear and tear, and the matières instrumentales which enter into it, should be smaller than the difference in productivity between the new and the old labour. 
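A worked version of the 1/4 and 3/4 example above (an editorial sketch; it reads "could rise to 2/4" as the break-even case in which the commodity's value is unchanged):

```python
# Old commodity value: 1/4 raw material etc. (constant), 3/4 living labour.
# Productivity triples, so living labour falls from 3/4 to 1/4.
raw_etc, old_labour = 0.25, 0.75
new_labour = 0.25
labour_saved = old_labour - new_labour        # 2/4

# Break-even: the new fixed and circulating capital absorbs exactly the
# labour saved, and the commodity's value stays at 1.
print(raw_etc + labour_saved + new_labour)    # 1.0

# A real fall in production costs requires the added constant capital
# to be smaller than the labour saved:
new_constant = 0.4
print(raw_etc + new_constant + new_labour)    # 0.9 < 1.0
```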
Nevertheless, this does not exclude the possibility that an equally large or even a larger quantity of constant capital might be used for the total amount of commodities, e.g. the number of pounds of twist, which are produced in a given period of time, e.g. a day, than was previously expended in the form of wages. Only a smaller quantity in respect of the individual commodity. Presupposing, therefore, that the 1/4 n workers produce exactly as much in one day as the 3/4 n workers produced previously, the law would remain absolute, because the amount of commodities produced would remain the same in proportion to these 1/4 n workers as it was for the 3/4 n workers. The value of the individual commodity could therefore fall only if the new constant capital were smaller than that previously expended in wages and now no longer in existence. It can therefore be said absolutely that the proportion in which a smaller quantity of labour replaces a greater quantity of labour — [XVI-1016] this does not need to be identical with, but may be, and mostly is, greater than the proportion in which the number of workers is diminished (the relative number of workers) — must be greater than the proportion in which the new constant capital which enters into the commodity grows //and in practice this holds also for the interest and profit on the whole of the constant capital, which admittedly enters into the labour process but not into the valorisation process// (here the raw material is left out). This is only an aspect to be introduced in distinction to the one-sided consideration in dealing with surplus value. To be inserted in the section on production costs. This does not, however, (owing to the way in which the fixed capital is reproduced) prevent the total capital //hence also the part of it which is not consumed in the labour process, but still enters into it// from being absolutely greater than the previous total capital. Thus if e.g. 1 replaces 10, the capital which is allotted to him in the form of machinery, etc., and matières instrumentales — in so far as it enters into his product — is smaller than the previous capital which was required for the 10 workers. The proportion of capital laid out in labour has fallen tenfold here, but the new constant capital has perhaps only risen eightfold. From this point of view, therefore, the capital laid out in labour has not fallen proportionally in the same degree as the capital required for its realisation [has increased]. Or the total amount of capital which enters into the production of the one worker is smaller than the total amount of capital which enters into the production of the 10 workers replaced by him. And, although the part of capital laid out in wages has fallen tenfold in comparison with previously, it still forms a larger part of this new capital than 1/10, because this new capital, which enters into the production of the one worker, has itself become smaller than the old capital, which entered into the production of the 10 workers. On the other hand, however, the total capital which is required as condition of production for this increase in the productivity of labour — including namely the part which does not enter as wear and tear into the product, but is rather consumed in a series of work periods — is greater, may be much greater, than the previous total capital, so that the part of the total capital laid out in labour has declined in a still greater proportion than the productivity of labour has grown. The more the fixed capital develops, i.e.
the productivity of labour, the greater this unconsumed part of the capital, the smaller the proportion of the part of capital laid out in labour in relation to the total capital. From this point of view it might appear as if the magnitude of the capital grew more rapidly than the productivity of labour //but even the total capital cannot grow to the extent that the interest and profit on it raise the production costs of the commodity to the level to which the productivity of labour has risen//. But this only means that the portion of the capital annually produced which is converted into fixed capital is always increased relatively to the portion of the capital which is laid out in wages; by no means, however, that the total capital — which is in part fixed, in part converted into wages — grows as quickly as the productivity of labour. If the part of capital laid out in labour thus falls, this is even more the case if the growth in the part of capital which consists of raw material is brought into consideration at the same time. [XVI-1017] Let us take an extreme case: the rearing of sheep on a modern scale, where previously small-scale agriculture predominated. But here two different branches of industry are being compared. The amount of labour — or of capital laid out in wages — which is suppressed here is enormous. Hence the constant capital can also grow enormously. And it is very much the question whether the total capital which is here allotted to the individual shepherds is greater than the total amount of the capitals which were previously divided among several hundred shepherds. It is questionable whether, in individual branches of industry in which the total capital undergoes extraordinary growth, profit originates at all from the surplus value produced in these branches and not rather, in connection with the calculations made by the capitalists between themselves, from the general surplus value produced by the sum total of all the capitals. Many ways of increasing productive power, particularly with the employment of machinery, require absolutely no relative increase in capital outlay. Often only relatively inexpensive alterations in the part of the machine which provides the motive force, etc. See examples. Here the increase in productive power is unusually great compared to the capital outlay which falls to the relative share of the individual worker — of the individual commodity as well. Thus here — at least as far as this part of the capital is concerned — the capital laid out in raw material grows the more rapidly — no noticeable reduction in the rate of profit — at least not to the extent that it would be caused by an increase in this part of the capital. On the other hand, although the capital does not grow here so much relatively speaking, it is true to say, as it is in the general case overall, that for the most part the absolute amount of capital employed — hence the concentration of capital or the scale on which work is done — must grow very significantly. More powerful steam engines (of more horsepower) are absolutely dearer than less powerful ones. But relatively speaking their price falls. Even so, a greater outlay of capital — a greater concentration of capital in one hand — is required for their employment. A bigger factory building is absolutely dearer, but relatively cheaper, than a smaller one. 
If every aliquot part of the total capital is smaller in proportion to the total capital employed by the labour saved, this aliquot part can mostly be employed solely in such multiples as will raise the total amount of capital employed to an extraordinary degree — in particular the part of the total capital not consumed in a single turnover, the part whose consumption extends over a period of turnovers lasting many years. It is in general only with this work on a large scale that productive power is increased tremendously, since it is only in this way that: 1) the principle of multiples, which underlies simple cooperation, and is repeated in the division of labour and the employment of machinery, can correctly be applied. (See Babbage, on how this increases the scale of production, i.e. the concentration of capital.) 2) The greater altogether the number of workers employed on the new scale, the smaller, relatively, the portion of fixed capital which enters as wear and tear for buildings, etc.; the greater the principle of the cheapening of production costs by joint utilisation of the same use values, such as lighting, heating, common use of the motive power, etc.; [XVI-1018] the more it is possible to employ absolutely dearer, but relatively cheaper, instruments of production. The circumstance that in some branches of production, railways, canals, etc., where an immense fixed capital is employed, these are not independent sources of surplus value, because the ratio between the labour exploited and the capital laid out is too small. A further remark needs to be added to the previous page: It is possible that if a capital of 500 was needed for 20 workers, and now a total capital of only 400 is needed for 2, then 2,000 workers will have to be employed, hence a capital of 400,000, in order to employ the aliquot parts of the 400 productively. It has already been shown that even with an increased rate of surplus value the relative reduction in the number of workers to be exploited can only be counterbalanced by a very great increase in the multiple of labour. This is seen (appears) in competition. Once the new invention has been introduced generally, the rate of profit becomes too small for a small capital to be able to continue to operate in the given branch of industry. The amount of necessary conditions of production grows in general in such a way that a significant minimum level comes into existence, which excludes all the smaller capitals from this branch of production for the future. It is only at the beginning that small capitals can exploit mechanical inventions in every sphere of production. The growth of capital only implies a reduction in the rate of profit to the extent that with the growth of capital the above-mentioned changes take place in the ratio between its organic components. However, despite the constant daily changes in the mode of production, capital, or a large part of it, always continues to accumulate over a longer or shorter period on the basis of a definite average ratio between those organic components, so that no organic change occurs in its constituent parts as it grows. On the other hand, a reduction in the rate of profit can only be enforced by a growth in capital — because of a growth in the absolute amount of profit — as long as the rate of profit does not fall in the same proportion as the capital grows. The obstacles which stand in the way of this are to be found in the considerations we have already brought forward. Absolute plethora of capital.
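The figures in the remark above reduce to a short calculation (an editorial sketch; the 500/20, 400/2 and 2,000-worker figures are those of the passage):

```python
# Minimum scale implied by the new technique in the remark above.
old_capital, old_workers = 500, 20
new_capital_block, workers_per_block = 400, 2

workers_needed = 2000
blocks = workers_needed // workers_per_block     # 1,000 blocks of the new technique
print(blocks * new_capital_block)                # 400,000 of capital

# Labour absorbed per unit of capital: 20/500 = 0.04 before, 2/400 = 0.005
# after; only a large multiple of the new block keeps many workers employed.
```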
Increase in workers, etc., despite the relative decline in variable capital or capital laid out in wages. However, this does not take place in all spheres of production [XVI-1019], e.g. not in agriculture. Here the decline in the element of living labour is absolute. An increase in the amount of labour on the new production basis is in part necessary in order to compensate for the lessened rate of profit by means of the amount of profit; in part in order to compensate for the fall in the magnitude of surplus value which accompanies the rising rate of surplus value on account of the absolute reduction in the number of workers exploited, by means of an increase in the number of workers on the new scale. Finally, there is the principle of multiples touched on earlier. But it will be said that if the variable capital declines in sphere of production I, it increases in the others, namely those which are employed in the production of the constant capital needed for sphere of production I. Nevertheless, the same relation enters here, e.g. in the production of machinery, in the production of raw products, matières instrumentales, e.g. coal. The tendency is general, although it is first realised in the different spheres of production by fits and starts. It is counterbalanced by the fact that the spheres of production themselves increase. In any case, it is only a need of the bourgeois economy that the number of people living from their labour alone should increase absolutely, even if it declines relatively, since labour capacities become superfluous for the bourgeois economy once it is no longer necessary to exploit them for 12 to 15 hours a day. A development of productive power which reduced the absolute number of workers, i.e. in fact enabled the whole nation to execute its total production in a smaller period of time, would bring about revolution, because it would demonetise the majority of the population. Here there appears once again the limit of bourgeois production, and the fact becomes obvious that it is not the absolute form for the development of productive power, that it rather enters into collision with the latter at a certain point. In part this collision appears constantly, with the crises, etc., which occur when now one, now another component of the working class becomes superfluous in its old mode of employment. Its limit is the surplus time of the workers; it is not concerned with the absolute surplus time gained by society. The development of productive power is therefore only important in so far as it increases the surplus labour time of the workers, not in so far as it reduces labour time for material production in general. It is therefore embedded in a contradiction. The rate of surplus value — i.e. the ratio of surplus to necessary labour time for the individual worker (in so far, therefore, as surplus value is not modified in the different spheres of production by the proportion between the organic components of capital, turnover time, etc.) — is automatically balanced out in all the spheres of production, and this is a basis for the general rate of profit. (The modifications which in this way influence the necessary costs of production are compensated for by the competition between capitalists, by the different items which they bring into consideration when dividing among themselves the general surplus value.)
[XVI-1020] That the rate of surplus value rises means nothing other than that the cost of production of labour capacity falls, hence necessary labour time falls, in the proportion to which the specific product of that particular sphere of production which has become cheaper enters into the general consumption of the workers. This cheapening of labour capacity, reduction in necessary labour time, increase in absolute labour time, therefore takes place uniformly, and influences all spheres of capitalist production uniformly, not only those in which the development of productive power has taken place, but also those whose products do not enter at all into the consumption of the workers, and in which the development of productive power can therefore create no relative surplus value. (It is therefore clear that in competition, once the monopoly in the new invention has come to an end, the price of the product is reduced to its production costs.) If, therefore, 20 workers who work 2 hours of surplus labour are replaced by 2, it is correct, as we have seen already, that these 2 can under no circumstances provide as much surplus labour as the 20 did previously. But in all spheres of production the surplus labour rises in proportion to the cheapening of the product of the 2 workers, and it rises without any alteration having taken place in the ratio of the organic components of the capitals employed by the spheres of production. On the other hand, an increase in the value of the product of a sphere of production of this kind, which enters into the reproduction of labour capacity, has just as general an effect; this may wholly or partially paralyse that surplus value. In the first case, however, the surplus labour time gained is not to be estimated by the sphere of production in which the increase of productive power has taken place, but by the sum total of the diminutions of necessary labour time in all spheres of capitalist production. But the more general the relation becomes, with 2 replacing 20 in all or most spheres of production, under the same proportions between total capital and variable capital, the more does the relation in the totality of capitalist production raise the relation in the particular spheres of production. I.e. no reduction in necessary labour time could create the amount of surplus value there was previously, when 20 worked instead of 2. And under all circumstances the rate of profit would then fall, even if the capital itself increased so much that a number [of workers] equally great or even greater than before could be employed under the new conditions of production. The accumulation of capital (considered materially) is double. It consists on the one hand in the growing amount of past labour, or the available amount of the conditions of labour; the material prerequisites, the already available products and numbers of workers, under which new production or reproduction takes place. Secondly, however, in the concentration, the reduction in the number of capitals, the growth of the capitals present in the hands of the individual capitalist, in short in a new distribution of capitals, of social capital. The power of capital as such grows thereby. The independent position achieved by the social conditions of production [XVI-1021] vis-à-vis the real creators of those conditions of production, as represented in the capitalist, thereby becomes increasingly apparent. 
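The bound invoked above, that the 2 replacement workers "can under no circumstances provide as much surplus labour as the 20 did previously", is easy to make explicit. A sketch assuming an illustrative 12-hour working day, a figure the passage itself does not fix:

```python
# 20 workers each performing 2 hours of surplus labour are replaced by 2.
working_day = 12                       # assumed length of the working day
old_surplus = 20 * 2                   # 40 surplus hours
max_new_surplus = 2 * working_day      # 24 hours, even if necessary labour were zero
print(max_new_surplus < old_surplus)   # True: the old surplus is unreachable
```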
Capital shows itself more and more as a social power (the capitalist is merely its functionary, and it no longer stands in any relation to what the labour of an individual creates or can create), but an alienated social power which has become independent, and confronts society as a thing — and through this thing as a power of the individual capitalist. On the other hand, constantly increasing masses [of people] are thereby deprived of the conditions of production and find them set over against them. The contradiction between the general social power which capital is formed into, and the private power of the individual capitalist over these social conditions of production becomes ever more glaring, and implies the dissolution of this relation, since it implies at the same time the development of the material conditions of production into general, therefore communal, social conditions of production. This development is given by the development of productive power along with capitalist production and by the manner in which this development of productive power takes shape. The question now is: how is the accumulation of capital affected by the development of the productive forces, in so far as they find expression in change[s] in surplus value and the rate of profit, and how far is it influenced by other factors? Ricardo says that capital can grow in two ways: 1) in that a greater amount of labour is contained in the greater amount of products, hence the exchange value of the use values grows along with their quantity; 2) in that the quantity of use values grows, but not their exchange value, hence the increase occurs simply through an increase in the productivity of labour.
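Ricardo's two cases, put into illustrative numbers (an editorial sketch; the quantities are assumptions, not Ricardo's):

```python
# Case 1: more products embodying more labour -> total exchange value grows.
base_units, base_unit_value = 100, 1.0          # total value 100
case1_units, case1_unit_value = 200, 1.0        # total value 200

# Case 2: productivity doubles -> more use values, unchanged total value.
case2_units, case2_unit_value = 200, 0.5        # total value still 100
print(case1_units * case1_unit_value, case2_units * case2_unit_value)
```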
By Jack Carr (auth.)

These notes are based on a series of lectures given in the Lefschetz Center for Dynamical Systems in the Division of Applied Mathematics at Brown University during the academic year 1978-79. The purpose of the lectures was to give an introduction to the applications of centre manifold theory to differential equations. Most of the material is presented in an informal fashion, by means of worked examples, in the hope that this clarifies the use of centre manifold theory. The main application of centre manifold theory given in these notes is to dynamic bifurcation theory. Dynamic bifurcation theory is concerned with topological changes in the nature of the solutions of differential equations as parameters are varied. Such an example is the creation of periodic orbits from an equilibrium point as a parameter crosses a critical value. In certain circumstances, the application of centre manifold theory reduces the dimension of the system under investigation. In this respect centre manifold theory plays the same role for dynamic problems as the Liapunov-Schmidt procedure plays for the analysis of static solutions. Our use of centre manifold theory in bifurcation problems follows that of Ruelle and Takens [57] and of Marsden and McCracken [51].

Similar topology books

This collection brings together influential papers by mathematicians exploring the research frontiers of topology, one of the most important developments of modern mathematics. The papers cover a wide range of topological specialties, including tools for the analysis of group actions on manifolds, calculations of algebraic K-theory, a result on analytic structures on Lie group actions, a presentation of the significance of Dirac operators in smoothing theory, a discussion of the stable topology of 4-manifolds, an answer to the famous question about symmetries of simply connected manifolds, and a fresh perspective on the topological classification of linear transformations.

Eight topics concerning the unit cubes are introduced in this textbook: cross sections, projections, inscribed simplices, triangulations, 0/1 polytopes, Minkowski's conjecture, Furtwängler's conjecture, and Keller's conjecture. In particular, Chuanming Zong demonstrates how deep analysis like log concave measure and the Brascamp-Lieb inequality can deal with the cross-section problem, how hyperbolic geometry helps with the triangulation problem, how group rings can deal with Minkowski's conjecture and Furtwängler's conjecture, and how graph theory handles Keller's conjecture.

Algebraic topology is the study of the global properties of spaces by means of algebra. It is an important branch of modern mathematics with a wide degree of applicability to other fields, including geometric topology, differential geometry, functional analysis, differential equations, algebraic geometry, number theory, and theoretical physics.

General topology, topological extensions, topological absolutes, Hausdorff compactifications

- General Topology and Homotopy Theory
- Topology of Surfaces (Undergraduate Texts in Mathematics)
- Proceedings of the Gökova Geometry-Topology Conference 2006
- Topology now

Extra info for Applications of Centre Manifold Theory

We now apply the theory given in the previous section to show that (3.8) has a periodic solution bifurcating from the origin for certain values of the parameters.
The linearization of (3.8) about y = z = 0 is given by a matrix J(ε) + O(ε). If (3.8) is to have a Hopf bifurcation, then we must have trace(J(ε)) = 0 and det(J(ε)) > 0. We do not attempt to obtain the general conditions under which the above conditions are satisfied; we only work out a special case.

Lemma. For each sufficiently small ε > 0 there exist x_0(ε) with 0 < 2x_0(ε) < 1 and b_0(ε) > 0 such that (3.6) has a centre manifold w = h(y, z, ε). The flow on the centre manifold of (3.6) is determined by the equation

(3.7) ẏ = εf_3(h(y, z, ε), y, z, ε),

or, in terms of the original time scale,

(3.8) ẏ = f_3(h(y, z, ε), y, z, ε).

For x = (x_1, x_2, ..., x_n) we can similarly prove the existence of h(x, ε) for m_i < x_i < M_i. The flow on the invariant manifold is given by the equation u' = Au + εf(u, h(u, ε)). Finally, we state an approximation result.

Theorem 5. Let φ: R^(n+1) → R^m satisfy φ(0, 0) = 0 and |(Mφ)(x, ε)| ≤ Cε^p for |x| ≤ m, where p is a positive integer, C is a constant, and

(Mφ)(x, ε) = D_x φ(x, ε)[Ax + εf(x, φ(x, ε))] − Bφ(x, ε) − εg(x, φ(x, ε)).

Then |h(x, ε) − φ(x, ε)| ≤ C_1 ε^p for |x| ≤ m, for some constant C_1.

Centre Manifold Theorems for Maps

Theorem 5 is proved in exactly the same way as Theorem 3, so we omit the proof.
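The reduction procedure sketched in the excerpt (compute a centre manifold h, then study the reduced flow, with the approximation theorem guaranteeing that matching coefficients in the invariance equation gives an accurate polynomial approximation) can be tried on a toy system. The following is an editorial sketch, not an excerpt from Carr's notes; the system x' = xy, y' = −y + x² and all names are chosen purely for illustration:

```python
# Approximating a centre manifold y = h(x) for the toy system
#   x' = x*y,   y' = -y + x**2,
# which has a centre direction (eigenvalue 0) and a stable direction (-1).
# Posit h(x) = c2*x**2 + c4*x**4 and impose the invariance equation
#   h'(x) * (x*h(x)) - (-h(x) + x**2) = 0, matching coefficients.
import sympy as sp

x, c2, c4 = sp.symbols('x c2 c4')
h = c2*x**2 + c4*x**4

# Residual of the invariance condition dh/dx * x' - y' on y = h(x)
residual = sp.expand(sp.diff(h, x)*(x*h) - (-h + x**2))

# Match the coefficients of x^2 and x^4 to zero
eqs = [residual.coeff(x, 2), residual.coeff(x, 4)]
sol = sp.solve(eqs, [c2, c4], dict=True)[0]
print(sol)  # {c2: 1, c4: -2}, i.e. h(x) = x^2 - 2x^4 + O(x^6)

# Reduced flow on the manifold: x' = x*h(x) = x^3 - 2x^5 + ...
print(sp.expand((x*h).subs(sol)))
```

The reduced one-dimensional equation x' = x³ − 2x⁵ + ... then determines the stability of the origin for the full two-dimensional system, which is the dimension reduction the notes describe.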
1. Which of the following forms of payment is not an incentive plan? A. Commission plans for salesmen B. Flat salary for a plant manager C. Bonuses for managers that increase as profits increase D. None of the above

2. When relationship-specific exchange occurs in complex contractual environments, the best way to purchase inputs is through: A. Spot markets B. Vertical integration C. Short-term agency agreements D. Long-term contracts

3. Suppose compensation is given by W = 512,000 + 217X + 10.08S, where W = total compensation of the CEO, X = company profits (in millions) = $200, and S = sales (in millions) = $400. How much will this CEO be compensated? A. $812,431 B. $43,400 C. $555,400 D. $559,432

4. Long-term contracts are not efficient if: A. A firm engages in relationship-specific exchange B. Specialized investments are unimportant C. The contractual environment is simple D. A and C, only

5. The solutions to the principal-agent problem ensure that the firm is operating A. On the production function B. Above the production function C. Below the production function D. Above the isoquant curve

6. Spot exchange typically involves A. No transaction costs B. Some transaction costs C. Extremely high transaction costs D. Long-term contracts

7. Given that the income of franchise restaurant managers is directly tied to profits and the manager of the company-owned restaurant is paid a flat fee, we might expect profits to be A. Higher in company-owned restaurants B. Lower in company-owned restaurants C. Equal in both types of restaurants D. None of the above

8. Which of the following is not a benefit associated with producing inputs within a firm? A. Reductions in transaction costs B. Reductions in opportunism C. Gains of specializing D. Mitigation of hold-up problems

9. A firm's average cost is $20 and it charges a price of $20. The Lerner index for this firm is: A. .20 B. .50 C. .33 D. Insufficient information

10. The concentration and Herfindahl indices computed by the US Bureau of Census must be interpreted with caution because: A. They overstate the actual level of concentration in markets served by foreign firms. B. They understate the degree of concentration in local markets, such as the gas market. C. All of the above. D. None of the above.

11. Suppose that there are 2 industries, A and B. There are five firms in industry A with sales of $5 million, $2 million, $1 million, $1 million, and $1 million, respectively. There are 4 firms in industry B with equal sales of $2.5 million for each firm. The four-firm concentration ratio for industry A is: A. 0.7 B. 0.8 C. 0.9 D. 1.0

12. As a general rule of thumb, industries with a Herfindahl index below ___ are considered to be competitive, while those above ___ are considered non-competitive. A. 1000, 3000 B. 1800, 1000 C. 1500, 2500 D. 1800, 3000

13. Which of the following measures market structure? A. Four-firm concentration ratio B. Lerner index C. Herfindahl-Hirschman index D. All of the above may be used to make inferences about market structures

14. Which of the following integration types exploits economies of scope? A. Vertical integration B. Horizontal integration C. Cointegration D. Conglomerate integration

15. Which of the following may transform an industry from oligopoly to monopolistic competition? A. Entry B. Takeover C. Exit D. Acquisition

16. Which market structure has the most market power? A. Monopolistic competition B. Perfect competition C. Monopoly D. Oligopoly

17.
You are the manager of a firm that produces output in 2 plants. The demand for your firm's product is P = 78 − 15Q, where Q = Q1 + Q2. The marginal costs associated with producing in the 2 plants are MC1 = 3Q1 and MC2 = 2Q2. How much output should be produced in plant 1 in order to maximize profits? A. 1 B. 2 C. 3 D. 4

18. Which of the following is true? A. A monopolist produces on the inelastic portion of its demand. B. A monopolist always earns an economic profit. C. The more inelastic the demand, the closer marginal revenue is to price. D. In the short run a monopoly will shut down if P < AVC.

19. You are the manager of a monopoly that faces a demand curve described by P = 63 − 5Q. Your costs are C = 10 + 3Q. Your firm's maximum profits are A. 0 B. 66 C. 120 D. 170

20. If a monopolistically competitive firm's marginal cost increases, then in order to maximize profits the firm will: A. Reduce output and increase price B. Increase output and decrease price C. Increase both output and price D. Reduce both output and price

21. Suppose that initially the price is $50 in a perfectly competitive market. Firms are making zero economic profits. Then the market demand shrinks permanently, some firms leave the industry, and the industry returns to a long-run equilibrium. What will be the new equilibrium price, assuming cost conditions in the industry remain constant? A. $50 B. $45 C. Lower than $50, but the exact value cannot be known without more information. D. Larger than $45, but the exact value cannot be known without more information.

22. A monopoly has 2 production plants with cost functions C1 = 50 + 0.1Q1² and C2 = 30 + 0.05Q2². The demand it faces is Q = 500 − 10P. What is the condition for profit maximization? A. MC1(Q1) = MC2(Q2) = P(Q1 + Q2) B. MC1(Q1) = MC2(Q2) = MR(Q1 + Q2) C. MC1(Q1 + Q2) = MC2(Q1 + Q2) = P(Q1 + Q2) D. MC1(Q1 + Q2) = MC2(Q1 + Q2) = MR(Q1 + Q2)

23. You are a manager in a perfectly competitive market. The price in your market is $14. Your total cost curve is C(Q) = 10 + 4Q + 0.5Q². What price should you charge in the short run? A. $18 B. $16 C. $14 D. $12

24. You are a manager for a monopolistically competitive firm. From experience, the profit-maximizing level of output of your firm is 100 units. However, it is expected that prices of other close substitutes will fall in the near future. How should you adjust your level of production in response to this change? A. Produce more than 100 units. B. Produce less than 100 units. C. Produce 100 units. D. Insufficient information to decide

25. Which of the following is true? A. In Bertrand oligopoly each firm believes that its rivals will hold their output constant if it changes its output B. In Cournot oligopoly firms produce an identical product at a constant marginal cost and engage in price competition C. In oligopoly a change in marginal cost never has an effect on output or price D. None of the above are true

26. Two firms compete in Stackelberg fashion and firm 2 is the leader. Then A. Firm 1 views the output of firm 2 as given B. Firm 2 views the output of firm 1 as given C. All of the above D. None of the above

27. A firm's isoprofit curve is defined as: A. The combinations of outputs produced by a firm that earn it the same level of profits B. The combinations of outputs produced by all firms that yield the firm the same level of profit C. The combinations of outputs produced by all firms that make total industry profits constant D.
None of the above

28. Two firms compete as a Stackelberg duopoly. The demand they face is P = 100 − 3Q. The cost function for each firm is C(Q) = 4Q. The profits of the 2 firms are: A. π(L = leader) = $56; π(F = follower) = $28 B. πL = $384; πF = $192 C. πL = $192; πF = $91 D. πL = $56; πF = −$28

29. The spirit of equating marginal cost with marginal revenue is not held by: A. Perfectly competitive firms B. Oligopolistic firms C. Both A and B D. None of the above

30. Which would you expect to make the highest profits, other things equal: A. Bertrand oligopolist B. Cournot oligopolist C. Stackelberg leader D. Stackelberg follower

31. The inverse demand in a Cournot duopoly is P = a − b(Q1 + Q2), and costs are C1(Q1) = c1Q1 and C2(Q2) = c2Q2. The government has imposed a per-unit tax of $t on each unit sold by each firm. The tax revenue is: A. t times the total output the 2 firms would produce were there no sales tax B. Less than t times the total output the 2 firms would produce were there no sales tax C. Greater than t times the total output the 2 firms would produce were there no sales tax D. None of the above

32. A new firm enters a market which is initially serviced by a Cournot duopoly charging a price of $20. What will the new market price be should the 3 firms co-exist after the entry? A. $20 B. Below $20 C. Above $20 D. None of the above

33. Firm A has a higher marginal cost than Firm B. They compete in a homogeneous-product Cournot duopoly. Which of the following results will not occur? A. QA > QB B. Profit A < Profit B C. Revenue of A < Revenue of B D. Price A = Price B

1. If a firm manager has a base salary of $50,000 and also gets 2% of all profits, how much will his income be if revenues are $8,000,000 and profits are $2,000,000? A. $250,000 B. $210,000 C. $90,000 D. $150,000

2. The industry elasticity of demand for telephone service is −2 while the elasticity of demand for a specific phone company is −5. What is the Rothschild index? A. 0.2 B. 0.4 C. 0.5 D. 0.7

3. You are the manager in a perfectly competitive market. The price in your market is $14. Your total cost curve is C(Q) = 10 + 4Q + 0.5Q². What will happen in the long run if there is no change in the demand curve? A. Some firms will leave the market eventually. B. Some firms will enter the market. C. There will be neither entry nor exit. D. None of the above

4. 2 firms compete as a Stackelberg duopoly. The demand they face is P = 100 − 3Q. The cost function for each firm is C(Q) = 4Q. The profits (π) of the 2 firms are: A. πL = $384; πF = $192 B. πL = $192; πF = $91 C. πL = $56; πF = (−$28) D. πL = $56; πF = $28

5. Sue and Jane own 2 local gas stations. They have identical constant marginal costs but earn zero economic profits. Sue and Jane constitute: A. A Sweezy oligopoly B. A Cournot oligopoly C. A Bertrand oligopoly D. None of the above
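Several of the computational questions can be checked symbolically. A sketch (an editorial verification, not part of the quiz) for the Stackelberg problem (question 28 above, question 4 of the second set) and the monopoly problem (question 19):

```python
# Verifying two quiz answers numerically with sympy.
import sympy as sp

q1, q2 = sp.symbols('q1 q2', nonnegative=True)

# Stackelberg duopoly: P = 100 - 3(Q1+Q2), C(Q) = 4Q.
P = 100 - 3*(q1 + q2)
follower_profit = (P - 4)*q2
br2 = sp.solve(sp.diff(follower_profit, q2), q2)[0]      # q2 = 16 - q1/2
leader_profit = ((100 - 3*(q1 + br2)) - 4)*q1            # leader anticipates br2
q1_star = sp.solve(sp.diff(leader_profit, q1), q1)[0]    # 16
q2_star = br2.subs(q1, q1_star)                          # 8
price = P.subs({q1: q1_star, q2: q2_star})               # 28
print((price - 4)*q1_star, (price - 4)*q2_star)          # 384, 192 -> answer B

# Monopoly: P = 63 - 5Q, C = 10 + 3Q.
q = sp.symbols('q', nonnegative=True)
profit = (63 - 5*q)*q - (10 + 3*q)
q_star = sp.solve(sp.diff(profit, q), q)[0]              # 6
print(profit.subs(q, q_star))                            # 170 -> answer D
```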
E = mc², the equation in German-born physicist Albert Einstein's theory of special relativity, expresses the fact that mass and energy are the same physical entity and can be changed into each other. In the equation, the increased relativistic mass (m) of a body times the speed of light squared (c²) is equal to the kinetic energy (E) of that body.

Einstein, the revolutionary physicist, used his imagination rather than fancy math to come up with his most famous and elegant equation. One of the most famous equations in mathematics comes from special relativity. The equation — E = mc² — means "energy equals mass times the speed of light squared." It shows that energy (E) and mass (m) are interchangeable.

To derive the equations of special relativity, one must start with two postulates. First, the laws of physics are invariant under transformations between inertial frames: in other words, the laws of physics will be the same whether you are testing them in a frame 'at rest' or a frame moving with a constant velocity relative to the 'rest' frame. Second, the speed of light in a vacuum is measured to be the same by all inertial observers.

In the famous relativity equation, E = mc², the speed of light (c) serves as a constant of proportionality linking the formerly disparate concepts of mass (m) and energy (E).

The general theory of relativity describes the force of gravity. Einstein wasn't the first to come up with such a theory — back in 1686 Isaac Newton formulated his famous inverse square law of gravitation. Newton's law works perfectly well on small-ish scales: we can use it to calculate how fast an object dropped off a tall building will hurtle to the ground, and even to send people to the Moon. But when distances and speeds are very large, or very massive objects are involved, Newton's law becomes inaccurate. It's a good place to start though, as it's easier to describe than Einstein's theory.

Suppose you have two objects, say the Sun and the Earth, with masses m1 and m2 respectively. Write r for the distance between the two objects. Then Newton's law says that the gravitational force between them is

F = G m1 m2 / r²,

where G is a fixed number, known as Newton's constant. The formula makes intuitive sense: it tells us that gravity gets weaker over long distances (the larger r, the smaller F) and that the gravitational force grows with the masses of the two bodies.

There is another formula which looks very similar, but describes a different force. In 1785 the French physicist Charles-Augustin de Coulomb came up with an equation to capture the electrostatic force that acts between two charged particles with charges q1 and q2:

F = q1 q2 / (4π ε0 r²).

Here r stands for the distance between the two particles and ε0 is a constant which determines the strength of electromagnetism. (It has the fancy name permittivity of free space.)

Newton's and Coulomb's formulas are nice and neat, but there is a problem. Going back to Newton's law, suppose you took the Earth and the Sun and very quickly moved them further apart.
This would make the force acting between them weaker but, according to the formula, the weakening of the force would happen straight away, the instant you move the two bodies apart. The same goes for Coulomb's law: moving the charged particles apart very quickly would result in an immediate weakening of the electrostatic force between them. But this can't be true. Einstein's special theory of relativity, proposed ten years before the general theory in 1905, says that nothing in the Universe can travel faster than light — not even the "signal" that communicates that two objects have moved apart and the force should become weaker. This is one reason why the classical idea of a force needs replacing in modern physics. Instead, we need to think in terms of something — new objects — that transmit the force between one object and another. This was the great contribution of the British scientist Michael Faraday to theoretical physics. Faraday realised that spread throughout the Universe there are objects we today call fields, which are involved in transmitting a force. Examples are the electric and magnetic fields you are probably familiar with from school. A charged particle gives rise to an electric field, which is "felt" by another particle (which has its own electric field). One particle will move in response to the other's electric field — that's what we call a force. When one particle is quickly moved away from the other, then this causes ripples in the first particle's electric field. The ripples travel through space, at the speed of light, and eventually affect the other particle.

So what about gravity? Just as with electromagnetism there needs to be a field giving rise to what we perceive as the gravitational force acting between two bodies. Einstein's great insight was that this field is made of something we already know about: space and time. Imagine a heavy body, like the Sun, sitting in space. Einstein realised that space isn't just a passive by-stander, but responds to the heavy object by bending. Another body, like the Earth, moving into the dent created by the heavier object will be diverted by that dent. Rather than carrying on moving along a straight line, it will start orbiting the heavier object. Or, if it is sufficiently slow, will crash into it. (It took Einstein many years of struggle to arrive at his theory — see this article to find out more.) Another lesson of Einstein's theory is that space and time can warp into each other — they are inextricably linked, and time, too, can be distorted by massive objects. This is why we talk not just about the curvature of space, but about the curvature of spacetime.

The general theory of relativity is captured by a deceptively simple-looking equation:

G_μν = (8πG/c⁴) T_μν.

Essentially the equation tells us how a given amount of mass and energy warps spacetime. The left-hand side of the equation, G_μν, describes the curvature of spacetime whose effect we perceive as the gravitational force. It's the analogue of the term F on the left-hand side of Newton's equation. The term T_μν on the right-hand side of the equation describes everything there is to know about the way mass, energy, momentum and pressure are distributed throughout the Universe. It is what became of the masses in Newton's equation, but it is much more complicated. T_μν goes by the technical term energy-momentum tensor. All of these things are needed to figure out how space and time bend. The constants that appear on the right-hand side of the equation are again Newton's constant G and the speed of light c.
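Written out explicitly (a standard rendering; the article's own typeset equation did not survive in this copy), the equation and its index ranges are:

```latex
% Einstein's field equations: G_{\mu\nu} is the Einstein tensor describing
% spacetime curvature, T_{\mu\nu} the energy-momentum tensor; G is Newton's
% constant and c the speed of light.
G_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}, \qquad \mu,\nu \in \{0,1,2,3\}.
% Since both tensors are symmetric in \mu and \nu, this single tensor
% equation contains ten independent component equations.
```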
What about the Greek letters μ and ν that appear as subscripts? To understand what they mean, first notice that spacetime has four dimensions. In Einstein's equation the Greek letters μ and ν are labels, which can each take on the values 0, 1, 2 or 3. So really, the equation above conceals a whole collection of equations corresponding to the possible combinations of values μ and ν can take: G_00 = (8πG/c⁴) T_00, G_01 = (8πG/c⁴) T_01, and so on. The value of 0 corresponds to time and the values 1, 2 and 3 to the three dimensions of space. The equation with μ = 0 and ν = 1 therefore relates to time and the 1-direction of space. The term T_01 on the right-hand side describes the momentum (speed and mass) of matter moving in the 1-direction of space. The motion causes time and the 1-direction of space to mix and warp into each other — that effect is described by the left-hand side of the equation. (The analogue goes for an equation with μ and ν equal to 2 or 3.) If the equation only has 1s, 2s or 3s, for example μ = ν = 1, then it relates only to space. The term on the right-hand side now measures the pressure that matter causes in the corresponding direction of space. The left-hand side tells you how that matter curves the corresponding direction of space.

David Tong is a theoretical physicist at the University of Cambridge. He works on quantum theory and general relativity.

The Einstein Field Equations are ten equations, contained in the tensor equation shown above, which describe gravity as a result of spacetime being curved by mass and energy. The left-hand side is determined by the curvature of space and time at a particular point in space and time, and is equated with the energy and momentum at that point. The solutions to these equations are the components of the metric tensor, which specifies the geometry of spacetime.

Einstein's theory of special relativity describes what happens as things near the speed of light. Here are some important special-relativity equations that deal with time dilation, length contraction, and more. About the book author: Steven Holzner, PhD, taught physics at Cornell University for more than a decade.

The theory, which Einstein published in 1915, expanded the theory of special relativity that he had published 10 years earlier. Special relativity argued that space and time are inextricably linked.
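The "important special-relativity equations" the blurb above gestures at are presumably the standard ones; for reference (a conventional rendering, not the book's own typesetting):

```latex
% Lorentz factor, time dilation, length contraction and mass-energy
% equivalence for a relative speed v:
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
\Delta t' = \gamma\,\Delta t, \qquad
L' = \frac{L}{\gamma}, \qquad
E = mc^{2}.
```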
Shut the Box game for an adult and child. Can you turn over the cards which match the numbers on the dice? An old game but lots of arithmetic! Throw the dice and decide whether to double or halve the number. Will you be the first to reach the target? Can you use the numbers on the dice to reach your end of the number line before your partner beats you? Have a go at this game which involves throwing two dice and adding their totals. Where should you place your counters to be more likely to win? A game for 2 people. Use your skills of addition, subtraction, multiplication and division to blast the asteroids. Can you use the information to find out which cards I have used? This is an adding game for two players. Place six toy ladybirds into the box so that there are two ladybirds in every column and every row. Use the information about Sally and her brother to find out how many children there are in the Brown family. In this game for two players, the aim is to make a row of four coins which total one dollar. Place the numbers 1 to 10 in the circles so that each number is the difference between the two numbers just below it. Who said that adding couldn't be fun? In this game for two players, the idea is to take it in turns to choose 1, 3, 5 or 7. The winner is the first to make the total 37. Use the number weights to find different ways of balancing the equaliser. Can you hang weights in the right place to make the equaliser balance? A game for 2 players. Practises subtraction or other maths operations knowledge. A game for 2 or more players. Practise your addition and subtraction with the aid of a game board and some dried peas! Find all the numbers that can be made by adding the dots on two dice. Use your addition and subtraction skills, combined with some strategic thinking, to beat your partner at this game. In this game, you can add, subtract, multiply or divide the numbers on the dice. Which will you do so that you get to the end of the number line first? Ten cards are put into five envelopes so that there are two cards in each envelope. The sum of the numbers inside it is written on each envelope. What numbers could be inside the envelopes? Place the numbers 1 to 6 in the circles so that each number is the difference between the two numbers just below it. This problem is based on a code using two different prime numbers less than 10. You'll need to multiply them together and shift the alphabet forwards by the result. Can you decipher the code? Katie had a pack of 20 cards numbered from 1 to 20. She arranged the cards into 6 unequal piles where each pile added to the same total. What was the total and how could this be done? Using the cards 2, 4, 6, 8, +, - and =, what number statements can you make? Can you put the numbers 1 to 8 into the circles so that the four calculations are correct? Tim had nine cards each with a different number from 1 to 9 on it. How could he have put them into three piles so that the total in each pile was 15? Make one big triangle so the numbers that touch on the small triangles add to 10. You could use the interactivity to help you. If you hang two weights on one side of this balance, in how many different ways can you hang three weights on the other side for it to be balanced? Strike it Out game for an adult and child. Can you stop your partner from being able to go? This dice train has been made using specific rules. How many different trains can you make? 
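The two-prime code puzzle above (multiply two primes less than 10 and shift the alphabet forwards by the product) is easy to experiment with. A sketch; the primes chosen here are illustrative, not the puzzle's intended answer:

```python
# Shift each letter forwards through the alphabet by p*q, wrapping at 'z'.
def encode(text: str, shift: int) -> str:
    out = []
    for ch in text.lower():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
        else:
            out.append(ch)
    return ''.join(out)

p, q = 3, 7                      # illustrative primes less than 10
print(encode("maths", p * q))    # a shift of 21
# Decoding is just the reverse shift: encode(ciphertext, -p * q)
```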
Choose four of the numbers from 1 to 9 to put in the squares so that the differences between joined squares are odd. Place the digits 1 to 9 into the circles so that each side of the triangle adds to the same total. Cassandra, David and Lachlan are brothers and sisters. They range in age between 1 year and 14 years. Can you figure out their exact ages from the clues? What do you notice about the date 03.06.09? Or 08.01.09? This challenge invites you to investigate some interesting dates yourself. In how many ways could Mrs Beeswax put ten coins into her three puddings so that each pudding ended up with at least two coins? Can you work out how many flowers there will be on the Amazing Splitting Plant after it has been growing for six weeks? Sam got into an elevator. He went down five floors, up six floors, down seven floors, then got out on the second floor. On what floor did he get on? Can you substitute numbers for the letters in these sums? What do the digits in the number fifteen add up to? How many other numbers have digits with the same total but no zeros? How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this? Write the numbers up to 64 in an interesting way so that the shape they make at the end is interesting, different, more exciting ... than just a square. Noah saw 12 legs walk by into the Ark. How many creatures did he see? A group of children are using measuring cylinders but they lose the labels. Can you help relabel them? This task, written for the National Young Mathematicians' Award 2016, focuses on 'open squares'. What would the next five open squares look like? Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be? Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this? On Planet Plex, there are only 6 hours in the day. Can you answer these questions about how Arog the Alien spends his day? The clockmaker's wife cut up his birthday cake to look like a clock face. Can you work out who received each piece?
- Open Access

Optimal inequalities for the Casorati curvatures of submanifolds of real space forms endowed with semi-symmetric metric connections

© Lee et al.; licensee Springer. 2014
- Received: 17 May 2014
- Accepted: 30 July 2014
- Published: 1 September 2014

In this paper, we prove two optimal inequalities involving the intrinsic scalar curvature and extrinsic Casorati curvature of submanifolds of real space forms endowed with a semi-symmetric metric connection. Moreover, we show that in both cases, the equality at all points characterizes the invariantly quasi-umbilical submanifolds.

- Casorati curvature
- real space form
- semi-symmetric metric connection

The idea of a semi-symmetric linear connection on a differentiable manifold was introduced by Friedmann and Schouten in [1]. The notion of a semi-symmetric metric connection on a Riemannian manifold was introduced by Hayden in [2]. Later, Yano in [3] studied some properties of a Riemannian manifold endowed with a semi-symmetric metric connection. In [4, 5], Imai found some properties of a Riemannian manifold and a hypersurface of a Riemannian manifold with a semi-symmetric metric connection. Nakao in [6] studied submanifolds of a Riemannian manifold with semi-symmetric metric connections.

On the other hand, the theory of Chen invariants, initiated by Chen in a seminal paper published in 1993 [7], is presently one of the most interesting research topics in the differential geometry of submanifolds. Chen established a sharp inequality for a submanifold in a real space form using the scalar curvature, the sectional curvature, and the squared mean curvature. That is, he established simple relationships between the main intrinsic invariants and the main extrinsic invariants of a submanifold in real space forms with any codimension. Many famous results concern Chen invariants and inequalities for different classes of submanifolds in various ambient spaces, like complex space forms [9–11]. Recently, in [12, 13], Mihai and Özgür proved Chen inequalities for submanifolds of real, complex, and Sasakian space forms endowed with semi-symmetric metric connections, and in [14, 15], Özgür and Murathan gave Chen inequalities for submanifolds of a locally conformal almost cosymplectic manifold and a cosymplectic space form endowed with semi-symmetric metric connections. Moreover, Zhang et al. [16] obtained Chen-like inequalities for submanifolds of a Riemannian manifold of quasi-constant curvature endowed with a semi-symmetric metric connection by using an algebraic approach.

Instead of concentrating on the sectional curvature together with the extrinsic squared mean curvature, the Casorati curvature of a submanifold in a Riemannian manifold was considered as an extrinsic invariant, defined as the normalized square of the length of the second fundamental form. The notion of Casorati curvature extends the concept of the principal direction of a hypersurface of a Riemannian manifold. Several geometers [17–21] found geometrical meaning and importance in the Casorati curvature. Therefore, it is of great interest to obtain optimal inequalities for the Casorati curvatures of submanifolds in different ambient spaces. Decu et al. in [22] obtained some optimal inequalities involving the scalar curvature and the Casorati curvature of a Riemannian submanifold in a real space form, and the holomorphic sectional curvature and the Casorati curvature of a Kähler hypersurface in a complex space form.
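In the standard notation (an editorial rendering; the paper's displayed formulas did not survive in this copy), the invariant just described, for an n-dimensional submanifold of an m-dimensional ambient manifold with second fundamental form h, is:

```latex
% Casorati curvature: the normalized squared length of the second
% fundamental form h, with h^{\alpha}_{ij} its components with respect to
% orthonormal tangent and normal frames.
\mathcal{C} = \frac{1}{n}\,\|h\|^{2}
            = \frac{1}{n}\sum_{\alpha=n+1}^{m}\sum_{i,j=1}^{n}\bigl(h^{\alpha}_{ij}\bigr)^{2}.
```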
The same authors also proved an inequality in which the scalar curvature is estimated from above by the normalized Casorati curvatures in [23]. Recently, some optimal inequalities involving Casorati curvatures were proved in [24, 25] for slant submanifolds in quaternionic space forms. As a natural prolongation of our research, in this paper we will study these inequalities for submanifolds in real space forms endowed with semi-symmetric metric connections.

- (i) The normalized δ-Casorati curvature satisfies the first of the two optimal inequalities. Moreover, the equality sign holds if and only if the submanifold is an invariantly quasi-umbilical submanifold with trivial normal connection in the ambient space form, such that, with respect to suitable orthonormal tangent and normal frames, the shape operators take the following forms:
- (ii) The normalized δ-Casorati curvature satisfies the second of the two optimal inequalities. Moreover, the equality sign holds if and only if the submanifold is an invariantly quasi-umbilical submanifold with trivial normal connection in the ambient space form, such that, with respect to suitable orthonormal tangent and normal frames, the shape operators take the following forms:

If the torsion tensor of a linear connection ∇̃ satisfies T̃(X, Y) = ϕ(Y)X − ϕ(X)Y for a 1-form ϕ, then the connection is called a semi-symmetric connection. Let g be a Riemannian metric. If ∇̃g = 0, then ∇̃ is called a semi-symmetric metric connection.

A semi-symmetric metric connection is given by ∇̃_X Y = ∇̂_X Y + ϕ(Y)X − g(X, Y)P for any vector fields X and Y, where ∇̂ denotes the Levi-Civita connection with respect to the Riemannian metric g and P is the vector field defined by g(P, X) = ϕ(X) for any vector field X.

We will consider a Riemannian manifold endowed with a semi-symmetric metric connection and with the Levi-Civita connection. Let M be an n-dimensional submanifold of an m-dimensional Riemannian manifold. On the submanifold M we consider the induced semi-symmetric metric connection, denoted by ∇, and the induced Levi-Civita connection. We denote by R and R̂ the curvature tensors of the induced semi-symmetric metric connection and of the induced Levi-Civita connection, respectively, on M.

Here h̃ is the second fundamental form of M in the ambient manifold and h is a (0, 2)-tensor on M. According to formula (7) from [6], h is also symmetric. One denotes by H the mean curvature vector of M.

Let the ambient manifold be a real space form of constant sectional curvature c endowed with a semi-symmetric metric connection. Denote by λ the trace of α.

The submanifold M is called invariantly quasi-umbilical if there exist m − n mutually orthogonal unit normal vectors such that the shape operators with respect to all of these directions have an eigenvalue of multiplicity n − 1 and such that for each of these directions the distinguished eigendirection is the same [26].

for every tangent hyperplane L of M. Taking the infimum over all tangent hyperplanes L, the theorem trivially follows.

Remark. We have used a slightly modified coefficient in the definition of the normalized δ-Casorati curvature; in fact, the coefficient of [22, 23, 25] differs from the one used in the present paper because we are working with the generalized normalized δ-Casorati curvature for a positive real number, as in [24].

The authors would like to thank the referee for his valuable comments and suggestions which helped to improve the paper.

- Friedmann A, Schouten JA: Über die Geometrie der halbsymmetrischen Übertragungen. Math. Z. 1924, 21: 211-223. doi:10.1007/BF01187468
- Hayden HA: Subspaces of a space with torsion. Proc. Lond. Math. Soc. 1932, 34: 27-50.
- Yano K: On semi-symmetric metric connection. Rev. Roum. Math. Pures Appl.
1970, 15: 1579-1586.
- Imai T: Hypersurfaces of a Riemannian manifold with semi-symmetric metric connection. Tensor 1972, 23: 300-306.
- Imai T: Notes on semi-symmetric metric connections. Tensor 1972, 24: 293-296.
- Nakao Z: Submanifolds of a Riemannian manifold with semi-symmetric metric connections. Proc. Am. Math. Soc. 1976, 54: 261-266. 10.1090/S0002-9939-1976-0445416-9
- Chen B-Y: Some pinching and classification theorems for minimal submanifolds. Arch. Math. 1993, 60: 568-578. 10.1007/BF01236084
- Chen B-Y: Relations between Ricci curvature and shape operator for submanifolds with arbitrary codimensions. Glasg. Math. J. 1999, 41: 33-41. 10.1017/S0017089599970271
- Chen B-Y: A general inequality for submanifolds in complex space forms and its applications. Arch. Math. 1996, 67(6): 519-528. 10.1007/BF01270616
- Chen B-Y: An optimal inequality for CR-warped products in complex space forms involving CR δ-invariant. Int. J. Math. 2012, 23(3): 1250045.
- Oiagǎ A, Mihai I: B.-Y. Chen inequalities for slant submanifolds in complex space forms. Demonstr. Math. 1999, 32: 835-846.
- Mihai A, Özgür C: Chen inequalities for submanifolds of real space forms with semi-symmetric metric connection. Taiwan. J. Math. 2010, 14: 1465-1477.
- Mihai A, Özgür C: Chen inequalities for submanifolds of complex space forms and Sasakian space forms endowed with semi-symmetric metric connections. Rocky Mt. J. Math. 2011, 41: 1653-1673. 10.1216/RMJ-2011-41-5-1653
- Özgür C, Murathan C: Chen inequalities for submanifolds of a locally conformal almost cosymplectic manifold with a semi-symmetric metric connection. An. Univ. 'Ovidius' Constanţa, Ser. Mat. 2010, 18(1): 239-253.
- Özgür C, Murathan C: Chen inequalities for submanifolds of a cosymplectic space form with a semi-symmetric metric connection. An. Univ. 'Ovidius' Constanţa, Ser. Mat. 2012, 58(2): 395-408.
- Zhang P, Zhang L, Song W: Chen's inequalities for submanifolds of a Riemannian manifold of quasi-constant curvature with a semi-symmetric metric connection. Taiwan. J. Math. (in press). doi:10.11650/tjm.18.2014.4045
- Albertazzi L: Handbook of Experimental Phenomenology: Visual Perception of Shape, Space and Appearance. Wiley, Chichester; 2013.
- Casorati F: Mesure de la courbure des surfaces suivant l'idée commune. Ses rapports avec les mesures de courbure gaussienne et moyenne. Acta Math. 1890, 14(1): 95-110.
- Haesen S, Kowalczyk D, Verstraelen L: On the extrinsic principal directions of Riemannian submanifolds. Note Mat. 2009, 29(2): 41-53.
- Verstraelen L: The geometry of eye and brain. Soochow J. Math. 2004, 30(3): 367-376.
- Verstraelen L: Geometry of submanifolds I. The first Casorati curvature indicatrices. Kragujev. J. Math. 2013, 37(1): 5-23.
- Decu S, Haesen S, Verstraelen L: Optimal inequalities involving Casorati curvatures. Bull. Transylv. Univ. Braşov, Ser. B 2007, 14(49): 85-93,
suppl.
- Decu S, Haesen S, Verstraelen L: Optimal inequalities characterising quasi-umbilical submanifolds. J. Inequal. Pure Appl. Math. 2008, 9(3): Article ID 79.
- Lee JW, Vîlcu GE: Inequalities for generalized normalized δ-Casorati curvatures of slant submanifolds in quaternionic space forms. Preprint.
- Slesar V, Şahin B, Vîlcu GE: Inequalities for the Casorati curvatures of slant submanifolds in quaternionic space forms. J. Inequal. Appl. 2014, 2014: 123.
- Blair D: Quasi-umbilical, minimal submanifolds of Euclidean space. Simon Stevin 1977, 51: 3-22.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Presentation on theme: "Abstract This project is a structural analysis and design of a residential building located in JENIEN City. The building consists of 7 floors. The."— Presentation transcript:
2 Abstract
This project is a structural analysis and design of a residential building located in JENIEN City. The building consists of 7 floors. The final analysis and design of the building is done using a three-dimensional (3D) structural model in the structural analysis and design software SAP2000.
4 The preliminary dimensions of the structural elements are determined using one-dimensional structural analysis of the structural members under gravity loads. Contain analysis and design is used for this purpose. The structural model results are verified by simple calculations and by comparison with the one-dimensional analysis.
6 CHAPTER TWO: (PRELIMINARY DESIGN)
* General
* Design of Rib Slab
* Design of Columns
* Design of Beams
7 CHAPTER THREE: (Three-Dimensional Analysis and Design)
* General
* Modeling the Building in 3D
* Seismic Loads
* Analysis and Design of Slabs
* Analysis and Design of Beams
* Analysis and Design of Columns
* Analysis and Design of Footings
* Analysis and Design of Stairs
* Analysis and Design of Walls
8 **CHAPTER ONE: INTRODUCTION
*General: This project introduces the analysis and design of a reinforced concrete residential building, and provides clear structural drawings for construction. The project is a residential building which consists of 7 stories above ground level. The area of each story is about 372 m².
13 *Design of Rib Slab:
- Minimum slab thickness is calculated according to the ACI provision. ACI
14 One end continuous = L/18.5 = 425/18.5 = 23 cm
Cantilever = L/8 = 160/8 = 20 cm
Then we assume a rib slab thickness (h) = 25 cm
Cross section in ribbed slab
15 The distribution of ribs in the typical slabs is shown in the figure
16 The ribs in the slab are analyzed using the SAP2000 program. As an example, the analysis results of a rib are illustrated here: use 2φ12 top and 2φ10 bottom steel
Moment diagram for rib, ton.m
17 *Design of Columns
In this project rectangular columns are used, and these columns carry axial load and no moment.
- Design of Column: the dimensions of the column are 30*35 cm, Ag = 1050 cm², and we use ρ between (1%-4%).
Pu = 149 ton
Pn = 229 ton
As = 11 cm²
Use 6φ16
18 *Design of Beams
After distributing the beams in the plan as shown in the figure, we insert them into SAP2000, apply to each beam the loads coming from the ribbed slab and from external or internal walls, and then design it.
19 -Design of beam (65*25):
Moment diagram, ton.m
Area of steel, cm²
Use 8φ18 top and 5φ18 bottom steel
20 **CHAPTER THREE: Three-Dimensional Analysis and Design
*General: This chapter provides the analysis and design of a 3D model of the building using the SAP2000 program. The figure below shows the 3D model.
3D model
21 *Modeling the Building as a 3D Structure:
**Sections:
- shear walls = 20 cm
- Ribbed slabs are represented as one-way solid slabs in the y-direction.
The thickness is calculated to be equivalent to the ribs' moment of inertia:
Z = (12*17*(17*0.5) + 52*8*21)/((12*17) + (52*8)) = 16.9 cm
Ic = (12*17^3)/12 + 12*17*8.4^2 + (52*8^3)/12 + 52*8*4.1^2 ≈ 28519 cm^4
Setting 52*(H_equivalent)^3/12 = Ic gives H = 18.74 cm
22 **Sections (stiffness modifiers):
- beams: variable in section; we use a concrete cover of 3 cm; moment of inertia modifier about the 2-axis = modifier about the 3-axis = 0.35
- columns: we use a concrete cover of 4 cm; moment of inertia modifier about the 2-axis = modifier about the 3-axis = 0.7
**Loads:
- Own weight: calculated by the program.
- Live load = 0.25 ton/m².
- Total superimposed dead load = 0.37 ton/m².
23 - Seismic loads: first we define the equivalent static load case, as shown in the picture
24 We define the cases of the seismic loads: Seismic-x, Seismic-y
25 *Structural Model Verification: Equilibrium Check:
- Live load:
Total live load = KN
Live load from SAP = KN
% Error = 0.8% < 5% ok
26 - Dead loads:
Total load = KN
Dead load from SAP = 32216.6 KN
Error = 1.5% < 5% ok.
27 - Superimposed dead load:
Total S.D. = 9870 KN
From SAP = 9622.7 KN
Error = 2% < 5% ok
31 *Analysis and Design of Beams:
Analysis and design output was taken from SAP2000.
Beam 1.2 (70*30), outside of building. Main steel; stirrups (2-leg φ10 mm); total A/s; spacing (cm): span 1 bottom 3φ16, 0.00110, top 0.048; span 2: 0.051; span 3: 0.002
33 *Analysis and Design of Columns:
Analysis and design output was taken from SAP2000. Design the worst column (max. axial load = 212 ton). Assume column dimensions 30 cm × 60 cm:
Pd = φλ[0.85*f'c*(Ag - As) + fy*As] = 0.65 × 0.8 × (0.85 × 300 × (0.99 × 30 × 60) + 4200 × 0.01 × 30 × 60) = 275.6 ton > 212 ok
As = 0.01 × 30 × 60 = 18 cm² → use 8φ18 mm
37 Footing table: Footing No.; footing dimensions (Length (m), Width (m), Depth (m)); longitudinal reinforcement (area of steel (cm²), # of bars in each direction). Data as extracted: 1320.5168Ф16/m1.67.74Ф16/m220.127.116.110.987Ф16/m18.104.22.1683.47.5963.611022.147Ф20/m
38 - Wall footings: same steps as the single footing. Footing wall No.; footing wall dimensions (Width (m), Height (m)); vertical steel; horizontal steel; # of bars. Data as extracted: 11.600.405Ф14/m6Ф16/m21.400.355Ф16/m341.304Ф16/m5
40 Flexural design:
M1: max. negative moment from SAP = 21.4 t.m, Mn = 23.7, As = 15 cm² → use 8φ16
Max. positive moment = 8 t.m, Mn = 9 t.m, As = 8 cm² → use 5φ16
41 *Analysis and Design of Stairs:
- Going of the stair is 30 cm as standard.
- Flights and landings are designed as simply supported solid slabs; landing thickness = ln/20 = 270/20 = 13.5 cm, so a 15 cm thickness is suitable. Rise of stair = 16 cm.
- Design of the stair from SAP, M11: maximum (-ve) and (+ve) moments = 0.52 ton.m/m
ρ = 0.0009 < ρmin, so As = 0.003 × 100 × 12 ≈ 3.7 cm²/m → use 3φ14 top and bottom steel
42 Maximum (-ve) and (+ve) moment = 1.8 ton.m/m; ρ = 0.003 > 0.0018, so As = 0.003 × 100 × 12 ≈ 3.7 cm²/m → use 3φ14 top and bottom steel
43 Design of shear walls
Shear Wall No.; shear wall dimensions (Width (m), Length (m)); vertical steel; horizontal steel; # of bars. Data as extracted: 10.224.54Ф14/m211.63454.9616.712.9
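As a numerical cross-check of two calculations above (the equivalent-slab thickness on slide 21 and the tied-column capacity on slide 33), the short sketch below recomputes both. It assumes, since the slides do not state units explicitly, kg and cm with f'c = 300 kg/cm² and fy = 4200 kg/cm², which is consistent with the numbers shown.

```python
# Sanity checks for two slide calculations (assumed units: kg, cm).

# Tied-column design capacity, ACI-style:
# phi*Pn,max = phi * lambda * (0.85*fc*(Ag - Ast) + fy*Ast)
phi, lam = 0.65, 0.80          # strength-reduction and max-axial factors (tied column)
fc, fy = 300.0, 4200.0         # assumed kg/cm^2, matching the slide's numbers
b, h, rho = 30.0, 60.0, 0.01   # 30x60 cm column with 1% steel
Ag = b * h
Ast = rho * Ag
Pd = phi * lam * (0.85 * fc * (Ag - Ast) + fy * Ast)
print(f"Pd = {Pd / 1000:.1f} ton")  # -> 275.6 ton, as on slide 33 (> 212 ton demand)

# Equivalent solid-slab thickness matching one rib's moment of inertia (slide 21).
A_web, z_web = 12 * 17, 17 / 2       # 12x17 cm web, centroid 8.5 cm from the bottom
A_fl, z_fl = 52 * 8, 17 + 8 / 2      # 52x8 cm flange, centroid 21 cm from the bottom
Z = (A_web * z_web + A_fl * z_fl) / (A_web + A_fl)
Ic = (12 * 17**3) / 12 + A_web * (Z - z_web)**2 \
   + (52 * 8**3) / 12 + A_fl * (z_fl - Z)**2
H = (12 * Ic / 52) ** (1 / 3)        # solve 52*H^3/12 = Ic for H
print(f"Z = {Z:.1f} cm, Ic = {Ic:.0f} cm^4, H = {H:.2f} cm")  # 16.9, ~28519, 18.74
```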
What is an example of a fractal? Fractals. A fractal is a detailed pattern that looks similar at any scale and repeats itself over time. Examples of fractals in nature are snowflakes, tree branching, lightning, and ferns.
Consequently, what shape is a fractal? A fractal is a type of mathematical shape that is infinitely complex. In essence, a fractal is a pattern that repeats forever, and every part of the fractal, no matter how far you zoom in or out, looks very similar to the whole image. Fractals surround us in many different aspects of life.
In this manner, can any shape be a fractal? A fractal, in mathematics, is any of a class of complex geometric shapes that commonly have "fractional dimension," a concept first introduced by the mathematician Felix Hausdorff in 1918. Fractals are distinct from the simple figures of classical, or Euclidean, geometry: the square, the circle, the sphere, and so forth.
In the same way, is a fractal possible? The consensus among mathematicians is that theoretical fractals are infinitely self-similar, iterated, and detailed mathematical constructs having fractal dimensions, of which many examples have been formulated and studied. Many real and model networks have been found to have fractal features such as self-similarity.
Are humans fractals? We are fractal. Our lungs, our circulatory system, and our brains are like trees. They are fractal structures. Most natural objects, and that includes us human beings, are composed of many different types of fractals woven into each other, each with parts which have different fractal dimensions.
Are butterflies fractal? After a nearly 40-year chase, physicists have found experimental proof for one of the first fractal patterns known to quantum physics: the Hofstadter butterfly.
What do fractals do? Driven by recursion, fractals are images of dynamic systems: the pictures of chaos. Geometrically, they exist in between our familiar dimensions. Fractal patterns are extremely familiar, since nature is full of fractals. For instance: trees, rivers, coastlines, mountains, clouds, seashells, hurricanes, etc.
Is a circle a fractal? The most iconic examples of fractals have bumps along their boundaries, and if you zoom in on any bump, it will be covered in bumps, and so on. Both a circle and a line segment have Hausdorff dimension 1, so from this perspective a circle is a very boring fractal.
How do you identify fractals in nature? A fractal is a kind of pattern that we observe often in nature and in art. As Ben Weiss explains, "whenever you observe a series of patterns repeating over and over again, at many different scales, and where any small part resembles the whole, that's a fractal."
Is a snowflake a fractal? Part of the magic of snowflake crystals is that they are fractals, patterns formed from chaotic equations that contain self-similar patterns of complexity increasing with magnification. If you divide a fractal pattern into parts you get a nearly identical copy of the whole in a reduced size.
Is cauliflower a fractal? This variant form of cauliflower is the ultimate fractal vegetable. Its pattern is a natural representation of the Fibonacci or golden spiral, a logarithmic spiral where every quarter turn is farther from the origin by a factor of phi, the golden ratio.
Why are there fractals in nature?
Fractals are hyper-efficient in their construction, and this allows plants to maximize their exposure to sunlight and to transport nutrients efficiently throughout their cellular structure. These fractal patterns of growth have a mathematical, as well as physical, beauty.
Is consciousness a fractal? In both plants and animals consciousness is fractal. Since fractals can only pass information in one direction, it is impossible to extrapolate backward to find the rule that governs the fractal. Thus, similarly, it will be impossible to completely determine the rule or rules that govern consciousness.
How are fractals observed in your life? USE OF FRACTALS IN OUR LIFE: Fractal mathematics has many practical uses, too. For example, it is used in producing stunning and realistic computer graphics, in computer file compression systems, in the architecture of the networks that make up the internet, and even in diagnosing some diseases.
What is fractal painting? Fractal art is achieved by visually displaying the mathematical calculations of fractal objects, using self-similar transforms that are generated and manipulated with different assigned geometric properties to produce multiple variations of the shape in continually reducing patterns.
Why are fractals so soothing? The results of many studies show that exposure to fractal patterns in nature reduces people's stress levels by up to 60%. It seems this stress-reduction effect occurs because of a certain physiological resonance within the eye. Bringing nature and those repetitive patterns indoors can have a calming effect on patients.
Is the brain a fractal? The human brain, with its exquisite complexity, can be seen as a fractal object, and fractal analysis can be successfully applied to analyze its wide physiopathological spectrum and to describe its self-similar patterns, in both neuroanatomical architecture and neurophysiological time series.
What does a butterfly landing on you mean? "A butterfly landing on you can be a sign that your unconscious mind approves of something, probably related to personal development or service to others, just as a butterfly is a servant of nature," it says. "It can symbolize that you can be trusted with delicate things."
What is chaos theory in layman's terms? Chaos theory, in mechanics and mathematics, is the study of apparently random or unpredictable behaviour in systems governed by deterministic laws. A more accurate term, deterministic chaos, suggests a paradox because it connects two notions that are familiar and commonly regarded as incompatible.
Can a butterfly's wings cause a hurricane? It is not true that events of the magnitude of a butterfly flapping its wings do not affect major events such as hurricanes. It is, however, impossible in practice to cause a specific hurricane by employing suitably trained butterflies.
What is a fractal and what is it good for?
Is life a fractal? It is the geometry of deterministic chaos, and it can also describe the geometry of mountains, clouds, and galaxies. Although it is not widely known, the basic traits of a fractal can be applied to all aspects of life, because life exists in the form of a fractal abstraction.
Is a spiral a fractal? Because this spiral is logarithmic, the curve appears the same at every scale, and can thus be considered fractal.
What is the most famous fractal? Largely because of its haunting beauty, the Mandelbrot set has become the most famous object in modern mathematics. It is also the breeding ground for the world's most famous fractals.
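Since the Mandelbrot set comes up repeatedly here, a minimal sketch of how membership is usually tested may help: a point c in the complex plane belongs to the set if the iteration z → z² + c stays bounded. The escape radius 2 and the iteration cap below are the conventional choices, not anything prescribed by this page.

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if c appears to lie in the Mandelbrot set.

    Iterates z -> z*z + c from z = 0; if |z| ever exceeds 2 the orbit
    is guaranteed to escape to infinity, so c is outside the set.
    """
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# A few spot checks: 0 and -1 are inside; 1 escapes quickly.
print(in_mandelbrot(0))    # True
print(in_mandelbrot(-1))   # True (period-2 orbit: -1, 0, -1, ...)
print(in_mandelbrot(1))    # False (0, 1, 2, 5, 26, ... escapes)
```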
Who coined the term fractal, and in what year? Mandelbrot coined the term "fractal" in 1975 to describe irregular shapes in nature and in mathematics that exhibit self-similarity; like snowflakes or Romanesco broccoli, they look roughly the same at varying scales.
Do fractals have infinite perimeter? Consider a three-dimensional fractal constructed from Koch curves. The progression for the area converges to 2 while the progression for the perimeter diverges to infinity, so, as in the case of the Koch snowflake, we have a finite area bounded by an infinite fractal curve.
Is a rose a fractal? Figure 1 shows an example of rose flower petals and Figure 2 shows a dried tree with branches. Both are fractals.
Is a pineapple a fractal? Recurring patterns are found in nature in many different things. They are called fractals. Think of a snowflake, peacock feathers, and even a pineapple as examples of a fractal.
Is a sunflower a fractal? Sunflower: the pattern of seeds on a sunflower is a fractal because the seeds are created in the same way: the first seed is created, then the pattern rotates by a certain consistent angle to create another seed. Pine cones: pine cones also display fractal patterns.
What are some famous fractals? The Cantor set, Sierpinski carpet, Sierpinski gasket, Peano curve, Koch snowflake, Harter-Heighway dragon curve, T-square, and Menger sponge are some examples of such fractals.
Is snow a fractal? Branched constructions like snowflakes often exhibit fractal patterns. The defining characteristic of a fractal snowflake is a self-similar structure, where branches have sidebranches, which have their own smaller sidebranches, and so on. In fact, real snowflakes are only slightly fractal.
How do you draw a Koch curve?
How is broccoli a fractal? Fractals show self-similarity, or comparable structure regardless of scale. In other words, a small piece of broccoli, when viewed up close, looks the same as a larger chunk. (The broccoli isn't a true fractal, because at a certain magnification it loses its self-similar shape, revealing instead regular old molecules.)
Is a mountain a fractal? Rivers are good examples of natural fractals, because of their tributary networks (branches off branches off branches) and their complicated winding paths. Mountains are the result of tectonic forces pushing them up and weathering breaking them down. It is little surprise that they are well described by fractals.
What are the four types of fractal patterns? They are tricky to define precisely, though most are linked by a set of four common fractal features: infinite intricacy, zoom symmetry, complexity from simplicity, and fractional dimensions, all of which will be explained below.
Why is a pineapple a fractal? The laws that govern the creation of fractals seem to be found throughout the natural world. Pineapples grow according to fractal laws, and ice crystals form in fractal shapes, the same ones that show up in river deltas and the veins of your body.
Are clouds fractal? Clouds are not fractal at every scale. At the scales where large spatial patterns influence cloud dynamics, the structure will not be fractal. At smaller scales, turbulence may make the structure fractal again.
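The Koch-curve questions above (how to draw it, and the finite-area/infinite-perimeter paradox) can be made concrete with a few lines of code. This is a generic sketch of the standard construction, not taken from the page: each segment is replaced by four segments one third as long, so the length grows by a factor of 4/3 per iteration, and the similarity dimension is log 4 / log 3 ≈ 1.26.

```python
import math

def koch(points, depth):
    """Refine a polyline into the next Koch-curve iteration `depth` times.

    Each segment p->q is replaced by four segments through the points
    a, m, b, where a and b trisect the segment and m is the apex of an
    equilateral bump erected on the middle third.
    """
    for _ in range(depth):
        refined = [points[0]]
        for (x1, y1), (x2, y2) in zip(points, points[1:]):
            dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
            a = (x1 + dx, y1 + dy)
            b = (x1 + 2 * dx, y1 + 2 * dy)
            # Rotate the middle-third vector by +60 degrees to find the apex.
            s = math.sin(math.pi / 3)
            m = (a[0] + dx * 0.5 - dy * s, a[1] + dy * 0.5 + dx * s)
            refined += [a, m, b, (x2, y2)]
        points = refined
    return points

curve = koch([(0.0, 0.0), (1.0, 0.0)], depth=5)
length = sum(math.dist(p, q) for p, q in zip(curve, curve[1:]))
print(f"length after 5 iterations: {length:.3f}")              # (4/3)**5 ~ 4.214
print(f"similarity dimension: {math.log(4) / math.log(3):.3f}")  # ~1.262
```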
after impinging will remain at rest. It is evident, that in this case, the smaller sphere must descend through a greater space than the larger, in order to acquire the necessary velocity. If the spheres move in the same or in opposite directions, with different momenta, and one strike the other, the body that impinges will lose exactly the quantity of momentum that the other acquires. Thus, in all cases, it is known by experience that reaction is equal and contrary to action, or that equal momenta in opposite directions destroy one another. Daily experience shows that one body cannot acquire motion by the action of another, without depriving the latter body of the same quantity of motion. Iron attracts the magnet with the same force that it is attracted by it; the same thing is seen in electrical attractions and repulsions, and also in animal forces; for whatever may be the moving principle of man and animals, it is found they receive, by the reaction of matter, a force equal and contrary to that which they communicate, and in this respect they are subject to the same laws as inanimate beings.
Mass proportional to Weight.
120. In order to show that the mass of bodies is proportional to their weight, a mode of defining their mass without weighing them must be employed; the experiments that have been described afford the means of doing so, for having arrived at the preceding results, with spheres formed of matter of the same kind, it is found that one of the bodies may be replaced by matter of another kind, but of different dimensions from that replaced. That which produces the same effects as the mass replaced, is considered as containing the same mass or quantity of matter. Thus the mass is defined independently of weight, and as in any one point of the earth's surface every particle of matter tends to move with the same velocity by the action of gravitation, the sum of their tendencies constitutes the weight of a body; hence the mass of a body is proportional to its weight, at one and the same place.
121. Suppose two masses of different kinds of matter, A, of hammered gold, and B, of cast copper. If A in motion will destroy the motion of a third mass of matter C, and twice B is required to produce the same effect, then the density of A is said to be double the density of B.
Mass proportional to the Volume into the Density.
122. The masses of bodies are proportional to their volumes multiplied by their densities; for if the quantity of matter in a given cubical magnitude of a given kind of matter, as water, be arbitrarily assumed as the unit, the quantity of matter in another body of the same magnitude of the density ρ will be represented by ρ; and if the magnitude of the second body to that of the first be as m to 1, the quantity of matter in the second body will be represented by m × ρ.
123. The densities of bodies of equal volumes are in the ratio of their weights, since the weights are proportional to their masses; therefore, by assuming for the unit of density the maximum density of distilled water at a constant temperature, the density of a body will be the ratio of its weight to that of a like volume of water reduced to this maximum. This ratio is the specific gravity of a body.
Equilibrium of two Bodies.
124. If two heavy bodies be attached to the extremities of an inflexible line without mass, which may turn freely on one of its points; when in equilibrio, their masses are reciprocally as their distances from the point of motion.
Demonstration.-For, let two heavy bodies, m and m', fig. 34, be attached to the extremities of an inflexible line, free to turn round one of its points, n; and suppose the line to be deflected from two right angles by an indefinitely small angle amn, which may be represented by ω. If g be the force of gravitation, gm, gm' will be the gravitation of the two bodies. But the gravitation gm acting in the direction na may be resolved into two forces, one in the direction mn, which is destroyed by the fixed point n, and another acting on m' in the direction m'm. Let mn = f, m'n = f'; then m'm = f + f' very nearly. Hence the whole force gm is to the part acting on m' :: m'm : na, and the action of m on m' is gm·na/(f + f'); but mn : na :: 1 : ω, for the arc is so small that it may be taken for its sine. Hence na = ω·f, and the action of m on m' is gm·ω·f/(f + f'). In the same manner it may be shown that the action of m' on m is gm'·ω·f'/(f + f'); but when the bodies are in equilibrio, these forces must be equal: therefore gm·f = gm'·f', or gm : gm' :: f' : f, which is the law of equilibrium in the lever, and shows the reciprocal action of parallel forces.
Equilibrium of a System of Bodies.
125. The equilibrium of a system of bodies may be found, when the system is acted on by any forces whatever, and when the bodies also mutually act on, or attract, each other.
Demonstration.-Let m, m', m'', &c., be a system of bodies attracted by a force whose origin is in S, fig. 35; and suppose each body to act on all the other bodies, and also to be itself subject to the action of each; the actions of all these forces on the bodies m, m', m'', &c., are as the masses of these bodies and the intensities of the forces conjointly. Let the action of the forces on one body, as m, be first considered; and, for simplicity, suppose the number of bodies limited to three, m and the bodies m' and m''. Suppose m' and m'' to remain fixed, and that m is arbitrarily moved to n: then mn is the virtual velocity of m; and if the perpendiculars na, nb, nc be drawn, the lines ma, mb, mc are the virtual velocities of m resolved in the directions of the forces which act on m. Hence, by the principle of virtual velocities, if the action of the force at S on m be multiplied by ma, the mutual action of m and m' by mb, and the mutual action of m and m'' by mc, the sum of these products must be zero when the point m is in equilibrio; or, m being the mass, if the action of S on m be F·m, and the reciprocal actions of m on m' and m'' be p, p', then
mF × ma + p × mb + p' × mc = 0.
Now, if m and m'' remain fixed, and m' is moved to n', then
m'F' × m'a' + p × m'b' + p'' × m'c' = 0.
And a similar equation may be found for each body in the system. Hence the sum of all these equations must be zero when the system is in equilibrio. If, then, the distances Sm, Sm', Sm'' be represented by s, s', s'', and the distances mm', mm'', m'm'' by f, f', f'', we shall have
Σ.mFδs + Σ.pδf + Σ.p'δf' + &c. = 0,
Σ being the sum of finite quantities; for it is evident that δf = mb + m'b', δf' = mc + m''c'', and so on. If the bodies move on surfaces, it is only necessary to add the terms Rδr, R'δr', &c., in which R and R' are the pressures or resistances of the surfaces, and δr, δr' the elements of their directions or the variations of the normals. Hence in equilibrio
Σ.mFδs + Σ.pδf + &c. + Rδr + R'δr' + &c. = 0.
Now, the variation of the normal is zero; consequently the pressures vanish from this equation: and if the bodies be united at fixed distances from each other, the lines mm', m'm'', &c., or f, f', &c., are constant: consequently δf = 0, δf' = 0, &c.
The distance f of two points m and m' in space is
f = √((x' − x)² + (y' − y)² + (z' − z)²),
x, y, z being the co-ordinates of m, and x', y', z' those of m'; so that the variations may be expressed in terms of these quantities: and if they be taken such that δf = 0, δf' = 0, &c., the mutual action of the bodies will also vanish from the equation, which is reduced to
Σ.mFδs = 0.
126. Thus in every case the sum of the products of the forces into the elementary variations of their directions is zero when the system is in equilibrio, provided the conditions of the connexion of the system be observed in their variations or virtual velocities, which are the only indications of the mutual dependence of the different parts of the system on each other.
127. The converse of this law is also true: that when the principle of virtual velocities exists, the system is held in equilibrio by the forces at S alone.
Demonstration.-For if it be not, each of the bodies would acquire a velocity v, v', &c., in consequence of the forces mF, m'F', &c. If δn, δn', &c., be the elements of their directions, then Σ.mFδn = Σ.mvδn. The virtual velocities δn, δn', &c., being arbitrary, may be assumed equal to vdt, v'dt, &c., the elements of the space moved over by the bodies; or to v, v', &c., if the element of the time be unity. Hence Σ.mFδn = Σ.mv². It has been shown that in all cases Σ.mFδs = 0, if the virtual velocities be subject to the conditions of the system. Hence, also, Σ.mv² = 0; but as all squares are positive, the sum of these squares can only be zero if v = 0, v' = 0, &c. Therefore the system must remain at rest, in consequence of the forces Fm, &c., alone.
128. Rotation is the motion of a body, or system of bodies, about a line or point. Thus the earth revolves about its axis, and a billiard-ball about its centre.
129. A rotatory pressure or moment is a force that causes a system of bodies, or a solid body, to rotate about any point or line. It is expressed by the intensity of the motive force or momentum, multiplied by the distance of its direction from the point or line about which the system or solid body rotates.
On the Lever.
130. The lever first gave the idea of rotatory pressure or moments, for it revolves about the point of support or fulcrum. When the lever mm', fig. 36, is in equilibrio, in consequence of forces applied to two heavy bodies at its extremities, the rotatory
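To gather the thread of arts. 124-127 in modern notation, the two key relations may be restated compactly as follows (our summary, not part of the original text; δ denotes a virtual displacement):

\[
g\,m\,f \;=\; g\,m'\,f' \quad\Longleftrightarrow\quad \frac{m}{m'} \;=\; \frac{f'}{f}
\qquad \text{(law of the lever, art.\ 124)}
\]
\[
\sum_i m_i F_i\,\delta s_i \;=\; 0
\qquad \text{(principle of virtual velocities, arts.\ 125--127)}
\]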
Comparisons against baseline within randomised groups are often used and can be highly misleading
Trials volume 12, Article number: 264 (2011)
In randomised trials, rather than comparing randomised groups directly, some researchers carry out a significance test comparing a baseline with a final measurement separately in each group. We give several examples where this has been done. We use simulation to demonstrate that the procedure is invalid and also show this algebraically. This approach is biased and invalid, producing conclusions which are, potentially, highly misleading. The actual alpha level of this procedure can be as high as 0.50 for two groups and 0.75 for three. Randomised groups should be compared directly by two-sample methods; separate tests against baseline are highly misleading.
When we randomise trial participants into two or more groups, we do this so that they will be comparable in every respect except the intervention which they then receive. The essence of a randomised trial is to compare the outcomes of groups of individuals that start off the same. We expect to see an estimate of the difference (the "treatment effect") with a confidence interval and, often, a P value. However, rather than comparing the randomised groups directly, researchers sometimes look within groups at the change in the outcome measure from the pre-intervention baseline to the final measurement at the end of the trial. They then perform a test of the null hypothesis that the mean difference is zero, separately in each randomised group. They may then report that in one group this difference is significant but not in the other, and conclude that this is evidence that the groups, and hence the treatments, are different. For example, a recent trial received wide media publicity as the first "anti-ageing" cream "proven" to work in a randomised controlled clinical trial . Participants were randomised into two groups, to receive either the "anti-ageing" product or the vehicle as a placebo. Among other measures, the authors report the appearance of fine lines and wrinkles, measured on a scale of 0 to 8, at baseline and after six months. The authors gave the results of significance tests comparing the score with baseline for each group separately, reporting the active treatment group to have a significant difference (P = 0.013) and the vehicle group not (P = 0.11). This was interpreted as showing that the cosmetic "anti-ageing" product resulted in significant clinical improvement in facial wrinkles. But we cannot draw this conclusion, because the lack of a significant difference in the vehicle group does not mean that subjects given this treatment do not improve, nor that they do not improve as well as those given the "anti-ageing" product. It is the sizes of the differences which are important; they should be compared directly in a two-sample test. The paper includes some data for the improvement in each group, 43% for the active group and 22% for controls. This was what was picked up by the media. No P value is given, but in the discussion the authors acknowledge that this difference was not significant. No confidence interval is given either, although this is the accepted, preferred way to present the results of a randomised trial [2, 3]. The British Journal of Dermatology published a letter critical of many aspects of this trial . A different version subsequently appeared in Significance . This happened, of course, only because the publicity generated by Boots brought the paper to the attention of JMB.
The "anti-ageing" skin cream trial made us think again about this method of analysis, which we have written about several times before [6–10]. In this paper we try to present a clearer explanation for why within group analysis is wrong. It is a greatly expanded version of ideas we introducing briefly in our Statistics Notes series in the British Medical Journal. We shall examine the statistical properties of testing within separate groups with a simulation. We consider the case where there is no true difference between the two treatments. Table 1 shows simulated data from a randomised trial, with two groups (A and B) of size 30 drawn from the same population, so that there is no systematic baseline difference between the groups. There is a baseline measurement, with standard deviation 2.0, and a final measurement, equal to the baseline plus a random variable with mean 0.0, standard deviation 1.0, plus a systematic increase of 0.5, half a standard deviation, in both groups. In this simulation, the proportion of possible samples which would give a significant difference between groups is 0.05. When the null hypothesis is true, this should be equal to the chosen type I error (alpha), which we have taken to be the conventional 0.05. There is no real difference, so the probability of a significant difference is 0.05, by definition. Within each group, there is an expected difference, so we can calculate the power to detect it, the probability of a significant difference, using the usual formula for a paired t test . For the chosen difference of half a standard deviation of the differences, using significance level 0.05, the power is 0.75. The usual way to analyse such data is to compare the mean final measurement between the groups using the two sample t method, or, better, to adjust the difference for the baseline measure using analysis of covariance or multiple regression . For these data, using the two sample t method, we get difference between groups in mean final measurement (A - B) = -0.61, P = 0.3, and adjusting the difference for the baseline measure using regression we get difference = 0.19, P = 0.5. In each case the difference is not statistically significant, which is not surprising because we know that the null hypothesis is true: there is no true difference in the population. There are other analyses which we could carry out on the data. For each group, we can compare baseline with final measurement using a paired t test. For group A, the mean increase is 0.48, which is statistically significant, P = 0.01; for group B the mean increase = 0.34, which is not significant, P = 0.08. The results of these significance tests are quite similar to those of the "anti-ageing" cream trial. We know that these data were simulated with an average increase of 0.5 from baseline to final measurement, so a significant difference in one group is not surprising. There are only 30 in a group so the power to detect the difference is not great. Indeed, only 75% of samples are expected to produce a significant difference, so the non-significant difference is not surprising, either. We would not wish to draw any conclusions from one simulation. We repeated it 10,000 times. In 10,000 runs, the difference between groups had P < 0.05 in the analysis of covariance 458 times, or for 4.6% of samples, very close to the 5% we expect. For the 20,000 comparisons between baseline and final measurement, 15,058 had P < 0.05, 75.3%, corresponding to the 75% power noted above. 
Of the 10,000 pairs of t tests for groups A and B, 617 pairs had neither test significant, 5,675 had both tests significant, and 3,708 had one test significant but not the other. So in this simulation, where there is no difference whatsoever between the two "treatments", 37.1% of runs produced a significant difference in one group but not the other. Hence we cannot interpret a significant difference in one group but not the other as a significant difference between the groups. How many pairs of tests would be expected to have a significant difference in one group and a non-significant difference in the other? How many pairs of tests will have one significant and one non-significant difference depends on the power of the paired tests. First, we shall assume that there is no true difference between interventions and that the power of each test within the group is the same. This will be the case for equal-sized groups. If the population difference from baseline to final measurement is very large, nearly all within-group tests will be significant, whereas if the population difference is small nearly all tests will be non-significant; in each case there will be few samples with only one significant difference. Intuitively, the probability of exactly one of the two tests being significant will rise in between these two extreme cases. Looking at the problem mathematically, if there is no difference between groups and the power of the paired t test to detect the difference between baseline and final measurement is P, the probability that the first group will have a significant paired test is P, the probability that the second will be non-significant is 1 - P, and the probability that both will happen is thus P × (1 - P). Similarly, the probability that the first will be non-significant and the second significant will also be (1 - P) × P, so the probability that one difference will be significant and the other not will be the sum of these probabilities, or 2P × (1 - P). It will not be 0.05, which it should be for a valid test between the groups. When the difference in the population between baseline and final measurement is zero, the probability that a group will have a significant difference is 0.05, because the null hypothesis is true. The probability that one group will have a significant difference and the other will not is then 2P × (1 - P) = 2 × 0.05 × (1 - 0.05) = 0.095, not 0.05. So we expect 9.5% of samples to have one and only one significant difference. We ran 10,000 simulations of this completely null situation. In 10,000 runs, the difference between groups had P < 0.05 in the analysis of covariance 485 times, or for 4.9% of samples, very close to the 5% we expect. For the 20,000 comparisons between baseline and final measurement, 1,016 had P < 0.05, 5.1%, again very close to the 5% we expect. Of the 10,000 pairs of t tests for groups A and B, 9,008 pairs had neither test significant, 24 had both tests significant, and 968 had one test significant but not the other, 9.7%, very close to the 9.5% predicted by the theory but not to the 5% which we would want if this procedure were valid. If the power of the within-group tests is 50%, as it would be here if the underlying difference were 37% of the within-group standard deviation, rather than 50% as in our first simulation, then 2P × (1 - P) = 2 × 0.50 × (1 - 0.50) = 0.50. So we would expect 50% of two-sample trials to have one and only one significant difference.
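These closed-form claims are easy to verify numerically. The sketch below (ours, not the authors') computes the actual alpha of the "one significant, one not" procedure as 1 − P^k − (1 − P)^k for k equally powered groups; for two groups this reduces to 2P(1 − P), reproducing the 0.095 and 0.50 figures just derived, and it also covers the three-group case discussed later.

```python
def actual_alpha(p: float, k: int = 2) -> float:
    """Probability that, among k independent within-group tests each with
    power p, at least one is significant and at least one is not.

    This is the chance that the flawed 'separate tests against baseline'
    procedure declares a treatment difference when none exists.
    """
    return 1 - p**k - (1 - p)**k

for k, p in [(2, 0.05), (2, 0.50), (3, 0.05), (3, 0.50)]:
    print(f"k={k}, within-group power={p:.2f}: actual alpha = {actual_alpha(p, k):.3f}")
# k=2, power=0.05 -> 0.095  (all null hypotheses true)
# k=2, power=0.50 -> 0.500  (worst case for two groups)
# k=3, power=0.05 -> 0.142  (~0.14, as quoted for three groups)
# k=3, power=0.50 -> 0.750  (worst case for three groups)
```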
We ran 10,000 simulations of this situation, where the power for a within-group difference is 50% but there is no between-group difference. In 10,000 runs, the difference between groups had P < 0.05 in the analysis of covariance 490 times, or for 4.9% of samples. For the 20,000 comparisons between baseline and final measurement, 9,938 had P < 0.05, 49.7%, very close to the 50% power within the group which this simulation was designed to have. Of the 10,000 pairs of t tests for groups A and B, 2,518 pairs had neither test significant, 2,456 had both tests significant, and 5,026 had one test significant but not the other, 50.3%, very close to the 50% predicted by the theory but not to the 5% which we would want if this procedure were valid. Figure 1 shows the actual alpha for a two-group trial against the power of the within-group test. This curve starts at P = 0.05, because this is the minimum possible power for a test with alpha = 0.05. The peak value is at P = 0.5 (the case just considered) and then the actual alpha declines as P increases, because with high power both within-group tests are likely to be significant. If the randomised groups represent populations which really are different after treatment, so that the null hypothesis of the trial is not true, the calculations are more complicated. The power of the within-group tests will be different for the two groups, because the population differences will not be the same. If the power of the within-group test for group A is P1 and for group B it is P2, then the actual alpha for the within-groups procedure is P1 × (1 - P2) + (1 - P1) × P2. This will have its maximum, not surprisingly, when one test has high power and the other has low power. This might be the case when one treatment is ineffective, such as a placebo, though not when both treatments are active.
Other examples of testing within randomised groups
The anti-ageing cream trial is by no means unusual in having tested within groups when the difference between groups is not significant. Altman gave the following example. Toulon and colleagues divided patients with chronic renal failure undergoing dialysis into two groups with low or with normal plasma heparin cofactor II (HCII) . Five months later, the acute effects of haemodialysis were examined by comparing the ratio of HCII to protein in plasma before and after dialysis. The data were analysed by separate paired Wilcoxon tests in each group. Toulon and colleagues published the data, which appear in Table 2, taken from Altman . They analysed the data using two paired Wilcoxon tests. For the low HCII group the before-to-after change was significant, P < 0.01. For the normal HCII group the difference was not significant, P > 0.05. What should they have done? They could have done a two-sample t test between groups on the ratio before dialysis minus the ratio after. This gives t = 0.16, 22 d.f., P = 0.88, or for the log-transformed data t = 1.20, P = 0.24. The variability is not the same in the two groups, so they might instead have done a two-sample rank-based test, the Mann-Whitney U test. This gives z = 0.89, P = 0.37. So either way, the difference is not statistically significant. In that example, we could tell what the between-groups test would give because the raw data were given. We cannot usually tell what the between-group comparison would show when researchers test within groups. Bland and Peacock gave the next two examples . In a randomized trial of morphine vs.
placebo for the anaesthesia of mechanically ventilated pre-term babies, it was reported that morphine-treated babies showed a significant reduction in adrenaline concentrations during the first 24 hours (median change -0.4 nmol/L, P < 0.001), which was not seen in the placebo group (median change 0.2 nmol/L, P < 0.79 (sic)) . There is no way to test whether the between-group difference is significant. Even though the median changes in this example are in opposite directions, this does not imply that there is good evidence that the treatments are different. In a study of treatments for menorrhagia during menstruation, 76 women were randomized to one of three drugs . The effects of the drugs were measured within the subjects by comparing three control menstrual cycles and three treatment menstrual cycles in each woman. The women were given no treatment during the control cycles. For each woman the control cycles were the three cycles preceding the treatment cycles. The authors reported that patients treated with ethamsylate used the same number of sanitary towels as in the control cycles. A significant reduction in the number of sanitary towels used was found in patients treated with mefenamic acid (P < 0.05) and tranexamic acid (P < 0.01), comparing the control periods with the treatment periods. For three groups, when the differences between interventions are all zero, the probability that one test will be significant and the other two will not is 3P(1 - P)² and that two tests will be significant and one not significant is 3P²(1 - P). The probability of getting at least one significant and one non-significant test between baseline and final measurement is, therefore, 3P(1 - P)² + 3P²(1 - P), which is equal to 3P(1 - P). The graph of this probability is shown in Figure 2. The value when all the null hypotheses within groups are true is 0.14, even greater than for two groups, and the maximum value, when P = 0.5, is 0.75. So if we compare three groups with within-group tests, and the interventions all have identical effects, we could have an actual alpha for the test as high as 0.75, rather than the 0.05 we need for a valid test. Sometimes authors test within groups when a between-groups procedure would have given a significant difference. Kerrigan and colleagues assessed the effects of different levels of information on anxiety in patients due to undergo surgery. They randomized patients to receive either simple or detailed information about the planned procedure and its risks. Anxiety was measured again after patients had been given the information. Kerrigan et al. calculated significance tests for the mean change in anxiety score for each group separately. In the group given detailed information the mean change in anxiety was not significant (P = 0.2), interpreted incorrectly as "no change". In the group given simple information the reduction in anxiety was significant (P = 0.01). They concluded that there was a difference between the two groups because the change was significant in one group but not in the other. As before, we should compare the two groups directly. We carried out an alternative analysis which tested the null hypothesis that, after adjustment for initial anxiety score, the mean anxiety scores are the same in patients given simple and detailed information. This showed a significantly higher mean score in the detailed information group . A different reason for testing within groups was given by Grant and colleagues .
They compared acupuncture with Transcutaneous Electrical Nerve Stimulation (TENS) in patients aged 60 or over with a complaint of back pain of at least 6 months' duration. Patients were randomly allocated to 4 weeks of treatment with acupuncture or TENS. The intention was to compare the two treatments. The authors report that, if 75% of patients responded to acupuncture and 40% to TENS, then a sample size of 30 in each group would give the trial a power of 80% to detect statistical significance at a probability level of P = 0.05. Four outcome measures were recorded: (1) visual analogue scale (VAS); (2) pain subscale of the 38-item Nottingham Health Profile Part 1 (NHP); (3) number of analgesic tablets consumed in the previous week; (4) spinal flexion measured from C7 to S1. The two groups appeared different at baseline, with patients in the acupuncture group having higher VAS and NHP pain scores, reduced spinal flexion, and lower tablet consumption compared to the TENS group. The authors carried out significance tests comparing the randomised groups for these baseline variables. They reported that the differences were "of borderline statistical significance: P = 0.064 for NHP, P = 0.089 for VAS, P = 0.10 for tablets and P = 0.16 for flexion". We think that these tests are meaningless, because if the groups were allocated randomly we know the null hypothesis, which is about the population, not the sample, is true . Grant and colleagues thought that these baseline differences would make post-treatment comparisons between groups difficult, as even a small imbalance between initial values might affect the pain relief obtained by different treatments. They therefore analysed each group separately, comparing the post-treatment final measurement with baseline. They obtained highly significant pain reductions in each group. They made some qualitative comparison between the treatments, but completely abandoned their original objective. They could have met that objective by using an analysis which adjusted for the baseline using regression. They should have done this whether or not the groups differed at baseline, because it would reduce variability and so increase power and precision. Calculating a confidence interval for each group separately is essentially the same error as testing within each group separately. Bland gave this example. Salvesen and colleagues reported follow-up of two randomized controlled trials of routine ultrasonography screening during pregnancy. At ages 8 to 9 years, children of women who had taken part in these trials were followed up. A subgroup of children underwent specific tests for dyslexia. The test results classified 21 of the 309 screened children (7%, 95% confidence interval 3% to 10%) and 26 of the 294 controls (9%, 95% confidence interval 4% to 12%) as dyslexic. They should have calculated a confidence interval for the difference between prevalences (-6.3 to +2.2 percentage points) or their ratio (0.44 to 1.34), because we could then compare the groups directly; a quick numerical check of this interval is sketched after this paragraph. Some authors test or estimate between groups, but then use a within-groups test to suggest that, even though there is insufficient evidence for a difference between groups, the test against baseline suggests that their test treatment is superior.
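As promised, here is a check on the Salvesen interval quoted above. The sketch recomputes the standard normal-approximation confidence interval for a difference between two independent proportions and reproduces the -6.3 to +2.2 percentage-point interval; the function name and structure are ours, not the authors'.

```python
from math import sqrt

def diff_proportions_ci(x1: int, n1: int, x2: int, n2: int, z: float = 1.96):
    """95% CI for p1 - p2 using the usual normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d - z * se, d + z * se

# Salvesen et al.: 21/309 screened vs 26/294 control children classified dyslexic.
lo, hi = diff_proportions_ci(21, 309, 26, 294)
print(f"difference: {100 * (21/309 - 26/294):.1f} percentage points, "
      f"95% CI {100 * lo:.1f} to {100 * hi:.1f}")   # -> -2.0, CI -6.3 to 2.2
```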
In a study of spin put on the results of 72 statistically non-significant randomised controlled trials, Boutron and colleagues identified focus on a statistically significant within-group comparison as a common method of slanting the interpretation of results in favour of the test treatment, in 11% (95% CI 5% to 21%) of abstracts and in 14% (7% to 24%) of results sections. In fairness, they note that all these articles also reported the statistically non-significant results for the primary outcome in the abstract and in the main text. Using separate paired tests against baseline and interpreting only one being significant as indicating a difference between treatments is a frequent practice. It is conceptually wrong, statistically invalid, and consequently highly misleading. When the null hypothesis between groups is true, the Type I error can be as high as 50%, rather than the nominal 5%, and even higher when more than two groups are compared. The actual alpha for the flawed separate-tests method is at a minimum when the null hypothesis comparing outcome with baseline is true, but this is not likely to be the case in practice. The condition of patients is likely either to improve or to deteriorate over time, depending on the natural history of the disease; people either get better or worse. Placebo effects or regression towards the mean may also lead to changes over time. Hence the population mean difference is likely to be non-zero, and so the power to detect it will be greater than 0.05, and the actual alpha for two within-group tests will be greater than the 0.095 found when all null hypotheses are true. Only when the power within the group is very high, with either large differences from baseline or large sample sizes, will the actual alpha be lower than 0.095. Tests comparing the final measurement with baseline are useless in most cases. We cannot conclude that a treatment has an effect because a before vs. after test is significant, because of natural changes over time and regression towards the mean . We need a direct comparison with a randomised control. We wondered whether this practice is declining with what are, we hope, improvements in medical statistical education and in research quality. A survey of 80 trial reports in major journals in 1987 found that in 8 (10%) trials analyses were done only within treatment groups . A survey reported in 2011 of 161 trials identified from Cochrane reviews and published between 1966 and 2009 found that 16 (10%) reported a within-group comparison only . There is not much evidence of progress. This practice is widespread in non-randomised studies also. In a review of 513 behavioural, systems and cognitive neuroscience articles in five top-ranking journals, 79 articles were found to contain this incorrect procedure . These authors reported that an additional analysis suggested that this method of analysis is even more common in cellular and molecular neuroscience. It is not just used as the main analysis for comparing randomised groups. Why do researchers do this? We know of no statistics textbooks which advocate this approach, and ours explicitly warn against it [6, 8, 9]. To anybody who understands what "not significant" means, it should be obvious that within-group testing is illogical. It should also appear so to anyone who has attended an introductory research methods course, which would have mentioned the importance and use of a control group.
Do researchers invent this for themselves, or do they copy published papers which have gone down this misleading road? Every statistical advisor has come across consulters who say, when told that their proposed method is wrong, that some published paper has used it, so it must be correct. Simple ignorance could be the explanation, and we have no way of knowing in any particular case how the mistake came about. We should not assume that an author testing within groups is doing it to hide an underlying non-significant difference, and one of the examples given above showed a significant difference when a valid analysis was used. But as Darrell Huff wrote in 1954: "As long as the errors remain one-sided, it is not easy to attribute them to bungling or accident."
Watson REB, Ogden S, Cotterell LF, Bowden JJ, Bastrilles JY, Long SP, Griffiths CEM: A cosmetic 'anti-ageing' product improves photoaged skin: a double-blind, randomized controlled trial. Br J Dermatol. 2009, 161: 419-426. 10.1111/j.1365-2133.2009.09216.x.
Gardner MJ, Altman DG: Confidence intervals rather than P values: estimation rather than hypothesis testing. BMJ. 1986, 292: 746-50. 10.1136/bmj.292.6522.746.
The CONSORT Statement. http://www.consort-statement.org/consort-statement/
Bland JM: Evidence for an 'anti-ageing' product may not be so clear as it appears. Br J Dermatol. 2009, 161: 1207-1208. 10.1111/j.1365-2133.2009.09433.x.
Bland M: Keep young and beautiful: evidence for an "anti-aging" product?. Significance. 2009, 6: 182-183. 10.1111/j.1740-9713.2009.00395.x.
Altman DG: Practical Statistics for Medical Research. 1991, London: Chapman and Hall
Bland JM, Altman DG: Informed consent. BMJ. 1993, 306: 928.
Bland M: An Introduction to Medical Statistics. 2000, Oxford: Oxford University Press
Bland M, Peacock J: Statistical Questions in Evidence-based Medicine. 2000, Oxford: Oxford University Press
Boutron I, Dutton S, Ravaud P, Altman DG: Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA. 2010, 303: 2058-2064. 10.1001/jama.2010.651.
Bland JM, Altman DG: Comparisons within randomised groups can be very misleading. BMJ. 2011, 342: d561. 10.1136/bmj.d561.
Machin D, Campbell MJ, Tan SB, Tan SH: Sample Size Tables for Clinical Studies. 2009, Chichester: Wiley
Vickers AJ, Altman DG: Analysing controlled trials with baseline and follow up measurements. BMJ. 2001, 323: 1123-1124. 10.1136/bmj.323.7321.1123.
Toulon P, Jacquot C, Capron L, Frydman MO, Vignon D, Aiach M: Antithrombin-III and heparin cofactor-II in patients with chronic renal failure undergoing regular hemodialysis. Thrombosis Haemostasis. 1987, 57: 263-268.
Quinn MW, Wild J, Dean HG, Hartley R, Rushforth JA, Puntis JW, Levene MI: Randomised double-blind controlled trial of effect of morphine on catecholamine concentrations in ventilated pre-term babies. Lancet. 1993, 342: 324-327. 10.1016/0140-6736(93)91472-X.
Bonnar J, Sheppard BL: Treatment of menorrhagia during menstruation: randomised controlled trial of ethamsylate, mefenamic acid, and tranexamic acid. BMJ. 1996, 313: 579-582. 10.1136/bmj.313.7057.579.
Kerrigan DD, Thevasagayam RS, Woods TO, McWelch I, Thomas WEG, Shorthouse AJ, Dennison AR: Who's afraid of informed consent?. BMJ. 1993, 306: 298-300. 10.1136/bmj.306.6873.298.
Grant DJ, Bishop-Miller J, Winchester DM, Anderson M, Faulkner S: A randomized comparative trial of acupuncture versus transcutaneous electrical nerve stimulation for chronic back pain in the elderly. Pain. 1999, 82: 9-13.
19. Altman DG: Comparability of randomized groups. Statistician. 1985, 34: 125-136. doi:10.2307/2987510
20. Salvesen KA, Bakketeig LS, Eik-Nes SH, Undheim JO, Okland O: Routine ultrasonography in utero and school performance at age 8-9 years. Lancet. 1992, 339: 85-89. doi:10.1016/0140-6736(92)90998-I
21. Bland JM, Altman DG: Regression towards the mean. BMJ. 1994, 308: 1499. doi:10.1136/bmj.308.6942.1499
22. Altman DG, Doré CJ: Randomization and base-line comparisons in clinical trials. Lancet. 1990, 335: 149-153. doi:10.1016/0140-6736(90)90014-V
23. Vollenweider D, Boyd CM, Puhan MA: High prevalence of potential biases threatens the interpretation of trials in patients with chronic disease. BMC Med. 2011, 9: 73. doi:10.1186/1741-7015-9-73
24. Nieuwenhuis S, Forstmann BU, Wagenmakers BJ: Erroneous analyses of interactions in neuroscience: a problem of significance. Nat Neurosci. 2011, 14: 1105-1107. doi:10.1038/nn.2886
25. Huff D: How to Lie with Statistics. 1954, London: Gollancz
Acknowledgements
We thank the referees for helpful suggestions. JMB is funded by the University of York. His travel for discussions on this paper was funded by an NIHR Senior Investigator Award. DGA is supported by Cancer Research UK.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
JMB carried out the simulations and the algebra. JMB and DGA each contributed examples and wrote and agreed the manuscript jointly.
- Can you arrange these numbers into 7 subsets, each of three numbers, so that when the numbers in each are added together, they make seven consecutive numbers?
- Your school has been left a million pounds in the will of an ex-pupil. What model of investment and spending would you use in order to ensure the best return on the money?
- All CD Heaven stores were given the same number of a popular CD to sell for £24. In their two-week sale each store reduces the price of the CD by 25% ... How many CDs did the store sell at ...
- Four bags contain a large number of 1s, 3s, 5s and 7s. Pick any ten numbers from the bags so that their total is 37. How many solutions can you find?
- Each of the different letters in this sum stands for a different number.
- If a sum invested gains 10% each year, how long will it be before it has doubled its value? (See the worked check after this list.)
- Can you find an efficient method to work out how many handshakes there would be if hundreds of people met? (See the worked check after this list.)
- A game for 2 or more people, based on the traditional card game Rummy. Players aim to make two 'tricks', where each trick has to consist of a picture of a shape, a name that describes that shape, and ...
- Here are four tiles. They can be arranged in a 2 by 2 square so that this large square has a green edge. If the tiles are moved around, we can make a 2 by 2 square with a blue edge ... Now try to ...
- If you have only 40 metres of fencing available, what is the maximum area of land you can fence off?
- Powers of numbers behave in surprising ways. Take a look at some of these and try to explain why they are true.
- Take any four-digit number. Move the first digit to the 'back of the queue' and move the rest along. Now add your two numbers. What properties do your answers always have?
- Explore the effect of reflecting in two parallel mirror lines.
- Can you explain the surprising results Jo found when she calculated the difference between square numbers?
- A 2-digit number is squared. When this 2-digit number is reversed and squared, the difference between the squares is also a square. What is the 2-digit number?
- Many numbers can be expressed as the difference of two perfect squares. What do you notice about the numbers you CANNOT make?
- On the graph there are 28 marked points. These points all mark the vertices (corners) of eight hidden squares. Can you find the eight hidden squares?
- A car's milometer reads 4631 miles and the trip meter has 173.3 on it. How many more miles must the car travel before the two numbers contain the same digits in the same order?
- How many different symmetrical shapes can you make by shading triangles or squares?
- Different combinations of the weights available allow you to make different totals. Which totals can you make?
- Can you find six numbers to go in the Daisy from which you can make all the numbers from 1 to a number bigger than 25?
- A 2 by 3 rectangle contains 8 squares and a 3 by 4 rectangle contains 20 squares. What size rectangle(s) contain(s) exactly 100 squares? Can you find them all?
- Square numbers can be represented as the sum of consecutive odd numbers. What is the sum of 1 + 3 + ... + 149 + 151 + 153?
- Imagine a large cube made from small red cubes being dropped into a pot of yellow paint. How many of the small cubes will have yellow paint on their faces?
- What size square corners should be cut from a square piece of paper to make a box with the largest possible volume?
- What is the greatest volume you can get for a rectangular (cuboid) parcel if the maximum combined length and girth are 2 metres?
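As a worked check of two of the prompts above (the handshake count and the 10% doubling question; the numbers used here are our own illustration, not part of the original puzzles), the closed forms are n(n-1)/2 handshakes and log 2 / log 1.1 years:

```python
# Closed forms for two of the puzzles: n people shaking hands pairwise
# give n*(n-1)/2 handshakes (each of n people shakes n-1 hands, and
# halving removes the double count); a sum growing 10% per year doubles
# after log(2)/log(1.1) ~ 7.27 years, i.e. during the 8th year.
import math

def handshakes(n: int) -> int:
    return n * (n - 1) // 2

print(handshakes(300))               # 44850 handshakes for 300 people
print(math.log(2) / math.log(1.1))   # ~7.27 years to double at 10% p.a.
```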
- Can you maximise the area available to a grazing goat?
- Sissa cleverly asked the King for a reward that sounded quite modest but turned out to be rather large ...
- Have a go at creating these images based on circles. What do you notice about the areas of the different sections?
- Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers?
- The diagonals of a trapezium divide it into four parts. Can you create a trapezium where three of those parts are equal in area?
- Chris is enjoying a swim but needs to get back for lunch. If she can swim at 3 m/s and run at 7 m/s, how far along the bank should she land in order to get back as quickly as possible?
- A circle is inscribed in a triangle which has side lengths of 8, 15 and 17 cm. What is the radius of the circle?
- Can you find the area of a parallelogram defined by two vectors?
- Which has the greater area: a circle, or a square inscribed in an isosceles right-angled triangle?
- A spider is sitting in the middle of one of the smallest walls in a room and a fly is resting beside the window. What is the shortest distance the spider would have to crawl to catch the fly?
- Why does this fold create an angle of sixty degrees?
- A 1 metre cube has one face on the ground and one face against a wall. A 4 metre ladder leans against the wall and just touches the cube. How high is the top of the ladder above the ground?
- A square of area 40 square cm is inscribed in a semicircle. Find the area of the square that could be inscribed in a circle of the ...
- The area of a square inscribed in a circle with a unit radius is, satisfyingly, 2. What is the area of a regular hexagon inscribed in a circle with a unit radius? (See the worked check after this list.)
- Two ladders are propped up against facing walls. The end of the first ladder is 10 metres above the foot of the first wall. The end of the second ladder is 5 metres above the foot of the second ...
- In a three-dimensional version of noughts and crosses, how many winning lines can you make?
- If the hypotenuse (base) length is 100 cm and an extra line splits the base into 36 cm and 64 cm parts, what were the side lengths of the original right-angled triangle?
- A napkin is folded so that a corner coincides with the midpoint of an opposite edge. Investigate the three triangles formed.
- Show that it is impossible to have a tetrahedron whose six edges have lengths 10, 20, 30, 40, 50 and 60 units ...
- Explore the effect of combining enlargements.
- Imagine you have a large supply of 3 kg and 8 kg weights. How many of each weight would you need for the average (mean) of the weights to be 6 kg? What other averages could you have?
- This shape comprises four semicircles. What is the relationship between the area of the shaded region and the area of the circle on AB as diameter?
- If you are given the mean, median and mode of five positive whole numbers, can you find the numbers?
- Can you find rectangles where the value of the area is the same as the value of the perimeter?
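For the inscribed-hexagon question above, a short worked check (our own working, added here for illustration): a regular hexagon inscribed in a unit circle decomposes into six equilateral triangles of side 1, so its area is

$$A = 6 \times \frac{\sqrt{3}}{4} \times 1^2 = \frac{3\sqrt{3}}{2} \approx 2.60,$$

which sits between the inscribed square's area of 2 and the circle's area of $\pi \approx 3.14$.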
The interests of the scientific community working on the Soil Moisture and Ocean Salinity (SMOS) ocean salinity level 2 processor definition are currently focused on improving the performance of the retrieval algorithm, which is based on an iterative procedure in which a cost function relating models, measurements, and auxiliary data is minimized. For this reason, most of the effort is currently focused on the analysis and the optimization of the cost function. Within this framework, this study contributes to the assessment of one of the pending issues in the definition of the cost function: the optimal weight to be given to the radiometric measurements with respect to the weight given to the background geophysical terms. A whole month of brightness temperature acquisitions has been simulated by means of the SMOS End-to-End Performance Simulator. The level 2 retrieval has been performed using the Universitat Politècnica de Catalunya (UPC) level 2 processor simulator with four different configurations, namely, the direct covariance matrices, the two cost functions currently described in the SMOS literature, and, finally, a new weight (the so-called effective number of measurements). Results show that not even the proposed weight properly drives the minimization, and that the current cost function has to be modified in order to avoid the introduction of artifacts in the retrieval procedure. The calculation of the brightness temperature misfit covariance matrices reveals the presence of very complex patterns, and their inclusion in the cost function strongly modifies the retrieval performance. Worse but more Gaussian results are obtained, pointing out the need for a more accurate modeling of the correlation between brightness temperature misfits, in order to ensure a proper balance against the relative weights given to the geophysical terms.
a. The SMOS mission
In May 1999 the European Space Agency (ESA) approved the Soil Moisture and Ocean Salinity (SMOS) mission as the second of its Living Planet Programme Earth Explorer Opportunity missions, to provide global and frequent soil moisture and sea surface salinity (SSS) maps. SMOS was launched on 2 November 2009, and after the first calibration and checkout period (the so-called "commissioning phase"), SSS level 3 products will be distributed; the expected accuracy is 0.1–0.4 psu over 100 × 100 to 200 × 200 km² in 30–10 days, respectively (Font et al. 2004). The single payload embarked on SMOS is the Microwave Imaging Radiometer by Aperture Synthesis (MIRAS; McMullan et al. 2008); it is a 2D interferometric radiometer operating at the protected L band, with a nominal frequency of 1413.5 MHz and a bandwidth of 27 MHz. It consists of three deployable arms connected to a central hub (an 8-m-diameter radiometer when completely deployed). The arms are equally spaced with an angular separation of 120°. Each arm encompasses three segments, each one containing six L-band radiometers [Lightweight Cost-Effective Frontend (LICEF)]; further radiometers in the central hub bring the total to 66. In addition, there are three noise injection radiometers located in the central hub, each of which consists of two LICEF receivers coupled to a single antenna. The total number of elements is, therefore, 69 antennas and 72 receivers.
b. The measurement acquisition
SMOS is an interferometric radiometer. The basic concept of interferometric radiometry is to synthesize a large aperture using a number of small antennas.
The output voltages of each pair of antennas [e.g., antennas 1 and 2, located at (x1, y1) and (x2, y2)] are cross-correlated to obtain the "visibility samples", as expressed by the following equation:

$$V_{12}(u, v) = \frac{1}{k_B \sqrt{B_1 B_2}\,\sqrt{G_1 G_2}} \left\langle b_1(t)\, b_2^*(t) \right\rangle, \quad (1)$$

where u and v are the spatial frequencies of the visibility sample, k_B is the Boltzmann constant, B1 and B2 are the receivers' noise bandwidths, G1 and G2 are the available power gains, and b1(t) and b2(t) are the signals measured by elements 1 and 2, respectively. The complete set of visibility samples is called the visibility map, and it is approximately the Fourier transform of the brightness temperature distribution of the scene. To invert this process, either the inverse Fourier transform can be applied as a first approximation (Camps et al. 1997) or a more sophisticated G-matrix inversion (Camps et al. 2008; Anterrieu and Camps 2008) can be used. The major advantage of interferometric radiometry is the multiangular measurement capability: the output of an interferometric radiometer is, in fact, an image; this permits several views, under different incidence angles, of the same point on the earth before it exits the field of view (FOV). According to the MIRAS instrument design, the distance (d) between antennas does not satisfy the Nyquist criterion (Camps et al. 1997), and part of the FOV is affected by aliasing. The SMOS FOV always contains both the earth and the sky; because the sky is a very stable and well-known target, both its direct contribution and the alias that it induces can be estimated and removed from the visibility map. The resulting so-called Extended Alias-Free (EAF) FOV has the shape of a distorted hexagon. Figure 1 shows the EAF-FOV and the variation of the incidence angle (dashed line) and of the spatial resolution (dash–dot line) inside it. A point at the boresight of the satellite (the center of the swath in the case of SMOS) is observed approximately 150 times (75 for each polarization), under incidence angles ranging from 0° to 65° and with a spatial resolution from 30 to 100 km.
c. SSS retrieval in SMOS
The SMOS level 2 (from brightness temperatures to SSS) retrieval algorithm has been defined according to a Bayesian approach: it embodies prior information to ease the retrieval. Assuming normal statistics for both the a priori information and the observations, the general maximum likelihood estimation (MLE) reduces to a least squares problem, the solution of which can be found through the minimization of a "cost function" (χ²) expressed by

$$\chi^2 = \boldsymbol{\delta}^{\mathsf{T}} \mathbf{C}^{-1} \boldsymbol{\delta}, \quad (2)$$

where $\boldsymbol{\delta}$ is the misfit in the observations (measurement minus model) and $\mathbf{C}$ is the misfit covariance matrix. Until a proper estimation of $\mathbf{C}$ is obtained in the official ocean salinity level 2 processor, it is defined as being diagonal and the misfits are considered completely uncorrelated; this is equivalent to writing Eq. (2) as

$$\chi^2 = \sum_{n=1}^{N_{\mathrm{obs}}} \frac{\left( T_{B,\mathrm{meas}}^{n} - T_{B,\mathrm{model}}^{n} \right)^2}{\sigma_n^2}, \quad (3)$$

where $T_{B,\mathrm{model}}^{n}$ is the nth element of the modeled brightness temperatures, which is a function of the sea surface temperature, salinity, and roughness, and $\sigma_n$ is the radiometric noise for the nth observation. Previous studies (e.g., Gabarró et al. 2009) showed that defining the optimal cost function is not straightforward and that auxiliary external information, in particular wind speed (U10), sea surface temperature (SST), and possibly modeled or climatological SSS, must be added to Eq. (3). The following two different cost functions are currently used within the SMOS community (Zine et al. 2008):

$$\chi^2 = \sum_{n=1}^{N_{\mathrm{obs}}} \frac{\left( T_{B,\mathrm{meas}}^{n} - T_{B,\mathrm{model}}^{n} \right)^2}{\sigma_n^2} + \frac{(SSS - SSS_{\mathrm{aux}})^2}{\sigma_{SSS}^2} + \frac{(SST - SST_{\mathrm{aux}})^2}{\sigma_{SST}^2} + \frac{(U10 - U10_{\mathrm{aux}})^2}{\sigma_{U10}^2}, \quad (4)$$

$$\chi^2 = \frac{1}{N_{\mathrm{obs}}} \sum_{n=1}^{N_{\mathrm{obs}}} \frac{\left( T_{B,\mathrm{meas}}^{n} - T_{B,\mathrm{model}}^{n} \right)^2}{\sigma_n^2} + \frac{(SSS - SSS_{\mathrm{aux}})^2}{\sigma_{SSS}^2} + \frac{(SST - SST_{\mathrm{aux}})^2}{\sigma_{SST}^2} + \frac{(U10 - U10_{\mathrm{aux}})^2}{\sigma_{U10}^2}. \quad (5)$$
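As a concrete illustration of the two weightings, here is a minimal sketch (our own illustrative code, not the operational processor; all variable names are assumptions made for this sketch):

```python
# Sketch of Eqs. (4) and (5): the observational term is either the sum
# (Eq. 4) or the average (Eq. 5) of the squared, noise-normalised
# brightness temperature misfits, plus the background constraints.
import numpy as np

def chi2(tb_meas, tb_model, sigma_n, params, params_aux, sigma_aux, average=False):
    """params, params_aux, sigma_aux hold (SSS, SST, U10)-style triplets."""
    obs_term = np.sum(((tb_meas - tb_model) / sigma_n) ** 2)
    if average:                    # Eq. (5): fully redundant misfits
        obs_term /= tb_meas.size   # Eq. (4) keeps the plain sum
    background = np.sum(((np.asarray(params) - np.asarray(params_aux))
                         / np.asarray(sigma_aux)) ** 2)
    return obs_term + background
```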
In both formulations the cost function is composed of two main contributions (or information providers):
- the first term is representative of the MIRAS measurements (a function of the original, true geophysical parameters) and of the modeled observables (a function of the parameters that are going to be retrieved), weighted by the radiometric noise of the nth observation as in Eq. (3); and
- the constraints for the auxiliary SSS, SST, and 10-m-height wind speed U10 (as sea surface roughness descriptor) are the second, third, and fourth terms, respectively; these are weighted by the inverse of the variance of the misfit between the corresponding auxiliary field and the original one, as defined in Eq. (6):

$$\sigma_P^2 = \mathrm{var}\left( P_{\mathrm{aux}} - P_{\mathrm{orig}} \right), \qquad P \in \{SSS, SST, U10\}. \quad (6)$$

In the real SMOS case (not the simulation), P_orig is not known, and σ_P must be estimated; it allows weighting of the a priori information on the corresponding geophysical parameter. The value of σ_P is representative of the reliability of this information: a large σ_P indicates that the estimate is not reliable, leading to a very small weight within the total χ² minimization, and vice versa. The difference between the two formulations lies in the factor 1/N_obs weighting the observables term in Eq. (5). Actually, Eqs. (4) and (5) represent two extreme cases: in the first, each misfit is assumed to provide the maximum information content, and the misfit contributions are thus summed up (once squared and normalized by the radiometric noise) to construct the brightness temperatures term in the cost function [Eq. (4)]. The second option [Eq. (5)] is appropriate for the case of completely redundant misfit samples: the average contribution is used to define the cost function.
d. Expected correlation in the brightness temperatures
Because of MIRAS' characteristics, some correlation is expected between the brightness temperature errors of different grid points within the same snapshot and among consecutive snapshots. Any imaging radiometer, in fact, is affected by the following three types of noise (Font et al. 2008):
- the radiometric resolution (ΔT), the temporal standard deviation of the zero-mean random error resulting from the finite integration time (Randa et al. 2008);
- the radiometric bias, the spatial average of all of the systematic errors; and
- the radiometric accuracy, the spatial standard deviation of all of the systematic errors [see Torres et al. (2005)].
The first type of noise is random within the same snapshot as well as from snapshot to snapshot. The second and third types are random from pixel to pixel within the same snapshot, but they are systematic from one snapshot to another, and are responsible for the above-mentioned correlation. In addition, concerning SMOS, the following two other sources of spatial correlation can be identified:
- the image reconstruction, which spreads instrument errors across the snapshot; and
- the finite spatial resolution of the instrument (Fig. 1), which is larger than the icosahedral Snyder equal area hexagonal grid of aperture 4 and resolution 9 (ISEA 4H9; see Snyder 1992; Suess et al. 2004) onto which level 2 data will be projected (SMOS 50-km-average resolution pixels are oversampled to the 15-km ISEA 4H9 grid size).
Thus, to summarize, correlation is present in the SMOS processing chain at level 0 (for radiometric bias and accuracy) and at level 1 (for image reconstruction and projection of the brightness temperatures), as shown in Fig. 2.
Considering the earth's reference frame instead of the satellite's reference frame, each grid point "observes" the satellite and samples its antenna pattern with a frequency related to the time between snapshots (whose upper limit is fixed by the ISEA grid spacing). The SMOS synthetic antenna pattern presents a correlation length that is larger than the corresponding sampling interval, inducing correlation among the errors on the various measurements of the same grid point. As remarked in section 1c, according to the SMOS level 2 retrieval procedure, the misfit in the SMOS-measured brightness temperatures is assumed to be uncorrelated, which is equivalent to considering the matrix $\mathbf{C}$ in Eq. (2) as being diagonal, and thus leads to Eqs. (3) and (4). Taking into account Eq. (4), the presence of correlation between different misfits results in a loss of the information provided by the observables with respect to the background terms. Two different approaches have been followed so far concerning salinity retrieval from MIRAS brightness temperatures: misfits can be, once squared and normalized, summed up [Eq. (4); see Zine et al. (2008)] or averaged [Eq. (5); see Camps et al. (2005) and Talone et al. (2007)]. However, the correlation induced by the instrument generates an intermediate and more complex situation between Eqs. (4) and (5). To evaluate the impact of the correlation among the misfits on the SMOS-measured brightness temperatures, a whole month of overpasses has been simulated. The simulation scenario is presented in section 2. The level of correlation of the measurement errors, and thus the weight to be given to the observables term in the cost function, is assessed in section 3; there, the covariance matrices are estimated, and a new weight referred to as the "effective number of observations" is introduced. The comparison of the retrieval results using the four formulations is considered in section 4, and, finally, the main conclusions of this work are summarized in section 5.
2. Simulation scenario
Because at the time of this study SMOS output was not yet fully calibrated, SMOS-like brightness temperatures were simulated using the SMOS End-to-End Performance Simulator (SEPS; see Camps et al. 2003; SEPS 2006a) in its full mode (including copolarized and cross-polarized measured antenna patterns for each antenna, all instrument errors, and G-matrix image reconstruction). To model sea surface emission, the Klein and Swift model for the seawater dielectric constant (Klein and Swift 1977) and the linear fit to Hollinger's measurements (Hollinger 1971) for the wind speed contribution to brightness temperature have been used. Because the objective of this study is the estimation of the correlation induced by the instrument, in order to avoid further contributions from other sources the radiometric sensitivity has been set to zero in the simulations by increasing the integration time to very large values (Randa et al. 2008), that is, $\tau \to \infty$. The radiometric sensitivity is, in fact, according to its definition, already taken into account by the $\sigma_n$ term in Eqs. (3)–(5). Sixty-four ascending and descending overpasses have been simulated during the month of March 2007 (SEPS time), consisting, on average, of more than 200 snapshots each. As mentioned, the measurement acquisition has been simulated using the measured MIRAS antenna patterns, instrument drifts, and the current G-matrix inversion algorithm. The simulation output has been projected onto the ISEA grid, as for real SMOS data.
Concerning the geophysical parameters, the following two databases are defined:
- original data (used to feed SEPS and generate the brightness temperatures): daily outputs of a 0.5° configuration of the Nucleus for European Modelling of the Ocean (NEMO)–Océan Parallélisé (OPA) ocean model (Madec 2008; Mourre et al. 2008) are used as the original SSSorig and SSTorig, while the U10orig fields come from the 40-yr European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA-40; Uppala et al. 2005); and
- auxiliary data [used in the level 2 cost function; see Eqs. (4) and (5)]: SSSaux and SSTaux come from the Levitus climatology (Levitus 1998), and U10aux is extracted from the National Centers for Environmental Prediction–National Center for Atmospheric Research (NCEP–NCAR) reanalysis (Kalnay et al. 1996).
Simulations have been carried out following these steps. SEPS-generated brightness temperatures (Fig. 3a) have been masked to eliminate the transition areas at the beginning and at the end of the sequence; the remaining grid points are shown in Fig. 3b. As an example, the number of observations for one of the ascending overpasses is shown in Fig. 3c as a function of the distance to the ground track. The selected brightness temperatures have been compared to the ones resulting from running only the forward model (using the same geophysical and orbital parameters); the difference between them is the instrument-induced radiometric error (radiometric bias plus radiometric accuracy, because the radiometric sensitivity has been set to zero). The calculated differences have been sorted and grouped by number of observations. For each one of the bins, the covariance matrix has been computed as follows: for each bin of n observations, the Galton–Pearson correlation coefficient (Rodgers and Nicewander 1988) has been calculated between all the possible pairs to construct the covariance matrices [an estimate of $\mathbf{C}$ in Eq. (2)]. Bins with a number of samples smaller than the number of observations have not been taken into account because they do not provide representative results. Sea surface salinity retrievals for all 64 overpasses have been performed using the estimated covariance matrices, as expressed by Eq. (7):

$$\chi^2 = \boldsymbol{\delta}^{\mathsf{T}} \hat{\mathbf{C}}^{-1} \boldsymbol{\delta} + \frac{(SSS - SSS_{\mathrm{aux}})^2}{\sigma_{SSS}^2} + \frac{(SST - SST_{\mathrm{aux}})^2}{\sigma_{SST}^2} + \frac{(U10 - U10_{\mathrm{aux}})^2}{\sigma_{U10}^2}. \quad (7)$$

These results have been considered as a master case against which the various approximations of the cost function are compared. At this stage, aiming at adapting the current cost functions to the characteristics of the misfits' covariance matrix, a new weight is defined. To do so, an analysis of the estimated covariance matrices has been carried out. The eigenvector decomposition has been applied to the inverse covariance matrices, and the number of eigenvectors describing 99% of the variance has been defined as the effective number of measurements Neff. To test the impact of introducing Neff in the cost function, the SSS for the simulated scenario has been retrieved by means of the level 2 processor (Talone et al. 2007) using the cost function in Eq. (8):

$$\chi^2 = \frac{N_{\mathrm{eff}}}{N_{\mathrm{obs}}} \sum_{n=1}^{N_{\mathrm{obs}}} \frac{\left( T_{B,\mathrm{meas}}^{n} - T_{B,\mathrm{model}}^{n} \right)^2}{\sigma_n^2} + \frac{(SSS - SSS_{\mathrm{aux}})^2}{\sigma_{SSS}^2} + \frac{(SST - SST_{\mathrm{aux}})^2}{\sigma_{SST}^2} + \frac{(U10 - U10_{\mathrm{aux}})^2}{\sigma_{U10}^2}. \quad (8)$$

First, the covariance matrices have been calculated and analyzed. Figure 4 presents an example of a covariance matrix (78 pairs of TH–TV observations); the matrix is plotted for (a) horizontal polarization (H pol), (b) vertical polarization (V pol), and (c) the first Stokes parameter in brightness temperature (TI) (Randa et al. 2008). The color scale is in decibels to enlarge the observed dynamic range. According to Butora and Camps (2003), the error in the radiometric measurements is correlated; correlation clusters are evident in Fig. 4, characterized by a very complex pattern.
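The following is a small sketch (our own construction; the array sizes and the low-rank test matrix are assumptions) of how an effective number of measurements can be extracted from a covariance matrix as described above, applied here to the covariance itself rather than its inverse for simplicity:

```python
# Sketch: N_eff as the number of leading eigenvalues needed to account
# for 99% of the total variance (trace) of a misfit covariance matrix.
import numpy as np

def effective_number_of_measurements(cov: np.ndarray, frac: float = 0.99) -> int:
    eigvals = np.linalg.eigvalsh(cov)[::-1]            # eigenvalues, descending
    cumulative = np.cumsum(eigvals) / eigvals.sum()    # cumulative variance fraction
    return int(np.searchsorted(cumulative, frac) + 1)

# Toy test: 78 correlated observations whose covariance is dominated by
# 10 common modes plus a little uncorrelated noise, so N_eff comes out
# near 10, well below N_obs = 78.
rng = np.random.default_rng(0)
modes = rng.standard_normal((78, 10))
cov = modes @ modes.T + 0.01 * np.eye(78)
print(effective_number_of_measurements(cov))
```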
As explained in section 3, the eigenvector decomposition has been applied to the inverse covariance matrices; in Fig. 5 an example of the eigenvalue spectrum is shown for the case using the first Stokes parameter in brightness temperatures, with (a) 22 and (b) 78 pairs of observations, respectively. As can be observed, the trends are very different: in the first case (22 observation pairs) the spectrum is almost constant and sharply decreases over the last three eigenvectors; in the second case, instead, the decrease is more uniform. This change of regime can also be noticed in the trend of the effective number of observations (Neff), derived from the analysis of the covariance matrices as described in section 3. Results are shown in Fig. 6 for (a) H pol, (b) V pol, and (c) the first Stokes parameter in brightness temperature (TI). The number of (TH, TV) pairs (approximately half the number of observations) is represented on the abscissa, and the ordinate shows the ratio between the effective number of observations Neff and the total Nobs. The ratio Neff/Nobs is shown as a density plot, with color showing the occurrence of any particular pair Nobs–Neff/Nobs simulated along the whole month; the solid line is the linear fit of Neff. Figure 6d is the normalized histogram of the number of observations (the sum of all of the bins is equal to 1). Even though some differences can be noticed between TH, TV, and TI (H pol gives higher results), the trend is very similar, and two different regimes can be observed. For Nobs ∈ (1, 30], Neff changes from being equal to Nobs to the asymptotic value of 0.82Nobs for H pol, 0.76Nobs for V pol, and 0.79Nobs for TI; when Nobs ≥ 31 (corresponding to a distance to the ground track <350 km), the ratio Neff/Nobs remains almost constant. The steep change in regime around 30 observations is basically due to the lack of samples between 30 and 70 pairs of observations. This behavior is apparent from Figs. 6d and 3c: in the former the normalized histogram of the number of observations is shown, whereas in the latter the number of observations is presented as a function of the cross-track distance. The very steep increase of Nobs shown in Fig. 3c is related to the along-track dimension of the SMOS FOV, shown in Fig. 1, and is the cause of the lack of estimates of Neff in Figs. 6a–c, as explained in section 2. In fact, when only a few samples are available for a certain Nobs, the covariance matrix is not calculated because it is not representative. This change of regime suggests an objective way of defining the useful swath width of an SMOS overpass, at approximately 700 km, which is a bit larger than the official Q swath or the "narrow swath" (631 and 640 km; see SEPS 2006b; Barré et al. 2008). Another feature to be highlighted in Figs. 6a–c is the banded structure of the ratio Neff/Nobs for low values of Nobs, which was not expected and is probably due to the specific shape of the SMOS FOV and the consequent distribution of Nobs in the cross-track dimension. Figure 7 shows W/Nobs as a function of the total number of observations; the Neff calculated for the first Stokes parameter in brightness temperatures is marked with the solid line, while the dashed and dash–dot lines stand, respectively, for the cases in Eqs. (4) and (5); the symbol ( )′ indicates the fitting result rather than the calculated Neff/Nobs. As can be observed, Neff takes intermediate values between 1 and Nobs, as expected.
The four configurations [using the cost functions defined in Eq. (7) (master), Eq. (4) (W = Nobs), Eq. (5) (W = 1), and Eq. (8) (W = Neff)] have been tested, and the results have been compared. SSS has been retrieved using the brightness temperatures resulting from simulating the same scenarios, but now with the SMOS nominal value for the integration time (τ = 158 ms) applied. Error statistics considering only the grid points fully observed in all 64 overpasses are summarized in Table 1 through the mean value (μ), the standard deviation (σ), the rms [defined as $\sqrt{(1/N)\sum_{i=1}^{N} \varepsilon_i^2}$, with N being the total number of grid points taken into account and $\varepsilon_i$ the SSS retrieval error at grid point i], and the X² factor. The latter is defined as the quadratic sum of the differences between the retrieved SSS error normalized histogram (observable) and the pdf of a normal distribution with the same mean and variance (model), weighted by the uncertainty associated with each observation, as expressed in Eq. (11) (Barlow 1989):

$$X^2 = \sum_i \frac{\left( h_i - f_i \right)^2}{\sigma_i^2}, \quad (11)$$

where $h_i$ is the value of the normalized error histogram in bin i and $f_i$ is the value of the corresponding normal pdf. The normalized histograms of the SSS retrieval error (the sum of all of the bins is equal to 1) using Eqs. (7), (4), (5), and (8) are shown in Figs. 8a–d, respectively. In order not to alter the results, σi in Eq. (11) has been considered constant and equal to 1. To calculate X², both the SSS error and the Gaussian pdf have been quantized in 0.1-psu bins (which is the expected resolution of SMOS at level 2 in one overpass); moreover the sum, which should be calculated over the interval (−∞, ∞), has been computed only over the interval (−10 psu, 10 psu). A computational sketch of this metric appears at the end of this section. According to Table 1, retrieval results are only slightly affected by the change of W: for all of these configurations the rms error is constant at 2.39 psu, with very high X² (≈5.2–5.4). The difference is, instead, noticeable when compared with the case of directly using the covariance matrices in the retrieval. In that case, the rms error is equal to 3.78 psu [mostly resulting from the increase of the error standard deviation σ (3.75 psu)]; on the other hand, the error statistics are much more Gaussian, presenting X² = 0.89. Comparing Figs. 8a–d, it is evident that results are better when using any of Eqs. (4), (5), or (8); nevertheless, the low Gaussianity of the error is also manifest, indicating that some artifacts are introduced in the retrieval procedure as a result of the high weight given to the constraints on the auxiliary parameters. Previous studies (Sabia et al. 2010; Gabarró et al. 2009) remarked on the necessity of correctly balancing the different terms in the cost function. The result of the minimization of the cost function, in fact, strongly depends on the relative weight given to each of the factors of χ². In particular, because all of the elements of the covariance matrices, calculated as explained in section 3, are positive, when Eqs. (4), (5), or (8) are used the contribution of the constraints for SST and U10 to the total cost function is much larger than the contribution of the measured brightness temperatures, compared to the case using Eq. (7). The consequence is a good, but spurious, retrieval, in which the retrieved parameters drift toward the reference parameters. According to the results of this study, the inclusion of the brightness temperature misfit covariance matrices [Eq. (7)] gives a very different result with respect to the case of using the approximated cost functions [Eqs. (4), (5), and (8)] and should be taken into account in the choice of the relative weights, which should be updated.
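Here is the promised sketch of the Gaussianity metric in Eq. (11), with σi = 1 (our own illustrative code; the binning details are assumptions consistent with the description above):

```python
# Sketch of Eq. (11): squared differences between the normalised error
# histogram and a normal pdf with the same mean and variance, summed
# over 0.1-psu bins on (-10, 10) psu, with sigma_i = 1 throughout.
import numpy as np

def gaussianity_x2(errors, bin_width=0.1, lo=-10.0, hi=10.0):
    edges = np.arange(lo, hi + bin_width, bin_width)
    hist, _ = np.histogram(errors, bins=edges)
    hist = hist / hist.sum()                    # histogram bins sum to 1
    centres = edges[:-1] + bin_width / 2
    mu, sigma = errors.mean(), errors.std()
    pdf = np.exp(-(centres - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    model = pdf * bin_width                     # pdf mass per 0.1-psu bin
    return float(np.sum((hist - model) ** 2))

rng = np.random.default_rng(1)
print(gaussianity_x2(rng.normal(0.0, 2.4, 100_000)))  # near 0 for Gaussian errors
```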
To improve the characterization of the cost function used in the SMOS ocean salinity level 2 processor, the correlation between measurement misfits has been analyzed using simulated data. Correlation is expected due to the intrinsic nature of any imaging radiometer, to the possible structures induced by the image reconstruction algorithm, and, finally, to the projection of the brightness temperatures onto the ISEA grid. To assess this point, one complete month of overpasses (64 in total) over the North Atlantic Ocean has been simulated using the SMOS End-to-End Performance Simulator (SEPS) in its full mode. The SMOS level 2 processor simulator (SMOS-L2PS) has been used to retrieve SSS from the brightness temperatures calculated by SEPS. As geophysical input parameters (original and auxiliary data), a North Atlantic configuration of the NEMO-OPA ocean model and the Levitus climatology have been used for SSS and SST, while ERA-40 and NCEP–NCAR products have been chosen for U10. The SEPS-simulated brightness temperatures have been compared to the ones obtained by directly running a forward brightness temperature model, in order to estimate the correlation of the radiometric errors induced by the instrument. To do so, the covariance matrices of the misfit between the SEPS-simulated and the forward-model brightness temperatures, sorted and grouped by the number of observations (Nobs), have been computed. In addition, as a test, eigenvalue decomposition has been applied, and the number of eigenvectors required to describe 99% of the variance has been defined as the effective number of measurements (Neff). Its trend as a function of the number of observations has been analyzed, and the results suggest the presence of two regimes: a first, noise-dominated one, in which Neff is almost equal to Nobs; and a second one, in which Neff increases with Nobs with a roughly constant slope of about 0.8. Introducing Neff in the cost function amounts to applying a weight equal to the factor Neff to the average residual term of the observational part of the SMOS ocean salinity level 2 cost function. The consequent impact has been assessed by comparing the retrieval performance both with that obtained using the estimated covariance matrices directly and with that of the two cost functions present in the SMOS literature. Conclusions can be summarized in the following three points:
- Based on the two regimes of Neff, a threshold can be established to define objectively the useful swath of SMOS as 700 km centered on the satellite ground track, where the relation Neff/Nobs is constant.
- The three approximated cost functions [Eqs. (4), (5), and (8)] give very similar performances.
- The analysis of the cost functions suggests that both the current configurations [Eqs. (4) and (5)] and the proposed weight [Neff; see Eq. (8)], although ensuring better performance, may be introducing nonlinearities in the retrieval procedure when compared to the results obtained using the misfit covariance matrices directly. According to previous studies (Sabia et al. 2010), these nonlinearities may be due to a non-optimum balancing of the cost function, which should be modified. Furthermore, the inclusion of the brightness temperature misfit covariance matrices strongly modifies the error statistics, revealing the need for a more accurate modeling of the correlation in the brightness temperature misfits. This must be introduced in the cost function, and its impact on the relative weights given to the auxiliary parameters must be assessed.
The authors would like to acknowledge the anonymous reviewers, whose valuable suggestions and remarks contributed to strengthening this paper, as well as Dr. Baptiste Mourre for providing the NEMO-OPA geophysical auxiliary data. This study has been funded by the Spanish National Program on Space through Project ESP2005-06823-C05 and by the Spanish Ministry of Science and Innovation through the Formación de Personal Investigador (FPI) Fellowship ESP2005-06823-C05-02.
Current affiliation: Serco SpA, Frascati, Italy.
Current affiliation: European Space Agency-ESRIN, Frascati, Italy.
Scientific Papers of Josiah Willard Gibbs, Volume 2/Chapter XIV
A COMPARISON OF THE ELASTIC AND THE ELECTRICAL THEORIES OF LIGHT WITH RESPECT TO THE LAW OF DOUBLE REFRACTION AND THE DISPERSION OF COLORS.
[American Journal of Science, ser. 3, vol. xxxv, pp. 467–475, June, 1888.]
It is claimed for the electrical theory of light that it is free from serious difficulties, which beset the explanation of the phenomena of light by the dynamics of elastic solids. Just what these difficulties are, and why they do not occur in the explanation of the same phenomena by the dynamics of electricity, has not perhaps been shown with all the simplicity and generality which might be desired. Such a treatment of the subject is however the more necessary on account of the ever-increasing bulk of the literature on either side, and the confusing multiplicity of the elastic theories. It is the object of this paper to supply this want, so far as respects the propagation of plane waves in transparent and sensibly homogeneous media. The simplicity of this part of the subject renders it appropriate for the first test of any optical theory, while the precision of which the experimental determinations are capable renders the test extremely rigorous. It is, moreover, as the writer believes, an appropriate time for the discussion proposed, since on one hand the experimental verification of Fresnel's Law has recently been carried to a degree of precision far exceeding anything which we have had before, and on the other, the discovery of a remarkable theorem relating to the vibrations of a strained solid has given a new impulse to the study of the elastic theory of light. Let us first consider the facts to which a correct theory must conform. It is generally admitted that the phenomena of light consist in motions (of the type which we call wave-motions) of something which exists both in space void of ponderable matter, and in the spaces between the molecules of bodies, perhaps also in the molecules themselves. The kinematics of these motions is pretty well understood; the question at issue is whether it agrees with the dynamics of elastic solids or with the dynamics of electricity. In the case of a simple harmonic wave-motion, which alone we need consider, the wave-velocity ($V$) is the quotient of the wave-length ($\lambda$) by the period of vibration ($\tau$). These quantities can be determined with extreme accuracy. In media which are sensibly homogeneous but not isotropic, the wave-velocity, for any constant value of the period, is a quadratic function of the direction cosines of a certain line, viz., the normal to the so-called "plane of polarization." The physical characteristics of this line have been a matter of dispute. Fresnel considered it to be the direction of displacement. Others have maintained that it is the common perpendicular to the wave-normal and the displacement. Others again would define it as that component of the displacement which is perpendicular to the wave-normal. This of course would differ from Fresnel's view only in case the displacements are not perpendicular to the wave-normal, and would in that case be a necessary modification of his view.
Although this dispute has been one of the most celebrated in physics, it seems to be at length substantially settled, most directly by experiments upon the scattering of light by small particles, which seem to show decisively that in isotropic media at least the displacements are normal to the "plane of polarization," and also, with hardly less cogency, by the difficulty of accounting for the intensities of reflected and refracted light on any other supposition. It should be added that all diversity of opinion on this subject has been confined to those whose theories are based on the dynamics of elastic bodies. Defenders of the electrical theory have always placed the electrical displacement at right angles to the "plane of polarization." It will, however, be better to assume this direction of the displacement as probable rather than as absolutely certain, not so much because many are likely to entertain serious doubts on the subject, as in order not to exclude views which have at least a historical interest. The wave-velocity, then, for any constant period, is a quadratic function of the cosines of a certain direction, which is probably that of the displacement, but in any case determined by the displacement and the wave-normal. The coefficients of this quadratic function are functions of the period of vibration. It is important to notice that these coefficients vary separately, and often quite differently, with the period, and that the case does not at all resemble that of a quadratic function of the direction-cosines multiplied by a quantity depending on the period. In discussing the dynamics of the subject we may gain something in simplicity by considering a system of stationary waves, such as results from two similar systems of progressive waves moving in opposite directions. In such a system the energy is alternately entirely kinetic and entirely potential. Since the total energy is constant, we may set the average kinetic energy per unit of volume at the moment when there is no potential energy, equal to the average potential energy per unit of volume when there is no kinetic energy. We may call this the equation of energies. It will contain the quantities $\lambda$ and $\tau$, and thus furnish an expression for the velocity of either system of progressive waves. We have to see whether the elastic or the electric theory gives the expression most conformed to the facts. Let us first apply the elastic theory to the case of the so-called vacuum. If we write $a$ for the amplitude measured in the middle between two nodal planes, the velocities of displacement will be as $a/\tau$, and the kinetic energy will be represented by $A\,a^2/\tau^2$, where $A$ is a constant depending on the density of the medium. The potential energy, which consists in distortion of the medium, may be represented by $B\,a^2/\lambda^2$, where $B$ is a constant depending on the rigidity of the medium. The equation of energies, on the elastic theory, is therefore

$$\frac{A\,a^2}{\tau^2} = \frac{B\,a^2}{\lambda^2}.$$

Let us now consider how these equations will be modified by the presence of ponderable matter, in the most general case of transparent and sensibly homogeneous bodies. This subject is rendered much more simple by the fact that the distances between the ponderable molecules are very small compared with a wave-length. Or, what amounts to the same thing, but may present a more distinct picture to the imagination, the wave-length may be regarded as enormously great in comparison with the distances between neighboring molecules.
Whatever view we take of the motions which constitute light, we can hardly suppose them (disturbed as they are by the presence of the ponderable molecules) to be in strictness represented by the equations of wave-motion. Yet in a certain sense a wave-motion may and does exist. If, namely, instead of the actual displacement at any point, we consider the average displacement in a space large enough to contain an immense number of molecules, and yet small as measured by a wave-length, such average displacements may be represented by the equations of wave-motion; and it is only in this sense that any theory of wave-motion can apply to the phenomena of light in transparent bodies. When we speak of displacements, amplitudes, velocities (of displacement), etc., it must therefore be understood in this way. The actual kinetic energy, on either theory, will evidently be greater than that due to the motion thus averaged or smoothed, and to a degree presumably depending on the direction of the displacement. But since displacement in any direction may be regarded as compounded of displacements in three fixed directions, the additional energy will be a quadratic function of the components of velocity of displacement, or, in other words, a quadratic function of the direction-cosines of the displacement multiplied by the square of the amplitude and divided by the square of the period. This additional energy may be understood as including any part of the kinetic energy of the wave-motion which may belong to the ponderable particles. The term to be added to the kinetic energy on the electric theory may therefore be written $E\,a^2/\tau^2$, where $E$ is a quadratic function of the direction-cosines of the displacement. The elastic theory requires a term of precisely the same character, but since the term to which it is to be added is of the same general form, the two may be incorporated in a single term of the form $C\,a^2/\tau^2$, where $C$ is a quadratic function of the direction-cosines of the displacement. We must, however, notice that both $C$ and $E$ are not entirely independent of the period. For the manner in which the flux of the luminiferous medium is distributed among the ponderable molecules will naturally depend somewhat upon the period. The same is true of the degree to which the molecules may be thrown into vibration. But $C$ and $E$ will be independent of the wave-length (except so far as this is connected with the period), because the wave-length is enormously great compared with the size of the molecules and the distances between them. The potential energy on the elastic theory must be increased by a term of the form $D\,a^2$, where $D$ is a quadratic function of the direction-cosines of the displacement. For the ponderable particles must oppose a certain elastic resistance to the displacement of the ether, which in æolotropic bodies will presumably be different in different directions. The potential energy on the electric theory will be represented by a single term of the same form, say $G\,a^2$, where $G$, a quadratic function of the direction-cosines of the displacement, takes the place of the constant which was sufficient when the ponderable particles were absent. Both $D$ and $G$ will vary to some extent with the period, like $C$ and $E$, and for the same reason. In regard to that potential energy which on the elastic theory is independent of the direct action of the ponderable molecules, it has been supposed that in æolotropic bodies the effect of the molecules is such as to produce an æolotropic state in the ether, so that the energy of a distortion varies with its orientation.
This part of the potential energy will then be represented by $Q\,a^2/\lambda^2$, where $Q$ is a function of the directions of the wave-normal and the displacement. It may easily be shown that it is a quadratic function both of the direction-cosines of the wave-normal and of those of the displacement; also, that if the ether in the body when undisturbed is not in a state of stress due to forces at the surface of the body, or if its stress is uniform in all directions, like a hydrostatic pressure, the function $Q$ must be symmetrical with respect to the two sets of direction-cosines. The equation of energies for the elastic theory is therefore

$$\frac{C\,a^2}{\tau^2} = \frac{Q\,a^2}{\lambda^2} + D\,a^2,$$

which gives for the wave-velocity

$$V^2 = \frac{\lambda^2}{\tau^2} = \frac{Q}{C - D\,\tau^2}.$$

If we now return to the equation of energies obtained from the elastic theory, we see at once that it does not suggest any such relation as experiment has indicated, either between the wave-velocity and the direction of displacement, or between the wave-velocity and the period. It remains to be seen whether it can be brought to agree with experiment by any hypothesis not too violent. In order that $V^2$ may be a quadratic function of any set of direction-cosines, it is necessary that $C$ and $D$ shall be independent of the direction of the displacement; in other words, in the case of a crystal like Iceland spar, that the direct action of the ponderable molecules upon the ether shall affect both the kinetic and the potential energy in the same way, whether the displacement take place in the direction of the optic axis or at right angles to it. This is contrary to everything which we should expect. If, nevertheless, we make this supposition, it remains to consider $Q$. This must be a quadratic function of a certain direction, which is almost certainly that of the displacement. If the medium is free from external stress (other than hydrostatic), as we have seen, $Q$ is symmetrical with respect to the wave-normal and the direction of displacement, and a quadratic function of the direction-cosines of each. The only single direction of which it can be a function is the common perpendicular to these two directions. If the wave-normal and the displacement are perpendicular, the direction-cosines of the common perpendicular to both will be linear functions of the direction-cosines of each, and a quadratic function of the direction-cosines of the common perpendicular will be a quadratic function of the direction-cosines of each. We may thus reconcile the theory with the law of double refraction, in a certain sense, by supposing that $C$ and $D$ are independent of the direction of displacement, and that $Q$, and therefore $V^2$, is a quadratic function of the direction-cosines of the common perpendicular to the wave-normal and the displacement. But this supposition, besides its intrinsic improbability so far as $C$ and $D$ are concerned, involves a direction of the displacement which is certainly or almost certainly wrong. We are thus driven to suppose that the undisturbed medium is in a state of stress, which, moreover, is not a simple hydrostatic stress. In this case, by attributing certain definite physical properties to the medium, we may make the function $Q$ become independent of the direction of the wave-normal, and reduce it to a quadratic function of the direction-cosines of the displacement. This entirely satisfies Fresnel's Law, including the direction of displacement, if we can suppose $C$ and $D$ independent of the direction of displacement. But this supposition, in any case difficult for æolotropic bodies, seems quite irreconcilable with that of a permanent (not hydrostatic) stress.
For this stress can only be kept up by the action of the ponderable molecules, and by a sort of action which hinders the passage of the ether past the molecules. Now the phenomena of reflection and refraction would be very different from what they are if the optical homogeneity of a crystal did not extend up very close to the surface. This implies that the stress is produced by the ponderable particles in a very thin lamina at the surface of the crystal, much less in thickness, it would seem probable, than a wave-length of yellow light. And this again implies that the power of the ponderable particles to pin down the ether, as it were, to a particular position is very great, and that the term in the energy relating to the motion of the ether relative to the ponderable particles is very important. This is the term containing the factor $C$, which it is difficult to suppose independent of the direction of displacement, because the dimensions and arrangement of the particles are different in different directions. But our present hypothesis has brought in a new reason for supposing $D$ to depend on the direction of displacement, viz., on account of the stress of the medium. A general displacement of the medium midway between two nodal planes, when it is restrained at innumerable points by the ponderable particles, will produce special distortions due to these particles. The nature of these distortions is wholly determined by the direction of displacement, and it is hard to conceive of any reason why the energy of these distortions should not vary with the direction of displacement, like the energy of the general distortion of the wave-motion, which is partly determined by the displacement and partly by the wave-normal. But the difficulties of the elastic theory do not end with the law of double refraction, although they are there more conspicuous on account of the definite and simple law by which they can be judged. It does not easily appear how the equation of energies can be made to give anything like the proper law of the dispersion of colors. Since for given directions of the wave-normal and displacement, or in an isotropic body, $Q$ is constant, and also $C$ and $D$ except so far as the type of the vibration varies, the formula requires that the square of the index of refraction (which is inversely as $V^2$) should be equal to a constant diminished by a term proportional to the square of the period, except so far as this law is modified by a variation of the type of vibration. But experiment shows nothing like this law. Now the variation in the type of vibration is sometimes very important,—it plays the leading rôle in the phenomena of selective absorption and abnormal dispersion,—but this is certainly not always the case. It seems hardly possible to suppose that the type of vibration is always so variable as entirely to mask the law which is indicated by the formula when $C$ and $D$ (with $Q$) are regarded as constant. This is especially evident when we consider that the effect on the wave-velocity of a small variation in the type of vibration will be a small quantity of the second order. The phenomena of dispersion, therefore, corroborate the conclusion which seemed to follow inevitably from the law of double refraction alone.
- The term electrical seems the most simple and appropriate to describe that theory of light which makes it consist in electrical motions.
The cases in which any distinctively magnetic action is involved in the phenomena of light are so exceptional that it is difficult to see any sufficient reason why the general theory should be called electromagnetic, unless we are to call all phenomena electromagnetic which depend on the motions of electricity.
- In the recent experiments of Professor Hastings relating to the index of refraction of the extraordinary ray in Iceland spar for the spectral line D2 and a wave-normal inclined at about 31° to the optic axis, the difference between the observed and the calculated values was only two or three units in the sixth decimal place (in the seventh significant figure), which was about the probable error of the determinations. See Am. Jour. Sci. ser. 3, vol. xxxv, p. 60.
- Sir Wm. Thomson has shown that if an elastic incompressible solid, in which the potential energy of any homogeneous strain is proportional to the sum of the squares of the reciprocals of the principal elongations minus three, is subjected to any homogeneous strain by forces applied to its surface, the transmission of plane waves of distortion, superposed on this homogeneous strain, will follow exactly Fresnel's law (including the direction of displacement), the three principal velocities being proportional to the reciprocals of the principal elongations. It must be a surprise to mathematicians and physicists to learn that a theorem of such simplicity and beauty has been waiting to be discovered in a field which has been so carefully gleaned. See page 116 of the current volume (xxv) of the Philosophical Magazine.
- "At the same time, if the above reasoning be valid, the question as to the direction of the vibrations in polarized light is decided in accordance with the view of Fresnel. . . . I confess I cannot see any room for doubt as to the result it leads to. . . . I only mean that if light, as is generally supposed, consists of transversal vibrations similar to those which take place in an elastic solid, the vibration must be normal to the plane of polarization." Lord Rayleigh, "On the Light from the Sky, its Polarization and Color," Phil. Mag. (4), xli (1871), p. 109. "Green's dynamics of polarization by reflexion, and Stokes' dynamics of the diffraction of polarized light, and Stokes' and Rayleigh's dynamics of the blue sky, all agree in, as it seems to me, irrefragably demonstrating Fresnel's original conclusion, that in plane polarized light the line of vibration is perpendicular to the plane of polarization." Sir Wm. Thomson, loc. citat.
- The terms kinetic energy and potential energy will be used in this paper to denote these average values.
- For proof in extenso of this proposition, when the motions are supposed electrical, the reader is referred to page 187 of this volume.
- But $C$, $D$, and $Q$, considered as functions of the direction of displacement, are all subject to any law of symmetry which may belong to the structure of the body considered. The resulting optical characteristics of the different crystallographic systems are given on pages 192–194.
- This will appear most distinctly if we consider that $V$ divided by the velocity of light in vacuo gives the reciprocal of the index of refraction, and $\tau$ multiplied by the same quantity gives the wave-length in vacuo.
- See note on page 224.
- The reader may perhaps ask how the above reasoning is to be reconciled with the fact that the law of double refraction has been so often deduced from the elastic theory.
The troublesome terms are and the variable part of which express the direct action of the ponderable molecules on the ether. So far as the (quite limited) reading and recollection of the present writer extend, those who have sought to derive the law of double refraction from the theory of elastic solids have generally either neglected this direct action—a neglect to which Professor Stokes calls attention more than once in his celebrated "Report on Double Refraction" (Brit. Assoc., 1862, pp. 264, 268)—or taking account of this action they have made shipwreck upon a law different from Fresnel's and contradicted by experiment. - See pages 190, 191 of this volume, or Lord Rayleigh's Theory of Sound, vol. i, p. 84.
Feynman had an example of a disk with a temperature distribution, hotter nearer the edge, and some ants that live in the disk. If their rulers changed sizes with the different temperatures at different locations, then they might lay some identically manufactured measuring sticks down and notice that a circle about the center of the disk requires a number of sticks around its circumference different from $\pi$ times the number needed to go through a diameter. This is because the disk is hotter near the edge, so the measuring sticks out there expand and fewer are needed. These ants might think they live on a curved disk, but we know they don't; we know that their measurement devices just aren't measuring actual distances. And if they moved their devices fast enough not to have time to reach equilibrium with the disk, they would notice that as well as other effects. Could the same thing happen to us? Yes. Our meter sticks might not measure proper distance and our clocks might not measure proper duration. However, when we postulate that they do, and then make a theory about what these distances and durations are, we can use a small number of physical insights to single out a theory with a small number of parameters that agrees astoundingly well with what we actually see. So the world acts as if it is curved, and if we make a theory about the actual measurements we make, a curved theory fits the data, and that allows us to interpret the measurements as being of actual distances and durations. Could we be wrong like the ants? Sure; any science could be wrong if repeated measurements do not hold up, or if new measurements in new domains or to new levels of accuracy don't hold up. But then we still have a nice theory that a new theory has to reduce to in the appropriate limit where the old theory was good. Any manifold with a straightforward topology could be modelled as a section of $\mathbb R^4$ with possibly some parts removed. And maybe there are flat-space distances in that section of $\mathbb R^4$, and the things we measure with our clocks and meter sticks are complicated functions of the real distances and durations that only looked like results from a curved manifold in some limit. But this could happen with Bell's theorem too. Maybe some kind of ER=EPR holds, so entangled particles are never truly spatially separated, and it just isn't possible experimentally to have entangled particles that can be manipulated independently. So if we were wrong about which particles are far enough apart to test locality, we don't know about locality. Or maybe there is a kind of superdeterminism that forces supposedly random choices to collude. Or maybe there are very, very rare events that would adjust the experimental results to be within Bell's bounds if we simply collected enough data for enough time. And I realize you weren't asking for certainty, just strong experimental evidence. What usually happens is you make a large parameter space of possible gravitational theories, and based on your experimental evidence you start ruling out vast swaths of possible theories. And the fact that GR is still viable is reassuring. But other theories can fit the results too. If we are wrong about something as basic as which things are close and which are far, then there is lots of room to be wrong about a bunch of things. But stuff works pretty well. And the fact that the universe acts so much as if it were curved is enough of a thing to explain that it makes sense to study the theory.
Since there is always room to be wrong, you really look for principles, for reasons. We want to understand the universe. A curved space explains why different things move the same way under gravity: you say that spacetime is curved that way. If you want a different theory, you should state a principled reason for a particular theory; then we can see where the two theories differ and see if it is feasible to distinguish them experimentally. That's the good way. The bad way would be to be prejudiced against GR and just pick a random flat-space theory that agrees with observations to the extent we've tested them so far. It's bad because you picked it randomly, so even if we rule out the section of parameter space it lives in, you can just randomly pick another one all over again. No progress is made. And if instead you made a flat-space theory that makes perfectly the exact same predictions about experiments, it isn't really a different theory, just a reformulation, and then you'd need a principle for your theory other than that the universe can look curved without really having to be. Getting into a conversation about the way things really are, without making different predictions, can easily be physics-adjacent rather than physics. So to contrast again with Bell: long ago a minority thought that distant correlations would be different from what quantum theory predicts. And they thought so for principled reasons. But the predictions of quantum theory held, as almost everyone expected. This took away the principle from the alternative. There are still hidden-variable theories that are studied seriously, but almost all hidden-variable theories today are designed to agree with quantum predictions, not to make different predictions. So they are like the reformulation of GR as a flat-space theory. And most physicists won't be interested. If the alternative formulations end up being easier to compute with, remember, teach, simulate, store, record, or transmit, or even if they make it easier to build parameter spaces of alternatives or to unify with other branches of physics, there can be value. But those are the values of reformulations, not of a principled alternative theory. So while I haven't cited a no-go result, hopefully you realize that productivity comes either from having a principled reason for an alternative that makes different predictions, or from making a mere reformulation while knowing the kinds of things that could make it valuable. And for that value they need to be valuable to people who also learn GR, since if the universe acts like it is curved, people will want to know what a curved universe acts like. So it has to be useful to people who also learn GR.
Experimental Design Details Note: Most of the design elements can be seen in the included Qualtrics instruments. Key details are below. ##Substitution (and happiness) [2018 clarification: These were administered to the Student respondents to the Omnibus after the first 600, and to *all* the Nonstudent respondents.] Reinstein (2006) ran multi-stage lab experiments to measure and test how one appeal/ask (and direct donation responses) affects later and simultaneous donation choices. We extend this, focusing on a small set of treatments, and relying mainly on *between-subject* variation in an isolated decision, avoiding contrast and experimenter demand effects. Our basic design involves a participant’s decisions, at one or two points in time (henceforth "phases"), to divide an endowment between herself and one or more charities. Between participants, we vary 1. Whether the participant is asked to make a donation from her earnings in both the first and the second phase (separated by days or weeks), or only in the second phase. 2. Whether the charities in each phase are (typically seen as) similar or very distinct. - Charities were chosen based on prominence in the UK and the potential to easily divide into disjoint similarity classes. We excluded charities with very similar names. - We confirmed the similarity classes in a separate survey (via Prolific Academic on 4 March 2017) of a demographically-similar group (N=104). Similarity was measured using both unincentivized and beauty-contest elicitation. See: similaritysurveymaterial.zip. 3. The time gap between the first and the second time the participant is *invited to* participate in each phase. The context and presentation can be seen in the instruments attached. These differ slightly: for Nonstudents this is paired with a reward for signing up for the ESSExLab pool; for students ... with a reward for completing the Omnibus. A participant is randomized into one of three ask treatments: 1. No ask 2. Asked to donate to Oxfam 3. Asked to donate to the British Heart Foundation > Before we explain how to claim your reward… We are giving you the opportunity to donate from your reward to Oxfam. For every pound you donate, we will add an extra 25p. Please click on the image below for further information about this charity (link will open in a new tab). From your £10 Amazon gift certificate WOULD you be willing to donate to Oxfam? If you donate, your donation will be automatically deducted from your reward and passed on to this charity, plus an additional 25% from our own funds. Donations will be made within 7 days and receipts will be kept at the ESSExLab office. We will not pass your personal information on to the charity. Please enter the amount you would like to donate, if anything, in the box below. (Enter a whole number between 0 and 10; you do not need to enter the £ sign.) Notes: The respondent must enter *some* number (possibly 0). We are using this 'mandated decision' in all treatments (all of the treatments mentioned in this file) because we believe it is likely to increase the baseline incidence of giving, allowing for more powerful statistical measures of the impact of the treatments. We are only allowing integer responses to aid our administration; in previous similar trials non-integer responses are rare anyways. **Happiness**: As a secondary treatment (both here and in the second phase), we ask each participant to rate their happiness on a seven-point Likert scale ranging from "Extremely unhappy" to "Extremely happy". 
In each case this question follows the neighbor questions. However, for treatments 2-3 we vary whether this is asked before or after the donation screen, exactly balancing across treatments (i.e., administering this orthogonally to the charity ask treatments). Those who make a donation in phase 1 are thanked (within the survey) for making this specific donation. The context can be seen in the survey/experiment instruments attached. Again, the contexts are slightly different between the Nonstudent and Student samples. For the Nonstudents this is paired with a reward for completing the Omnibus; for Students this is paired with a reward for completing an Employability survey. Each participant is randomized into one of two charitable ask treatments. We balance this randomization by the phase-1 treatment, so that the empirical probability of being assigned to a phase-2 treatment is exactly equal for each phase-1 treatment. (This is done in Qualtrics by assigning an embedded data variable, and running a separate randomizer for each value of this variable.) 1. Asked to donate to Save the Children 2. Asked to donate to Cancer Research UK Happiness: this treatment is administered as in Phase 1. ##Giving and probability: ###1/2 chance of winning [2018 note: ordering changed -- earlier discussion] These treatments will be administered to the 401-600th Nonstudent responders to the initial email inviting them to sign up to the ESSExLab pool. [2018: Note this number was not reached] ... and to the first 600 students who respond to the Omnibus. These participants are told: > If you complete this survey, you have a 50% chance of winning a £10 Amazon voucher. After you complete this survey, we will reveal whether you have won this prize and explain how to claim it. [Nonstudents: If you complete this form and register as an ESSEXLab participant before the deadline specified in your email, you will have a 50% chance of winning a £10 Amazon gift certificate. ... ] In this treatment, participants have an equal chance of any of the following. 1. 'Before ask', wins: Asked, before learning outcome, to donate to either Oxfam or BHF conditional on winning the prize. Wins the prize. 2. 'Before ask', loses: ... Does not win the prize. 3. 'After ask': Asked, after learning of winning £10, to donate to either Oxfam or BHF conditional on winning the prize 4. 'Loses, no ask' (this is self-explanatory) Sample language ('Before ask'): > Before we reveal if you have won the £10 Amazon voucher... We are giving you the opportunity to donate from your prize to one of two charities: either Oxfam or the British Heart Foundation. For every pound you donate, we will add an extra 25p. ... > ... IF you win the £10 Amazon voucher, WOULD you be willing to donate to one of the above charities? This will not affect your chance of winning, as the prize winners have already been chosen through a random draw. > ...Please enter the amount you would like to donate (if anything) if you win the prize, in the box below. (Enter a number between 0 and 10). > [If chooses a positive amount this appears:] Please select the charity you would like to donate to, if you win [may tick either Oxfam or BHF]. ###Ambiguous chance of winning *Note*: we refer to these as 'ambiguous' because the student participants will not know in advance how many other participants there will be, and their chances of winning depend on the number of participants, as explained below. 
These treatments will be administered to any students after the first 600 who respond to the Omnibus. [2018: Note this number was never reached; we thus removed the details below.] Note that for each of the Giving and Probability treatments the happiness (Likert scale) question is asked *after* the donation request in Before treatments (or after the information about not winning). We do not envision this being part of our main analysis for the 'does being asked to give affect happiness' questions, as the context is different.

##Requested Salary and Gender

We are interested in how the subjects' gender correlates with their answers in a vignette study. Participants are asked (by hypothetical interviewers for an Assurance Trainee position) to state a desired starting salary; next given industry salary information; and then asked again. This vignette occurs at the *beginning* of the employability survey. The vignette asks respondents about how they would answer specific questions within an interview context. (For space reasons, we give this in a separate file.) 30 Jul 2017 addition: see "Interventions (Hidden)" box. 2018: Noting changed student randomisation ordering + stylistic edits to this form
Presentation on theme: "Appendix – Compound Interest: Concepts and Applications"— Presentation transcript:

1 Appendix – Compound Interest: Concepts and Applications
FINANCIAL ACCOUNTING: AN INTRODUCTION TO CONCEPTS, METHODS, AND USES, 10th Edition. Clyde P. Stickney and Roman L. Weil

2 Learning Objectives
1. Begin to master compound interest concepts of future value, present value, present discounted value of single sums and annuities, discount rates, and internal rates of return on cash flows.
2. Apply those concepts to problems of finding the single payment or, for annuities, the amount for a series of payments required to meet a specific objective.
3. Begin using perpetuity growth models in valuation analysis.
4. Learn how to find the interest rate to satisfy a stated set of conditions.
5. Begin to learn how to construct a problem from a description of a business situation.

3 Appendix Outline
1. Compound interest concepts
2. Future value concepts
3. Present value concepts
4. Nominal and effective rates
5. Annuities
a. Ordinary annuities or annuities in arrears
b. Annuities due
6. Perpetuities
7. Implicit interest rates: finding internal rates of return
Appendix Summary

4 1. Compound interest concepts
A dollar to be received in the future is not the same as a dollar presently held, because of:
Risk -- you may not get paid
Inflation -- the purchasing power of money may decline
Opportunity cost -- cash received today can be invested and earn a positive return
Compound interest concepts (a.k.a. present value, time value of money, or discounted cash flow) are mathematical methods of ascribing value to a future cash flow, recognizing that a future cash payment is not as valuable as the same amount received earlier.

5 1.a. Compound interest concepts
All three factors -- risk, inflation, and opportunity cost -- can be captured in a single number, the interest rate. That is, the interest rate contains a component to compensate for risk, inflation, and alternative opportunities for investment. The time value of future cash flows can be measured by the amount of interest the cash flow would earn at an appropriate rate of interest. Also, interest earned accumulates and earns interest itself -- this is called compound interest.

6 1.b. Compound interest example
Consider a $10,000 loan at 12% interest for one year. Simple interest would be 12% divided by 12 months, or 1% per month. One percent of $10,000 is $100, so the interest would be $100 per month, or $1,200 for the year, or $11,200 in total including the principal. If you were entitled to the interest each month, you could withdraw the interest. If you did not, then that interest itself has time value and should earn further interest. The total cost of the loan under monthly compounding of the interest is $11,268.25 (you will learn how to compute this value in the next section). This is a small increase, $68.25, but it is an increase, and it could be significant for long periods or high interest rates.

7 2. Future value concepts
The future value of one dollar is the amount to which it will grow at a given interest rate compounded for a specified number of periods. The future value of P dollars is P times the future value of one dollar. The future value F is considered the equivalent in value of the present value P because the F will not be received until some time in the future.
8 2. Future value concepts (cont)
P dollars invested at r percent interest will grow to P(1+r) at the end of the first period. If this amount, P(1+r), continues to earn r percent interest, then at the end of the second period it will be P(1+r)(1+r), which is P(1+r)^2. In like manner, it will grow to P(1+r)^3 in 3 periods. In general, it will grow after n periods to the future value Fn given by:
Fn = P(1+r)^n
where P is the principal, r is the rate of interest, and n is the number of periods.

9 2. Future value (example)
Consider a certificate of deposit, CD, which pays a nominal rate of 6% per year compounded monthly. You invest $5,000. How much will your CD be worth when it matures in one year?
Fn = P(1+r)^n
Since the CD compounds monthly, r = 6% / 12 months = 0.5% per month, and n = 1 year * 12 months = 12 periods.
Fn = $5000(1.005)^12 = 5000(1.06168) = $5,308.39, which is a little better than 6% simple interest. Thus, your CD will earn $308.39 on $5,000.

10 3. Present value concepts
Present value is the reverse of future value. If the future value of x dollars is y, then the present value of y dollars is x. Present value answers the question: how much must be invested now to grow to a given amount at r percent interest compounded for n periods? The present value P is considered the equivalent in value of the future value F because the F will not be received until some time in the future.

11 3. Present value concepts
P dollars invested at r percent interest for n periods will grow to P(1+r)^n. Recall that the future value is given by:
Fn = P(1+r)^n
Solving for P gives the equation for the present value:
P = Fn(1+r)^-n
where P is the principal, r is the rate of interest, and n is the number of periods.

12 3. Present value example
You hold a bond which pays no interest but will pay $10,000 upon maturity in three years. You need cash now, so you try to sell the bond. A bank says that they can't pay you $10,000 because money has time value, but that they will pay you the present value discounted at 9% compounded annually. How much is the bank offering you?
P = Fn(1+r)^-n = 10000(1.09)^-3 = 10000/1.295 = $7,722
Thus, the bank will pay you $7,722 for your $10,000 bond. Is this a good price?

13 4. Nominal and effective rates
By convention, and subject to some federal regulations, many interest rates are stated as an annual rate and do not include the effects of compounding. This rate is called the nominal rate of interest. The rate which includes the effects of compounding is called the effective rate. As we saw in an earlier example, a 12% per year nominal rate of interest compounded monthly actually yields a 12.68% return because of compounding effects. Nominal rates are given for simplicity and are almost always stated as an annual rate for purposes of comparing different alternative rates.

14 4. Effect of compounding periods
What difference does the compounding period make if the nominal rate is the same? Consider the following loans, all with a 12% nominal or annualized rate of interest (the transcript omits the table; the effective rates work out to 12.00% compounded annually, 12.36% semiannually, 12.55% quarterly, and 12.68% monthly). Yield increases as the compounding period is shortened.

15 5. Annuities
An annuity is a series of equal payments, one per period, equally spaced through time. Examples include monthly rental payments, semiannual corporate bond coupon payments, and mortgage payments. Mathematically, an annuity can be solved as the sum of individual compound interest problems. If time periods are not equally spaced or if the amounts vary, then the series of payments is not an annuity.
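These formulas translate directly into a few lines of code. Below is a minimal Python sketch that reproduces the CD, bond, and effective-rate examples above (the function names are ours, not the textbook's):

```python
# A minimal sketch of the future-value / present-value formulas above.
# Function names are ours, not the textbook's.

def future_value(p, r, n):
    """Fn = P(1+r)^n: value of principal p after n periods at rate r."""
    return p * (1 + r) ** n

def present_value(f, r, n):
    """P = Fn(1+r)^-n: value today of f received n periods from now."""
    return f / (1 + r) ** n

def effective_rate(nominal, m):
    """Effective annual rate of a nominal rate compounded m times per year."""
    return (1 + nominal / m) ** m - 1

# Slide 9: $5,000 CD at 6% nominal, compounded monthly for 12 months.
print(round(future_value(5000, 0.06 / 12, 12), 2))  # 5308.39
# Slide 12: $10,000 bond due in 3 years, discounted at 9% annually.
print(round(present_value(10000, 0.09, 3), 2))      # 7721.83
# Slides 6 and 13: 12% nominal compounded monthly -> 12.68% effective.
print(round(effective_rate(0.12, 12), 4))           # 0.1268
```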
16 5. Annuities (cont)
Annuity concepts are important in the accounting for bonds and leases. The present value of an annuity is its present-day cash value -- conceptually, you can sell or buy an annuity for this value. The future value of an annuity is the amount to which the payments will grow if invested and left to compound. The non-discounted value of an annuity is the sum of the payments, which is the number of payments times the payment amount. Annuities are of two types: ordinary annuities, or annuities due.

17 5.a. Ordinary annuities
Ordinary annuity payments are due at the end of each period. Consider an ordinary annuity of $100 per period for five periods: the payments are made at the end of each period. Coupon payments on a bond are ordinary annuities; payment is made after the period.

18 5.a. Ordinary annuities -- example
Consider the same ordinary annuity of $100 per period for five periods: what is the present value if the appropriate rate of interest is 7%? This can be solved by several methods: present value tables, a computer or calculator, or the formula.

19 5.a. Ordinary annuities -- example
One good way to understand annuities is to work the problem as five separate present value problems and then add the results:
PVannuity = PV1 + PV2 + PV3 + PV4 + PV5
= 100(1.07)^-1 + 100(1.07)^-2 + 100(1.07)^-3 + 100(1.07)^-4 + 100(1.07)^-5
= 100/1.070 + 100/1.145 + 100/1.225 + 100/1.311 + 100/1.403
= 93.46 + 87.34 + 81.63 + 76.29 + 71.30
= $410.02
Thus, the non-discounted value of the annuity is the sum of the payments ($500), but the value discounted at 7% is $410.02.

20 5.b. Annuities due
Annuity due payments are due at the beginning of each period. Consider an annuity due of $100 per period for five periods: the payments are made at the beginning of each period. A monthly rent payment is an annuity due; you pay in advance of usage.

21 5.b. Annuities due -- example
This problem is similar to the ordinary annuity except that all payments are moved forward by one period. The first payment is received immediately, so it is not discounted. Note that using zero to designate the present makes the formula work:
PVannuity = PV0 + PV1 + PV2 + PV3 + PV4
= 100(1.07)^0 + 100(1.07)^-1 + 100(1.07)^-2 + 100(1.07)^-3 + 100(1.07)^-4
= 100/1 + 100/1.070 + 100/1.145 + 100/1.225 + 100/1.311
= 100.00 + 93.46 + 87.34 + 81.63 + 76.29
= $438.72
Notice that the present value of the annuity due is exactly 1.07 times the present value of the ordinary annuity.

22 5.c. Mathematical reconciliation
An annuity due is the same as an ordinary annuity with each payment shifted forward one period. Since the annuity due is received earlier and money has time value, the annuity due is more valuable. Since each payment is shifted by one period, you can adjust from an ordinary annuity to an annuity due (or back) by the following formula:
annuity due = (1+r) * ordinary annuity

23 6. Perpetuities
Perpetuities are annuities that last forever. There are few real perpetuities, but they give good insight into annuities. The present value of an ordinary perpetuity is:
Pperpetuity = A/r
(The transcript's formula, A(1 + 1/r), is the value of a perpetuity due, whose first payment arrives immediately.) Examples of perpetuities include some Canadian and some British government bonds.

24 7. Implicit interest rates
The present value of a lump sum problem has four components:
P, the present value
F, the future value
r, the rate of interest, and
n, the number of periods
which are related by the formula:
Fn = P(1+r)^n
Any three of the components determine the fourth; you can solve for any component if you know the other three.
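As a cross-check on the two worked annuity examples, here is a short Python sketch of the same computations (again, the function names are ours):

```python
# A minimal sketch of the annuity and perpetuity values discussed above.

def pv_ordinary_annuity(payment, r, n):
    """PV of n end-of-period payments: sum of payment*(1+r)^-t, t = 1..n."""
    return sum(payment / (1 + r) ** t for t in range(1, n + 1))

def pv_annuity_due(payment, r, n):
    """PV of n beginning-of-period payments: (1+r) times the ordinary PV."""
    return (1 + r) * pv_ordinary_annuity(payment, r, n)

def pv_perpetuity(payment, r):
    """PV of an ordinary perpetuity: payment / r."""
    return payment / r

print(round(pv_ordinary_annuity(100, 0.07, 5), 2))  # 410.02 (slide 19)
print(round(pv_annuity_due(100, 0.07, 5), 2))       # 438.72 (slide 21)
print(round(pv_perpetuity(100, 0.07), 2))           # 1428.57
```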
25 7. Implicit interest rates (cont)
In implicit interest rate problems, we solve for the interest rate. That is, given P, F, and the number of periods, the r which makes the equation balance is known as the implicit interest rate, a.k.a. the internal rate of return. There is often no direct solution to these types of problems; instead, the solution is reached through iterative mathematical methods.

26 Appendix Summary
This appendix introduces compound interest problems and the related problems of present value, discounted cash flows, and the time value of money. The applications of present value and future value of both an annuity and a lump sum are introduced. Perpetuities and implicit interest rates are introduced. These methods are very valuable to the accountant in valuing liabilities.
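Since the slides note that implicit-rate problems usually have no direct solution, here is one minimal iterative approach, a bisection search; the bracket [0, 1] (0% to 100% per period) is our assumption:

```python
# A sketch of finding the implicit interest rate by bisection.
# Assumes the rate lies between 0% and 100% per period.

def implicit_rate(p, f, n, lo=0.0, hi=1.0, tol=1e-10):
    """Solve f = p*(1+r)^n for r by repeatedly halving the bracket."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if p * (1 + mid) ** n < f:
            lo = mid  # rate too low: future value falls short of f
        else:
            hi = mid  # rate too high (or exact)
    return (lo + hi) / 2

# What rate turns $7,722 into $10,000 over 3 years? About 9%,
# matching the bond example above.
print(round(implicit_rate(7722, 10000, 3), 4))  # 0.09
```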
Watt Hours to Milliamp Hours (Wh to mAh) Conversion Calculator

Conversion formula: mAh = Wh × 1,000 / V

What Is Watt-Hours? Watt-hours (Wh) is a unit that measures the amount of electrical energy consumed or produced by electrical appliances over time. To be more descriptive, it is the amount of electrical energy generated or used when a certain amount of power is drawn for one hour.

What Is Milliamp-Hours? Milliamp-hours (mAh) is a unit that measures battery capacity. It is equivalent to the amount of electric charge a power source will discharge over time. While milliamp-hours (mAh) is more commonly used to represent battery capacity, watt-hours (Wh) may also be used for the same purpose.

Why Convert Watt-Hours to Milliamp-Hours?

To Calculate Battery Runtime When mA Load Is Known. Converting watt-hours (Wh) to milliamp-hours (mAh) may come in handy when trying to determine your battery's runtime under a given current draw. Ordinarily, if you know the wattage of the load on your battery, you can calculate the battery's runtime from the battery's watt-hours. However, if you only know the current in milliamps (mA), you have to do a Wh to mAh conversion, then calculate the battery's runtime from the milliamp-hours (mAh). For instance, if a 50 W load is connected to a 250 Wh, 20 V battery, the battery's runtime would be 250/50 = 5 hours. However, if we did not know the wattage, we would not be able to calculate the runtime that way. So, what if we only know that the current drawn from the battery is 1000 mA? What would the runtime be? To answer this, we'll do a watt-hours (Wh) to milliamp-hours (mAh) conversion: 250/20 × 1000 = 12,500 mAh. Then, we'll get the runtime by dividing mAh by mA: 12,500/1000 = 12.5 hours.

To Figure Out the Best Charging System for Your Battery. When trying to figure out the best charge current for your battery's charger, you'll need to know the recommended C-rate for the battery type and the battery capacity in Ah or mAh. C-rate is the ratio of a battery's charge current to its capacity. So, with its value and the value of the battery's capacity, we can calculate the charge current. For instance, if your battery's C-rate is 2C and your battery capacity is 6000 mAh, the ideal maximum charge current would be: 2 × 6000 = 12,000 mA.

How to Convert Watt Hours to Milliamp Hours (Wh to mAh)

Watt-Hours (Wh) to Milliamp-Hours (mAh) Conversion Formula. The formula for converting watt-hours to milliamp-hours (Wh to mAh) is mAh = Wh × 1000/V. To derive the formula, we started with the formulas for power in watts and electrical energy in watt-hours:
electrical energy e (Wh) = power (W) × time (h) (1)
power (W) = voltage (V) × current (A) (2)
Then we replaced power with voltage × current in (1):
electrical energy e (Wh) = voltage (V) × current (A) × time (h) (3)
Using just units, we rewrote (3) as:
Wh = V × Ah (4)
Then we converted Ah to mAh in (4) using Ah = mAh/1000:
Wh = V × mAh/1000 (5)
Finally, we made mAh the subject of the formula:
mAh = Wh/V × 1000 (6)
Basically, to convert watt-hours to milliamp-hours (Wh to mAh), divide watt-hours by voltage, then multiply by 1000. This is pretty much what Wh to mAh electrical calculators do. To do the reverse conversion, milliamp-hours (mAh) to watt-hours (Wh), multiply voltage by mAh, then divide by 1000. It is as simple as that. Of course, with our watt-hours to milliamp-hours (Wh to mAh) calculator, the process is simplified.
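If you would rather script the conversion than use the calculator, a minimal Python sketch of formula (6), with function names of our own choosing, looks like this:

```python
# A sketch of the Wh -> mAh conversion and the runtime calculation above.

def wh_to_mah(wh, volts):
    """mAh = Wh * 1000 / V (formula 6)."""
    return wh * 1000 / volts

def runtime_hours(mah, load_ma):
    """Battery runtime when the load current in mA is known."""
    return mah / load_ma

capacity = wh_to_mah(250, 20)                    # 12500.0 mAh
print(capacity, runtime_hours(capacity, 1000))   # 12500.0 12.5
```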
Just enter energy in watt-hours (Wh), enter voltage in volts (V), then click on the calculate button on the Wh to mAh calculator, and Wh becomes mAh.

We have a solar generator that can give off 30 W from its DC compartment for up to 8 hours before running out of juice. If said DC compartment has a voltage of 20 V, what is the milliamp-hours (mAh) rating of the solar generator? To convert Wh to mAh for this solar generator, we will calculate the energy in watt-hours (Wh) first. Then we'll divide the value of energy in watt-hours (Wh) by the voltage in volts, then multiply by 1000.
energy in watt-hours (Wh) = 30 × 8 = 240 Wh
Now that we know the energy in watt-hours (Wh), let's calculate the battery's milliamp-hours (mAh):
milliamp-hours (mAh) = 240/20 × 1000 = 12,000 mAh

We have a 24 V battery that powers a 300 W solar refrigerator for up to 5 hours when fully charged. What's the milliamp-hours (mAh) rating of the battery? As with the previous example, to convert Wh to mAh, we'll start by calculating the total energy in watt-hours (Wh) stored by the battery.
energy in watt-hours (Wh) = 300 × 5 = 1500 Wh
Next, we'll divide the value of energy in watt-hours by voltage, then multiply by 1000 to get milliamp-hours:
battery in milliamp-hours = 1500/24 × 1000 = 62,500 mAh

How to Convert Kilowatt-Hours to Milliamp-Hours (kWh to mAh). The formula used to convert kWh to mAh is not too different from the one used to convert Wh to mAh. In fact, the only difference is that we multiply by 1,000,000 in the kWh to mAh formula instead of 1000. To derive the formula for converting kWh to mAh, we'll start with the Wh to mAh conversion formula:
mAh = Wh/V × 1000
Next, we'll use the kWh to Wh formula:
Wh = kWh × 1000
Now, we'll substitute kWh × 1000 for Wh in the Wh to mAh conversion formula:
mAh = kWh/V × 1000 × 1000 = kWh/V × 1,000,000
Simply put, to convert kWh to mAh, divide kWh by voltage, then multiply by 1,000,000. This is what kWh to mAh electrical calculators do.

How many milliamp-hours (mAh) is a 48 V battery bank rated 6 kWh?
milliamp-hours rating of the battery bank = 6/48 × 1,000,000 = 125,000 mAh

Quick Watt-Hours to Milliamp-Hours (Wh to mAh) Conversion Chart. As mentioned earlier, to convert Wh to mAh with our calculator, enter energy in watt-hours (Wh), enter voltage in volts (V), then click on the calculate button. But whenever you have no access to a calculator, you may use these charts (charts for 12 V, 20 V, and 24 V batteries omitted here).
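The kWh case is the same conversion with one more factor of 1,000 (again just a sketch, with our own function name):

```python
# A sketch of the kWh -> mAh conversion above.

def kwh_to_mah(kwh, volts):
    """mAh = kWh * 1,000,000 / V."""
    return kwh * 1_000_000 / volts

# The 6 kWh, 48 V battery-bank example:
print(kwh_to_mah(6, 48))  # 125000.0
```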
FPT-algorithms for some problems related to integer programming

In this paper, we present fixed-parameter tractable algorithms for special cases of the shortest lattice vector, integer linear programming, and simplex width computation problems, when the matrices included in the problems' formulations are near-square. The parameter is the maximum absolute value of the rank minors in the corresponding matrices. Additionally, we present fixed-parameter tractable algorithms with respect to the same parameter for these problems when the matrices have no singular rank submatrices.

In this paper, we show that the width of simplices defined by systems of linear inequalities can be computed in polynomial time if some minors of their constraint matrices are bounded. Additionally, we present some quasi-polynomial-time and polynomial-time algorithms to solve the integer linear optimization problem defined on simplices minus all their integer vertices, assuming that some minors of the constraint matrices of the simplices are bounded.

We consider the problem of planning the ISS cosmonaut training with different objectives. A pre-defined set of minimum qualification levels should be distributed between the crew members with minimum training time differences, minimum training expenses, or a maximum training level within a budget limit. First, a description of the cosmonaut training process is given. A model is then considered for the volume planning problem; its objective is to minimize the differences between the total preparation times of the crew members. Then two models are considered for the timetabling planning problem. For the volume planning problem, two algorithms are presented. The first one is a heuristic with a complexity of O(n) operations. The second one consists of heuristic and exact parts, and it is based on the n-partition problem approach.

The central question that motivates this paper is the problem of making up a freight train and its routes on the railway. It is necessary, from the set of orders available at the stations, to determine a time schedule and destination routing over the railways in order to minimize the total completion time. In this paper we suggest a formulation of this problem as an integer program.

We consider the problem of training planning on the ISS. It is shown that the problem is a combination of a k-Partition Problem and an Assignment Problem. NP-completeness is proved. A heuristic and an exact algorithm are proposed.

A model for organizing cargo transportation between two node stations connected by a railway line which contains a certain number of intermediate stations is considered. The movement of cargo is in one direction. Such a situation may occur, for example, if one of the node stations is located in a region which produces raw materials for a manufacturing industry located in the region of the other node station. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, and the rule of distribution of cargo to the final node stations. The process of cargo transportation follows a fixed control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. This model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions.
The class of solutions satisfying the nonlocal linear restrictions is extremely narrow. This results in the need for a "correct" extension of solutions of the system of differential equations to a class of quasi-solutions, whose distinctive feature is gaps (jumps) at a countable number of points. Using the fourth-order Runge–Kutta method, it was possible to build these quasi-solutions numerically and determine their rate of growth. Let us note that, on the technical side, the main difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) on a number of model parameters characterizing the control rule, the technologies for cargo transportation, and the intensity of cargo arrival at a node station.

This proceedings publication is a compilation of selected contributions from the "Third International Conference on the Dynamics of Information Systems," which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of the dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own areas of study.
Did you know that mass and weight are not the same? This lesson describes the difference between the two as well as the effect of gravity on weight. Examples are used to teach you how to calculate weight based on mass and the acceleration of gravity.

Newton's Laws: Weight, Mass and Gravity

Most of us have seen images of men walking on the moon. Now, even though the astronauts are wearing really heavy suits, they seem to bounce around the surface of the moon with very little effort. How is it that we can bounce around on the moon with ease while jumping here on Earth requires a lot of effort? The answer to this question lies within the difference between mass and weight. Mass is a measure of how much matter an object contains, while weight is a measure of the force of gravity on the object. An object has the same composition, and therefore mass, regardless of its location. For example, a person with a mass of 70 kg on Earth has a mass of 70 kg in space as well as on the moon. However, that same person's weight is not the same, since gravity is different in these locations. The person will weigh less on the moon because the moon has less gravity. To better understand the concepts of weight and mass, we must first consider gravity and its effect on objects.

What is Gravity? So what is gravity? Gravity is the attractive pull between two objects that have mass. The strength of gravity is directly proportional to the amount of mass of each object. In other words, the larger the objects, the greater the gravitational attraction between them. For example, the gravitational pull you experience on Earth is much greater than it would be on the moon because the Earth's mass is greater. An object with twice as much mass will exert twice as much gravitational pull on other objects. On the other hand, the strength of gravity is inversely related to the square of the distance between two objects. For example, if the distance between two objects doubles, meaning they're twice as far apart, the gravitational pull decreases by a factor of 4. This is because 2 squared is equal to 4. This means the effect of distance on gravitational attraction is greater than the effect of the masses of the objects.

Gravity as a Force. Gravity is a force. A force is simply a push or a pull experienced by objects that interact with each other. The interaction can be direct or at a distance, which is the case for gravity. Newton's laws tell us that if an unbalanced force acts on an object, it will change the object's state of motion. In other words, the object will accelerate. Since gravity is a force, gravity causes objects to accelerate.

Acceleration Due to Gravity. Let's look at an example of how gravity causes acceleration. If you drop a ball from a cliff, you will notice that its speed increases as it falls – it accelerates due to gravity. We have determined the acceleration of gravity to be 9.8 m/sec^2 – that is, for free-falling objects on Earth. Free falling simply means no other forces, except gravity, are acting on the object. For example, any effect of wind resistance would be neglected. The velocity of a free-falling object increases by 9.8 meters per second every second. Let's look at the speed of the ball as it drops over time. This is going to help us understand how gravity causes acceleration. As you see on the screen, the ball will accelerate to a speed of 9.8 meters per second in the first second of travel. Over the next second, the speed of the ball will again increase by 9.8 meters per second,
meaning it's traveling at 19.6 meters per second. The same thing will happen during the third second of time, so the ball will be traveling at 29.4 meters per second. With each second, the ball's speed increases by 9.8 meters per second. The acceleration of gravity is so important that it has its own symbol. It is often abbreviated with the letter g. g = 9.8 m/sec^2 – that's the acceleration of gravity here on Earth for free-falling objects.

Gravity and Weight. Well, what about weight? Weight is a measure of the force of gravity acting on an object. According to Newton's laws of motion, force is directly proportional to both mass and acceleration, and the equation for force is F = m * a, where m = mass and a = acceleration. We can use this equation to solve for weight. All objects on Earth, whether they are falling, thrown, or even sitting still, experience the effect of gravity. Therefore, we can determine the weight of an object using the acceleration of gravity.

Calculation of Weight. Let's look at an example. How much does a 100-kg man weigh on Earth? Let's first recall the formula for force:
F = m * a
Now substitute weight for force and the acceleration of gravity (g) for acceleration:
Weight = m * g
Now plug in the values for m and g and solve for weight:
Weight = 100 kg * 9.8 m/sec^2
Weight = 980 kg * m/sec^2
A newton (N) of force equals 1 kg * m/sec^2; therefore, we can say the man has a weight of 980 newtons. Weight = Force = 980 N. Now, there are approximately 4.5 newtons in a pound. Therefore, the person in our example weighs about 218 pounds.

Let's look at a common misconception. If a 100-kg man and a 10,000-kg elephant jumped off a cliff, which would hit the ground first? One might think the elephant would land first due to its greater mass. However, they both land at the same time (assuming they're both free falling). We can use some simple math to understand this phenomenon by rearranging the formula for force to solve for acceleration. First, recall the formula for force:
F = m * a
Now, let's rearrange and solve for acceleration:
a = F/m
As seen from the formula, acceleration is directly proportional to the force and inversely proportional to the mass of the object. Increased force tends to increase acceleration, while increased mass tends to decrease acceleration. Now, the elephant in our example has 100 times as much mass as the person – this would decrease its acceleration. However, because the elephant has 100 times the mass, it experiences 100 times as much gravitational force. The greater force exerted on more massive objects is offset by the inverse influence of greater mass. Therefore, all objects free fall at the same acceleration regardless of their mass.

In summary, mass is a measure of how much matter an object contains, and weight is a measure of the force of gravity acting on the object. Gravity is the attraction between two objects that have mass. The amount of gravity is directly proportional to the amount of mass of the objects and inversely proportional to the square of the distance between the objects. Gravity is a force that increases the velocity of falling objects – they accelerate. The acceleration of gravity is abbreviated by the letter g, and it has a value of 9.8 m/sec^2. All objects on Earth, regardless of their mass, accelerate due to gravity at the same rate – that is, 9.8 m/sec^2.
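The lesson's arithmetic is easy to check in a few lines. Here is a minimal Python sketch (the 9.8 m/sec^2 value and the roughly 4.5 N per pound approximation come from the lesson; the function names are ours):

```python
# A sketch of the weight calculation and the falling-objects argument above.

G = 9.8  # acceleration of gravity on Earth, m/sec^2 (from the lesson)

def weight_newtons(mass_kg):
    """Weight = m * g."""
    return mass_kg * G

def newtons_to_pounds(n):
    """Uses the lesson's approximation of about 4.5 N per pound."""
    return n / 4.5

w = weight_newtons(100)
print(w, round(newtons_to_pounds(w)))  # 980.0 218

# The man and the elephant: a = F/m comes out the same for both.
for mass_kg in (100, 10000):
    force = weight_newtons(mass_kg)
    print(mass_kg, force / mass_kg)  # 9.8 in both cases
```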
The weight of an object can be calculated using the formula for force – F = m * a – where F equals the weight of the object and now the acceleration (a) is the acceleration of gravity (g). By the end of this lesson, you should be able to: - State the difference between mass and weight - Recall the value for the acceleration of gravity - Use the formula for force to calculate the weight of an object - Explain why all free falling objects released together will hit the ground simultaneously