## anonymous one year ago The graph of y = f ′(x), the derivative of f(x), is shown below. Given f(2) = 8, evaluate f(–2).
1. anonymous
2. anonymous
let me try to do it on my own, tell me if I'm doing something wrong
3. anonymous
Sure
4. anonymous
Remember, use the information that f(2)=8, to find f(-2)
5. anonymous
|dw:1439294446810:dw|
6. anonymous
that feels wrong
7. anonymous
Hmm, you don't need that graph at all
8. anonymous
okay i have no idea where to start :/
9. anonymous
Look at the middle line segment; the one going from (-2, -2) to (2, 2) is the segment you need to look at
10. anonymous
11. anonymous
im saying 8 only because the graph seems mirrored and if it
12. anonymous
Idk, I haven't calculated the answer yet. Use the formula $\int\limits_{-2}^{2}f'(x)dx=f(2)-f(-2)$, so $f(-2)=f(2)-\int\limits_{-2}^{2}f'(x)dx=8-\int\limits_{-2}^{2}y\,dx$. Have you studied the definite integral?
13. anonymous
okay that seems way better, one sec
14. anonymous
and yes i have
15. anonymous
Good, you need to find y(x) for the interval -2 to 2; it will be the equation of the middle line segment. Since the segment passes through the origin, its intercept c is 0: $y=mx+c=mx+0=mx=(\frac{y_{2}-y_{1}}{x_{2}-x_{1}})x$ where m is the slope and (x1,y1), (x2,y2) are any two points on the line segment
16. anonymous
x
17. anonymous
so y = x?
18. anonymous
yes!! y=x $f(-2)=8-\int\limits_{-2}^{2}x.dx$ Now it's a simple matter of integration
19. anonymous
8!!
20. anonymous
I LOVE YOU!!
21. ganeshie8
Hey! Alternatively we could also use the symmetry to conclude that the integral is 0
22. anonymous
good job jeb!!
23. anonymous
-_- how did i not see that
24. anonymous
Oh yeah, the area under the line below the axis is the same as the area above it, so they cancel out
25. anonymous
oh well it doesn't matter i get it! :D
26. ganeshie8
|dw:1439295272579:dw|
27. anonymous
but it is also important that you know the method
28. anonymous
thanks so much guys, i should probably head to bed, it's 5:20 am here
29. anonymous
anyway thanks again!
30. ganeshie8
Another alternative, Since $$f'(x)$$ is an odd function, it follows that $$f(x)$$ is an even function. Therefore $$f(-2)=f(2)=8$$
31. anonymous
amazing
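The integral in this thread can be checked numerically (a minimal sketch, using only f(2) = 8 and f′(x) = x on [−2, 2]):

```python
# f'(x) = x on [-2, 2] (the middle line segment through the origin).
# Fundamental Theorem of Calculus: f(-2) = f(2) - integral of f'(x) from -2 to 2.

def integrate(f, a, b, n=100000):
    """Midpoint-rule approximation of the definite integral of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f_prime = lambda x: x
integral = integrate(f_prime, -2, 2)   # an odd integrand on [-2, 2] gives 0
f_minus_2 = 8 - integral               # so f(-2) = f(2) = 8
```

The symmetry argument from the thread shows up directly: the midpoint samples cancel in pairs, so the integral is 0 and f(−2) = 8.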
How to roughly extrapolate artillery range for planets with different atmospheric pressure?
I'm looking for some rough way of extrapolating artillery range for planets with different atmospheric pressure:
Realistically, the data available:
1. Hypothetical range in vacuum: $$\dfrac{V^2}{g}$$ (V - muzzle velocity, g - gravity)
2. Range on Earth
So based on those data I could easily calculate what percentage of energy was "lost" to Earth's atmosphere. Could I extrapolate from that how it would roughly work on a different planet?
EDIT: I also thought about simply applying drag equation. It would make perfect sense for calculating how a bullet (or an armor piercing projectile) would lose its kinetic energy. The relation would be proportional to air density.
There is just one big problem with any artillery: it follows a ballistic trajectory, so I have to incorporate BOTH gravity and pressure. I cannot simply extrapolate from atmospheric pressure alone, because that would mean the artillery range on a planet with a vacuum is infinite.
• This may be both more complex and simpler than you first assume. Projectile size, shape, initial velocity and mass will all interplay with not only atmospheric pressure but also what the exact gas mix is. On the other hand if you have a fast, heavy projectile and your target isn’t miles away you can reasonably use projectile in a vacuum as a decent enough model since the energy lost prior to impact will be small relative to the projectile’s kinetic energy. Drift is a different issue though... – Joe Bloggs Jul 28 '19 at 9:12
For a slow projectile, energy loss due to drag is proportional to the velocity of the projectile and to the density of the medium.
Density of atmosphere varies linearly with pressure, everything else constant.
In this regime, then, increasing the pressure proportionally increases the density and the lost energy.
For higher velocities drag is proportional to the square of the velocity, but still goes linearly with density: same as above.
Drag equation.
https://en.wikipedia.org/wiki/Drag_equation
You can use this to calculate the force slowing your projectile, the drag force $F_d$.
Here is the drag equation. $$F_d = 1/2 \rho u^2 C_d A$$
• $$F_d$$ drag force
• $$\rho$$ mass density of the atmosphere
• $$u$$ velocity of the bullet
• $$A$$ area of the bullet
• $$C_d$$ drag coefficient of the bullet
Here is an online calculator, and you can tweak $\rho$ (atmospheric density) to see how that affects range. But it should be a straight multiplication if density is the only thing changing.
Interesting stuff in this question as well that you might find useful for your endeavor. They also ask about the impact of higher atmospheric pressure on projectiles.
What would be a reasonable caliber for an assault rifle and sniper rifle for a planet with pressure of 3 atm?
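As a rough illustration of the question's trade-off, the trajectory can be integrated numerically with the drag equation; every projectile parameter below (muzzle velocity, drag coefficient, cross-section, mass) is a made-up placeholder, not data for any real shell:

```python
import math

def artillery_range(v0, angle_deg, g, rho, Cd=0.3, A=0.018, m=45.0, dt=0.005):
    """Semi-implicit Euler integration of a point projectile with quadratic drag.
    Cd (drag coefficient), A (cross-section, m^2) and m (mass, kg) are
    illustrative placeholders only."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        drag = 0.5 * rho * v * v * Cd * A      # the drag equation F_d
        ax = -(drag / m) * (vx / v)            # drag opposes the velocity
        ay = -g - (drag / m) * (vy / v)
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

v0, g = 800.0, 9.81
r_vacuum = artillery_range(v0, 45, g, rho=0.0)       # ~ v0^2/g, not infinite
r_earth = artillery_range(v0, 45, g, rho=1.225)      # sea-level air density
r_dense = artillery_range(v0, 45, g, rho=3 * 1.225)  # a 3x denser atmosphere
```

The vacuum range stays finite (v²/g for a 45° launch), while higher density shortens the range. Gravity and density enter the integration independently, which is exactly why a single pressure-based scaling factor cannot capture both.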
### - Art Gallery -
In abstract algebra, a magma (or groupoid; not to be confused with groupoids in category theory) is a basic kind of algebraic structure. Specifically, a magma consists of a set equipped with a single binary operation that must be closed by definition. No other properties are imposed.
History and terminology
The term groupoid was introduced in 1927 by Heinrich Brandt describing his Brandt groupoid (translated from the German Gruppoid). The term was then appropriated by B. A. Hausmann and Øystein Ore (1937)[1] in the sense (of a set with a binary operation) used in this article. In a couple of reviews of subsequent papers in Zentralblatt, Brandt strongly disagreed with this overloading of terminology. The Brandt groupoid is a groupoid in the sense used in category theory, but not in the sense used by Hausmann and Ore. Nevertheless, influential books in semigroup theory, including Clifford and Preston (1961) and Howie (1995) use groupoid in the sense of Hausmann and Ore. Hollings (2014) writes that the term groupoid is "perhaps most often used in modern mathematics" in the sense given to it in category theory.[2]
According to Bergman and Hausknecht (1996): "There is no generally accepted word for a set with a not necessarily associative binary operation. The word groupoid is used by many universal algebraists, but workers in category theory and related areas object strongly to this usage because they use the same word to mean 'category in which all morphisms are invertible'. The term magma was used by Serre [Lie Algebras and Lie Groups, 1965]."[3] It also appears in Bourbaki's Éléments de mathématique, Algèbre, chapitres 1 à 3, 1970.[4]
Definition
A magma is a set M equipped with an operation, •, that sends any two elements a, b ∈ M to another element, a • b. The symbol, •, is a general placeholder for a properly defined operation. To qualify as a magma, the set and operation (M, •) must satisfy the following requirement (known as the magma or closure axiom):
For all a, b in M, the result of the operation a • b is also in M.
And in mathematical notation:
$${\displaystyle a,b\in M\implies a\cdot b\in M}.$$
If • is instead a partial operation, then (M, •) is called a partial magma[5] or more often a partial groupoid.[5][6]
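The closure axiom is easy to check mechanically for a finite operation; the sketch below uses arbitrary example sets and operations, not anything from the article:

```python
def is_magma(elements, op):
    """Magma (closure) axiom: a . b must land back in the set for all a, b in it."""
    elements = set(elements)
    return all(op(a, b) in elements for a in elements for b in elements)

Z4 = {0, 1, 2, 3}
sub_mod4 = lambda a, b: (a - b) % 4   # closed on Z4, so (Z4, sub_mod4) is a magma
plain_sub = lambda a, b: a - b        # not closed on {0,1,2,3}: 0 - 1 = -1
```

Subtraction mod 4 also illustrates that a magma need not be a semigroup: (1 − 1) − 1 ≡ 3 but 1 − (1 − 1) ≡ 1 (mod 4), so the operation is closed yet non-associative.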
Morphism of magmas
A morphism of magmas is a function, f : M → N, mapping magma M to magma N, that preserves the binary operation:
f (x •M y) = f(x) •N f(y)
where •M and •N denote the binary operation on M and N respectively.
Notation and combinatorics
The magma operation may be applied repeatedly, and in the general, non-associative case, the order matters, which is notated with parentheses. Also, the operation, •, is often omitted and notated by juxtaposition:
(a • (b • c)) • d = (a(bc))d
A shorthand is often used to reduce the number of parentheses, in which the innermost operations and pairs of parentheses are omitted, being replaced just with juxtaposition, xy • z = (x • y) • z. For example, the above is abbreviated to the following expression, still containing parentheses:
(a • bc)d.
A way to avoid the use of parentheses entirely is prefix notation, in which the same expression would be written ••a•bcd. Another way, familiar to programmers, is postfix notation (Reverse Polish notation), in which the same expression would be written abc••d•, in which the order of execution is simply left-to-right (no currying).
The set of all possible strings consisting of symbols denoting elements of the magma, and sets of balanced parentheses is called the Dyck language. The total number of different ways of writing n applications of the magma operator is given by the Catalan number, Cn. Thus, for example, C2 = 2, which is just the statement that (ab)c and a(bc) are the only two ways of pairing three elements of a magma with two operations. Less trivially, C3 = 5: ((ab)c)d, (a(bc))d, (ab)(cd), a((bc)d), and a(b(cd)).
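The Catalan-number counts can be verified by brute-force enumeration of parenthesizations (a sketch; the symbol name is arbitrary):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def magma_words(n):
    """All fully parenthesized magma products of n copies of one symbol.
    Their count is the Catalan number C_{n-1} (n-1 applications of the operator)."""
    if n == 1:
        return ("x",)
    words = []
    for k in range(1, n):                    # split into a left and a right factor
        for left in magma_words(k):
            for right in magma_words(n - k):
                words.append("(" + left + right + ")")
    return tuple(words)
```

For three elements this enumerates exactly the two pairings (xx)x and x(xx) mentioned above, and for four elements the five listed products.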
There are $${\displaystyle n^{n^{2}}}$$ magmas with n elements so there are 1, 1, 16, 19683, 4294967296, ... (sequence A002489 in the OEIS) magmas with 0, 1, 2, 3, 4, ... elements. The corresponding numbers of non-isomorphic magmas are 1, 1, 10, 3330, 178981952, ... (sequence A001329 in the OEIS) and the numbers of simultaneously non-isomorphic and non-antiisomorphic magmas are 1, 1, 7, 1734, 89521056, ... (sequence A001424 in the OEIS).[7]
Free magma
A free magma, MX, on a set, X, is the "most general possible" magma generated by X (i.e., there are no relations or axioms imposed on the generators; see free object). It can be described as the set of non-associative words on X with parentheses retained.[8]
It can also be viewed, in terms familiar in computer science, as the magma of binary trees with leaves labelled by elements of X. The operation is that of joining trees at the root. It therefore has a foundational role in syntax.
A free magma has the universal property such that, if f : X → N is a function from X to any magma, N, then there is a unique extension of f to a morphism of magmas, f ′
f ′ : MX → N.
See also: Free semigroup, Free group, Hall set, and Wedderburn–Etherington number
Types of magma
Magmas are not often studied as such; instead there are several different kinds of magma, depending on what axioms the operation is required to satisfy. Commonly studied types of magma include:
Quasigroup
A magma where division is always possible
Loop
A quasigroup with an identity element
Semigroup
A magma where the operation is associative
Inverse semigroup
A semigroup with inverses.
Semilattice
A semigroup where the operation is commutative and idempotent
Monoid
A semigroup with an identity element
Group
A monoid with inverse elements, or equivalently, an associative loop, or a non-empty associative quasigroup
Abelian group
A group where the operation is commutative
Note that divisibility and invertibility each imply the cancellation property.
Classification by properties
Group-like structures
| | Totality* | Associativity | Identity | Invertibility | Commutativity |
|---|---|---|---|---|---|
| Semigroupoid | Unneeded | Required | Unneeded | Unneeded | Unneeded |
| Small Category | Unneeded | Required | Required | Unneeded | Unneeded |
| Groupoid | Unneeded | Required | Required | Required | Unneeded |
| Magma | Required | Unneeded | Unneeded | Unneeded | Unneeded |
| Quasigroup | Required | Unneeded | Unneeded | Required | Unneeded |
| Unital Magma | Required | Unneeded | Required | Unneeded | Unneeded |
| Loop | Required | Unneeded | Required | Required | Unneeded |
| Semigroup | Required | Required | Unneeded | Unneeded | Unneeded |
| Inverse Semigroup | Required | Required | Unneeded | Required | Unneeded |
| Monoid | Required | Required | Required | Unneeded | Unneeded |
| Commutative monoid | Required | Required | Required | Unneeded | Required |
| Group | Required | Required | Required | Required | Unneeded |
| Abelian group | Required | Required | Required | Required | Required |

*Closure, which is used in many sources, is an equivalent axiom to totality, though defined differently.
A magma (S, •), with x, y, u, z ∈ S, is called
Medial
If it satisfies the identity, xy • uz ≡ xu • yz
Left semimedial
If it satisfies the identity, xx • yz ≡ xy • xz
Right semimedial
If it satisfies the identity, yz • xx ≡ yx • zx
Semimedial
If it is both left and right semimedial
Left distributive
If it satisfies the identity, x • yz ≡ xy • xz
Right distributive
If it satisfies the identity, yz • x ≡ yx • zx
Autodistributive
If it is both left and right distributive
Commutative
If it satisfies the identity, xy ≡ yx
Idempotent
If it satisfies the identity, xx ≡ x
Unipotent
If it satisfies the identity, xx ≡ yy
Zeropotent
If it satisfies the identities, xx • y ≡ xx ≡ y • xx[9]
Alternative
If it satisfies the identities xx • y ≡ x • xy and x • yy ≡ xy • y
Power-associative
If the submagma generated by any element is associative
Flexible
if xy • x ≡ x • yx
A semigroup, or associative
If it satisfies the identity, x • yz ≡ xy • z
A left unar
If it satisfies the identity, xy ≡ xz
A right unar
If it satisfies the identity, yx ≡ zx
Semigroup with zero multiplication, or null semigroup
If it satisfies the identity, xy ≡ uv
Unital
If it has an identity element
Left-cancellative
If, for all x, y, and, z, xy = xz implies y = z
Right-cancellative
If, for all x, y, and, z, yx = zx implies y = z
Cancellative
If it is both right-cancellative and left-cancellative
A semigroup with left zeros
If it is a semigroup and, for all x, the identity, x ≡ xy, holds
A semigroup with right zeros
If it is a semigroup and, for all x, the identity, x ≡ yx, holds
Trimedial
If any triple of (not necessarily distinct) elements generates a medial submagma
Entropic
If it is a homomorphic image of a medial cancellation magma.[10]
Generalizations
See n-ary group.
Magma category
Auto magma object
Universal algebra
Magma computer algebra system, named after the object of this article.
Commutative non-associative magmas
Algebraic structures whose axioms are all identities
Groupoid algebra
Hall set
Could real iterates of the Taylor Series expansion of $b^x$ help to find a way to define tetration?
When we consider the Taylor Series expansion of $f(x)=b^x$ for some $b \in \mathbb{R}$, we see that $$b^x = 1 + \sum_{n=1}^{\infty}\frac{(\log(b))^n}{n!}x^n.$$ We can substitute $x$ for $b^x$ to find that $$b^{b^{x}} = 1 + \sum_{n=1}^{\infty}\frac{(\log(b))^n}{n!}b^{xn}.$$
Now, let's say that we want to find a function that accurately describes how we should raise $b$ to the $b$'th power $x$ times. We write $b^{(b)}_x$ to mean that $b$ should be raised to itself $x$ times, for some $x\in\mathbb{R}$. We can generalize the previous examples, and write $$b^{(b)}_x = 1 + \sum_{n=1}^{\infty}\frac{(\log(b))^n}{n!}b^{(b)n}_{x-1}.\qquad \text{(1)}$$ (Please note that $b^{(b)n}_{x}$ is not equal to $b^{(bn)}_{x}$.) Using this equation we can also state that $$b^{(b)}_{x-1} = 1 + \sum_{n=1}^{\infty}\frac{(\log(b))^n}{n!}b^{(b)n}_{x-2}.\qquad\text{(2)}$$ When we substitute (2) in (1), we find that $$b^{(b)}_x = 1 + \sum_{n=1}^{\infty}\frac{(\log(b))^n}{n!}\Big( 1 + \sum_{n=1}^{\infty}\frac{(\log(b))^n}{n!}b^{(b)n}_{x-2} \Big).$$ From now on, we write $\sum_{n=1}^{\infty}\frac{(\log(b))^n}{n!} = A$, because the next equations then look clearer. We could go on substituting in this manner $*$, until we arrive at $b^{(b)}_{(x-(x-1))}=b$. We then see that the equality $$b^{(b)}_x = 1 + \sum A \Big( 1 + \sum A \Big(\cdots \Big(1 + \sum_{n=1}^{\infty} \frac{(\log(b^b))^n}{n!}\Big)\Big)\Big)$$ holds when we iterate $f(x) = 1 + \sum_{n=1}^{\infty}\frac{(\log(b))^n}{n!}$ an $x-2$ amount of times, for real $x$.
Question 1: Is this possible?
Question 2: If so, how is this done? How do you describe the formula that precisely defines how $f(x)$ looks like after being iterated $x-2$ times?
Question 3: If such a formula is found, would it imply that a nice way of finding the fourth hyper operator (or "tetration") is found for real numbers?
Thanks,
Max
$*$EDIT: When we proceed in this manner when $x$ is not an integer, we will not find $x=1$. We should be able to iterate the function a real amount of times to find $x=1$.
-
You say "we could go on ...until we arrive at $b^{(b)}_{(x-(x-1))}=b$", but since you are iterating, the subindices you find as you go on are all of the form $x-n$ with $n$ an integer. So you will not find $x-1$ unless $x-1$ is one. – Mariano Suárez-Alvarez Nov 25 '10 at 20:50
@Mariano Suárez-Alvarez: whoops, that's true... Then the question would be: is it possible to iterate f(x) a non-integer amount of times? How? – Max Muller Nov 25 '10 at 20:59
The process you describe can be better expressed in terms of the Bell/Carleman matrix (see Wikipedia) associated with the function $f(x) = b^x$. I got myself used to the following notation:
let $V(x)$ denote a row vector of infinite dimension of consecutive powers of an argument x, $V(x)=[1,x,x^2,x^3, \dots]$, and let B denote the infinite matrix which performs $V(x) * B = V(b^x)$. The columns of B contain the power-series coefficients for the consecutive powers $(b^x)^0, (b^x)^1, (b^x)^2, \ldots$. B is then the (transposed) Carleman matrix.
Your formula and iteration construction can be expressed simply by powers of B: the h-fold nested infinite sums (your A) are captured by the formal matrix products $V(x)*B*B*\ldots*B = V(x)*B^h = V(b^{b^{\ldots^{b^x}}})$ (with h exponentiations). Unfortunately the fractional part of the iteration must then be expressed by a fractional power of B, which is not trivial.
But that is only one problem. We already have the problem of computing powers of B (the nested summation of your example): the convergence using matrices of finite size gets lost after a few iterations, and it is a special art to find closed forms of arbitrarily precise computable expressions for the power series even for the third or fourth iteration. I've tried this several different ways and could not get a significant improvement of the convergence behaviour, as long as I used the nesting which you describe above, for more than some exotic bases b near 1.
But there are ways to express the coefficients of the power series in finite expressions/sums involving exp/log, which we can assume to be available in arbitrary precision. This can be found by triangular decomposition of B and a modified way to arrive at the required powers. (See "exact entries" for a technical, not very explanatory, description; there might be better descriptions.)
But all this covers only the part of integer height (integer number of iterations). And for this we do not really need formal power series / matrix powers approximated by finitely truncated matrices: we have precise exponentiation for each base at hand. The crux is the part of fractional height/iteration count. Here there are some different approaches available; one of them is to compute fractional powers of B by diagonalization, to get the coefficients for a formal power series which represents the fractional iteration of $b^x$. However, to have series with real coefficients we must restrict ourselves to bases b between $\exp(-\exp(1))$ and $\exp(\exp(-1))$.
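The defining relation V(x) · B = V(b^x) can be checked with a finitely truncated matrix; the sketch below covers only the integer-height part, and the truncation size N and base b are arbitrary choices (convergence is good here because b is close to 1 and x is small):

```python
import math

def carleman(b, N):
    """Truncated N x N (transposed) Carleman matrix B of f(x) = b^x.
    Column j holds the power-series coefficients of (b^x)^j = b^(j*x),
    i.e. B[n][j] = (j*ln b)^n / n!."""
    L = math.log(b)
    return [[(j * L) ** n / math.factorial(n) for j in range(N)] for n in range(N)]

def V(x, N):
    """Row vector of consecutive powers [1, x, x^2, ...], truncated to length N."""
    return [x ** n for n in range(N)]

def vec_mat(v, M):
    """Row-vector times matrix product."""
    return [sum(v[n] * M[n][j] for n in range(len(v))) for j in range(len(M[0]))]

b, x, N = 1.2, 0.5, 32
lhs = vec_mat(V(x, N), carleman(b, N))   # V(x) * B
rhs = V(b ** x, N)                        # V(b^x)
```

Applying the matrix twice gives V(x) · B² ≈ V(b^(b^x)), and so on for any integer height; the hard part the answer describes, a fractional power of B, is not attempted here.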
-
Thanks Gottfried. I guess a lot of research still has to be done on tetration. – Max Muller Nov 26 '10 at 17:31
# Which set?
Algebra Level 3
The set of values of $x$ simultaneously satisfying the inequalities $$\dfrac{\sqrt{(x-8)(x-2)}}{\log_{0.3}\left(\frac{10}{7}\left(\log_{2}{5}-1\right)\right)} \ge 0$$ and $$2^{x-3} - 3! > 0$$ is:
## connor50 one year ago How many photons are produced in a laser pulse of 0.819 J at 531 nm?
1. aaronq
find the energy given by one photon of that wavelength E=hc/lambda divide the energy given 0.819 J by the energy of one photon
2. connor50
$3.74 \times 10^{-37}$?
3. aaronq
what energy did you get per photon
4. aaronq
0.000000000374 = 3.74*10^-10 J /photon
5. connor50
Divide that that by .819?
6. aaronq
the other way around 0.819 J/3.74*10^-10 J = # of photons
7. connor50
Ah.$2.19\times10^9$
8. aaronq
that sounds right
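With standard constants, the per-photon energy at 531 nm works out to about $3.74\times10^{-19}$ J, so a 0.819 J pulse contains roughly $2.19\times10^{18}$ photons (the mantissas in the thread are right; watch the powers of ten). A quick sketch:

```python
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 531e-9  # m
E_pulse = 0.819      # J

E_photon = h * c / wavelength    # energy of one 531 nm photon, ~3.74e-19 J
n_photons = E_pulse / E_photon   # ~2.19e18 photons in the pulse
```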
# Isometric projection
Isometric projection is a form of graphical projection—more specifically, an axonometric projection. It is a method of visually representing three-dimensional objects in two dimensions, in which the three coordinate axes appear equally foreshortened and the angles between any two of them are 120°. Isometric projection is one of the projections used in drafting engineering drawings.
The term "isometric" comes from the Greek for "equal measure", reflecting that the scale along each axis of the projection is the same (this is not true of some other forms of graphical projection).
One of the advantages of isometric perspective in engineering drawings is that 60° angles are easy to construct using only a compass and straightedge.
Visualization
An isometric view of an object can be obtained by choosing the viewing direction in a way that the angles between the projections of the "x", "y", and "z" axes are all the same, or 120°. For example, with a cube, this is done by first looking straight towards one face. Next the cube is rotated ±45° about the vertical axis, followed by a rotation of approximately ±35.264° (precisely arcsin(tan 30°)) about the horizontal axis.
In a similar way an "isometric view" can be obtained for example in a 3D scene editor. Starting with the camera aligned parallel to the floor and aligned to the coordinate axes, it is first rotated downwards around the horizontal axes by about 35.264° as above, and then rotated ±45° around the vertical axes.
Another way in which isometric projection can be visualized is by considering the view of a cubical room from an upper corner, looking towards the opposite lower corner. The "x"-axis is diagonally down and right, the "y"-axis is diagonally down and left, and the "z"-axis is straight up. Depth is also shown by height on the image. Lines drawn along the axes are at 120° to one another.
Mathematical
There are 8 different orientations to obtain an isometric view, depending into which octant the viewer looks. The isometric transform from a point $a_{x,y,z}$ in 3D space to a point $b_{x,y}$ in 2D space looking into the first octant can be written mathematically with rotation matrices as:

$$\begin{bmatrix} c_x \\ c_y \\ c_z \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & \sin\alpha \\ 0 & -\sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} \cos\beta & 0 & -\sin\beta \\ 0 & 1 & 0 \\ \sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix}$$

where $\alpha = \arcsin(\tan 30^\circ) \approx 35.264^\circ$ and $\beta = 45^\circ$. As explained above, this is a rotation around the vertical (here y) axis by $\beta$, followed by a rotation around the horizontal (here x) axis by $\alpha$. This is then followed by an orthographic projection to the x-y plane:

$$\begin{bmatrix} b_x \\ b_y \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} c_x \\ c_y \\ c_z \end{bmatrix}$$
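The two rotations and the projection can be sketched numerically; the check below confirms the equal foreshortening of the three axes (factor √(2/3) ≈ 0.816) and the 120° angle between the projected x and z axes:

```python
import math

def iso_project(p):
    """Rotate by beta = 45 deg about the vertical (y) axis, then by
    alpha = arcsin(tan 30 deg) ~ 35.264 deg about the horizontal (x) axis,
    then drop the depth coordinate (orthographic projection)."""
    ax, ay, az = p
    beta = math.radians(45.0)
    alpha = math.asin(math.tan(math.radians(30.0)))
    # rotation about the vertical (y) axis
    x1 = math.cos(beta) * ax - math.sin(beta) * az
    y1 = ay
    z1 = math.sin(beta) * ax + math.cos(beta) * az
    # rotation about the horizontal (x) axis, then project onto the x-y plane
    return (x1, math.cos(alpha) * y1 + math.sin(alpha) * z1)

px = iso_project((1, 0, 0))  # projected unit x axis
py = iso_project((0, 1, 0))  # projected unit y axis (the vertical)
pz = iso_project((0, 0, 1))  # projected unit z axis
```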
The other seven possibilities are obtained by either rotating to the opposite sides or not, and then inverting the view direction or not. (Ingrid Carlbom, Joseph Paciorek, "Planar Geometric Projections and Viewing Transformations", ACM Computing Surveys (CSUR) 10(4), pp. 465–502, Dec. 1978, doi:10.1145/356744.356750.)
Limits of axonometric projection
As with all types of parallel projection, objects drawn with axonometric projection do not appear larger or smaller as they extend closer to or away from the viewer. While advantageous for architectural drawings and sprite-based video games, this results in a perceived distortion, as unlike perspective projection, it is not how our eyes or photography usually work. It also can easily result in situations where depth and altitude are impossible to gauge, as is shown in the illustration to the right. An additional problem particular to isometric projection is when it becomes difficult to determine which "face" of the object is being observed. In the absence of proper shading—and for objects that are relatively perpendicular and similarly proportioned—it can become difficult to determine which is the top, bottom or side face of the object. This is because, in isometric projection, the projection of each face onto a two-dimensional plane has similar dimensions and area.
Most contemporary video games have avoided these situations by dropping axonometric projection in favor of perspective 3D rendering utilizing vanishing points. Some of the famous "impossible architecture" works of M. C. Escher, however, exploit them. "Waterfall" (1961) is a good example, in which the building is (roughly) isometric, but the faded background utilizes perspective projection.
"Isometric" projection in video games and pixel art
In the fields of computer and video games and pixel art, axonometric projection has been popular because of the ease with which 2D sprites and tile-based graphics can be made to represent a 3D gaming environment. Because objects do not change size as they move about the game field, there is no need for the computer to scale sprites or do the calculations necessary to simulate visual perspective. This allowed older 8-bit and 16-bit game systems (and, more recently, handheld systems) to portray large 3D areas easily. While the depth confusion problems illustrated above can sometimes be a problem, good game design can alleviate this. With the advent of more powerful graphics systems, axonometric projection is becoming less common.
The projection used in videogames usually deviates slightly from "true" isometric due to the limitations of raster graphics. Lines in the "x" and "y" axes would not follow a neat pixel pattern if drawn in the required 30° to the horizontal. While modern computers can eliminate this problem using anti-aliasing, earlier computer graphics did not support enough colors or possess enough CPU power to accomplish this. So instead, a 2:1 pixel pattern ratio would be used to draw the "x" and "y" axes lines, resulting in these axes following a 26.565° (arctan 0.5) angle to the horizontal. (Game systems that do not use square pixels could, however, yield different angles, including true isometric.) Therefore, this form of projection is more accurately described as a variation of dimetric projection, since only two of the three angles between the axes are equal (116.565°, 116.565°, 126.87°). Many in video game and pixel art communities, however, continue to colloquially refer to this projection as "isometric perspective"; the terms "3/4 perspective" and "2.5D" are also commonly used.
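The angle arithmetic for the 2:1 pixel pattern is quick to verify (a sketch):

```python
import math

# A 2-across, 1-down pixel step gives an axis angle of arctan(1/2) to the
# horizontal, rather than the 30 degrees of true isometric projection.
axis_angle = math.degrees(math.atan(0.5))   # ~26.565 degrees

# Resulting angles between the three drawn axes (vertical axis straight up,
# the other two descending left and right at axis_angle below horizontal):
between_x_and_vertical = 90 + axis_angle    # ~116.565 degrees
between_vertical_and_y = 90 + axis_angle    # ~116.565 degrees
between_x_and_y = 180 - 2 * axis_angle      # ~126.870 degrees
```

Only two of the three angles are equal, which is why the text classifies this as a dimetric rather than a true isometric projection.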
The term has also been applied to games that do not use the 2:1 pixel pattern ratio common among video games. "Fallout" and "SimCity 4", which use trimetric projection, have been referred to as "isometric" (GameSpot; IGN). Games that use oblique projection, such as "Ultima Online" (Gamasutra), as well as games that use perspective projection with a bird's-eye view, such as "The Age of Decadence" (Iron Tower Studios) and "Silent Storm" (ComputerAndVideoGames.com), are also sometimes referred to as being isometric, or "pseudo-isometric".
History of isometric video games
While the history of computer games saw some true 3D games as soon as the early 1970s, the first video games to use the distinct visual style of isometric projection in the meaning described above were arcade games in the early 1980s.
"Q*bert" and "Zaxxon" were both released in 1982. "Q*bert" showed a static pyramid drawn in an isometric perspective, with the player controlling a character that could jump around on the pyramid. "Zaxxon" employed scrolling isometric levels in which the player piloted a plane through the levels. A year later, in 1983, the arcade game "Congo Bongo" was released, running on the same hardware as "Zaxxon". It allowed the player character to move around in bigger isometric levels, including true three-dimensional climbing and falling. The same was possible in the 1984 arcade title "Marble Madness".
At this time, isometric games were no longer exclusive to the arcade market and also entered home computers with the release of "Ant Attack" for the ZX Spectrum in 1983. The magazine "CRASH" consequently awarded it 100% in the graphics category for this new "3D" technique (CRASH, issue 1, February 1984). A year later the ZX Spectrum saw the release of "Knight Lore", which is generally regarded as a revolutionary title that defined the subsequent genre of isometric adventure games (gamesTM Retro Volume 1, 2006; Steven Collins, "Game Graphics During the 8-bit Computer Era", SIGGRAPH Computer Graphics Newsletters).
Following "Knight Lore", many isometric titles were seen on home computers, to an extent that it was regarded as the second most cloned piece of software after "WordStar" (Krikke, J., "Axonometry: a matter of perspective", IEEE Computer Graphics and Applications 20(4), pp. 7–11, Jul/Aug 2000). One big success among those was the 1987 game "Head Over Heels" (CRASH, issue 51, April 1988). Isometric perspective was not limited to arcade/adventure games, though; for example, the 1989 strategy game "Populous" used isometric perspective.
Throughout the 1990s some very successful games like "Civilization II" and "Diablo" used a fixed isometric perspective. But with the advent of 3D acceleration on personal computers and gaming consoles, games using a 3D perspective generally started using true 3D instead of isometric perspective. This can be seen in the successors of the above games: beginning with "Civilization IV", the Civilization series uses full 3D. "Diablo II" used a fixed perspective like its predecessor, but optionally allowed for perspective scaling of the sprites in the distance to lend a pseudo-3D perspective (Market Wire, May 2000).
# How to relate speed of sound with relative humidity?
I am exploring the idea of measuring the humidity of a space using sound waves; however, I am having trouble finding a mathematical relationship between the speed of sound and the humidity level.
$c_{air} = 331.3 \sqrt{1 + \frac{T}{273.15}}$ (with $T$ in °C), but this is for dry air (0% RH).
How can I factor the effects of humidity into this relationship?
Speed of sound in a gas is given by the equation: $$c = \sqrt{\gamma R T}$$
where $\gamma = c_p/c_v$ ($c_p$ and $c_v$ are specific heats), $R$ is the specific gas constant, and $T$ is temperature. The specific heats of a gas change with humidity, so varying humidity will vary your calculated speed of sound.
This page has a calculator as well as a great explanation of how their formula works.
Hope this helps!
The speed of sound in a gas is:
$$c = \sqrt{\gamma R T}$$
where $\gamma = c_p/c_v$ is the ratio of specific heats, $R$ is the specific gas constant and $T$ is temperature. Both $\gamma$ and $R$ depend on the composition of the gas, which includes humidity in air.
The specific heats are $c_p = 1.005+1.82H$ (see this answer), where $H$ is the absolute humidity, and $c_v = c_p - R$. Finally, $R = R_{univ}/M_{gas}$ where $M_{gas}$ is the molecular weight of the gas (which depends on humidity).
To get it all in terms of relative humidity is just an exercise in unit conversion.
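Putting the pieces above together, here is a sketch in Python. The Magnus saturation-pressure approximation and the $c_p = 1.005 + 1.82H$ correlation are the assumptions here; the constants (molar masses, universal gas constant) are standard values, and air is treated as an ideal gas mixture, so take this as an estimate rather than a precision formula:

```python
import math

def speed_of_sound(temp_c, rel_humidity, pressure=101325.0):
    """Approximate speed of sound in humid air (m/s).

    c = sqrt(gamma * R_specific * T), with gamma and R_specific
    adjusted for moisture content. rel_humidity is 0..1.
    """
    T = temp_c + 273.15
    # Saturation vapour pressure of water (Pa), Magnus approximation
    p_sat = 610.94 * math.exp(17.625 * temp_c / (temp_c + 243.04))
    p_v = rel_humidity * p_sat            # partial pressure of water vapour
    # Absolute humidity: kg water per kg dry air
    H = 0.622 * p_v / (pressure - p_v)
    # Mixture molar mass (kg/mol): dry air 0.028964, water 0.018016
    x_v = p_v / pressure                  # mole fraction of water vapour
    M = (1 - x_v) * 0.028964 + x_v * 0.018016
    R_specific = 8.314462 / M             # J/(kg K)
    cp = (1.005 + 1.82 * H) * 1000.0      # J/(kg K), the correlation above
    cv = cp - R_specific
    gamma = cp / cv
    return math.sqrt(gamma * R_specific * T)
```

For dry air at 20 °C this comes out around 343 m/s, and raising the relative humidity nudges the value up slightly, which is exactly the effect you would be measuring.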
# [Solved]: Red black tree partition to $\sqrt{n}$ trees
Problem Detail:
This is a question I have stumbled upon in an old Algorithms test I found online:
A) Plan an algorithm that does the following: Input: a Red-Black tree. Output: $\sqrt{n}$ separate trees, such that every tree has $\sqrt{n}$ nodes. What is the complexity of the algorithm you planned? You must show your analysis.
B) Assume that you start from an empty Red-Black tree, and that the input is a set of nodes and not a Red-Black tree. Show how you can make a more efficient algorithm for partitioning the nodes into $\sqrt{n}$ Red-Black trees such that every tree has $\sqrt{n}$ nodes. What is the complexity of the new algorithm you planned, and how does it affect existing Red-Black tree functions? You must show your analysis.
Now I have answered A and I am pretty sure that's the best answer there is, but I need your help in telling me if I can do better. This is without analysis:
Algorithm:

1. Scan the Red-Black tree using an in-order traversal to build a sorted array out of it: O(n).
2. Divide the array into $\sqrt{n}$ sub-arrays and build a Red-Black tree out of every sub-array: O(n) total.
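To make the two steps concrete, here is a Python sketch. For brevity it uses plain BST nodes and rebuilds each chunk as a perfectly balanced BST instead of a genuine Red-Black tree (a perfectly balanced BST always admits a valid Red-Black colouring, so the O(n) bound is unaffected); `Node`, `inorder` and the other names are my own, not from the test:

```python
import math

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(root, out):
    # Step 1: O(n) in-order traversal, appending keys to `out` in sorted order
    stack, node = [], root
    while stack or node:
        while node:
            stack.append(node)
            node = node.left
        node = stack.pop()
        out.append(node.key)
        node = node.right

def build_balanced(keys, lo, hi):
    # Build a balanced BST from sorted keys[lo:hi] in O(hi - lo)
    if lo >= hi:
        return None
    mid = (lo + hi) // 2
    return Node(keys[mid],
                build_balanced(keys, lo, mid),
                build_balanced(keys, mid + 1, hi))

def partition(root, n):
    keys = []
    inorder(root, keys)                      # step 1: O(n)
    k = max(1, round(math.sqrt(n)))          # ~sqrt(n) chunks
    size = -(-n // k)                        # ceil(n / k) keys per chunk
    return [build_balanced(keys, i, min(i + size, n))
            for i in range(0, n, size)]      # step 2: O(n) total
```

Each resulting tree holds about $\sqrt{n}$ keys, and both steps together are linear in $n$.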
Now what I don't really understand is how to solve B. I'm not exactly sure whether the input in B is a Red-Black tree or just a set of nodes, so answers for either interpretation are welcome. A student told me that the complexity I should get in B is $O(\sqrt{n}\log n)$.
I need help reaching that, or maybe something better (hints and stuff).
For question B) you need to build $\sqrt{n}$ RB trees from $n$ bare nodes (there is no tree to start with, just a bunch of nodes), so each node must be processed at least once. Any such algorithm therefore takes $\Omega(n)$ time, so I think there is no hope of achieving the $O(\sqrt{n}\log n)$ bound.
## Introduction
Maritime transport is considered the backbone of international trade and the global economy1,2,3. With ports supporting the integration of production centres and consumer markets across borders4, there are large dependencies and feedbacks between changes in the size and structure of the economy (e.g. trade composition, supply-chain structure) and the expected freight flows through specific ports5,6. Similarly, changes (e.g. new infrastructure investments) or disruptions (e.g. port closures) to the maritime transport network can have implications for supply-chains across multiple countries and industries7.
The maritime transport and global supply-chain networks interact with one another on different spatial scales, with recent events illustrating the tight coupling between the two. On the largest spatial scale, the global trade network, the demand for maritime trade is driven by countries’ demand for trade, those countries supplying this trade, and the share of trade being maritime (i.e. modal split). Hence, relative changes in freight flows reflect changes in trade demand, supply and modal split. The COVID-19 pandemic, which affected port operations across the world, changed both demand and supply patterns simultaneously8. On the one hand, this disrupted maritime transport and supply-chains due to factory shutdowns, port closures and labour shortages9, while on the other hand this led to large trade bottlenecks at many ports due to shifting demand patterns10. Freight demand on the underlying maritime transport network, consisting of maritime routes that connect ports, is determined by the geographical demand for transport services and the network structure of the system. For certain commodities this network is known to be more centralised (e.g. containers) while for others it is more decentralised (e.g. bulk transport)11. The 2021 Suez blockage highlighted how a large shock to a specific route within the maritime transport network could affect multiple ports across the globe, and eventually the supply-chains depending on these ports12. Ultimately, trade flows handled at the port serve supply-chains across different hinterlands, either directly (e.g. firms directly receiving goods from ports) or indirectly (e.g. firms depending on other firms that receive goods from ports). For instance, Hurricane Katrina (2005), shutting down major Louisiana ports, led to large disruptions to the global grain supply, resulting in export losses for the United States, which rippled to dependent supply-chains globally and raised commodity prices13,14.
The criticality, that is the systemic (i.e. network-based) importance, of ports for the economy is often framed in terms of the absolute amount of trade flowing through a port, its network characteristics within the maritime transport network (e.g. node centrality)11,15,16, or in terms of its contribution to the local or regional economy (e.g. regional employment and value-added)17. These framings, however, ignore the primary function of ports as the physical infrastructure that connects supply-chains across countries1,4, and therefore fail to provide a comprehensive picture of the dependencies and feedbacks between ports and the economy.
Establishing a fine-scale representation of how each individual industrial sector, globally, makes use of maritime transport, and, on the other hand, how individual ports are critical to global supply chains can help us rethink the importance of ports, which can be informative for different disciplines. For instance, it could allow a better understanding of the geographical distribution of physical trade flows across supply-chains18,19, connect environmental footprints with commodity flows20,21, predict future port demand (in terms of volume and space required) as economies grow22, help allocate maritime emissions (~2.6% global greenhouse gas emissions in 2012) to countries and sectors23,24, and assess the potential supply-chain losses due to maritime transport disruptions25,26.
So far, a number of macroeconomic studies have examined the evolution of international trade and supply-chain interconnectivity27,28,29. This analysis is backed by advances in the provision of Multi-Regional Input–Output (MRIO) tables that describe the inter- and intra-industry dependencies within and between countries30,31,32,33. Although MRIO tables provide extensive data on inter- and intra-industry trade flows at national and regional scales, they do not provide insights into the domestic and international transportation systems that are used for these trade flows. Another strand of literature has analysed the network structure and evolution of maritime transport networks through a complexity science lens11,15,34,35,36,37,38,39. This research, however, focused solely on the shipping connections between ports, without incorporating information on the goods that are carried by maritime vessels, where goods are coming from and going to, and how goods are used in the economy. Hence, to date, there is still a spatial mismatch between information describing the structure of the global economy (i.e. global trade and supply-chain data) and a bottom-up representation of the transportation network (i.e. observed maritime transport flows) used to facilitate this economic structure.
Here, we present a new modelling framework that provides a comprehensive understanding of the different dimensions of the criticality of ports for domestic and global economies (e.g. on the trade, transport and supply-chain level) that are not captured in aggregate port-level trade statistics. To do this, we provide a globally consistent assessment of the links between ports and maritime trade, the transport networks they utilise (1378 ports across 207 countries), and the supply-chains they serve (1298 ports across 176 countries) (see Methods). This is achieved by first estimating the fraction of maritime trade in all bilateral trade flows and feeding this into the newly developed Oxford Maritime Transport (OxMarTrans) model that simulates the maritime and hinterland routes taken to transport maritime trade flows. The trade flows going through ports are then linked to a global supply-chain database (EORA MRIO tables32) to quantify the links and feedbacks between ports and the economy.
We find that around 50% of global trade (in value terms) is maritime, which reaches up to 76% for the mining and quarrying sector. Low income countries and small island developing states (SIDS) rely disproportionally on maritime trade: their maritime import fraction is 1.5 and 2.0 times higher, respectively, than the global average. Every USD flowing through a port contributes on average 4.3 USD in value to the global economy. We identify ports being critical for the global and domestic economy, showing how the top 5 macro-critical ports all handle goods that contribute >1.4% to the global economy, while 40 ports handle goods that represent >10% of the value of the domestic economies they serve (i.e. domestically critical ports). In addition, we find that every 1000 USD increase in final demand (i.e. the goods needed to meet final consumption and exports) results in a median 84.6 USD increase in maritime imports across the ports that serve these economies, with 30 individual ports experiencing >100 USD increase. Our results pave the way for a better understanding of the key links, dependencies and feedbacks between ports, the maritime infrastructure network and the global economy, which is essential information for sustainable infrastructure planning.
## Results
### Overview
The results summarise the model output on the different layers; the trade network layer, the transport network layer, and the port supply-chain layer. These three layers are conceptualised in Supplementary Fig. 1. The trade network layer results discuss the output of the global modal split model (i.e. the distribution of trade flows across transport modes) that quantifies the variations in a country’s dependency on maritime trade as a fraction of total trade on a commodity level. The transport network layer results outline several outputs of the OxMarTrans model. The OxMarTrans model simulates the route choice of millions of maritime freight flows between 3400 regions across 207 countries on the hinterland and maritime transport network. The output includes the aggregate global freight flows on the transport network and through the two main canals (Suez and Panama), the dependency of countries on maritime infrastructure in foreign jurisdictions through land-based connections and transhipments, the port-level trade flows, and the trade flow distribution across all ports. To quantify the domestic and global economic dependencies on trade flows through ports (i.e. the port supply-chain layer), we use the EORA MRIO tables32 that we extend to the port-level to link the commodities that flow through ports to the global supply-chains they serve. Two metrics are constructed to capture these dependencies; (1) the port-level output coefficient (PLOC) and (2) the port-level import coefficient (PLIC). The base year considered in this analysis is 2015, which is the latest available year in the EORA MRIO database (at the time of writing). Throughout this study, we adopt an 11-sector industry classification in line with the EORA MRIO to evaluate differences in criticality between sectors (Supplementary Table 1).
### Share of maritime transport in global trade
Within the trade network layer, the amount of maritime trade between countries is determined by the absolute value of trade across all modes between country pairs and the share of this being maritime. Our transport modal split model estimates the share of maritime trade for around 8 million bilateral trade flows globally on a commodity level (HS6). It should be noted that in this study the mode of transport is defined as the dominant transport mode (longest distance) in the supplier-consumer connection, which means that landlocked countries can still rely on maritime transport (see Methods).
We estimate that 9.4 billion tonnes of trade, equivalent to around 7.6 USD trillion in value terms, was maritime in 2015. The share of maritime trade in global trade is around 75% in terms of weight and 50% in terms of value. This number corresponds well with the estimated 9.96 billion tonnes of trade being discharged in ports in 2015 as reported by UNCTAD40. However, large differences exist between sectors. For instance, while 75.7% (86.0%) of Mining and Quarrying (sector 3) products are transported by means of maritime transport in value (weight) terms, most manufacturing sectors (sectors 4 to 11) transport only 40% – 57% (53% – 60%) of their trade in value (weight) terms using maritime transport.
Figure 1 shows the percentage of maritime transport in total imports (Fig. 1a) and total exports (Fig. 1b) per country, while Supplementary Figs. 2 and 3 display the same results per economic sector considered. The dominance or absence of maritime transport for trade is mainly determined by the geographical location of trading partners (e.g. distance, island state), the presence of alternative (fast and cheaper) modes, the value to weight ratio of the commodities, and the standard of living of the importing country (e.g. quality of logistics services)41.
As can be seen from Fig. 1, Caribbean islands, countries in Oceania and some countries in Africa (e.g. Somalia, Nigeria, Gabon) rely disproportionally on maritime transport for both imports and exports (Fig. 1a, b). European countries, in particular landlocked countries (e.g. Romania, Hungary, Switzerland), have a much lower share of maritime transport, mainly due to the large trade flows between European countries that use road, rail and inland waterway transport to move goods over relatively short distances42,43. Middle-Eastern (Saudi Arabia, United Arab Emirates) and South American (e.g. Brazil, Colombia) countries rely more on maritime transport for their exports compared to their imports. These countries mainly export raw materials (e.g. oil, coal, grain), which are predominantly shipped by maritime vessels, but import a more diversified mix of goods that are transported by multiple modes. Small Island Developing States (SIDS) rely disproportionally on maritime transport, with 86.5% of imports and 79.8% of exports being maritime, thus almost twice as much as non-SIDS countries. SIDS are often served by only a few maritime transport routes and experience high transportation costs44, making reliable maritime transport services critical for the well-functioning of SIDS’ economies.
Figure 1c shows the share of maritime transport in total and sector-specific imports grouped by the income level of countries (using the 2021 World Bank classification). Low income countries import on average 1.5 times more by means of maritime transport compared to high-income countries (68% versus 45%). The difference is largest for the manufacturing sectors (sectors 8 to 11), having maritime shares 1.5 – 1.8 times higher than high income countries. This difference can be explained by the fact that low income countries often trade low-value bulk goods, for which maritime transport is the only viable option, and relatively few high-value goods that are more often transported by aeroplane45. Even within the same continent, such as in Africa, maritime transport is often the only feasible mode of transport for certain goods as the road infrastructure lacks the reliability and capacity for efficient trucking, and border crossings can be time consuming46,47. Therefore, the integration of low income countries into complex manufacturing supply-chains, which critically depend on just-in-time logistics services48, could be hindered by their overreliance on maritime transport, which is considerably slower than air transport49,50.
### Global maritime transport flow allocation
The maritime transport network, consisting of ports and maritime routes transporting goods using different vessel types (e.g. tankers, containers), connects the locations of production to their demand markets. The OxMarTrans model predicts which ports and maritime routes, including locations of transhipments, are being used to transport the maritime trade flows between each country pair and per economic sector (see Methods). The underlying hinterland and maritime network consists of 1378 ports, with the port connections and maritime network capacities incorporated in the model based on a dataset of observed ship activities from Automatic Identification System (AIS) data9. The OxMarTrans model therefore helps identify the spatial connectivity of ports; the maritime subnetwork that is used to transport goods from and to a specific port (we show the spatial connectivity for nine ports in Supplementary Fig. 4).
Globally, to meet maritime trade demand, we estimate that 90.5 trillion tonnes-km of freight is transported across sea and an additional 33.4 trillion tonnes-km over land to connect hinterlands to ports. The maritime freight predicted by the model is consistent with the 84 trillion tonnes-km estimated by UNCTAD40. 43% of the total maritime tonnes-km is attributed to the Mining and Quarrying sector (sector 3) alone, while the manufacturing of Electrical and Machinery products (sector 9), Transport Equipment (sector 10) and Other Manufacturing goods (sector 11) together account for only 2.7% of total tonnes-km. Supplementary Fig. 5 shows the total throughput (sum of import, export and transhipment) per port and estimated flows on the maritime transport network, while Supplementary Fig. 6 shows a similar result per sector.
Many countries depend on the transport of goods passing the Suez or Panama canal. In total, our model predicts that around 1.1 USD trillion (13.9% of maritime trade) and 0.49 USD trillion (6.2% of maritime trade) passed the Suez and Panama canals, respectively, in 2015, in line with official statistics (see Supplementary Note 3). For the Panama canal, ports in the Gulf of Mexico, the west coast of South America, and parts of East Asia rely directly on goods being shipped through the canal (Fig. 2a). The Suez canal is important for trade going between Asia and Europe. On the east of the canal, the ports of Singapore, Jeddah, Colombo, Mina Jebel Ali are most dependent on the Suez canal, while on the west of the Suez canal the ports of Piraeus, Rotterdam, Marsaxlokk and Algeciras rely most on it (Fig. 2b).
### Cross-border maritime infrastructure dependencies
Both landlocked and maritime economies rely on maritime infrastructure in other countries because they either use ports in neighbouring countries to import or export goods, or they use transhipment services to ship goods from origin to destination. For instance, around 28% of the world’s container throughput in 2012 involved transhipment, where containers unloaded from a deepsea vessel are being transhipped to another deepsea vessel or a smaller vessel (i.e. feeder vessels) to serve otherwise unconnected port pairs51.
Using the OxMarTrans model, we estimate that approximately 16.4% of global port throughput (in value terms) is transhipped, while 19.4% of port throughput consists of imports to or exports from foreign countries connected via the hinterland transport network. Figure 2c shows the fraction of port throughput being foreign per port. In absolute terms, large transhipment hubs (Singapore, Algeciras, Valencia and Marsaxlokk) have a high share of foreign throughput. Additionally, ports in the Le Havre-Hamburg range (Le Havre, Antwerp, Rotterdam, Bremen) handle the largest amount of foreign import and export value, as they compete for trade going to, and coming from, the Central European hinterland52.
Regionally, some ports play key roles in serving landlocked countries or island states (see highlighted ports in Fig. 2c). In Africa, for instance, the port of Djibouti handles almost all of Ethiopia’s maritime trade, the ports of Dar Es Salaam (Tanzania) and Beira (Mozambique) are essential for landlocked countries in Sub-Saharan Africa, while the ports of Lomé (Togo) and Cotonou (Benin) are key for Western-African landlocked countries. In South America, the ports of Arica (Chile) and Ilo (Peru) handle the majority of maritime trade of Bolivia, while Puerto del Callao (Peru) is an important transhipment hub for South America. In Oceania, several ports (e.g. Brisbane, Auckland, Apra, Lae) serve as important transhipment hubs for Pacific island economies, with a similar observation for key regional transhipment hubs in the Caribbean region (see Fig. 2c).
### Distribution of trade flows per port
Several factors determine the total maritime trade flows going through ports (e.g. maritime connectivity, logistics services, presence of hinterlands). Figure 3a, b shows the distribution of imports (Fig. 3a) and exports (Fig. 3b) across all trade flows, with the top 10 largest ports annotated. We also show the global core ports, defined as those ports responsible for importing or exporting 50% of global trade (black edge colour). Core importing ports are located in North-America (Los Angeles-Long Beach, New York-New Jersey), Western Europe (Rotterdam), the Middle-East (Mina Jebel Ali) and Asia (Singapore, Shanghai) that serve the populated hinterlands (so-called gateway ports39) or industrial and logistics hubs. Among the core exporting ports are specialised ports that are critical for the exports of agricultural products (Vancouver, New Orleans, Santos), petrochemicals (Houston, Singapore, Rotterdam), iron ore (Port Hedland and Dampier), electrical and machinery manufacturing (Shanghai, Busan, Kaohsiung), car manufacturing (Ulsan, Nagoya, Bremerhaven), and oil and gas (Ras Tanura, King Fahad Industrial Port).
Trade is highly concentrated in a relatively small number of core ports. The trade unevenness expresses the number of ports that handle 10%, 50% and 90% of trade. Only 4 (3) ports are responsible for 10%, 56 (48) ports are responsible for 50%, while 378 (366) ports account for 90% of global maritime imports (exports) (Supplementary Table 2). This underlines that from a global perspective, the maritime transport network consists of a small number of core ports and a large number of secondary (i.e. periphery) ports.
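The unevenness counts above reduce to a cumulative-share computation over ranked port throughputs. A minimal sketch, using invented throughput figures rather than the study's port data (`ports_for_share` and the toy `flows` list are my own):

```python
def ports_for_share(throughputs, share):
    """Smallest number of top-ranked ports whose combined
    throughput covers `share` (0..1) of the total."""
    total = sum(throughputs)
    cum, count = 0.0, 0
    for t in sorted(throughputs, reverse=True):
        cum += t
        count += 1
        if cum >= share * total:
            break
    return count

# Hypothetical throughputs for a five-port toy network
flows = [50, 30, 10, 5, 5]
print(ports_for_share(flows, 0.5))   # -> 1 (the largest port alone covers 50%)
print(ports_for_share(flows, 0.9))   # -> 3
```

Applied to the real per-port import and export totals, this yields exactly the 4/56/378 (3/48/366) counts reported in Supplementary Table 2.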
The aggregate results do, however, hide the importance of certain ports at the sector level. Figure 3c shows the geographical location of the core importing and exporting ports per sector, showing a clear geographical clustering of trade flows that are either connected to important demand markets53, or closely located to large sector-specific industry clusters53. Agriculture trade (sector 1) has clear origin ports in the United States, Brazil and Argentina, serving ports in Europe and across Asia. The import and export hotspots of Mining and Quarrying (sector 3) and Food and Beverages (sector 4) products are more spread across the globe, reflecting the export specialisation of different regions (e.g. oil in the Middle-East, iron ore and coal in Australia, food products in Indonesia and Malaysia). The Wood and Paper manufacturing (sector 6) sector has large exporting ports in Scandinavia, the United States and China, that export timber products to ports in the United Kingdom, Japan and the Middle-East. Metal products (sector 8) are exported through Chinese, South African and Chilean ports and supplied to the Middle-East, South-East Asia and the United States. The remaining manufacturing sectors (sectors 5, 9-11) all have large exports in ports in Western-Europe, East-Asia and the United States, with goods imported in ports in the Middle-East, Australia and parts of South America.
The trade unevenness differs considerably per sector (see Supplementary Table 2). The largest unevenness is found for the exports of Textiles and Wearing Apparel (sector 5), manufacturing of Transport Equipment (sector 10) and Other Manufacturing (sector 11) while the lowest level of trade unevenness is found for the imports of Agricultural products (sector 1), Food and Beverages (sector 4), and Petroleum, Chemical and Non-Metallic Mineral products.
These sectoral heterogeneities do not only reflect the differences in the clustering of industries, but also economies of scale present in the transport of some goods54,55. For example, while for some highly concentrated sectors the vast majority of goods will be transported between a subset of core ports, other less concentrated sectors will use a more decentralised transport network. These sectoral differences reinforce the results found in previous studies that analysed the characteristics of networks of different types of maritime vessels (which are indicative of the sector) and found similarly critical differences between these vessel networks11,35,39.
### Port-level output coefficient
Every port is connected to one or multiple supply-chains in the domestic and foreign economies they serve, either through direct (e.g. through firms directly sending or receiving goods from a port) or indirect (e.g. through firms depending on other firms that send or receive goods from a port) economic connections. More specifically, the products that are imported through a port are either directly consumed in an economy or are used in production processes to produce goods for domestic consumption or export. Additionally, goods exported through a port are being used in production processes, or directly consumed, elsewhere. We call this the port supply-chain network. To understand the criticality of the trade facilitation function of ports for domestic and global supply-chains, we developed a metric, called the port-level output coefficient (PLOC), that captures the total industry output and consumption directly or indirectly dependent on the trade flows through a port, either in absolute terms (PLOCA) or relative to the amount of trade going through a port (PLOCR). This is done by removing the trade flows going through a port from the extended MRIO table and quantifying the output changes to the domestic and global economy (see Methods).
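The removal logic behind the PLOC can be illustrated with a standard Leontief hypothetical-extraction calculation. The sketch below is a deliberately simplified stand-in for the port-level removal from the extended EORA MRIO: it uses an invented three-sector economy and extracts one sector outright, whereas the actual method removes only the trade flows routed through a given port. All coefficients are illustrative:

```python
import numpy as np

# Toy 3-sector economy (coefficients invented, not EORA data)
A = np.array([[0.10, 0.20, 0.05],    # technical coefficients A[i, j]:
              [0.15, 0.05, 0.10],    # input from sector i per unit output of j
              [0.05, 0.10, 0.05]])
f = np.array([100.0, 80.0, 60.0])    # final demand

x_base = np.linalg.solve(np.eye(3) - A, f)   # Leontief: x = (I - A)^-1 f

# Hypothetical extraction of sector 0 (stand-in for removing a port's flows):
# zero its supplies, its inputs and its final demand, then re-solve.
A_ext = A.copy(); A_ext[0, :] = 0.0; A_ext[:, 0] = 0.0
f_ext = f.copy(); f_ext[0] = 0.0
x_ext = np.linalg.solve(np.eye(3) - A_ext, f_ext)

throughput = x_base[0]                 # "trade through the port" in this toy
ploc_abs = x_base.sum() - x_ext.sum()  # absolute output dependence (PLOCA)
ploc_rel = ploc_abs / throughput       # output per USD of throughput (PLOCR)
```

In the full model the removal is applied at the port rather than the sector level; the toy numbers here are only meant to show the mechanics behind the 4.3 USD-per-USD figure reported above.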
In relative terms (PLOCR), every USD of trade going through a port influences on average (5th−95th percentiles) 4.34 (3.84 – 5.03) USD of value in the global economy (Supplementary Fig. 7). Large relative values are found for ports in East-Asia (e.g. China, South-Korea, Taiwan), which are strongly integrated in global supply-chains, but also for some of the raw materials exporting ports in Australia (e.g. Port Hedland and Dampier) and Africa (e.g. Port of Saldanha), which are important for supply-chains downstream (e.g. firms using intermediate products that are produced using raw materials).
In absolute terms (PLOCA), some ports are important for the domestic economy, while others are more important for the global economy. In some cases, ports are critical for both, as outlined in Fig. 4a, which shows the top 10 most critical ports for the domestic economy and the global economy. The top 5 most critical ports for the global economy (Singapore, Shanghai, Busan, Rotterdam, Antwerp) all handle goods that directly or indirectly contribute to >1.4% of global industry output. In total, 94 ports are considered macro-critical for global supply-chains, indicating that more than 0.1% of global industry output depends on these ports. 40 ports are considered domestically critical, with over 10% of industry output dependent on trade going through a single port. Examples of some ports that are critical for the domestic economy but negligible on a global scale (dark blue or purple markers in Fig. 4a) are the ports of Port Louis (Mauritius, 26.9% of domestic output), Pointe-a-Pierre (Trinidad and Tobago, 24.9% of domestic output), Reykjavik (Iceland, 23.0% of domestic output) and Sitra (Bahrain, 25.3% of domestic output). The ports of Kaohsiung (Taiwan), Hong Kong (Hong Kong), Laem Chabang (Thailand), and Port Klang (Malaysia) (red markers in Fig. 4a) are found to be essential for both the domestic and global economy. A similar figure can be produced for the final consumption needs of countries, with globally and domestically critical ports shown in Supplementary Fig. 8. Although the overall spatial footprint is similar, some ports are more important for meeting final consumption, especially for some small island economies where single ports import over 35% of the final consumption requirement. Hence, the tendency to focus on the absolute size of trade going through a port to classify its importance ignores how some smaller ports are still critical for domestic economies.
### Position of ports in global supply-chains
To unpack the PLOC metric even more, one can characterise whether the goods that flow through a port are relatively more dependent on domestic or foreign production processes, and relatively more on forward (exported goods being used in production processes downstream in the supply-chain) or backward linkages (imported goods that are produced using production processes upstream in the supply-chain). The relative importance of these four components determines how ports are positioned differently within the global supply-chain network.
In Fig. 5, we show the relative importance of port throughput in terms of its contribution to industry output downstream (forward) or upstream (backward) in the supply-chain and the degree to which output is linked to domestic or foreign supply-chains. We show the position of a number of ports that are all considered macro-critical but located at opposite ends of the spectrum. The ports of Rotterdam, Singapore and Algeciras have large foreign dependencies, with Rotterdam and Singapore being positioned in the middle of supply-chains (mainly due to their role as petrochemicals hubs) and Algeciras more towards the end of supply-chains (given its transhipment of manufactured goods). Shanghai and Bremerhaven, on the other hand, have higher domestic dependencies and larger backward linkages. These ports are highly integrated with domestic manufacturing supply-chains (e.g. car manufacturing for Bremerhaven, and electronics and other manufacturing for Shanghai). The port of Los Angeles-Long Beach has large backward linkages, illustrating that it mainly imports goods at the end of the supply-chain, while Ulsan has large forward linkages as it plays a key role in the exports of domestically produced goods (e.g. vehicles). On the left-hand side of the spectrum are ports with mainly forward linkages, implying that they mainly export goods that are used in production stages downstream in the supply-chain, such as Itaqui (iron ore and grains) and Mina Al Ahmadi (oil).
The PLOC metrics illustrate how domestic and global supply-chains are tied to a port, and how ports are positioned differently in the global supply-chain network. Although beyond the scope of this work, this measure could help evaluate the potential losses within supply-chain networks if ports are disrupted by a shock. Moreover, it could help allocate maritime emissions embedded in freight flows going through ports to specific supply-chains.
### Port-level import coefficient
As economies grow, and final demand (i.e. domestic consumption and exports) changes in absolute terms and composition, imports through ports are necessary to facilitate this. Due to an increasing fragmentation (i.e. different stages of production in different countries) and globalisation (i.e. global expansion) of supply-chains27,56, the reliance on maritime imports to support final demand has increased. As a complementary metric to describe the feedback between ports and the economy, we use the extended MRIO table to estimate the direct and indirect (through interindustry dependencies) imports per port needed to produce the domestic consumption and exports in the economies they serve. The port-level import coefficient (PLIC, see Methods) quantifies the marginal change in port-level imports for every 1000 USD change in final demand across all economies.
Figure 6a highlights the 15 ports with the largest PLIC values. These top-15 ports all have PLIC values of >170 (up to 486), with 27 ports having a PLIC of >100. The ports with the largest PLIC values are relatively small ports serving island nations (e.g. Maldives, Aruba, Mauritius, French Polynesia), but also the port of Dar Es Salaam serving demand in Tanzania and the landlocked African hinterland. Some larger ports that function as important transhipment hubs (Singapore, Kingston, Marsaxlokk and Freeport) also have large PLIC values, indicating that they are not only essential for connecting ports across the region, but also to meet the final demand in their island economies.
As with the cross-border throughput dependencies, some ports are more sensitive to demand changes in foreign economies than in their domestic economy (Supplementary Fig. 9). For instance, some key ports in Africa (Djibouti, Berbera, Cotonou, Maputo) are more sensitive to changes in foreign demand than domestic demand, as they serve landlocked economies that are larger than their own. Similarly, in Europe, large foreign demand sensitivities are found for the ports of Bar (Montenegro) and Burgas (Bulgaria).
In general, larger PLIC values are found for ports in countries that have a limited number of importing ports and have a high overall trade openness, i.e. they rely disproportionally on foreign products to meet their domestic consumption and for use in domestic production processes that are later exported to other countries. To further explore the differences between countries, we aggregate the PLIC values to the economies they serve (country-level import coefficient, CLIC), indicating the USD increase in country-wide maritime imports due to a 1000 USD increase in final demand.
On a country level, for every 1000 USD increase in final demand, the ports that serve that country experience a median (maximum) 84.6 (501.5) USD increase in maritime imports, underlining large differences between countries. SIDS have a 1.5 times higher CLIC compared to non-SIDS countries (Fig. 6b). Figure 6c displays the CLIC across income groups, showing that low income countries have lower CLIC values, as their supply-chains are often less integrated and diverse. In general, manufacturing sectors have larger import coefficients, requiring more maritime imports per unit of final demand56. For instance, across all countries, the Agricultural (sector 1) and Mining and Quarrying (sector 3) sectors require on average 40 USD of maritime imports for every 1000 USD change in sectoral demand, while some manufacturing sectors (sectors 9–11) require on average 112–153 USD for every 1000 USD change in sectoral demand. Therefore, given that high-income countries are generally more diversified (e.g. higher manufacturing base) and better integrated within global supply-chains, they require more maritime imports per USD change in final demand.
The import coefficients (on a port and country level) help to understand how future trade flows through ports will change as countries develop (e.g. demand growth), supply-chains restructure (e.g. better supply-chain integration), and sector composition shifts (e.g. higher manufacturing base).
## Discussion
This study presents a comprehensive global analysis of the different dimensions of the criticality of 1300 individual ports for international trade, maritime transport and global supply-chain networks. The research is a significant step beyond conventional input–output analysis, which does not resolve the role of individual ports, and maritime network analysis, which does not reflect the sector-specific volumes of goods transported on the network, thereby providing a misleading prioritisation of ports’ criticality. Altogether, this work presents a new quantitative framework that allows one to rethink the role of specific ports in the domestic and global economy, as well as the cross-border dependencies on maritime infrastructure.
We find that approximately 50% of global trade by value is carried via maritime transport, although higher values are found for the Mining and Quarrying sector (76%). Maritime trade flows are highly concentrated in a small number of ports that benefit from economies of scale and are well-integrated with the maritime and hinterland networks. Around 50 ports (out of the 1380 considered) are responsible for 50% of global maritime trade, with this trade unevenness being much larger for certain sectors such as the manufacturing of Textiles and Wearing Apparel and Transport Equipment.
Low income economies and SIDS depend disproportionally on their port infrastructure for trade. Low income countries import 1.5 times more by means of maritime transport than high-income countries, while SIDS have a twice as high maritime import dependency compared to non-SIDS. Therefore, investments in reliable port infrastructure in low income countries and SIDS are essential if further economic growth is not to be inhibited by port capacity57. The benefits of increasing trade facilitation provided by ports may reach beyond the port boundaries, as ports tend to attract industry clusters58,59 and lower transaction costs in trade, which could lead to indirect benefits through access to international markets (e.g. food availability, expanding export markets)60,61,62.
We find large cross-border dependencies between ports and the economies they serve due to land connections or transhipment services. Globally, transhipment services and the use of ports in foreign (land-connected) countries account for 35% of global port throughput. We identify important cross-border links between landlocked countries in Africa and South America and specific coastal ports, as well as island economies in the Pacific and the Caribbean that rely on regional transhipment hubs. The mutual dependency of economies on foreign maritime infrastructure means that there are potential spillovers when shocks or structural changes occur to either the economy or the maritime network. For instance, strong economic growth in landlocked economies or improved cross-border transport networks between landlocked countries and their maritime neighbours (e.g. the Belt and Road Initiative and the Bioceanic Road Corridor) can lead to increasing demand at the connected ports.
Ports are further found to be essential for integrating domestic and global supply-chains. In relative terms, every USD flowing through a port contributes on average 4.3 USD in value to the economy. While some of the world’s largest ports are found to be critical for the global economy (>1.4% of global output depends on trade going through these ports), we identify a number of ports (40) in trade-dependent economies that are critical for >10% of domestic industry output. The position of ports within supply-chains depends on the relative importance of domestic versus foreign and forward versus backward supply-chain linkages. Ports of similar size may be found at different ends of the spectrum, which has important implications for the feedback between the economy and trade flows through ports, and for evaluating the potential magnitude and spatial extent of supply-chain losses if ports are disrupted.
Finally, we find that for every 1000 USD increase in final demand (e.g. domestic consumption and exports) in an economy, the ports serving that economy experience a median 85 USD increase in maritime imports. However, some (27) ports import over 100 USD per 1000 USD change in final demand in the economies they serve, most of which are ports serving small island economies, but also ports serving landlocked economies (e.g. Dar Es Salaam, Djibouti). While the maritime import requirement per USD demand change is lower for low income countries than high income countries, the import sensitivity of low income countries is expected to increase as economies grow, become more diversified and better integrated in global supply-chains.
Our quantitative modelling framework paves the way for future research in various disciplines. First, our disaggregated analysis of global trade flows could allow estimating the carbon emissions embedded in maritime transportation and can help allocate these emissions to countries and sectors23,63. Second, by incorporating various transport policies into the model, such as infrastructure investments (e.g. new transport routes), improved trade facilitation (e.g. reducing transit times at borders) or a (maritime) carbon tax, the changing allocation of freight flows could be evaluated. Third, by analysing future trade flows, the current analysis could help quantify future investment needs in terms of new port infrastructure. Finally, by coupling this framework to a disaster impact model64, the economy-wide losses (domestic and global) from port or maritime transport disruptions could be assessed, including future losses due to climate change (e.g. sea-level rise).
In conclusion, ports are closely tied to the economy by facilitating the trade flows that connect global supply-chain networks. Our research emphasises the need to rethink the key distinctive features of ports in terms of their criticality for the domestic and global economy, which are largely hidden in aggregate port-level trade statistics. We further highlight the need to integrate long-term planning of port infrastructure with a system-wide understanding of the interconnected transport and economic systems. Given the large societal dependencies on maritime transport, evaluating the key links, feedbacks and dependencies between ports and the economy is imperative for the sustainable development of economies.
## Methods
### Overview
We describe the methodology of the modal split model, the maritime transport model and the link between ports and supply-chains using the MRIO tables. Throughout the analysis, we use national economies as the spatial level of aggregation, which we further disaggregate to the port-level. This is because the international trade data and global supply-chain database are constructed on a country-by-country basis, restricting the use of subnational economic data. We recognise that this might bias some of the results, as some interpretations of the results might be related to the size of the economy. Further, throughout this research, ports are defined as one or multiple terminals within a specified port boundary, which have been delineated in line with the World Port Index, the most widely used database of ports.
### Modal split model
We develop a global modal split model to predict the share of maritime trade in every bilateral trade flow on a commodity level. A detailed description of the model is included in Supplementary Note 1. A mode choice model intends to predict the allocation of freight transport flows for a given Origin-Destination (O-D) pair over alternative and competing transport modes65. Transporting goods between every O-D pair using a certain mode has a given utility that the shipper intends to maximize, which includes mode-specific variables (cost, time), O-D specific variables (income, neighbouring countries), and commodity-specific variables (quantity, value-to-weight ratio, perishability). We fit the modal split model to reported modal split data in international trade from UN Comtrade66. We use this model to predict the modal split in every bilateral trade flow reported in the harmonized BACI trade database67. This database contains over 8 million trade flows on a commodity level, which we aggregate to an 11-sector classification system we adopt throughout this work (see Supplementary Table 1). This sector classification corresponds to the 11 commodity sectors included in the EORA MRIO table, which we use later on (see Link to input–output tables). This country-to-country maritime trade database is used to model the supply and demand of goods across countries globally, which are consequently allocated on the maritime transport network. An external validation of the model results is included in Supplementary Note 2.
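The utility-maximising mode choice described above is commonly implemented as a multinomial logit model. The sketch below illustrates this form for one O-D pair; the logit specification, the three competing modes and all coefficient values are illustrative assumptions, not the fitted parameters of the paper's model.

```python
import numpy as np

def mode_shares(cost, time, beta_cost=-0.004, beta_time=-0.03):
    """Return the predicted share of each transport mode for one O-D pair.

    cost : transport cost per mode (e.g. USD/tonne), one entry per mode
    time : transit time per mode (e.g. days), one entry per mode
    The coefficients are hypothetical; a real model would be estimated
    from reported modal split data (e.g. UN Comtrade).
    """
    cost = np.asarray(cost, dtype=float)
    time = np.asarray(time, dtype=float)
    utility = beta_cost * cost + beta_time * time  # deterministic utility
    expu = np.exp(utility - utility.max())         # numerically stable softmax
    return expu / expu.sum()

# Three competing modes for one O-D pair: sea, road, air (invented numbers)
shares = mode_shares(cost=[30.0, 120.0, 900.0], time=[20.0, 4.0, 1.0])
```

With these invented inputs, cheap-but-slow sea and fast-but-pricier road split most of the flow, while expensive air carries only a small share — the qualitative trade-off a modal split model is meant to capture.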
### Oxford maritime transport model
The new global maritime transport model developed for this study (the Oxford Maritime Transport model, or OxMarTrans), combines a top-down representation of transport demand (driven by predicted maritime trade flows) with a bottom-up (asset-level) representation of the maritime and hinterland transport network. Its main purpose is to accurately allocate trade flows between countries, which we disaggregate to administrative regions within countries, on the maritime transport network, taking into consideration the likely ports and maritime routes taken based on observed sector-specific capacities between 1380 ports from empirical vessel movement data (Automatic Identification System, or AIS). A detailed model description and validation is included in Supplementary Note 3.
To the best of our knowledge, OxMarTrans is the most detailed global maritime transport model available. It builds upon previously developed maritime transport flow models, either specifically for container flows5,68 or for multiple vessel types41, that are used to allocate trade between countries on the maritime transport network. However, the OxMarTrans model makes several notable improvements to those earlier models. First, it simulates flows between around 3400 subnational regions globally, instead of using country centroids, which better captures how different ports facilitate trade of specific hinterlands. Second, it includes a multi-modal hinterland transport network, which therefore captures how port choice is driven by a better integration (better connectivity or availability of alternative modes) of ports within their respective hinterlands. Third, we embed an observed maritime transport network, based on actual vessel movements, into the model, which therefore takes revealed route preferences (e.g. strategic route decisions) into consideration. Previous work has not included this, making it hard to realistically model route choices, in particular transhipment flows. Fourth, we add sector-specific constraints to the model framework, helping us to capture the specialisation of different ports and, hence, the specific cargo they handle. Fifth, we perform a flow allocation per economic sector, the output of which provides an explicit link with an MRIO, which has not been done in earlier model frameworks.
The model output captures, per origin and destination country and economic sector, the share of maritime trade going through specific ports, in terms of the points of export, transhipment and import, as well as the maritime and land routes taken. In this way, we can analyse both the trade flows on a port-level and the use of certain transport routes (e.g. the Suez and Panama canals or hinterland transport corridors) to trade goods between country pairs.
To connect the port-level trade flows to an I-O table, we use the latest EORA MRIO32 (2015 at the time of writing), which describes the intercountry and interindustry dependencies for 190 economies. Of the 207 countries included in the port-to-port trade network, 176 countries are included in the MRIO, leaving us with 1300 ports for the analysis. Trade flows included in the MRIO table are not always the same as those included in the BACI trade database67, and hence we can only modify overlapping trade flows for this analysis (since we only derive maritime percentages for these specific trade flows).
The import coefficient is derived in line with the work of Hummels et al.56, who used the concept of import coefficients to quantify the amount of imports embedded in the exports of a country (i.e. vertical specialisation). Although the methodology of Hummels et al.56 was developed for a single-country I-O table, Dietzenbacher69 showed that the same result holds for an MRIO. Our port-level import coefficient (PLIC) metric quantifies the amount of imports through a port (p) serving a country (k) that are embedded in exports (e, vector of exports) and domestic final consumption (c, vector of consumption). In an MRIO table, the input coefficients matrix (A) for a country is derived from its interindustry trade (Z) and industry output (x). For country 1, this consists of $$A^{11}=Z^{11}{\hat{x}}^{-1}$$ for domestically produced inputs and $$A^{k1}=Z^{k1}{\hat{x}}^{-1}$$ for inputs imported from country k (k ≠ 1).
The domestic output necessary to produce e is $$(I-A^{11})^{-1}e$$ and to produce c is $$(I-A^{11})^{-1}c$$, which requires imports $$M=\sum_{c=2}^{k}A^{c1}$$ (c = 2 to k denotes inputs from other countries). Hence, the total imports needed to meet e are $$s'M(I-A^{11})^{-1}e$$ and to meet c are $$s'M(I-A^{11})^{-1}c$$, with s a summation vector. To find the imported goods going through a port, we modify the M matrix using the port-to-port trade network: we first set M = 0 and then fill the M matrix with the fraction of country-to-country trade (share times trade flow) that goes through a port per sector (with $$A_{p}^{c1}$$ the port-level imports from country c to country 1 through port p). This results in a new $$M_{p}$$ per port that covers the input coefficients from country k to the host country of the port (country c = 1) that are transported through this port. Using this, we can find the PLIC metrics by
$$PLIC_{dom,p,c}=\frac{s'M_{p}(I-A_{p}^{11})^{-1}c_{c}}{s'c_{c}}$$
(1)
and
$$PLIC_{exp,p,c}=\frac{s'M_{p}(I-A_{p}^{11})^{-1}e_{c}}{s'e_{c}}$$
(2)
describing the port-level imports required to meet the final demand of the country the port serves ($$PLIC_{p}=PLIC_{dom,p}+PLIC_{exp,p}$$). The total import multiplier for a country (CLIC) is found by aggregating the PLIC measures of the ports that serve the demand of that country (p,c) ($$CLIC=\sum PLIC_{p,c}$$). The sector-specific import multipliers on a country level are found by replacing c and e with a vector with a 1 for the specific sector and a zero otherwise.
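As an illustration of Eqs. (1)–(2), the following sketch computes the PLIC for a fabricated 2-sector, 2-country MRIO. All numbers, including the assumed 60% port share of imports, are invented; only the matrix algebra follows the Methods.

```python
import numpy as np

# Hedged toy example of the port-level import coefficient (PLIC, Eqs. 1-2).
# Country 1 is the port's host country; country 2 is the sole foreign supplier.
Z11 = np.array([[20.0, 30.0],    # domestic interindustry inputs of country 1
                [10.0, 40.0]])
Z21 = np.array([[ 5.0, 15.0],    # inputs imported from country 2
                [ 8.0,  2.0]])
x = np.array([200.0, 300.0])     # gross sectoral output of country 1
c = np.array([ 80.0, 120.0])     # domestic final consumption
e = np.array([ 40.0,  60.0])     # exports

A11 = Z11 / x                    # A11 = Z11 x_hat^-1 (divide each column by x)
M   = Z21 / x                    # import coefficients (one foreign country here)
L   = np.linalg.inv(np.eye(2) - A11)   # domestic Leontief inverse

Mp = 0.6 * M                     # assume 60% of imports flow through this port

# Eqs. (1)-(2): scalar import requirements per unit of consumption / exports
plic_dom = (Mp @ L @ c).sum() / c.sum()
plic_exp = (Mp @ L @ e).sum() / e.sum()
plic = (plic_dom + plic_exp) * 1000   # USD of imports per 1000 USD final demand
```

The division `Z11 / x` relies on NumPy broadcasting to divide each column j by x[j], which is exactly the post-multiplication by $${\hat{x}}^{-1}$$ in the Methods.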
The port-level output coefficient (PLOC) metric is a variation of the Hypothetical Extraction Method (HEM)70,71,72 used in I-O analysis, in which a sector is hypothetically set to zero (the i-th row and j-th column of matrix A) in order to evaluate its interindustry dependencies and importance for the economy through changes in industry output. For the PLOC, we quantify the output changes to the economy by removing the trade flows going through a port from the I-O table. To do this, we use both supply-driven (Ghosh) and demand-driven (Leontief) versions of the I-O table to find the forward (supply-driven) and backward (demand-driven) linkages. Using a Ghoshian model is justified here as we look at reductions in industry output (see Rose and Wei for a discussion25). The PLOC metric is derived by (1) modifying the interindustry trade matrix (Z) and (2) modifying the final demand matrix (y) to account for the trade flows going through a port. First, we remove the port-level trade flows (both imports and exports) from Z and re-evaluate the new $$A_{p,1}$$ using the demand-driven model and the new $$B_{p,1}$$ using the supply-driven model ($$B_{p,1}={\hat{x}}^{-1}Z$$). We find the backward losses in industry output ($$\Delta x_{p,1,ind,b}$$) by re-calculating industry output ($$x_{p,1,ind}$$) with the modified direct requirements matrix:
$$\Delta x_{p,1,ind,b}=x-(I-A_{p,1})^{-1}y$$
(3)
and the forward losses in industry output ($$\Delta x_{p,1,ind,f}$$) by:
$$\Delta x_{p,1,ind,f}=x-v(I-B_{p,1})^{-1}$$
(4)
with v the vector of value added. The total change in industry output is the sum of the change in domestic output ($$\Delta x_{dom,ind,b}$$; $$\Delta x_{dom,ind,f}$$) and the change in output in the rest of the economy ($$\Delta x_{row,ind,b}$$; $$\Delta x_{row,ind,f}$$). Moreover, we evaluate the changes in industry output due to the port-level trade embedded in final consumption. This is done by modifying the demand matrix (y) with the equivalent reduction in domestic final consumption (imports) and the reduction in final consumption in other countries (exports). Output losses associated with changes in final consumption for a port in country 1 ($$y_{p,1}$$) can be found by solving:
$$\Delta x_{p,1,con,b}=x-(I-A)^{-1}(y-y_{p,1})$$
(5)
From $$\Delta x_{p,1,con,b}$$ we can find the changes in domestic output ($$\Delta x_{dom,con,b}$$) and the changes in output for the rest of the economy ($$\Delta x_{row,con,b}$$) in a similar fashion as described above. The forward losses associated with trade in final consumption are simply the trade flows of final consumption themselves, with imports leading to a reduction in domestic consumption ($$\Delta c_{dom,con,f}$$) and exports leading to a reduction in foreign consumption ($$\Delta c_{row,con,f}$$).
This yields the PLOCA metric, which can be derived from changes in output and consumption:
$$PLOCA=(\Delta x_{dom,ind,b}+\Delta x_{dom,con,b})+(\Delta x_{dom,ind,f}+\Delta c_{dom,con,f})+(\Delta x_{row,ind,b}+\Delta x_{row,con,b})+(\Delta x_{row,ind,f}+\Delta c_{row,con,f})$$
(6)
from which PLOCR can be derived:
$$PLOCR=\frac{PLOCA}{T}$$
(7)
with T the throughput going through the port. The relative importance of the global versus domestic economic linkages and of the forward versus backward economic linkages can be derived by dividing the different components of PLOCA (neglecting the final consumption changes). The global and domestic importance of ports is simply derived by dividing the total industry output changes by the global and domestic industry output, respectively. A similar exercise is done for the final consumption changes.
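The extraction step behind Eqs. (3)–(4) can be sketched on a toy single-region table as follows. The table, the port's trade flows and the normalisation by the original output vector x are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Hedged sketch of the hypothetical-extraction step behind the PLOC metric:
# remove a port's trade flows from the interindustry matrix Z and compare
# industry output before and after, using the demand-driven (Leontief) model
# for backward linkages and the supply-driven (Ghosh) model for forward ones.
Z = np.array([[30.0, 45.0],
              [25.0, 60.0]])
y = np.array([125.0, 215.0])      # final demand
x = Z.sum(axis=1) + y             # gross output: x = Z·1 + y
v = x - Z.sum(axis=0)             # value added: v' = x' - 1'Z

Z_port = np.array([[ 5.0,  0.0],  # flows assumed to pass through the port
                   [ 0.0, 10.0]])
Zp = Z - Z_port                   # extract the port's flows from Z

A_p = Zp / x                      # modified direct requirements (columns / x)
B_p = Zp / x[:, None]             # modified allocation coefficients (rows / x)

# Backward output loss, Eq. (3): delta_x_b = x - (I - A_p)^{-1} y
dx_b = x - np.linalg.inv(np.eye(2) - A_p) @ y
# Forward output loss, Eq. (4): delta_x_f = x - v (I - B_p)^{-1}
dx_f = x - v @ np.linalg.inv(np.eye(2) - B_p)
```

Because removing positive entries from Z shrinks both coefficient matrices, each loss vector is positive in every sector; summing the backward and forward components (plus the consumption terms) would give the PLOCA of Eq. (6).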
|
The formula for percentage yield is given by, Percentage yield= (Actual yield/theoretical yield )x100, Rearrange the above formula to obtain theoretical yield formula, Determine the theoretical yield of the formation of geranyl formate from 375 g of geraniol. Assume it can react with other reagents to create a molecule with a molar weight of 250, being consumed at a molar ratio of .2 (5 units of hydrogen per unit of product. Related questions. Answer: In this reaction there is only one reactant (H 2 O 2) so it must be the limiting reactant.Stoichiometry will be used to determine the moles of water that can be formed. This example problem demonstrates how to predict the amount of product formed by a given amount of reactants. Actual yield is the actual amount produced in the experiment. Percentage yield= (Actual yield/theoretical yield )x100. A more accurate yield is measured based on how much product was actually produced versus how much could be produced. Theoretical yield can range in between from 0 to 100, but percentage yield can vary in ranges. The stoichiometry of this reaction is such that every molecule of the limiting reagent gives one molecule of (CH 3) 2 C=CH 2. Actual yield is the amount of product you actually got while theoretical is the maximum possible yield. The actual yield is expressed as a percentage of the theoretical yield. To determine the theoretical yield of any chemical reaction, multiply the number of moles by the molecular weight. nitely, and (with certain limitations) differentiate to yield all the specialized cell types of the tissue from which it originated. Real Life Examples: Example #1: If 4 hamburgers were made for a dinner party, but 5 people showed up. Based on that value, you can find the percentage yield by using the ratio of … The answer is the theoretical yield, in moles, of the desired product. Theoretical Yield Formula Questions: 1. Difference between Theoretical Yield and Percent Yield. 
Both the calculations are different in their own uniqueness of calculating method; the exact same goes for the answers each yield obtains. Theoretical yield=(Actual yield/percentage yield) x 100, Your email address will not be published. Percentage yield is given as 94.1%. If you're seeing this message, it means we're having trouble loading external resources on our website. Summary; Definitions; How to Use Limiting Reagents. Determine the theoretical yield of H 2 O (in moles) in the following reaction, if 2.5 moles of hydrogen peroxide are decomposed.. 2H 2 O 2 → 2H 2 O + O 2. View Theoretical-and-Percent-Yield-Sample-Calculation.pdf from CHEMISTRY CH121 at Centennial College. Summary; Definitions; How to Use Limiting Reagents. Ask me questions: http://www.chemistnate.com Theoretical Yield Formula - Solved Examples & Practice Questions In theory, we can always predict the amount of desired product that will be formed at the end of a chemical reaction. And the amount that is predicted by stoichiometry is named as the theoretical yield whereas the real amount is actual yield here. Theoretical yield is commonly expressed in terms of grams or moles. Difference between Theoretical Yield and Percent Yield. 1. Problem Given the reaction Na 2 S(aq) + 2 AgNO 3 (aq) → Ag 2 S(s) + 2 NaNO 3 (aq) How many grams of Ag 2 S will form when 3.94 g of AgNO 3 … Translations of the phrase THEORETICAL YIELD from english to spanish and examples of the use of "THEORETICAL YIELD" in a sentence with their translations: the theoretical yield … In the above reaction, the expected theoretical yield of CaO obtained from 100g of CaCO 3 is 56g. The amount of product calculated in this way is the theoretical yield, the amount obtained if the reaction occurred perfectly and the purification method were 100% efficient. If we started with 1 mol of (CH3)3COH, how many moles of (CH3)2C=CH2 would we expect for a theoretical yield? neurons, astrocytes and oligodendrocytes) [21]. 
If we started with 1 mol of (CH3)3COH, how many moles of (CH3)2C=CH2 would we expect for a theoretical yield? Limiting Reagents Percent Yield . Worked example If heated, calcium oxide decomposes to form calcium oxide and carbon dioxide. If you begin with 10 g of isoamyl alcohol, 5 mL of acetic acid, and 1 ml of sulfuric acid, what is the theoretical yield of isoamyl acetate? Sample Calculation for Theoretical and Percent Yield of Aspirin. The theoretical yield of 39 ± 5 t/ha per single harvest simulated here is more than double of any reported wheat grain yield from the field, but whether this can actually be achieved needs to be demonstrated in indoor experiments. Simple enough. The molar yield of the product is calculated from its weight (132 g ÷ 88 g/mol = 1.5 mol). Your actual yield is then the yield you get from the reaction divided by the theoretical yield. And the amount that is predicted by stoichiometry is named as the theoretical yield whereas the real amount is actual yield here. Usually, the actual yield is lower than the theoretical yield because few reactions truly proceed to completion (i.e., aren't 100% efficient) or because not all of the product in a reaction is recovered. Theoretical yield is a very simple concept. Answer link. Theoretical Yields. Learn how to identify the limiting reactant in a chemical reaction and use this information to calculate the theoretical and percent yields for the reaction. The theoretical yield is what would be obtained if all of the limiting reagent reacted to give the product in question. Question: 2 g of salicylic acid Assuming that the reaction will go to completion we can predict this amount of product from the stoichiometric coefficients of the balanced chemical equation. Dictionary Thesaurus Examples ... O = H 2 SO 4 or as 2S0 2 -IH20 + 0 = H 2 S 2 0 6; and that in the case of ferric oxide 96% of the theoretical yield of dithionate is obtained, whilst manganese oxide only gives about 75%. 
The answer is theoretical yield = 1 mol. Step 3: Think about your result. For this reaction, two moles of AgNO3 is needed to produce one mole of Ag2S.The mole ratio then is 1 mol Ag2S/2 mol AgNO3, Step 3 Find amount of product produced.The excess of Na2S means all of the 3.94 g of AgNO3 will be used to complete the reaction.grams Ag2S = 3.94 g AgNO3 x 1 mol AgNO3/169.88 g AgNO3 x 1 mol Ag2S/2 mol AgNO3 x 247.75 g Ag2S/1 mol Ag2SNote the units cancel out, leaving only grams Ag2Sgrams Ag2S = 2.87 g Ag2S. However, if the actual yield is only 48g of CaO, what is the percentage yield? Answer: By substituting the values in the formula, Percentage yield= (48÷ 56) x 100% = 86%. Practice some actual yield and percentage problems below. The quantity of a product that is released from the reaction is usually expressed in terms of yield. Theoretical yield of NaCl in grams = 0.17 moles of NaCl × 58.44 g/mole. When it comes to calculating the maximum amount of products from the given limiting reactants of a balanced chemical reaction, we call it a theoretical yield. The only tricky part you might encounter is in the first step - because you have to distinguish between moles and equivalents. 
Todd Helmenstine is a science writer and illustrator who has taught physics and math at the college level. For the balanced equation shown below, if the reaction of 40.8 grams of C6H6O3 produces a 39.0% yield, how many grams of H2O would be produced? This is the theoretical yield. To calculate the theoretical yield of any reaction, you must know the reaction.
The two calculations differ in method, and so do the answers each yields. If all 4 hamburgers are given out, 4 people will have dinner, but 1 person will not have any. You expect to create six times as many moles of carbon dioxide as you have of glucose to begin with. Be sure that actual and theoretical yields are … Theoretical yield is the quantity of a product obtained from the complete conversion of the limiting reactant in a chemical reaction, and a reaction yield is reported as a percentage of that theoretical amount. Answer: in this reaction there is only one reactant (H2O2), so it must be the limiting reactant; stoichiometry will be used to determine the moles of water that can be formed. The actual yield is 417 g, which is the quantity of the desired product. Balanced equation: C6H6O3 + 6O2 → 6CO2 + 3H2O. Díaz and Skinner (2001) use the differences between the yield-to-maturity of a bond and its theoretical yield as given by an explicit term-structure model. If you actually carry out this reaction in a lab, you will be able to find the actual yield of the reaction. In this example there is only one reactant, (CH3)3COH, so this is the limiting reagent (remember HCl is a catalyst in this reaction).
The key to solving this type of problem is to find the mole ratio between the product and the reactant.

Step 1: Find the molar masses of AgNO3 and Ag2S. From the periodic table: Ag = 107.87 g, N = 14.01 g, O = 16.00 g, S = 32.01 g. Molar mass of AgNO3 = 107.87 g + 14.01 g + 3(16.00 g) = 169.88 g. Molar mass of Ag2S = 2(107.87 g) + 32.01 g = 247.75 g.

Step 2: Find the mole ratio between product and reactant. The reaction formula gives the whole numbers of moles needed to complete and balance the reaction.

In this example there is only one reactant, (CH3)3COH, so this is the limiting reagent (remember HCl is a catalyst in this reaction). How many people are going to be fed at this dinner party? Using the theoretical yield equation helps you find the theoretical yield from the moles of the limiting reagent, assuming 100% efficiency. The ratio of the actual yield to the theoretical yield gives the percent yield. For example: theoretical yield H2O = 1.50 mol H2 × (2 mol H2O / 2 mol H2) = 1.50 mol H2O. Calculate the percentage yield: the percent yield is simply the actual yield divided by the theoretical yield, multiplied by 100. Theoretical yield: the maximum amount of product a complete chemical reaction will produce, based on the amount of limiting reagent. Limiting reactant (limiting reagent): the reactant that determines how much product can be made, due to its limited amount.
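The Ag2S arithmetic above chains the molar masses, the mole ratio, and the gram conversion into one product of unit factors. A minimal Python sketch; variable names are illustrative:

```python
# Molar masses assembled from the periodic-table values in the text
M_AgNO3 = 107.87 + 14.01 + 3 * 16.00   # 169.88 g/mol
M_Ag2S  = 2 * 107.87 + 32.01           # 247.75 g/mol

grams_AgNO3 = 3.94                     # AgNO3 is fully consumed (Na2S in excess)
moles_AgNO3 = grams_AgNO3 / M_AgNO3
moles_Ag2S  = moles_AgNO3 * (1 / 2)    # mole ratio: 1 Ag2S per 2 AgNO3
grams_Ag2S  = moles_Ag2S * M_Ag2S
print(round(grams_Ag2S, 2))            # 2.87, matching the worked example
```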
The quantity of a product obtained from a reaction is expressed in terms of the yield of the reaction. The % yield is calculated from the actual molar yield and the theoretical molar yield (1.5 mol ÷ 2.0 mol × 100% = 75%). I can use the same everyday examples for finding the theoretical yield as I used for finding the limiting reagent. Note that the only requirement for performing this calculation is knowing the amount of the limiting reactant and the ratio of the amount of limiting reactant to the amount of product. A chemist making geranyl formate uses 375 g of starting material and collects … Theoretical yield is commonly expressed in terms of grams or moles. Step 1: List the known quantities and plan the problem. Example: let's consider a simple example first, equation 3 from above. Sometimes you will also need to calculate the theoretical yield from the given chemical reaction. The percentage yield is the ratio of actual yield to theoretical yield expressed as a percentage: (37 g / 100 g) × 100% = 37%. Thus, when the product of the reaction was used as catalyst at 20 mol% loading and with 86% ee, the newly generated product was isolated in 67% yield and 35% ee. Actual yield is the actual amount produced in the experiment. It is also possible for the actual yield to be more than the theoretical yield.
The theoretical yield of aspirin is 3.95 grams. Explanation — Step 1, data given: mass of salicylic acid = 3.03 g; volume of acetic anhydride = 3.61 mL; density of acetic anhydride = 1.08 g/cm³. Step 2, the balanced equation: C4H6O3 + C7H6O3 → C9H8O4 + C2H4O2. Step 3: calculate the moles of salicylic acid. To find the yield you can divide the masses (actual yield in mass divided by theoretical yield in mass) or, equivalently, moles divided by moles. The first example of such a process was reported by Soai in 1990, in the irreversible enantioselective addition of dialkylzinc reagents to pyridine-3-carbaldehyde (Figure 2) [20]. This predicted quantity is the theoretical yield. Number of moles of PCl5: n = 93.8 / 208.5 = 0.449 mol. The only tricky part you might encounter is in the first step, because you have to distinguish between moles and equivalents. Real-life example #1: 4 hamburgers were made for a dinner party, but 5 people showed up; how many people are going to be fed? Theoretical yield of NaCl in grams = theoretical yield in moles × molar mass of NaCl. The theoretical molar yield is 2.0 mol (the molar amount of the limiting compound, acetic acid). An example of theoretical yield: imagine you buy a box of 12 cookies. Theoretical yield is the amount of product a reaction would produce if the reactants reacted completely.
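The aspirin figure follows from salicylic acid as the limiting reactant, which converts 1:1 to aspirin in the balanced equation above. A minimal sketch, with standard molar masses filled in (these values are not stated in the text and are an assumption):

```python
# Standard molar masses (assumed, not given in the original text)
M_salicylic = 138.12   # g/mol, C7H6O3
M_aspirin   = 180.16   # g/mol, C9H8O4

moles_salicylic = 3.03 / M_salicylic              # limiting reactant
theoretical_aspirin = moles_salicylic * M_aspirin  # 1:1 mole ratio
print(round(theoretical_aspirin, 2))               # 3.95 g, as stated
```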
Theoretical Yield, Example 2: consider the acid-catalyzed esterification of isoamyl alcohol to produce isoamyl acetate. The theoretical yield will be calculated in grams, because the theoretical yield equation gives the amount of the expected product. This is called the percent yield. The reaction as written above is balanced, with one mole of ethanol producing one mole of ethylene, therefore the stoichiometry is 1:1. Theoretical yield of NaCl in grams = 9.93 grams; 2.87 g of Ag2S will be produced from 3.94 g of AgNO3. The ratio of carbon dioxide to glucose is 6:1. Molar mass of H3PO4 = 1 × 3 + 31 × 1 + 16 × 4 = 98 g/mol. This will give you the theoretical yield (grams) of product. Although you now have nine cookies, the theoretical yield is 12. For a theoretical yield example, assume we have 20 grams of hydrogen gas, which has a molar mass of 2 g/mol. A reaction yield is reported as a percentage of the theoretical amount. Theoretical yield can also be worked out using moles. If I bought a pack of 12 graham crackers and used two graham crackers per s'more, I would only be able to have 6 s'mores. Molar mass of H2O = 18 g/mol. Now we will use the actual yield and the theoretical yield to calculate the percent yield. Determine the theoretical yield of H2O (in moles) in the following reaction, if 2.5 moles of hydrogen peroxide are decomposed: 2H2O2 → 2H2O + O2.
Your percent yield is the yield you actually get from the reaction, divided by the theoretical yield. The mass of oxygen gas must be less than the $$40.0 \: \text{g}$$ of potassium chlorate that was decomposed. The basic equation is: grams product = grams reactant × (1 mol reactant / molar mass of reactant) × (mole ratio product/reactant) × (molar mass of product / 1 mol product). Example 1: number of moles of H2O = 20.3 g ÷ 18 g/mol = 1.127 mol. Sample calculation: the theoretical yield and percent yield for caffeine. Theoretical yield is simply the amount of product obtained when a chemical reaction occurs perfectly. The actual and theoretical yields can be expressed as: number of moles; mass (usually for solid products); or gaseous volume (usually for gas products). The theoretical yield would be 4, because 4 people were fed. Given the reaction Na2S(aq) + 2 AgNO3(aq) → Ag2S(s) + 2 NaNO3(aq), how many grams of Ag2S will form when 3.94 g of AgNO3 and an excess of Na2S are reacted together? A chemist making geranyl formate uses 375 g of starting material and collects 417 g of purified product.
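The geranyl formate problem above can be worked with the basic unit-factor equation. A sketch under the assumption that the starting material is geraniol converting 1:1 to geranyl formate; the molar masses below are standard values (C10H18O and C11H18O2), not given in the text:

```python
# Assumed standard molar masses
M_geraniol = 154.25          # g/mol, C10H18O
M_geranyl_formate = 182.26   # g/mol, C11H18O2

grams_geraniol = 375.0
# grams product = grams reactant / M_reactant * (1:1 mole ratio) * M_product
theoretical = grams_geraniol / M_geraniol * M_geranyl_formate
percent = 417.0 / theoretical * 100
print(round(theoretical), round(percent, 1))  # ~443 g theoretical, ~94.1% yield
```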
The percent yield of a reaction is the ratio of the actual yield to the theoretical yield, multiplied by 100 to give a percentage: $\text{percent yield} = {\text{actual yield} \; (g) \over \text{theoretical yield} \; (g) } \times 100\%$ Theoretical yield is the quantity of a product obtained from the complete conversion of the limiting reactant in a chemical reaction. When reactants are not present in stoichiometric quantities, the limiting reactant determines the maximum amount of product that can be formed from the reactants. The theoretical yield of $$\ce{O_2}$$ is $$15.7 \: \text{g}$$.

Material | Theoretical shear strength (GPa) | Experimental shear strength (GPa)
Ag       | 1.0                              | 0.37
Al       | 0.9                              | ...

Materials with a perfect single-crystal structure and defect-free surfaces have been shown to demonstrate yield stress approaching the theoretical value. We are going to divide the actual yield by the theoretical yield: it's given that we have 0.21 moles, divided by the expected 0.3 moles and multiplied by 100%. If all 4 hamburgers are given out, 4 people will have dinner, but 1 person will not have any. If you begin with 10 g of isoamyl alcohol, 5 mL of acetic acid, and 1 mL of sulfuric acid, what is the theoretical yield of isoamyl acetate? To calculate the theoretical yield of any reaction, you must know the reaction.
Now we will work an example with the theoretical yield formula to make it clearer. In this example, the 25 g of glucose equates to 0.139 moles of glucose. Percentage yield = (actual yield ÷ theoretical yield) × 100%. The theoretical yield would be 4, because 4 people were fed. Determine the theoretical yield of the formation of geranyl formate from 375 g of geraniol. Theoretical Yield, Example 1: what is the theoretical yield of ethylene in the acid-catalyzed production of ethylene from ethanol, if you start with 100 g of ethanol? Theoretical yield can also be worked out using moles. By analogy, if an investor were evaluating a bond with both call and put provisions, she would calculate the yield-to-worst (YTW) based on the option terms that give the lowest yield. What is the theoretical yield in grams for this reaction? Theoretical yield is the amount of product a reaction would produce if the reactants reacted completely. Percent yield is a measurement that indicates how successful a reaction has been; to find the actual yield, simply multiply the percentage yield by the theoretical yield.
Theoretical yield formula. For the balanced equation shown below, if 32.4 grams of C6H5F were reacted with 102 grams of O2, how many grams of CO2 would be produced? He holds bachelor's degrees in both physics and mathematics. Theoretical Yield Formula Questions: 1. The quantity of a product released from a reaction is usually expressed in terms of yield. Number of moles = 20.3 g ÷ 18 g/mol = 1.127 mol. Another example: you had 5 rings but regrettably lose one. So we can see that the limiting reagent is H2O. The theoretical yield refers to the amount that should form when the limiting reagent is completely consumed. The amount of product predicted by stoichiometry is called the theoretical yield, whereas the amount actually obtained is called the actual yield. Unfortunately, you drop 3 of them. The percentage yield can range from 0 to 100%. This is 70%, and that is the yield of this reaction. Rearrange the above formula to obtain the theoretical yield formula.
So, to stop you from wondering how to find theoretical yield, here is the theoretical yield formula:

mass of product = molecular weight of product × (moles of limiting reagent in reaction × stoichiometry of product)

Example 1: CaCO3 ⇒ CaO + CO2. The theoretical yield of CaO obtained from 100 g of CaCO3 is 56 g. How much KCl and O2 is produced from 100 g of KClO3 reactant? If I bought a pack of 12 graham crackers and used two graham crackers per s'more, I would only be able to have 6 s'mores. Rearranging the percentage-yield formula gives actual yield = (percentage yield × theoretical yield) ÷ 100. Percent yield measures how successful a reaction has been: divide the actual yield by the theoretical yield and multiply by 100.
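The CaCO3 decomposition example (100 g of CaCO3 giving about 56 g of CaO) can be checked numerically. A minimal sketch using standard molar masses (assumed values, not stated in the text):

```python
# Assumed standard molar masses
M_CaCO3 = 40.08 + 12.01 + 3 * 16.00  # 100.09 g/mol
M_CaO   = 40.08 + 16.00              # 56.08 g/mol

# CaCO3 -> CaO + CO2 is a 1:1 mole ratio
theoretical_CaO = 100.0 / M_CaCO3 * M_CaO
print(round(theoretical_CaO))  # 56 g, matching the figure in the text
```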
## Intermediate Algebra for College Students (7th Edition)
$1.06\times10^{-18}$ gram
Multiply the mass of one oxygen molecule by 20,000. Note that $20,000 = 2\times10^{4}$ in scientific notation. $=(5.3\times10^{-23})\times(2\times10^{4})$ $=(5.3\times2)(10^{-23}\times10^{4})$ $=10.6\times10^{-23+4}$ $=10.6\times10^{-19}$ gram. Then, write 10.6 in scientific notation: $=1.06\times10^{1}\times10^{-19}$ $=1.06\times10^{-18}$ gram
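The same multiplication can be verified numerically; a quick sketch:

```python
mass_one_molecule = 5.3e-23        # grams, one oxygen molecule
total = mass_one_molecule * 2e4    # 20,000 molecules
print(total)                       # ~1.06e-18 grams
```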
# Whole number solutions
#### ssome help
Prove that $n \ge 32$ can be paid in \$5 and \$9 bills, i.e. the equation $5x+9y=n$ has solutions $x, y \in \mathbb{Z}_0^+$ for $n \in \mathbb{Z}^+$ and $n \ge 32$.

See a similar problem here. On this forum, the dollar sign starts a "mathematical mode" where one can use special commands to produce symbols like $\pi$ and $\int$. If you want to write a dollar sign, you can type dollar, backslash and two dollars, like this: `$\$$`.
#### Klaas van Aarsen
##### MHB Seeker
Staff member
Re: \$ solutions

Alternatively you can type:

Code: \$
which comes out as \$.

#### Amer

##### Active member

Re: \$ solutions
Prove that $n \ge 32$ can be paid in \$5 and \$9 bills, i.e. the equation $5x+9y=n$ has solutions $x, y \in \mathbb{Z}_0^+$ for $n \in \mathbb{Z}^+$ and $n \ge 32$.
Induction on n: for n = 32, take x = 1, y = 3
$$5 + 3(9) = 32$$
note that $$1 = 2(5) - 9$$
so $$33 = 5 + 2(5) + 3(9) - 9$$
Suppose it is true for an integer k ≥ 32: there exist nonnegative integers x, y such that
$$5x + 9y = k$$
for k+1
$$k+1 = 5x + 9y +1$$
choose 1 = 2(5) - 9 if y ≥ 1, giving k+1 = 5(x+2) + 9(y-1)
if y = 0
then k is a multiple of 5 that is 35 or larger, so x ≥ 7, and we choose
$$1 = -7(5) + 4(9)$$
giving k+1 = 5(x-7) + 9(4)
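The inductive construction above can be brute-force checked for a range of n; a quick Python sketch (not part of the original thread):

```python
def representable(n):
    """Return (x, y) with 5*x + 9*y == n and x, y >= 0, or None if impossible."""
    for y in range(n // 9 + 1):
        rem = n - 9 * y
        if rem % 5 == 0:
            return (rem // 5, y)
    return None

# Base case from the proof, and the largest non-representable value (31)
assert representable(32) == (1, 3)
assert representable(31) is None
assert all(representable(n) is not None for n in range(32, 1000))
```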
library(tidyverse)
## ── Attaching packages ──────────────────────────────────────────────────────────────────────────────────── tidyverse 1.2.1 ──
## ✔ ggplot2 3.2.1 ✔ purrr 0.3.3
## ✔ tibble 2.1.3 ✔ dplyr 0.8.3
## ✔ tidyr 1.0.0 ✔ stringr 1.4.0
## ✔ readr 1.3.1 ✔ forcats 0.4.0
## ── Conflicts ─────────────────────────────────────────────────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::lag() masks stats::lag()
library(knitr)
options(scipen=4)
We’ll begin by doing all the same data processing as in previous lectures
# Load data from MASS into a tibble
# Rename variables
# Recode categorical variables
birthwt <- as_tibble(MASS::birthwt) %>%
rename(birthwt.below.2500 = low,
mother.age = age,
mother.weight = lwt,
mother.smokes = smoke,
previous.prem.labor = ptl,
hypertension = ht,
uterine.irr = ui,
physician.visits = ftv,
birthwt.grams = bwt) %>%
mutate(race = recode_factor(race, `1` = "white", `2` = "black", `3` = "other")) %>%
mutate_at(c("mother.smokes", "hypertension", "uterine.irr", "birthwt.below.2500"),
~ recode_factor(.x, `0` = "no", `1` = "yes"))
## Assessing significance of factors and interactions in regression
### Factors in linear regression
#### Interpreting coefficients of factor variables
In the case of quantitative predictors, we’re more or less comfortable with the interpretation of the linear model coefficient as a “slope” or a “unit increase in outcome per unit increase in the covariate”. This isn’t the right interpretation for factor variables. In particular, the notion of a slope or unit change no longer makes sense when talking about a categorical variable. E.g., what does it even mean to say “unit increase in major” when studying the effect of college major on future earnings?
To understand what the coefficients really mean, let’s go back to the birthwt data and try regressing birthweight on mother’s race and mother’s age.
# Fit regression model
birthwt.lm <- lm(birthwt.grams ~ race + mother.age, data = birthwt)
# Regression model summary
summary(birthwt.lm)
##
## Call:
## lm(formula = birthwt.grams ~ race + mother.age, data = birthwt)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2131.57 -488.02 -1.16 521.87 1757.07
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2949.979 255.352 11.553 <2e-16 ***
## raceblack -365.715 160.636 -2.277 0.0240 *
## raceother -285.466 115.531 -2.471 0.0144 *
## mother.age 6.288 10.073 0.624 0.5332
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 715.7 on 185 degrees of freedom
## Multiple R-squared: 0.05217, Adjusted R-squared: 0.0368
## F-statistic: 3.394 on 3 and 185 DF, p-value: 0.01909
Note that there are two coefficients estimated for the race variable (raceblack and raceother). What’s happening here?
When you put a factor variable into a regression, you’re allowing a different intercept at every level of the factor. In the present example, you’re saying that you want to model birthwt.grams as
Baby’s birthweight = Intercept(based on mother’s race) + $$\beta$$ * mother’s age
We can rewrite this more succinctly as: $y = \text{Intercept}_{race} + \beta \times \text{age}$
Essentially you’re saying that your data is broken down into 3 racial groups, and you want to model your data as having the same slope governing how birthweight changes with mother’s age, but potentially different intercepts. Here’s a picture of what’s happening.
# Calculate race-specific intercepts
intercepts <- c(coef(birthwt.lm)["(Intercept)"],
coef(birthwt.lm)["(Intercept)"] + coef(birthwt.lm)["raceblack"],
coef(birthwt.lm)["(Intercept)"] + coef(birthwt.lm)["raceother"])
lines.df <- data.frame(intercepts = intercepts,
slopes = rep(coef(birthwt.lm)["mother.age"], 3),
race = levels(birthwt$race))
qplot(x = mother.age, y = birthwt.grams, color = race, data = birthwt) +
geom_abline(aes(intercept = intercepts,
slope = slopes,
color = race), data = lines.df)
How do we interpret the 2 race coefficients? For categorical variables, the interpretation is relative to the given baseline. The baseline is just whatever level comes first (here, “white”). E.g., the estimate of raceother means that the estimated intercept is 285.47 lower among “other” race mothers compared to white mothers. Similarly, the estimated intercept is 365.72 lower for black mothers than for white mothers.
Another way of putting it: Among mothers of the same age, babies of black mothers are born on average weighing 365.7g less than babies of white mothers.
##### Why is one of the levels missing in the regression?
As you’ve already noticed, there is no coefficient called “racewhite” in the estimated model. This is because this baseline level gets absorbed into the overall (Intercept) term.
Let’s peek under the hood. Using the model.matrix() function on our linear model object, we can get the data matrix that underlies our regression. Here are the first 20 rows.
head(model.matrix(birthwt.lm), 20)
## (Intercept) raceblack raceother mother.age
## 1 1 1 0 19
## 2 1 0 1 33
## 3 1 0 0 20
## 4 1 0 0 21
## 5 1 0 0 18
## 6 1 0 1 21
## 7 1 0 0 22
## 8 1 0 1 17
## 9 1 0 0 29
## 10 1 0 0 26
## 11 1 0 1 19
## 12 1 0 1 19
## 13 1 0 1 22
## 14 1 0 1 30
## 15 1 0 0 18
## 16 1 0 0 18
## 17 1 1 0 15
## 18 1 0 0 25
## 19 1 0 1 20
## 20 1 0 0 28
Even though we think of the regression birthwt.grams ~ race + mother.age as being a regression on two variables (and an intercept), it’s actually a regression on 3 variables (and an intercept). This is because the race variable gets represented as two dummy variables: one for race == black and the other for race == other.
Why isn’t there a column representing the indicator of race == white? This gets back to our collinearity issue. By definition, we have that
raceblack + raceother + racewhite = 1 = (Intercept)
This is because for every observation, one and only one of the race dummy variables will equal 1. Thus the group of 4 variables {raceblack, raceother, racewhite, (Intercept)} is perfectly collinear, and we can’t include all 4 of them in the model. The default behavior in R is to remove the dummy corresponding to the first level of the factor (here, racewhite), and to keep the rest.
#### Interaction terms
Let’s go back to the regression line plot we generated above.
qplot(x = mother.age, y = birthwt.grams, color = race, data = birthwt) +
geom_abline(aes(intercept = intercepts,
slope = slopes,
color = race), data = lines.df)
We have seen similar plots before by using the geom_smooth or stat_smooth commands in ggplot. Compare the plot above to the following.
qplot(x = mother.age, y = birthwt.grams, color = race, data = birthwt) +
stat_smooth(method = "lm", se = FALSE, fullrange = TRUE)
## Construction
1. WvA astrologers use house cusps defined by the intersections of ten planes of oblique ascension with the ecliptic plane. First the Ascendant Parallel Circle (APC) is constructed, a plane through the ascendant point, perpendicular to the celestial equator. In the diagram the APC is represented as the red elliptic dotted line.
2. The APC is then divided into six equal parts below and six equal parts above the horizon. The division is represented in the diagram by red dotted lines, with the cusp numbers in black next to the division points.
3. Through the division points of this sixfold division, six planes of oblique ascension (position circles, green lines) are constructed. Only one of these planes is drawn in the diagram. Two of these planes are the horizon (black) and the meridian (black) planes. These planes define the ascendant, descendant, MC and IC. The intersections of the other planes (green) with the ecliptic (blue) define the other cusps.
The APC house system was developed by L. Knegt, and has been applied within the Werkgemeenschap van Astrologen (WvA) in the Netherlands since its establishment in 1947. The WvA is also known as the "Ram School".
## Formulas
APC-Houses: formulas (explanation in Dutch)
# geometry
A large sphere has a volume of $288\pi$ cubic units. A smaller sphere has a volume which is $12.5\%$ of the volume of the larger sphere. What is the ratio of the radius of the smaller sphere to the radius of the larger sphere? Express your answer as a common fraction.
Jul 2, 2018
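Since the question is left unanswered in the thread, here is a quick numeric check of the expected answer (my own sketch): the smaller volume is 12.5% = 1/8 of the larger, and volume scales with the cube of the radius, so the radius ratio should be the cube root of 1/8, i.e. 1/2.

```python
# Recover both radii from V = (4/3) * pi * r^3 and compare them.
import math

V_large = 288 * math.pi
V_small = 0.125 * V_large   # 12.5% of the larger sphere

r_large = (3 * V_large / (4 * math.pi)) ** (1 / 3)   # = 6
r_small = (3 * V_small / (4 * math.pi)) ** (1 / 3)   # = 3

assert math.isclose(r_small / r_large, 1 / 2)        # ratio is 1/2
```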
Question
# What is the probability that in a group of ‘N’ people , at least two of them have the same birthday.
Hint: At first, excluding leap years, there are 365 different birthdays possible, so a group of N people can have birthdays in ${\left( {365} \right)^N}$ different ways. This can be denoted by n. Next, we calculate the number of ways in which no two people share a birthday. We get that this number is $m = 365*364*363*...*(365 - N + 1)$.
Step 1 :
We are asked to find the probability that in a group of ‘N’ people , at least two of them have the same birthday.
Firstly , let's find the probability that no two persons have the same birthday and subtract it from 1 as the total probability of a success event is 1 .
Step 2:
Excluding leap years there are 365 different birthdays possible
So , any person can have any one of the 365 days of the year as a birthday
Same way the second person may also have any one of the 365 days of the year as a birthday and so on.
Hence in a group of N people, there are ${\left( {365} \right)^N}$ possible combinations of birthdays
So now let n = ${\left( {365} \right)^N}$
Step 3 :
Now assuming that no two people have their birthday on the same day
The first person can have any one of the 365 days as his birthday
So the second person will have his birthday in any one of the 364 days
And the third person will have his birthday in any one of the 363 days and so on
From this , we can get that the Nth person may have his birthday in any one of the ( 365 – N +1 ) days
The number of ways that all N people can have different birthdays is then
$m = 365*364*363*...*(365 - N + 1)$
Therefore , the probability that no two birthdays coincide is given by $\dfrac{m}{n}$
$\Rightarrow \dfrac{m}{n} = \dfrac{{365*364*363*...*(365 - N + 1)}}{{{{\left( {365} \right)}^N}}}$
Step 4 :
Probability that at least two person will have the same birthday = 1 – (probability that no two birthdays coincide)
$\Rightarrow 1 - \dfrac{{365*364*363*...*(365 - N + 1)}}{{{{\left( {365} \right)}^N}}}$
The above expression gives the probability that at least two people will have the same birthday.
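The final expression is easy to evaluate numerically; the function below is an illustrative sketch (the names are mine):

```python
# P(at least two of N people share a birthday) = 1 - m/n, built up factor by factor.
def shared_birthday_prob(N, days=365):
    p_all_distinct = 1.0
    for k in range(N):
        p_all_distinct *= (days - k) / days
    return 1 - p_all_distinct

# The classic threshold: 23 people already push the probability past 1/2.
assert shared_birthday_prob(22) < 0.5 < shared_birthday_prob(23)
```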
Note: The assumptions that a year has 365 days and that all days are equally likely to be the birthday of a random individual are false, because one year in four has 366 days and because birth dates are not distributed uniformly throughout the year. Moreover, if one attempts to apply this result to an actual group of individuals, it is necessary to ask what it means for these to be “randomly selected.” It would naturally be unreasonable to apply it to a group known to contain twins.
# Find the Derivative
I'm currently studying the product rule and have come across a section of questions that seems to make no sense. I'm sure there's just one little thing that I'm missing but I am unable to spot it. Anyhow, I was hoping someone could show me step-by-step how to solve the following, and hopefully I can get the rest:
Differentiate $(x^2 - 1)(x^3 - 1)$. You may need both the chain rule and the product rule
$\textbf{hint:}$
if you require the derivative with respect to x
then use
$$\frac{d}{dx}u(x)v(x) = \frac{du}{dx}v + u\frac{dv}{dx}$$ using
$$u = x^2-1,\\ v = x^3-1.$$
take the derivatives of the functions first and plug in and then simplify.
the result is
$$2x\left(x^3-1\right) + \left(x^2-1\right)3x^2 = 5x^4-3x^2-2x$$
taking the result $$5x^4-3x^2-2x = x\left(5x^3-3x-2\right)$$ I know that x = 1 is a root of $5x^3-3x-2$ by the remainder theorem. Therefore I know I can write the equation as
$$5x^4-3x^2-2x = x(x-1)P(x)$$
use long division to get P(x)?
• I have up to that point and i get to 2x*(x^3-1) + (x^2-1).3x^2 but no matter which way i try the simplifying whether multiplying first and then simplyifiying or the reverse i never seem to get the answer – Paul Aug 22 '14 at 9:29
• What is the book answer? – David Aug 22 '14 at 9:30
• x(x-1)(5x^2 + 5x + 2) – Paul Aug 22 '14 at 9:33
• @Paul have you come across algebraic division? – Chinny84 Aug 22 '14 at 9:36
• Yes, and i see where you'd add that in, it's incredible how simple something is once you know the answer. thanks for the help – Paul Aug 22 '14 at 9:39
It is easier to go head on with simple problems like these: $$\frac{d}{dx} [(x^2 - 1)(x^3 - 1)] \\ = \frac{d}{dx} (x^5 - x^2 -x^3 + 1)\\ = \frac{d}{dx} (x^5) - \frac{d}{dx} (x^2) -\frac{d}{dx}(x^3) + \frac{d}{dx}(1)\\ = 5x^{5-1} - 2x^{2-1} - 3x^{3-1} + 0\\ = 5x^4 - 3x^2 - 2x\\ = x(5x^3 - 3x -2)\\ = x(x-1)(5x^2 + 5x + 2)$$
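A numeric spot-check (my own, not from the thread) that the expanded derivative and the factored book answer agree:

```python
# Evaluate both forms of the derivative at a few points; they should coincide.
def derivative(x):            # 5x^4 - 3x^2 - 2x
    return 5 * x**4 - 3 * x**2 - 2 * x

def factored(x):              # x(x - 1)(5x^2 + 5x + 2)
    return x * (x - 1) * (5 * x**2 + 5 * x + 2)

for x in [-3, -1, 0, 0.5, 1, 2, 7]:
    assert abs(derivative(x) - factored(x)) < 1e-9
```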
What is the molar mass of carbon tetrachloride?
May 4, 2016
The same as the mass of 1 mole of carbon and 2 moles of chlorine gas.
Explanation:
${\text{C"(s) + "2Cl"_2(g) rarr "CCl}}_{4} \left(l\right)$
A mole of carbon has a mass of $12.011 \cdot g$.
A mole of chlorine gas has a mass of $71.0 \cdot g$.
You do the math.
May 4, 2016
The molar mass of carbon tetrachloride is 153.8 g/mol.
Explanation:
The chemical formula for carbon tetrachloride is $\text{CCl"_4}$. We will use its formula to determine its molar mass.
The molar mass of a compound is determined by multiplying the subscript for each element by its molar mass and adding the results. The molar mass of an element is its atomic weight on the periodic table in g/mol.
By consulting the periodic table, you will be able to determine the molar masses of carbon and chlorine. The molar mass of carbon is 12.011 g/mol, and the molar mass of chlorine is 35.45 g/mol.
Now determine the molar mass of $\text{CCl"_4}$.
$(1 \times 12.011 \text{ g/mol}) + (4 \times 35.45 \text{ g/mol}) = 153.8 \text{ g/mol}$
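The same bookkeeping in a few lines of code, with the atomic weights hard-coded from the periodic table (an illustrative sketch):

```python
# Molar mass of CCl4 = 1 * M(C) + 4 * M(Cl).
atomic_weight = {"C": 12.011, "Cl": 35.45}   # g/mol
formula = {"C": 1, "Cl": 4}                  # subscripts in CCl4

molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
assert round(molar_mass, 1) == 153.8         # g/mol, as computed above
```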
# Multiplication Worksheets - Page 3
Multiplication Worksheets
• Page 3
21.
The height of the triangle is 10 times the base $b$ in a right triangle. Find the area of the triangle.
a. $5b$ square units b. $15b^2$ square units c. $10b^2$ square units d. $5b^2$ square units
#### Solution:
Height of the triangle is 10b units.
Area of a triangle = 1 / 2 x base x height
[Write the formula.]
= 1/2 × b × 10b
[Substitute the values.]
= 5b^2
[Simplify.]
Area of the right triangle is 5b^2 square units.
22.
Find the area of the rectangle.
a. ($y^2 - 4y$) square units b. ($4y^2 - 4y$) square units c. ($y^2 - 4$) square units d. ($4y^2 - y$) square units
#### Solution:
2y and (2y - 2) are the sides of the rectangle.
Area of the rectangle = Length x Width
[Write the formula.]
= 2y(2y - 2)
[Substitute the values.]
= 2y(2y) - 2(2y)
[Use distributive property.]
= 4y^2 - 4y
[Multiply.]
Area of the rectangle is (4y^2 - 4y) square units.
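Both worksheet answers can be spot-checked numerically at a few values of the variable (a sketch, not part of the worksheet):

```python
# Problem 21: triangle with base b, height 10b -> area 5b^2.
# Problem 22: rectangle with sides 2y and (2y - 2) -> area 4y^2 - 4y.
for v in [1, 2, 3, 10]:
    assert 0.5 * v * (10 * v) == 5 * v**2
    assert 2 * v * (2 * v - 2) == 4 * v**2 - 4 * v
```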
# Science Fair Project Encyclopedia
# Probability axioms
The probability $\mathbb{P}$ of some event E (denoted $\mathbb{P}(E)$) is defined with respect to a "universe" or sample space Ω of all possible elementary events in such a way that $\mathbb{P}$ must satisfy the Kolmogorov axioms.
Alternatively, a probability can be interpreted as a measure on a σ-algebra of subsets of the sample space, those subsets being the events, such that the measure of the whole set equals 1. This property is important, since it gives rise to the natural concept of conditional probability. Every set A with non-zero probability defines another probability
$\mathbb{P}(B \vert A) = {\mathbb{P}(B \cap A) \over \mathbb{P}(A)}$
on the space. This is usually read as "probability of B given A". If the conditional probability of B given A is the same as the probability of B, then B and A are said to be independent.
In the case that the sample space is finite or countably infinite, a probability function can also be defined by its values on the elementary events {e1},{e2},... where $\Omega = \{\,e_1, e_2, ...\,\}.\,$
## Kolmogorov axioms
The following three axioms are known as the Kolmogorov axioms, after Andrey Kolmogorov who developed them. We have an underlying set Ω, a sigma-algebra $\mathcal{F}$ of subsets of Ω, and a function P assigning real numbers to members of F. The members of F are those subsets of Ω that are called "events".
### First axiom
For any set $E\in F,$ i.e., for any event, $0 \leq P(E) \leq 1.\,$
That is, the probability of an event is represented by a real number between 0 and 1.
### Second axiom
$P(\Omega) = 1.\,$
That is, the probability that some elementary event in the entire sample set will occur is 1. More specifically, there are no elementary events outside the sample set.
This is often overlooked in some mistaken probability calculations; if you cannot precisely define the whole sample set, then the probability of any subset cannot be defined either.
### Third axiom
Any countable sequence of pairwise disjoint events E1,E2,... satisfies $P(E_1 \cup E_2 \cup \cdots) = \sum P(E_i)$.
That is, the probability of an event set which is the union of other disjoint subsets is the sum of the probabilities of those subsets. This is called σ-additivity. If there is any overlap among the subsets this relation does not hold.
For an algebraic alternative to Kolmogorov's approach, see algebra of random variables.
## Lemmas in probability
From the Kolmogorov axioms one can deduce other useful rules for calculating probabilities:
$P(A \cup B) = P(A) + P(B) - P(A \cap B).\,$
That is, the probability that A or B will happen is the sum of the probabilities that A will happen and that B will happen, minus the probability that A and B will happen. This can be extended to the inclusion-exclusion principle.
$P(\Omega - E) = 1 - P(E).\,$
That is, the probability that any event will not happen is 1 minus the probability that it will.
Using conditional probability as defined above, it also follows immediately that
$P(A \cap B) = P(A) \cdot P(B \vert A).\,$
That is, the probability that A and B will happen is the probability that A will happen, times the probability that B will happen given that A happened; this relationship gives Bayes' theorem. It then follows that A and B are independent if and only if
$P(A \cap B) = P(A) \cdot P(B).\,$
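The axioms and the derived lemmas can be checked mechanically on a small finite sample space; the fair-die example below is mine, not from the article:

```python
# Verify the lemmas on Omega = {1,...,6} with P({e}) = 1/6, using exact fractions.
from fractions import Fraction

omega = set(range(1, 7))

def P(event):
    return Fraction(len(event & omega), len(omega))

A = {2, 4, 6}   # "even roll"
B = {4, 5, 6}   # "roll at least 4"

assert P(omega) == 1                               # second axiom
assert P(A | B) == P(A) + P(B) - P(A & B)          # inclusion-exclusion lemma
assert P(omega - A) == 1 - P(A)                    # complement lemma
P_B_given_A = P(A & B) / P(A)                      # conditional probability
assert P(A & B) == P(A) * P_B_given_A              # multiplication rule
```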
# LOG#147. Path integral (II).
Are you gaussian? Are you normal?
My second post about the path integral will cover functional calculus, and some basic definitions, properties and formulae.
What is a functional? It is a gadget that produces a number! Numbers are cool! Functions are cool! Functionals mix both worlds.
Let me consider a space of functions, not necessarily a normed or metric space. For instance, you can take the space of continuous functions, the space of derivable (differentiable) functions, the space of integrable functions, or more complex spaces like $L^2$ (the space of square-integrable functions), and so on!
Definition 1. (Functional). A functional I is a map or correspondence between some (subset of a) space of functions and numbers. It is a “machine” that allows you to pick a number when some function is selected in some way. That is:

$$I: f \longmapsto I(f) \in \mathbb{R}\ (\text{or }\mathbb{C})$$

A functional I(f) can be considered a function with an infinite number of variables, the infinite set of values of the function at every point. That is, a functional is some kind of vector! Usually, nD vectors have only a finite number of components, i.e., they are nD arrays:

$$v = (v_1, v_2, \ldots, v_n)$$
Functionals are functions over objects/arrays!
Example: a single definite integral IS a functional. That is,

$$I(f) = \int_a^b f(x)\,dx$$
This (quite general) definition brings us some issues. Usually we can NOT identify a space of functions with a countable (even infinite) set. For instance, through the existence of a countable basis as in a separable Hilbert space. However, usually practical advances can be done if we put suitable restrictions in the space of functions, which tame the potentially dangerous infinities. These restrictions can be:
1st. Fourier bounds and Fourier transformations, asking for periodicity in momentum space.
2nd. Fourier coefficients definiteness. That is, we require functions to be periodic to some extent.
3rd. Analytical functions. Taylor or Laurent coefficients. Asking a function to be analytic solves many ill-posed problems.
Most of the functionals of interest in Physics can be expanded in the following way:

$$I(f) = \sum_{n=0}^{\infty}\frac{1}{n!}\int dx_1\cdots dx_n\, F_n(x_1,\ldots,x_n)\, f(x_1)\cdots f(x_n)$$

where the $F_n(x_1,\ldots,x_n)$ are ordinary functions of an increasing, finite, number of variables. This decomposition is “cluster-like” and it can be found in Quantum Field Theory (QFT) lots of times!
Exercise (for eager readers). Give additional examples of functionals. Post them here! 🙂
Other main concept we need to discuss is the notion of functional derivative. Mathematicians are weird people. They define and classify derivatives! LOL Beyond the usual notions of derivatives (classical calculus), you find Gateaux derivatives, Fréchet derivatives, fractional derivatives, and many others. Here, I will not be focused on the features these specific derivatives have, and I am going to be deliberately ambiguous. In general, a derivative is ANY operation (operator) $D$ which satisfies the so-called Leibniz rule of product derivation. That is:

$$D(fg) = (Df)\,g + f\,(Dg)$$
Intuitively, the generic definition for the derivative of any map can be written at a formal level as a ratio:

$$DI \sim \frac{I(f)-I(g)}{f-g}$$

whenever $f \to g$. However, the definition of a (useful) distance in a general space of functions is a non-trivial task in general.
Definition 2. Directional derivative. The directional derivative of the functional I(f) along some function g is defined as:

$$D_g I(f) = \lim_{\epsilon\to 0}\frac{I(f+\epsilon g)-I(f)}{\epsilon}$$

The directional derivative of a product of functionals, applying the previous formula, becomes

$$D_g (I_1 I_2)(f) = \bigl(D_g I_1\bigr)(f)\, I_2(f) + I_1(f)\,\bigl(D_g I_2\bigr)(f)$$

and similarly you can get formulae for products of n functionals. The functional derivative of I(f) is a special case of the directional derivative above: the functional derivative in the direction of the delta function $\delta_y(x)=\delta(x-y)$. Please, note that this is “delicate” and “intricate” since delta functions are NOT proper functions but “distributions”. They only are meaningful when they are integrated out, just as functionals themselves!
Definition 3. Functional derivative. The functional derivative of the functional I(f) with respect to f(y) is defined by the formal expression

(1) $$\frac{\delta I(f)}{\delta f(y)} = \lim_{\epsilon\to 0}\frac{I(f+\epsilon\,\delta_y)-I(f)}{\epsilon}$$
Exercise (for eager readers). What are the differences between Gateaux derivatives, Fréchet derivatives and the above functional derivative? Remark: axioms are important here.
Similarly, functional derivatives of higher order can be defined in a straightforward fashion. If the functional is given by a local expression

(2) $$I(f)=\int dx\, F\bigl(x, f(x)\bigr)$$

then its functional derivative reads

(3) $$\frac{\delta I(f)}{\delta f(y)} = \frac{\partial F}{\partial f}\bigl(y, f(y)\bigr)$$
A list of simple examples of functional derivatives:

1) If $I(f)=\int_a^b f(x)\,dx$, then

$\dfrac{\delta I(f)}{\delta f(y)}=1$, if $a<y<b$.

$\dfrac{\delta I(f)}{\delta f(y)}=0$, if $y<a$ or $y>b$.

2) If $I(f)=f(x)$ (evaluation at a fixed point $x$), then

$$\dfrac{\delta I(f)}{\delta f(y)}=\delta(x-y)$$

3) If $I(f)=\int g(x)\,f(x)\,dx$, with $g$ a fixed function, then

$$\dfrac{\delta I(f)}{\delta f(y)}=g(y)$$

4) If $I(f)=\int f(x)^n\,dx$, then

$$\dfrac{\delta I(f)}{\delta f(y)}=n\,f(y)^{n-1}$$
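The defining limit of the functional derivative can be illustrated numerically: discretize $I(f)=\int f(x)^2\,dx$ on a grid and compare a finite difference with the analytic answer $\delta I/\delta f(y)=2f(y)$. The discretization below is my own sketch, not from the post:

```python
# Discretize I(f) = integral of f(x)^2 dx on a grid and check the
# finite-difference functional derivative against 2 f(y).
import math

n, a, b = 200, 0.0, 1.0
h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]    # midpoint grid
f = [math.sin(x) for x in xs]                 # a sample function

def I(values):
    return h * sum(v * v for v in values)     # discretized integral of f^2

j = 80                                        # grid point playing the role of y
eps = 1e-7
bumped = list(f)
bumped[j] += eps / h                          # e_j / h is a discrete delta with unit integral

fd = (I(bumped) - I(f)) / eps                 # (I(f + eps * delta_y) - I(f)) / eps
assert abs(fd - 2 * f[j]) < 1e-3              # matches the analytic 2 f(y)
```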
Some extra properties:
A) Chain rule: if $I(f)=F\bigl(J(f)\bigr)$, where $J$ is a functional and $F$ an ordinary function, then

$$\frac{\delta I(f)}{\delta f(y)} = F'\bigl(J(f)\bigr)\,\frac{\delta J(f)}{\delta f(y)}$$

B) Taylor expansion in functional spaces. We can prove and write, in terms of functional derivatives, that

$$I(f+g)=\sum_{n=0}^{\infty}\frac{1}{n!}\int dy_1\cdots dy_n\,\frac{\delta^n I(f)}{\delta f(y_1)\cdots\delta f(y_n)}\,g(y_1)\cdots g(y_n)$$
Definition 4. Gaussian functionals. Given a linear operator A, strictly positive (or hermitian with an inverse operator), and f(x) a real function (complex functions are also possible as “numbers”), a Gaussian functional is a functional with the following form

(4) $$F_A(f)=\exp\left(-\frac{1}{2}\int dx\,dy\; f(x)\,A(x,y)\,f(y)\right)$$

For any Gaussian functional, we define the “two point” correlation function as follows

$$\langle f(x_1)\,f(x_2)\rangle = \frac{\int \mathcal{D}f\; f(x_1)\,f(x_2)\,F_A(f)}{\int \mathcal{D}f\; F_A(f)}$$
Exercise (for eager minds). For a given Gaussian functional $F_A$, compute the 2-point correlation function above in terms of A.
Gaussian integrals with a finite number of variables are relatively common and simple to calculate multiple integrals:

$$Z=\int d^n x\,\exp\left(-\frac{1}{2}x^T A x\right)$$

where A is a complex, symmetric matrix with eigenvalues $a_i$, such that the real parts $\mathrm{Re}(a_i)$ are positive. The integral can be performed by usual procedures (diagonalizing A), and it provides

$$Z=\frac{(2\pi)^{n/2}}{\sqrt{\det A}}$$
The multivariate gaussian distribution of zero mean, P(x), is defined to be

$$P(x)=\sqrt{\frac{\det A}{(2\pi)^n}}\,\exp\left(-\frac{1}{2}x^T A x\right)$$
The mean value with respect to the gaussian distribution of any function F(x) is defined as

$$\langle F\rangle=\int d^n x\, P(x)\,F(x)$$
We define the generating function of any multivariate distribution F as the function

$$Z(b)=\langle e^{\,b\cdot x}\rangle=\int d^n x\, P(x)\, e^{\,b\cdot x}$$

where $b$ is any arbitrary constant vector. For the multivariate Gaussian distribution, we have, using its definition,

$$Z(b)=\exp\left(\frac{1}{2}\,b^T A^{-1} b\right)$$

Note we take the normalization to be:

$$\int d^n x\, P(x)=1$$
The moments of (any) distribution are defined as the mean values

$$\langle x_i\rangle,\quad \langle x_i x_j\rangle,\quad\ldots$$

and, in general, the mean values $\langle x_{i_1} x_{i_2}\cdots x_{i_k}\rangle$.
From the generating function of the distribution, we can get the moments of the distribution. We can write the following formulae

$$\langle x_{i_1}\cdots x_{i_k}\rangle=\left.\frac{\partial^k Z(b)}{\partial b_{i_1}\cdots\partial b_{i_k}}\right|_{b=0}$$

In particular, we have

$$\langle x_i\rangle=0,\qquad \langle x_i x_j\rangle=(A^{-1})_{ij}$$

The last expressions are very important. The covariance of the gaussian distribution is given by the elements of the inverse matrix $A^{-1}$.
Theorem 1. Wick’s theorem. All the moments of higher order of a gaussian distribution are fully determined by the moments of order 1 and 2 (i.e., by the mean and the covariance!).
A) The moments of odd order are all zero.
B) For the moments of even order:

$$\langle x_{i_1}\cdots x_{i_{2k}}\rangle=\sum_{\text{pairings}}\;\prod_{\text{pairs }(a,b)}\langle x_{i_a} x_{i_b}\rangle$$

where the sum runs over all ways of grouping the $2k$ indices into pairs.
For example, as application of the theorem, for a gaussian distribution (with zero average/mean) we have:

$$\langle x_i x_j x_k x_l\rangle=\langle x_i x_j\rangle\langle x_k x_l\rangle+\langle x_i x_k\rangle\langle x_j x_l\rangle+\langle x_i x_l\rangle\langle x_j x_k\rangle$$
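Wick's theorem has a purely combinatorial core: the sum runs over all pairings of the $2k$ indices, and there are $(2k-1)!!$ of them. The enumeration below is my own sketch, not from the post:

```python
# Enumerate all Wick pairings of 2k indices and compare the count with (2k-1)!!.
def pairings(indices):
    """Yield every way of splitting an even-sized list of distinct indices into pairs."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for partner in rest:
        remaining = [i for i in rest if i != partner]
        for tail in pairings(remaining):
            yield [(first, partner)] + tail

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

for k in [1, 2, 3, 4]:
    count = sum(1 for _ in pairings(list(range(2 * k))))
    assert count == double_factorial(2 * k - 1)   # 1, 3, 15, 105 pairings
```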
The value of the gaussian integral was written to be

$$Z(A)=\frac{(2\pi)^{n/2}}{\sqrt{\det A}}$$

In the limit of “infinite variables” (dimensions), $n\to\infty$, this expression generically diverges.
We can consider the following ratios in order to get a finite result (remember that an infinite result has NO physical meaning!):
1)
2)
3)
A common feature of every “regularization” above is that they do NOT depend explicitly on the dimension “n”. They can be useful when considering “limits” of expressions as $n\to\infty$.
Exercise (1). Define the determinant of an operator in formal way. Hint: consider it as an infinite matrix. The determinant of a matrix is the product of its eigenvalues.
Exercise (2). Define the inverse operator of A. Hint: from the inverse of an (infinite) matrix, $\sum_k A_{ik}G_{kj}=\delta_{ij}$, take the continuum analogue $\int dz\, A(x,z)\,G(z,y)=\delta(x-y)$ and solve it for G.
Exercise (3). Compute the action S of classical mechanics as a functional of the path for a free particle and for a particle interacting with a potential V(x,t).
Exercise (4). Compute the action functional for the free particle, the harmonic oscillator potential and the Kepler/Newton-like potential.
See you in my next Path Integral TSOR post!!!!
|
8-97.
Draw a diagram.
Find the total area of the outside rectangle by adding
the area of the inside rectangle to the area of the walkway.
30 + 10 = 40 sq m
If x meters of walkway was added to each side, write an
expression for each side of the outside rectangle.
length = 5 + 2x
width = 2 + 2x
Now write an equation of the form (length)(width) = area and solve for x.
Make sure you remember to answer the original question.
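Putting the hints together: with inner dimensions 5 m by 2 m and total area 40 sq m, the equation is (5 + 2x)(2 + 2x) = 40. A quick sketch (my own, the numbers come from the hints above) solving the resulting quadratic:

```python
# (5 + 2x)(2 + 2x) = 40  ->  4x^2 + 14x - 30 = 0.
import math

a, b, c = 4, 14, -30
disc = b * b - 4 * a * c                     # 676, a perfect square
roots = [(-b + math.sqrt(disc)) / (2 * a),
         (-b - math.sqrt(disc)) / (2 * a)]
x = max(roots)                               # a width must be positive
assert x == 1.5                              # the walkway is 1.5 m wide
assert (5 + 2 * x) * (2 + 2 * x) == 40
```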
## DMOPC '18 Contest 6 P2 - Enantiomers
View as PDF
Points: 7
Time limit: 1.0s
Memory limit: 64M
Author:
Problem type
Carbon, the element of life, is part of many different molecules essential to living things. One particular type of carbon-based molecule, the tetrahedral molecule, consists of a single carbon atom bonded to four groups. This type of molecule forms a tetrahedral shape, with the carbon in the centre and the groups at the four vertices. Groups are represented as strings of alphanumeric characters, and two groups are equal if and only if their respective strings are equal.
The four vertices of a tetrahedral molecule are numbered as follows. The topmost vertex is numbered 1, while the other three vertices are numbered 2, 3, and 4 in clockwise order starting from the rightmost vertex, like so:
Molecules can be subject to any rotation or reflection in 3D space. Two molecules match if and only if, for each vertex $i$, the group at vertex $i$ of molecule 1 equals the group at vertex $i$ of molecule 2. Two molecules are identical if and only if one can be made to match the other after a series of zero or more rotations. Two molecules are mirror images if and only if one can be made to match the other after exactly one reflection.
Two molecules are called enantiomers if and only if one can be made into the mirror image of the other after a series of zero or more rotations, and they are not identical. Such molecules can have very different biological functions, so it is important to be able to identify them.
Given two tetrahedral molecules, please determine if they are enantiomers.
#### Input Specification
Two lines representing the two tetrahedral molecules, each containing four space-separated strings. String $j$ of line $i$ represents the group on vertex $j$ of molecule $i$.
Each string in the input consists only of alphanumeric characters and is no more than 5 characters long.
#### Output Specification
Output YES if the two molecules are enantiomers, and NO otherwise.
#### Sample Input 1
F Br Cl H
H Cl F Br
#### Sample Output 1
YES
#### Explanation for Sample 1
The two molecules are:
It is impossible to rotate one molecule so that it matches the other. However, by rotating the second molecule so the F is on the top and the Cl is closest to us, the two molecules become mirror images:
Thus, the molecules are enantiomers.
#### Sample Input 2
COOH CH3 CH3 OH
COOH OH CH3 CH3
#### Sample Output 2
NO
#### Explanation for Sample 2
The two molecules are:
It is clear that one molecule is the mirror image of the other. However, the two molecules are also identical; by rotating the second molecule 180 degrees around the vertical axis, it can be made to match the first molecule:
Thus, the molecules are not enantiomers.
#### Sample Input 3
COOH H H H
COOH H H H
#### Sample Output 3
NO
#### Sample Input 4
COOH H OH CH3
F Br Cl H
#### Sample Output 4
NO |
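One way to solve this (a sketch of my own approach, not an official solution): the 12 rotations of a tetrahedron act on its four vertices as exactly the even permutations, and composing any reflection with the rotations yields the odd permutations. So molecule 2 is identical to molecule 1 iff some even permutation of its vertices produces a match, a mirror image iff some odd permutation does, and the pair are enantiomers iff an odd match exists but no even match does.

```python
from itertools import permutations

def parity(p):
    # Parity of a permutation tuple via inversion count: 0 = even, 1 = odd.
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2

def enantiomers(m1, m2):
    even_match = odd_match = False
    for p in permutations(range(4)):
        if [m2[i] for i in p] == list(m1):
            if parity(p) == 0:
                even_match = True    # reachable by a rotation -> identical
            else:
                odd_match = True     # reachable only with a reflection
    return odd_match and not even_match

# Sample cases from the statement:
assert enantiomers("F Br Cl H".split(), "H Cl F Br".split())                   # YES
assert not enantiomers("COOH CH3 CH3 OH".split(), "COOH OH CH3 CH3".split())   # NO
```

For the judge itself one would read the two molecules with `input().split()` and print `YES` or `NO` based on the return value.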
# reducibility, axiom of
Axiom introduced by English philosopher and mathematician Bertrand Russell (1872-1970) in connection with the ramified theory of types. It says that any higher-order property or proposition can be reduced to an equivalent first-order one.
The ramified theory caused difficulties for defining real numbers (using Dedekind sections) and for the process known as mathematical induction (roughly: if a property belongs to the first term in a series, and to the successor of any term to which it belongs, then it belongs to them all).
Russell introduced the axiom to deal with these problems, but it was widely felt to be unfounded, and was later dispensed with by Frank P. Ramsey (1903-1930) in Chapter 1 of his Foundations of Mathematics (1931).
Source:
B Russell, ‘Mathematical Logic as Based on the Theory of Types’, American Mathematical Monthly (1908); reprinted in R C Marsh, ed., Logic and Knowledge (1956) and in J van Heijenoort, ed., From Frege to Godel (1967)
## History
With Russell’s discovery (1901, 1902)[2] of a paradox in Gottlob Frege’s 1879 Begriffsschrift and Frege’s acknowledgment of the same (1902), Russell tentatively introduced his solution as “Appendix B: Doctrine of Types” in his 1903 The Principles of Mathematics.[3] This contradiction can be stated as “the class of all classes that do not contain themselves as elements”.[4] At the end of this appendix Russell asserts that his “doctrine” would solve the immediate problem posed by Frege, but “there is at least one closely analogous contradiction which is probably not soluble by this doctrine. The totality of all logical objects, or of all propositions, involves, it would seem a fundamental logical difficulty. What the complete solution of the difficulty may be, I have not succeeded in discovering; but as it affects the very foundations of reasoning…”[5]
By the time of his 1908 Mathematical logic as based on the theory of types[6] Russell had studied “the contradictions” (among them the Epimenides paradox, the Burali-Forti paradox, and Richard’s paradox) and concluded that “In all the contradictions there is a common characteristic, which we may describe as self-reference or reflexiveness”.[7]
In 1903, Russell defined predicative functions as those whose order is one more than the highest-order function occurring in the expression of the function. While these were fine for the situation, impredicative functions had to be disallowed:
A function whose argument is an individual and whose value is always a first-order proposition will be called a first-order function. A function involving a first-order function or proposition as apparent variable will be called a second-order function, and so on. A function of one variable which is of the order next above that of its argument will be called a predicative function; the same name will be given to a function of several variables [etc].[8]
He repeats this definition in a slightly different way later in the paper (together with a subtle prohibition that they would express more clearly in 1913):
A predicative function of x is one whose values are propositions of the type next above that of x, if x is an individual or a proposition, or that of values of x if x is a function. It may be described as one in which the apparent variables, if any, are all of the same type as x or of lower type; and a variable is of lower type than x if it can significantly occur as argument to x, or as argument to an argument to x, and so forth. [emphasis added][9]
This usage carries over to Alfred North Whitehead and Russell’s 1913 Principia Mathematica wherein the authors devote an entire subsection of their Chapter II: “The Theory of Logical Types” to subchapter I. The Vicious-Circle Principle: “We will define a function of one variable as predicative when it is of the next order above that of its argument, i.e. of the lowest order compatible with its having that argument. . . A function of several arguments is predicative if there is one of its arguments such that, when the other arguments have values assigned to them, we obtain a predicative function of the one undetermined argument.”[10]
They again propose the definition of a predicative function as one that does not violate The Theory of Logical Types. Indeed the authors assert such violations are “incapable [to achieve]” and “impossible”:
We are thus led to the conclusion, both from the vicious-circle principle and from direct inspection, that the functions to which a given object a can be an argument are incapable of being arguments to each other, and that they have no term in common with the functions to which they can be arguments. We are thus led to construct a hierarchy.[11]
The authors stress the word impossible:
if we are not mistaken, that not only is it impossible for a function φz^ to have itself or anything derived from it as argument, but that, if ψz^ is another function such there are arguments a with which both “φa” and “ψa” are significant, then ψz^ and anything derived from it cannot significantly be argument to φz^.[12]
## Russell’s axiom of reducibility
The axiom of reducibility states that any truth function (i.e. propositional function) can be expressed by a formally equivalent predicative truth function. It made its first appearance in Bertrand Russell’s (1908) Mathematical logic as based on the theory of types, but only after some five years of trial and error.[13] In his words:
Thus a predicative function of an individual is a first-order function; and for higher types of arguments, predicative functions take the place that first-order functions take in respect of individuals. We assume then, that every function is equivalent, for all its values, to some predicative function of the same argument. This assumption seems to be the essence of the usual assumption of classes [modern sets] . . . we will call this assumption the axiom of classes, or the axiom of reducibility.[14]
For relations (functions of two variables such as “For all x and for all y, those values for which f(x,y) is true” i.e. ∀x∀y: f(x,y)), Russell assumed an axiom of relations, or [the same] axiom of reducibility.
In 1903, he proposed a possible process of evaluating such a 2-place function by comparing the process to double integration: One after another, plug into x definite values a_m (i.e. the particular a_j is “a constant” or a parameter held constant), then evaluate f(a_m, y_n) across all the n instances of possible y_n. For all y_n evaluate f(a_1, y_n), then for all y_n evaluate f(a_2, y_n), etc. until all the x = a_m are exhausted. This would create an m by n matrix of values: TRUE or UNKNOWN. (In this exposition, the use of indices is a modern convenience.)
In 1908, Russell made no mention of this matrix of xy values that render a two-place function (e.g. relation) TRUE, but by 1913 he had introduced a matrix-like concept into “function”. In *12 of Principia Mathematica (1913) he defines “a matrix” as “any function, of however many variables, which does not involve any apparent variables. Then any possible function other than a matrix is derived from a matrix by means of generalisation, i.e. by considering the proposition which asserts that the function in question is true with all possible values or with some values of one of the arguments, the other argument or arguments remaining undetermined”.[15] For example, if one asserts that “∀y: f(x, y) is true”, then y is the apparent variable because it is bound by the quantifier, while x remains undetermined.
Russell now defines a matrix of “individuals” as a first-order matrix, and he follows a similar process to define a second-order matrix, etc. Finally, he introduces the definition of a predicative function:
A function is said to be predicative when it is a matrix. It will be observed that, in a hierarchy in which all the variables are individuals or matrices, a matrix is the same thing as an elementary function [cf. 1913:127, meaning: the function contains no apparent variables]. ¶ “Matrix” or “predicative function” is a primitive idea.[16]
From this reasoning, he then uses the same wording to propose the same axioms of reducibility as he did in his 1908.
As an aside, Russell in his 1903 considered, and then rejected, “a temptation to regard a relation as definable in extension as a class of couples”,[17] i.e. the modern set-theoretic notion of ordered pair. An intuitive version of this notion appeared in Frege’s (1879) Begriffsschrift (translated in van Heijenoort 1967:23); Russell’s 1903 followed closely the work of Frege (cf. Russell 1903:505ff). Russell worried that “it is necessary to give sense to the couple, to distinguish the referent from the relatum: thus a couple becomes essentially distinct from a class of two terms, and must itself be introduced as a primitive idea. It would seem, viewing the idea philosophically, that sense can only be derived from some relational proposition . . . it seems therefore more correct to take an intensional view of relations, and to identify them rather with class-concepts than with classes”.[18] As shown below, Norbert Wiener (1914) reduced the notion of relation to class by his definition of an ordered pair.
# Equation - inverse
Solve for x:
7: x = 14: 1000
Result
x = 500
#### Solution:
$7: x=14: 1000 \ \\ x \ne 0 \ \\ \ \\ \dfrac{x}{7}=\dfrac{1000}{14} \ \\ \ \\ 2x=1000 \ \\ \ \\ x=500$
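As a quick numerical check (not part of the original solution), the proportion can be verified directly:

```python
# Check the proportion 7 : x = 14 : 1000, i.e. 7/x = 14/1000.
# Cross-multiplying gives 14x = 7 * 1000, so x = 7000/14.
x = 7 * 1000 / 14
print(x)  # 500.0
```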
Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you find, spelling mistakes, or suggested rephrasings of the example. Thank you!
Tips to related online calculators
Need help calculate sum, simplify or multiply fractions? Try our fraction calculator.
Do you have a linear equation or system of equations and are looking for its solution? Or do you have a quadratic equation?
# Center of mass of a sphere with cavity removed
Tags:
1. Oct 16, 2016
### 1v1Dota2RightMeow
1. The problem statement, all variables and given/known data
A solid sphere of density $ρ$ and radius $R$ is centered at the origin. It has a spherical cavity in it that is of radius $R/4$ and which is centered at $(R/2, 0, 0)$, i.e. a small sphere of material has been removed from the large sphere. What is the center of mass $R_{cm} = (x_{cm}, y_{cm}, z_{cm})$ of the large sphere, including the cavity?
2. Relevant equations
$\vec R=\frac{1}{M} \int \rho \, \vec r \, dV$, where $dV=dx\,dy\,dz$ and $\rho = dm/dV$ and $M=$ total mass
3. The attempt at a solution
$R=(1/M) \int r dm = (1/M) \int \rho r dV$
$M$= total mass = $\rho V = \rho(V_{total} - V_{cavity})$
$\vec R=\frac{1}{\rho(V_{total} - V_{cavity})} \int \rho \, \vec r \, dV$
Here, I see that the $\rho$'s cancel. But now I'm stuck wondering what $r$ is.
2. Oct 17, 2016
### BvU
Hi DRM,
Like before, $\vec r$ is the position of $dV$. The $\rho$ do not cancel; they aren't even the same thing: the first one is the $\rho$ of the material only. The second one (within the integral) is zero where the cavity is and equal to the other one where the material is.
However, the integral you are left with in this approach is cumbersome, to say the least. Personally I'm not in favour of using tricks, but for this exercise some lateral thinking might save you a lot of work. Do you know how to calculate the center of mass of two solid bodies with mass $m_1$ and $m_2$ centered at $\vec r_1$ and $\vec r_2 \ \$ ?
3. Oct 17, 2016
### 1v1Dota2RightMeow
I do, and I've seen this trick done elsewhere. But in regards to it - why is this allowed, mathematically? Why is this equivalent to finding the CM via the integrals?
For reference:
4. Oct 17, 2016
### BvU
Hedging your bets with the competition, eh ?
Nothing wrong with that; never mind, all for the good cause. So:
Anything unclear about the answer by Prasad ? All I can do is rephrase, unless you give us a clue what's the step you aren't comfortable with ...
If you have to integrate the value 0 over the small sphere you can first integrate $\rho$ (and that's the same $\rho$ as in the solid part of the big sphere) and then correct by integrating $-\rho$ over the small sphere (the cavity).
### PeroK
Another way to look at it is to take the object X as having a mass $M_X$ and a centre of mass $(x,0,0)$, then add the smaller sphere, which has a known mass $m$ and centre of mass $(R/2, 0, 0)$, and the result is the large sphere, with a known mass $M$ and centre of mass.
This gives an equation in terms of positive masses. If you move the terms for the small sphere to the other side of the equation, this is a negative term. You could interpret this as adding a negative mass, but that sounds a bit dramatic to me.
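The superposition argument above can be checked with a short script. This is only a sketch of the thread's setup (cavity of radius R/4 centered at x = R/2); the variable names are illustrative:

```python
from fractions import Fraction

# Treat the drilled sphere as a full sphere of mass M at the origin plus a
# "negative" sphere of mass -m at x = R/2.  Masses scale with volume, i.e.
# with radius cubed, so m/M = (R/4)^3 / R^3 = 1/64.
R = Fraction(1)       # measure lengths in units of R
M = Fraction(64)      # uncut sphere, in units of the cavity mass
m = Fraction(1)       # cavity mass
x_cm = (M * 0 - m * (R / 2)) / (M - m)
print(x_cm)           # -1/126, i.e. x_cm = -R/126
```

By symmetry, y_cm = z_cm = 0.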
...
## Section 3A
### Percentages: basic skills and concepts; solving percentage problems
##### Percentages: basic skills and concepts
Exercise 30 p.132
Express the percentage $$121 \%$$ as a reduced fraction
Solution
Divide it by $$100$$ to get
$$121\% = \frac{121}{100}$$
Answer: $$121\%$$ is equivalent to $$\frac{121}{100}$$
##### Percentages: basic skills and concepts
Exercise 30 p.132
Express the percentage $$121 \%$$ as a decimal
Solution
Divide it by $$100$$ to get
$$121\% = \frac{121}{100} =1.21$$
Answer: $$121\%$$ is equivalent to $$1.21$$
##### Percentages: basic skills and concepts
Exercise 51 p.133
The average sale price of a house in the US decreased from $$\$301,000$$ in February $$2008$$ to $$\$152,000$$ in February $$2013$$. Find the percentage change.
Solution
The absolute change is $$152000 - 301000 = - 149000$$
Divide the absolute change by the reference value of $$\$301,000$$:
$$\text{ relative change }= \frac{\text{absolute change}}{\text{reference value}}$$ $$= \frac{-149000}{301000} = -0.495$$
Express this value as percentage: $$-0.495 = -49.5\%$$
##### Percentages: basic skills and concepts
Exercise 51 p.133 ... variation
The average sale price of a house in the US decreased from $$\$301,000$$ in February $$2008$$ to $$\$152,000$$ in February $$2013$$. By what percentage was an average house more expensive in $$2008$$ than in $$2013$$?
Solution
While the absolute change is still $$\$149,000$$, the reference value is now $$\$152,000$$:
$$\text{ relative change }$$ $$= \frac{149000}{152000} = 0.980 =98.0 \%$$
While the price has dropped by $$49.5 \%$$, the house was more expensive by $$98.0\%$$ <----- shift of reference value
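The two computations above differ only in the reference value, which a small sketch (values from the exercise) makes explicit:

```python
price_2008, price_2013 = 301_000, 152_000

drop = (price_2013 - price_2008) / price_2008    # change relative to 2008
markup = (price_2008 - price_2013) / price_2013  # 2008 price relative to 2013

print(round(drop, 3))    # -0.495
print(round(markup, 3))  # 0.98
```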
##### Percentages: solving percentage problems
Exercise 76 p.133
The final cost of your new shoes is $$\$107.69$$. The local sales tax rate is $$6.2 \%$$. What was the retail (pre-tax) price?
Solution
The price of $$\$107.69$$ constitutes $$106.2 \% =1.062$$ of the quantity in question.
The quantity in question thus is $$107.69 / 1.062 = \$101.40$$
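The same reverse-percentage step in code:

```python
final_price = 107.69  # what you paid, tax included
tax_rate = 0.062      # 6.2% sales tax

retail = final_price / (1 + tax_rate)
print(round(retail, 2))  # 101.4
```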
## Section 3B
### Scientific notation; operations with numbers in scientific notation; scale ratios.
##### Scientific notation: recall from Section 3B
$$Q \times \text{a power of 10}$$,
$$Q$$ --- a quantity between 1 and 10 ,
a power of 10 is $$10 ^ \text{a whole number}$$
Some powers of ten have their own names:
| Power | Name |
| --- | --- |
| $$10^2$$ | hundred |
| $$10^3$$ | thousand |
| $$10^6$$ | million |
| $$10^9$$ | billion |
| $$10^{12}$$ | trillion |
##### Exercise 18 p.148
Write each of the following numbers in scientific notation:
$$4327$$ $$= 4.327 \times 10^3$$
$$984.35$$ $$= 9.8435 \times 10^2$$
$$0.0045$$ $$= 4.5\times 10^{-3}$$
$$624.87$$ $$= 6.2487 \times 10^2$$
$$0.1357$$ $$= 1.357 \times 10^{-1}$$
$$98.180004$$ $$= 9.8180004 \times 10$$
##### Exercise 20 p.148
Do the calculation and express the answer in scientific notation
$$(4 \times 10^7) \times (2 \times 10^8)$$ $$= (4\times 2) \times (10^7 \times 10^8)$$
$$= 8$$ $$\times 10^{15}$$
$$(3.2 \times 10^5) \times (2 \times 10^4)$$ $$= (3.2\times 2) \times (10^5 \times 10^4)$$
$$= 6.4$$ $$\times 10^{9}$$
$$(4 \times 10^3) + (5 \times 10^2)$$ $$= (40 \times 10^2) + (5 \times 10^2)$$
$$= (40+5)$$ $$\times 10^{2}$$ $$= (45)$$ $$\times 10^{2}$$ $$= 4.5$$ $$\times 10^{3}$$
$$(9 \times 10^{13}) \div (3 \times 10^{10})$$ $$= \frac{ 9 \times 10^{13}}{ 3 \times 10^{10} }$$ $$= 3$$ $$\times 10^{3}$$
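Each of the four calculations can be confirmed directly (a quick check, not part of the exercise text): products multiply the leading factors and add the exponents, sums first need a common exponent, and quotients subtract exponents.

```python
assert (4e7) * (2e8) == 8e15     # multiply factors, add exponents
assert (3.2e5) * (2e4) == 6.4e9
assert (4e3) + (5e2) == 4.5e3    # rewrite on a common exponent first
assert (9e13) / (3e10) == 3e3    # divide factors, subtract exponents
print("all four check out")
```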
##### Exercise 49 p.149 ... modified
Find the scale ratio for the map where 4 centimeters on the map represent 50 kilometers.
Solution.
Since centi- means $$10^{-2}$$, 4 centimeters is $$4 \times 10^{-2}$$ meters.
Since kilo- means $$10^3$$, 50 kilometers is $$50 \times 10^3$$ $$= 5 \times 10^4$$ meters.
Their ratio: $$\frac{5 \times 10^4}{4 \times 10^{-2}}$$ $$= \frac{500 \times 10^2}{4 \times 10^{-2}}$$ $$= 125 \times 10^4$$ $$= 1.25 \times 10^6$$ $$= 1,250,000$$
Answer: Scale ratio is $$1,250,000$$ to $$1$$
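The same unit conversion in code (centi- = 10⁻², kilo- = 10³), working in centimeters to keep the arithmetic exact:

```python
map_cm = 4
real_cm = 50 * 10**3 * 10**2  # 50 km -> meters (kilo = 10^3) -> centimeters (x 10^2)

scale_ratio = real_cm // map_cm
print(scale_ratio)  # 1250000
```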
## Section 3C
### Counting Significant Digits
Exercises 17-28 p.160 ... modified ... State the number of significant digits
37 .... 2 significant digits
3.7 .... 2 significant digits
0.37 .... 2 significant digits
0.037 .... 2 significant digits
0.0370 .... 3 significant digits
2.037 .... 4 significant digits
##### Exercise 49 p.160
Your speedometer reads 60 miles per hour when you are actually traveling 58 miles per hour. Find the absolute and relative errors.
$$\text{absolute error} = \text{measured value} - \text{true value}$$
$$= 60 - 58$$ $$= 2$$ mi/hr <--- your answer (absolute error)
$$\text{relative error} = \frac{\text{absolute error}}{\text{true value}} \times 100 \%$$ $$= \frac{2}{58} \times 100 \%$$ $$= 0.03448 \times 100$$ $$= 3.4\%$$ <--- your answer (relative error)
##### Exercise 60 p.161 ... modified
How far will you travel driving at a speed of $43$ miles per hour for $0.25$ hours? Use appropriate rounding rules to express the result with the correct number of significant digits.
Solution.
We find
$$43 \times 0.25 = 10.75$$.
The number $10.75$ has four significant digits, while we must keep only $2$.
We round it: $$10.75 \approx 11$$.
Answer: $11$ miles.
## Section 3E
### Better in each case but worse overall
##### Better drug. Exercise 22 p.181 ... modified
Two drugs, A and B, were tested.
|  | Women | Men |
| --- | --- | --- |
| Drug A | 10 of 200 cured | 800 of 1600 cured |
| Drug B | 108 of 1080 cured | 432 of 720 cured |
Cured by drug A: 10+800=810 (out of 1800).
Cured by drug B: 108+432=540 (out of 1800).
##### Cured by the drugs:
|  | Women | Men | Total |
| --- | --- | --- | --- |
| Drug A | 10 of 200 | 800 of 1600 | 810 of 1800 |
| Drug B | 108 of 1080 | 432 of 720 | 540 of 1800 |
##### Women cured:
|  | Women | Men | Total |
| --- | --- | --- | --- |
| Drug A | 10 of 200 | 800 of 1600 | 810 of 1800 |
| Drug B | 108 of 1080 | 432 of 720 | 540 of 1800 |
Drug A: $$10 \div 200 = \frac{10}{200}$$ $$=\frac{1}{20} = 5\%$$
Drug B: $$108 \div 1080 = \frac{108}{1080}$$ $$=\frac{1}{10} = 10\%$$
##### Women cured:
|  | Women | Men | Total |
| --- | --- | --- | --- |
| Drug A | 5% | 800 of 1600 | 810 of 1800 |
| Drug B | 10% | 432 of 720 | 540 of 1800 |
##### Men cured:
|  | Women | Men | Total |
| --- | --- | --- | --- |
| Drug A | 5% | 800 of 1600 | 810 of 1800 |
| Drug B | 10% | 432 of 720 | 540 of 1800 |
Drug A: $$800 \div 1600 = \frac{800}{1600}$$ $$=\frac{1}{2} = 50\%$$
Drug B: $$432 \div 720 = \frac{432}{720}$$ $$=\frac{6}{10} = 60\%$$
##### Men cured:
|  | Women | Men | Total |
| --- | --- | --- | --- |
| Drug A | 5% | 50% | 810 of 1800 |
| Drug B | 10% | 60% | 540 of 1800 |
##### Total cured:
|  | Women | Men | Total |
| --- | --- | --- | --- |
| Drug A | 5% | 50% | 810 of 1800 |
| Drug B | 10% | 60% | 540 of 1800 |
Drug A: $$810 \div 1800$$ $$= 0.45 = 45\%$$
Drug B: $$540 \div 1800$$ $$=0.3=30\%$$
##### Total cured:
|  | Women | Men | Total |
| --- | --- | --- | --- |
| Drug A | 5% | 50% | 45% |
| Drug B | 10% | 60% | 30% |
##### Compare two tables
The initial one was:
|  | Women | Men |
| --- | --- | --- |
| Drug A | 10 of 200 cured | 800 of 1600 cured |
| Drug B | 108 of 1080 cured | 432 of 720 cured |
We obtained:
|  | Women | Men | Total |
| --- | --- | --- | --- |
| Drug A | 5% | 50% | 45% |
| Drug B | 10% | 60% | 30% |
In the study, drug B was administered to a disproportionately large number of women; this skewed the overall results.
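The whole table can be reproduced in a few lines; the per-group rates flip direction once aggregated because the group sizes are so unequal (Simpson's paradox):

```python
trials = {
    "Drug A": {"women": (10, 200), "men": (800, 1600)},
    "Drug B": {"women": (108, 1080), "men": (432, 720)},
}

for drug, groups in trials.items():
    cured = sum(c for c, _ in groups.values())
    treated = sum(n for _, n in groups.values())
    rates = {g: f"{c / n:.0%}" for g, (c, n) in groups.items()}
    print(drug, rates, f"overall {cured / treated:.0%}")
# Drug A {'women': '5%', 'men': '50%'} overall 45%
# Drug B {'women': '10%', 'men': '60%'} overall 30%
```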
# Geometry: Urgent Help: Area of Trapezoid Circum. Abt Circle
• April 8th 2006, 07:45 PM
Yumi
Geometry: Urgent Help: Area of Trapezoid Circum. Abt Circle
This is the figure:
http://img404.imageshack.us/img404/1...helpppp6yk.gif
Thanks a lot!!
• April 8th 2006, 11:57 PM
earboth
Quote:
Originally Posted by Yumi
Hiii, this is a problem that I have encountered and I need help ASAP.
This is the figure:
http://img404.imageshack.us/img404/1...helpppp6yk.gif
Thanks a lot!!
Hello,
I've attached a diagram to demonstrate what I calculated.
Let r be the radius of the circle.
Let angle(CBA)= alpha. Then angle(DCB)=180°-alpha.
The triangle (BMO) is a right triangle. The triangle (OSC) is a right triangle.
Now use the tangent:
$\frac{r}{9}=\tan \left( \frac{\alpha}{2}\right)$
$\frac{r}{4}=\tan \left(90^\circ- \frac{\alpha}{2}\right)$
with $\tan \left(90^\circ- \frac{\alpha}{2} \right) =\frac{1}{\tan \left( \frac{\alpha}{2}\right) }$
So you get:
$\frac{r}{9}=\frac{1}{r/4}=\frac{4}{r}$
Solve for r and you'll get r = 6.
That means the height of the trapezoid is 12. Therefore the area is 156 (square units).
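A numeric check of this result. The tangent lengths 9 and 4 and the isosceles shape are assumed from the (missing) figure, so the parallel sides are 18 and 8:

```python
import math

r = math.sqrt(9 * 4)          # from r/9 = 4/r  =>  r^2 = 36
height = 2 * r                # inscribed circle: height = diameter
area = (18 + 8) / 2 * height  # mean of the parallel sides times the height
print(r, height, area)        # 6.0 12.0 156.0
```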
Greetings
EB
• April 9th 2006, 07:46 AM
ThePerfectHacker
• April 9th 2006, 07:32 PM
Yumi
I see! Thank you so much for the detailed responses, and thanks to ThePerfectHacker also :D Both get to the same answer :D Once again, thank you so much!
# Advent of Code 2020 - Day 5
This problem seems like a lot of work at first, but in reality it's just binary-to-decimal number conversion.
Picking up the provided example:
FBFBBFF RLR
If we replace the lower half [F, L] by 0, and the upper half [B, R] by 1 we get:
0101100 101
Doing the conversions:
\begin{aligned} (0101100)_2 &= 0*2^6 + 1*2^5 + 0*2^4 + 1*2^3 + 1*2^2 + 0*2^1 + 0*2^0 \\
&= 32 + 8 + 4 \\
&= 44 \\
\\
(101)_2 &= 1*2^2 + 0*2^1 + 1*2^0 \\
&= 4 + 1 \\
&= 5 \end{aligned}
So let’s try to implement the same idea using Power Query.
## Part 1
After we load the file and do some splitting
And then some replacing:
We are ready to apply this custom function that does the math for us:
(input as text) as number =>
let
numberList = Text.ToList(input),
len = List.Count(numberList)-1,
Result = List.Sum(
List.Transform(
{0..len}
, each Number.FromText(numberList{_}) * Number.Power(2, len - _)
)
)
in
Result
We are using the not-so-common function List.Transform. What this function does is return a new list built from the results of applying a function to each element of the list passed as a parameter. In essence it works like a map function.
Starting from a list that goes from 0 to the length of the input minus 1, at each step we take the digit at that position and multiply its value by 2 raised to the power of the length minus the current position, just as in the mathematical definition.
Here’s an example in pseudo-power-query-code that should clear any doubts:
numberList = {"1", "0", "1"}
List.Transform(
{0,1,2}
, each Number.FromText(numberList{_}) * Number.Power(2, 2 - _)
)
=>
{
Number.FromText(numberList{0}) * Number.Power(2, 2 - 0),
Number.FromText(numberList{1}) * Number.Power(2, 2 - 1),
Number.FromText(numberList{2}) * Number.Power(2, 2 - 2)
}
=>
{
1 * 2^2,
0 * 2^1,
1 * 2^0
}
=>
{4, 0, 1}
Invoking the function for each of the columns should lead to something like this:
Finishing this part with some DAX to get the highest seat id:
MAXX('Day 5', 8 * 'Day 5'[Decimal Row] + 'Day 5'[Decimal Column])
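For comparison outside Power Query, the same conversion is a few lines of Python, since the full boarding pass read as one binary number already equals row * 8 + column:

```python
def seat_id(boarding_pass: str) -> int:
    # F/L mean "lower half" (bit 0); B/R mean "upper half" (bit 1).
    bits = boarding_pass.translate(str.maketrans("FBLR", "0101"))
    return int(bits, 2)

print(seat_id("FBFBBFFRLR"))  # 357  (row 44 * 8 + column 5)
```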
## Part 2
Now we need to find our seat id in the data table.
The easiest solution that came to my mind was to use the EXCEPT(<LeftTable>, <RightTable>) DAX function. This gives all the rows in the left table that do not exist in the right table.
If we are able to generate all the possible ids as the LeftTable, doing the except should give us the missing value:
//Adding the column with the boarding ids
VAR _boardingIds =
    ADDCOLUMNS (
        'Day 5',
        "BoardingIds",
        8 * 'Day 5'[Decimal Row] + 'Day 5'[Decimal Column]
    )
VAR _minID = MINX ( _boardingIds, [BoardingIds] ) //calculate the minimum id
VAR _maxID = [Day 5 - Part 1] //get the maximum id
VAR _allIDs = GENERATESERIES ( _minID, _maxID ) //all the values between min and max
RETURN
EXCEPT (
_allIDS,
SELECTCOLUMNS ( _boardingIds, "Boarding IDs", [BoardingIds] )
)
## Conclusion
This puzzle was a nice showcase for the List.Transform function. It allows us to do some things in Power Query that would otherwise be difficult or very slow to implement. In the next posts on Advent of Code we are going to explore some of the other friends of List.Transform that greatly extend the normal usage of the language.
Have fun!!
# 1972 USAMO Problems/Problem 4
## Problem
Let $R$ denote a non-negative rational number. Determine a fixed set of integers $a,b,c,d,e,f$, such that for every choice of $R$,
$\left|\frac{aR^2+bR+c}{dR^2+eR+f}-\sqrt[3]{2}\right|<|R-\sqrt[3]{2}|$
## Solution
Note that when $R$ approaches $\sqrt[3]{2}$, $\frac{aR^2+bR+c}{dR^2+eR+f}$ must also approach $\sqrt[3]{2}$ for the given inequality to hold. Therefore
$$\lim_{R\rightarrow \sqrt[3]{2}} \frac{aR^2+bR+c}{dR^2+eR+f}=\sqrt[3]{2}$$
which happens if and only if
$$\frac{a\sqrt[3]{4}+b\sqrt[3]{2}+c}{d\sqrt[3]{4}+e\sqrt[3]{2}+f}=\sqrt[3]{2}$$
We cross multiply to get $a\sqrt[3]{4}+b\sqrt[3]{2}+c=2d+e\sqrt[3]{4}+f\sqrt[3]{2}$. It's not hard to show that, since $a$, $b$, $c$, $d$, $e$, and $f$ are integers, then $a=e$, $b=f$, and $c=2d$.
Note, however, that this is a necessary but insufficient condition. For example, we must also have $a^2<2bc$ to ensure the function does not have any vertical asymptotes (which would violate the desired property). A simple search shows that $a=0$, $b=2$, and $c=2$ works.
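With $a=0$, $b=2$, $c=2$ (hence, by the relations above, $e=0$, $f=2$, $d=1$), the map is $(2R+2)/(R^2+2)$. A quick numeric sweep (a check, not a proof) supports the inequality for $R \ne \sqrt[3]{2}$:

```python
cbrt2 = 2 ** (1 / 3)

def g(R):
    # candidate map with a=0, b=2, c=2, d=1, e=0, f=2
    return (2 * R + 2) / (R * R + 2)

for R in [0, 0.5, 1, 2, 5, 10, 100]:
    assert abs(g(R) - cbrt2) < abs(R - cbrt2)
print("inequality holds at all sampled points")
```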
# 2.18: Determination of the Principal Axes
We now need to address ourselves to the determination of the principal axes. Unlike the two-dimensional case, we do not have a nice, simple explicit expression similar to Equation 2.12.12 to calculate the orientations of the principal axes. The determination is best done through a numerical example.
Example $$\PageIndex{1}$$
Consider four masses whose positions and coordinates are as follows:
| M | x | y | z |
| --- | --- | --- | --- |
| 1 | 3 | 1 | 4 |
| 2 | 1 | 5 | 9 |
| 3 | 2 | 6 | 5 |
| 4 | 3 | 5 | 9 |
Relative to the first particle, the coordinates are
| M | x | y | z |
| --- | --- | --- | --- |
| 1 | 0 | 0 | 0 |
| 2 | -2 | 4 | 5 |
| 3 | -1 | 5 | 1 |
| 4 | 0 | 4 | 5 |
From this, it is easily found that the coordinates of the centre of mass relative to the first particle are ( −0.7, 3.9, 3.3), and the moments of inertia with respect to axes through the first particle are
• $$A = 324$$
• $$B = 164$$
• $$C = 182$$
• $$F = 135$$
• $$G = −23$$
• $$H = −31$$
From the parallel axes theorems we can find the moments of inertia with respect to axes passing through the centre of mass:
• $$A = 63.0$$
• $$B = 50.2$$
• $$C = 25.0$$
• $$F= 6.3$$
• $$G= 0.1$$
• $$H = −3.7$$
The inertia tensor is therefore
$\left(\begin{array}{ccc}63.0 & 3.7 & -0.1 \\ 3.7 & 50.2 & -6.3 \\ -0.1 & -6.3 & 25.0 \end{array}\right)$
We understand from what has been written previously that if $$\boldsymbol{\omega}$$, the instantaneous angular velocity vector, is along any of the principal axes, then $$\mathbf{I} \boldsymbol{\omega}$$ will be in the same direction as $$\boldsymbol{\omega}$$. In other words, if $$(l,m,n)$$ are the direction cosines of a principal axis, then
$\left(\begin{array}{ccc}A & -H & -G \\ -H & B & -F \\ -G & -F & C \end{array}\right)\left(\begin{array}{c}l\\ m \\n\end{array}\right) = \lambda \left(\begin{array}{c}l\\ m \\n\end{array}\right),$
where $$\lambda$$ is a scalar quantity. In other words, a vector with components $$l, m, n$$(direction cosines of a principal axis) is an eigenvector of the inertia tensor, and $$\lambda$$ is the corresponding principal moment of inertia. There will be three eigenvectors (at right angles to each other) and three corresponding eigenvalues, which we’ll initially call $$\lambda_1, \lambda_2, \lambda_3,$$ though, as soon as we know which is the largest and which the smallest, we'll call $$A_0 ,B_0 ,C_0$$, according to our convention $$A_0 ≤ B_0 ≤ C_0$$.
The three principal moments are the roots of the characteristic equation, obtained by setting the determinant to zero:

$\begin{vmatrix}A - \lambda & -H & -G \\-H & B-\lambda & -F \\ -G & - F & C-\lambda \end{vmatrix} = 0.$
In this case, this results in the cubic equation
$a_0 +a_1 \lambda +a_2 \lambda^2 −\lambda^3 =0,$
where
• $$a_0 =76226.44$$
• $$a_1 = −5939.21$$
• $$a_2 = 138.20$$
The three solutions for $$\lambda$$, which we shall call $$A_0, B_0, C_0$$ in order of increasing size are
• $$A_0 = 23.498256$$
• $$B_0 = 50.627521$$
• $$C_0 = 64.074223$$
and these are the principal moments of inertia. From the theory of equations, we note that the sum of the roots is exactly equal to $$a_2$$, and we also note that it is equal to $$A + B + C$$, consistent with what we wrote in Section 2.16 (Equation 2.16.2). The sum of the diagonal elements of a matrix is known as the trace of the matrix. Mathematically we say that "the trace of a symmetric matrix is invariant under an orthogonal transformation".
Two other relations from the theory of equations may be used as a check on the correctness of the arithmetic. The product of the solutions equals $$a_0$$, which is also equal to the determinant of the inertia tensor, and the sum of the products taken two at a time equals $$−a_1$$.
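The stated roots can be verified without special software by substituting each back into the determinant; a sketch in plain code (a check, not part of the text):

```python
def det3(m):
    # determinant of a 3x3 matrix given as nested lists
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

T = [[63.0,  3.7, -0.1],
     [ 3.7, 50.2, -6.3],
     [-0.1, -6.3, 25.0]]

eigenvalues = (23.498256, 50.627521, 64.074223)
for lam in eigenvalues:
    shifted = [[T[r][c] - (lam if r == c else 0.0) for c in range(3)]
               for r in range(3)]
    assert abs(det3(shifted)) < 0.1  # each 6-digit root nearly annihilates det

assert abs(sum(eigenvalues) - (63.0 + 50.2 + 25.0)) < 1e-9  # trace check
print("roots and trace check out")
```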
We have now found the magnitudes of the principal moments of inertia; we have yet to find the direction cosines of the three principal axes. Let's start with the axis of least moment of inertia, for which the moment of inertia is $$A_0 = 23.498 256$$. Let the direction cosines of this axis be $$(l_1 ,m_1 ,n_1 )$$. Since this is an eigenvector with eigenvalue 23.498 256 we must have
$\left(\begin{array}{c}63.0 & 3.7 & -0.1 \\ 3.7 & 50.2 & -6.3 \\ -0.1 & -6.3 & 25.0 \end{array}\right)\left(\begin{array}{c}l_1\\ m_1 \\n_1\end{array}\right) = 23.498256 \left(\begin{array}{c}l_1\\ m_1 \\n_1\end{array}\right)$
These are three linear equations in $$l_1, m_1, n_1$$, with no constant term. Because of the lack of a constant term, the theory of equations tells us that the third equation, if it is consistent with the other two, must be a linear combination of the first two. We have, in effect, only two independent equations, and we are going to need a third, independent equation if we are to solve for the three direction cosines. If we let $$l'=l_1 /n_1$$ and $$m'=m_1 /n_1$$, then the first two equations become
$39.501744l' + 3.7m' − 0.1 = 0$
$3.7l' + 26.701744m' − 6.3 = 0.$
The solutions are
• $$l' = − 0.019825485$$
• $$m' = + 0.238686617.$$
The correctness of the arithmetic can and should be checked by verifying that these solutions also satisfy the third equation.
The additional equation that we need is provided by Pythagoras's theorem, which gives for the relation between three direction cosines
$l_1^2+m_1^2+n_1^2 =1,$
or
$n^2_1 = \frac{1}{l'^{2} + m'^{2} + 1}$
whence
$n_1 = \pm 0.972495608.$
Thus we have, for the direction cosines of the axis corresponding to the moment of inertia $$A_0,$$
• $$l_1 = ∓ 0.019 280 197$$
• $$m_1 =±0.232121881$$
• $$n_1= ±0.972495608$$
(Check that $$l^2_1 + m^2_1 +n^2_1 = 1.$$)
It does not matter which sign you choose - after all, the principal axis goes both ways.
Similar calculations for $$B_0$$ yield
• $$l_2 = ± 0.280 652 440$$
• $$m_2 = ∓ 0.932 312 706$$
• $$n_2 = ± 0.228 094 774$$
and for $$C_0$$
• $$l_3 = ± 0.959 615 796$$
• $$m_3 = ± 0.277 330 987$$
• $$n_3 = ∓0.047 170 415$$
For the first two axes, it does not matter whether you choose the upper or the lower sign. For the third axis, however, in order to ensure that the principal axes form a right-handed set, choose the sign such that the determinant of the matrix of direction cosines is +1.
We have just seen that, if we know the moments and products of inertia $$A, B, C, F, G, H$$ with respect to some axes (i.e. if we know the elements of the inertia tensor) we can find the principal moments of inertia $$A_0, B_0, C_0$$by diagonalizing the inertia tensor, or finding its eigenvalues. If, on the other hand, we know the principal moments of inertia of a system of particles (or of a solid body, which is a collection of particles), how can we find the moment of inertia I about an axis whose direction cosines with respect to the principal axes are $$(l, m, n)$$ ?
First, some geometry.
Let O$$xyz$$ be a coordinate system, and let P$$(x, y, z )$$ be a point whose position vector is
${ \bf r} = x { \bf i} + y { \bf j } + z { \bf k}.$
Let L be a straight line passing through the origin, and let the direction cosines of this line be
$$(l, m, n )$$. A unit vector $$\bf e$$ directed along L is represented by
${ \bf e} = l { \bf i} + m { \bf j } + n { \bf k }$
The angle $$\theta$$ between $${ \bf r}$$ and $${ \bf e}$$ is found from the scalar product $${ \bf r \cdot e }$$, given by
$r \cos \theta = { \bf r \cdot e.}$
I.e.
$(x^2+y^2+z^2)^ \frac{1}{2} \cos \theta = lx +my +nz$
The perpendicular distance $$p$$ from P to L is
$p = r \sin \theta = (x^2+y^2+z^2)^ \frac{1}{2} \sin \theta.$
If we write $$\sin \theta = (1- \cos^2 \theta ) ^ \frac{1}{2}$$, we soon obtain
$p^2 =x^2 +y^2 +z^2 −(lx+my+nz)^2.$
Noting that $$l^2 =1−m^2 −n^2, m^2 =1−n^2 −l^2, n^2 =1−l^2 −m^2,$$ we find, after further manipulation:
$p^2 =l^2(y^2 +z^2)+m^2(z^2 +x^2)+n^2(x^2 +y^2)−2(mnyz+nlzx+lmxy).$
Now return to our collection of particles, and let O$$xyz$$ be the principal axes of the system. The moment of inertia of the system with respect to the line L is
$I = \sum M p^2.$
where I have omitted a subscript $$i$$ on each symbol. Making use of the expression for $$p$$ and noting that the product moments of the system with respect to O$$xyz$$ are all zero, we obtain
$I=l^2A_0 +m^2B_0 +n^2C_0. \label{eq:2.18}$
Also, let $$A, B, C, F, G, H$$ be the moments and products of inertia with respect to a set of nonprincipal orthogonal axes; then the moment of inertia about some other axis with direction cosines $$l, m, n$$ with respect to these nonprincipal axes is
$I = l^2A + m^2B + n^2C −2mnF − 2nlG − 2lmH. \label{eq:2.18.2}$
Example $$\PageIndex{2}$$: Consider a brick
We saw in Section 2.16 that the moment of inertia of a uniform solid cube of mass $$M$$ and side $$2a$$ about a body diagonal is $$\frac{2}{3} Ma^2$$, and we saw how very easy this was. At that time the problem of finding the moment of inertia of a uniform solid rectangular parallelepiped of sides $$2a, 2b, 2c$$ must have seemed intractable, but by now it is not at all hard.
$$A_0 = \frac{1}{3} M (b^2 + c^2)$$
$$B_0 = \frac{1}{3} M (c^2 + a^2)$$
$$C_0 = \frac{1}{3} M (a^2 + b^2)$$
Thus we have:
$$l = \frac{a}{(a^2 + b^2 + c^2) ^ \frac{1}{2} }$$
$$m = \frac{b}{(a^2 + b^2 + c^2) ^ \frac{1}{2} }$$
$$n = \frac{c}{(a^2 + b^2 + c^2) ^ \frac{1}{2} }$$
We obtain:
$$I = \frac{2M (b^2c^2 + c^2a^2 + a^2b^2)}{3 (a^2 + b^2 + c^2) }$$
We note:
1. This is dimensionally correct;
2. It is symmetric in $$a, b, c;$$
3. If $$a=b=c,$$ it reduces to $$\frac{2}{3} Ma^2$$.
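The brick result is easy to test: with the principal moments above and the diagonal's direction cosines, Equation \ref{eq:2.18} should reproduce $$\frac{2M (b^2c^2 + c^2a^2 + a^2b^2)}{3 (a^2 + b^2 + c^2)}$$ and reduce to $$\frac{2}{3}Ma^2$$ for a cube. A sketch using exact rational arithmetic:

```python
from fractions import Fraction

def brick_diagonal_I(M, a, b, c):
    # principal moments of a solid brick of half-sides a, b, c
    A0 = Fraction(M) * (b * b + c * c) / 3
    B0 = Fraction(M) * (c * c + a * a) / 3
    C0 = Fraction(M) * (a * a + b * b) / 3
    s = a * a + b * b + c * c
    # I = l^2 A0 + m^2 B0 + n^2 C0 with (l, m, n) = (a, b, c)/sqrt(s)
    return (a * a * A0 + b * b * B0 + c * c * C0) / s

print(brick_diagonal_I(3, 1, 1, 1))  # 2, i.e. (2/3) M a^2 with M=3, a=1
print(brick_diagonal_I(3, 1, 2, 3))  # 7, matching 2M(b^2c^2+c^2a^2+a^2b^2)/(3s)
```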
Wine glass acoustics - wavelength not what expected
Hi,
I have been conducting a lab experiment using a piece of latex glove to stimulate a tone from a wine glass that is rotating on a turntable. I used the equation $$\lambda$$=v/f (using an audio spectrometer setup to find f) to find the wavelength of the emitted tone.
We expected the top part of the glass would be a quarter of the length of the wavelength, as in an organ pipe with one closed end. What I found though, was that it was very close to half.
Does anyone know why my initial assumption was wrong?
Thanks
-Jam
Homework Helper
The air in the glass isn't vibrating like a standing wave down the length of the glass as in an organ tube, the rim of the glass is vibrating like a bowstring.
Imagine that two opposite points on the rim are stationary and the curve between them vibrates.
omg thanks, that's been annoying me for weeks!
Thank you!
ok, that leaves me with a second question;
I just did a rough calculation, and it turns out that the wavelength is about the same as the circumference of the glass, which would lead me to expect to observe two nodes and two antinodes as I moved a microphone around the glass, i.e., two 'quietest' points, and two 'loudest' points.
However, my previous research (and observation) suggests that there is in fact a quadrupole configuration, that is, there are four of each such nodes and antinodes.
Why would this be?
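The wavelength-vs-circumference comparison described above can be sketched numerically. The post gives no measurements, so the speed of sound, tone frequency, and rim diameter below are all hypothetical values, chosen only to illustrate the kind of agreement reported:

```python
import math

v = 343.0    # m/s, speed of sound in air at about 20 C (assumed)
f = 1370.0   # Hz, hypothetical measured tone
d = 0.08     # m, hypothetical rim diameter of the glass

wavelength = v / f            # lambda = v / f
circumference = math.pi * d   # rim circumference
print(wavelength, circumference)
```

With these assumed numbers the two come out within a few percent of each other, matching the observation in the post.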
# ngram v3.0.4
## Fast n-Gram 'Tokenization'
An n-gram is a sequence of n "words" taken, in order, from a body of text. This is a collection of utilities for creating, displaying, summarizing, and "babbling" n-grams. The 'tokenization' and "babbling" are handled by very efficient C code, which can even be built as its own standalone library. The babbler is a simple Markov chain. The package also offers a vignette with complete example 'workflows' and information about the utilities offered in the package.
# ngram
• Version: 3.0.4
• Author: Drew Schmidt and Christian Heckendorf
ngram is an R package for constructing n-grams ("tokenizing"), as well as generating new text based on the n-gram structure of a given text input ("babbling"). The package can be used for serious analysis or for creating "bots" that say amusing things. See details section below for more information.
The package is designed to be extremely fast at tokenizing, summarizing, and babbling tokenized corpora. Because of the architectural design, we are also able to handle very large volumes of text, with performance scaling very nicely. Benchmarks and example usage can be found in the package vignette.
## Package Details
The original purpose for the package was to combine the book "Modern Applied Statistics in S" with the collected works of H. P. Lovecraft and generate amusing nonsense. This resulted in the post Modern Applied Statistics in R'lyeh. I had originally tried several other available R packages to do this, but they were taking hours on a subset of the full combined corpus to preprocess the data into a somewhat inconvenient format. However, the ngram package can do the preprocessing into the desired format in well under a second (with about half of the preprocessing time spent on copying data for R coherency).
The package is mostly C, with the returned object (to R) being an external pointer. In fact, the underlying C code can be compiled as a standalone library. There is some minimal compatibility with exporting the data to proper R data structures, but it is incomplete at this time.
## Installation
You can install the stable version from CRAN using the usual install.packages():
install.packages("ngram")
#### Development Version
The development version is maintained on GitHub, and can easily be installed by any of the packages that offer installations from GitHub:
### Pick your preference
devtools::install_github("wrathematics/ngram")
ghit::install_github("wrathematics/ngram")
remotes::install_github("wrathematics/ngram")
## Example Usage
Here we present a few simple examples on how to use the ngram package. See the package vignette for more detailed information on package usage.
### Tokenization, Summarizing, and Babbling
Let's take the sequence
x <- "a b a c a b b"
Eagle-eyed readers will recognize this as the blood code from Mortal Kombat, but you can pretend it's something boring like an amino acid sequence or something. We can form the n-gram structure of this sequence with the ngram function:
library(ngram)
ng <- ngram(x, n=3)
There are various ways of printing the object.
ng
# [1] "An ngram object with 5 3-grams"
print(ng, output="truncated")
# a b a
# c {1} |
#
# a c a
# b {1} |
#
# b a c
# a {1} |
#
# a b b
# NULL {1} |
#
# c a b
# b {1} |
With output="truncated", only the first 5 n-grams will be shown (here there are only 5 total). To see all (in the case of having more than 5), you can set output="full".
There are several "getter" functions, but they are incomplete (see Notes section below). Perhaps the most useful of them generates a "phrase table", or a list of n-grams by their frequency and proportion in the input text:
get.phrasetable(ng)
# ngrams freq prop
# 1 a b 2 0.3333333
# 2 b a 1 0.1666667
# 3 c a 1 0.1666667
# 4 a c 1 0.1666667
# 5 b b 1 0.1666667
Finally, we can use the glory of Markov Chains to babble new sequences:
babble(ng=ng, genlen=12)
# [1] "a b b c a b b a b a c a "
For reproducibility, use the seed argument:
babble(ng=ng, genlen=12, seed=1234)
# [1] "a b a c a b b a b b a b "
At this time, we note that the seed may not guarantee the same results across machines. Currently only Solaris produces different values from mainstream platforms (Windows, Mac, Linux, FreeBSD), but potentially others could as well.
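To make the "babbling" idea concrete, here is a minimal sketch of an n-gram Markov-chain babbler, written in Python purely for illustration. It is not the package's C implementation, and its random choices will not reproduce ngram's output:

```python
import random
from collections import defaultdict

def build_table(words, n):
    """Map each (n-1)-word context to the list of words observed after it."""
    table = defaultdict(list)
    for i in range(len(words) - n + 1):
        table[tuple(words[i:i + n - 1])].append(words[i + n - 1])
    return table

def babble(words, n, genlen, seed=None):
    """Generate genlen words by walking the n-gram table as a Markov chain."""
    rng = random.Random(seed)
    table = build_table(words, n)
    out = list(rng.choice(list(table)))  # start from a random context
    while len(out) < genlen:
        nexts = table.get(tuple(out[-(n - 1):]))
        if not nexts:  # dead end: restart from a random context
            out.extend(rng.choice(list(table)))
        else:
            out.append(rng.choice(nexts))
    return " ".join(out[:genlen])

print(babble("a b a c a b b".split(), n=3, genlen=12, seed=1234))
```

Each generated word depends only on the previous n-1 words, which is exactly the Markov property the README mentions.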
### Weka-Like Tokenization
There is also a tokenizer that behaves identically to the one in the RWeka package (except the ngram one is significantly faster!). Using the same sequence x as above:
ngram::ngram_asweka(x, min=2, max=3)
## [1] "a b a" "b a c" "a c a" "c a b" "a b b" "a b" "b a" "a c" "c a"
## [10] "a b" "b b"
## Functions in ngram
| Name | Description |
|---|---|
| preprocess | Basic Text Preprocessor |
| string.summary | Text Summary |
| babble | ngram Babbler |
| Tokenize-AsWeka | Weka-like n-gram Tokenization |
| concatenate | Concatenate |
| multiread | Multiread |
| rcorpus | Random Corpus |
| ngram-class | Class ngram |
| splitter | Character Splitter |
| ngram-package | ngram: An n-gram Babbler |
| ngram-print | ngram printing |
| getseed | getseed |
| getters | ngram Getters |
| Tokenize | n-gram Tokenization |
| wordcount | wordcount |
| phrasetable | Get Phrasetable |
# Is NH_3 a gas or a solid? How do you know?
Jun 15, 2018
Well, it's a gas... how do I know? I've smelled it passing by, after seeing a student react a mixture of cations (containing ${\text{NH}}_{4}^{+}$, ${\text{Mn}}^{2 +}$, ${\text{Fe}}^{3 +}$, ${\text{Ag}}^{+}$, ${\text{Ni}}^{2 +}$, and ${\text{Al}}^{3 +}$) with $\text{NaOH}$.
The $\text{NaOH}$ reacted with the ${\text{NH}}_{4}^{+}$ (and with some of the other cations) to produce a gas that smelled like cleaning products...
$\text{NH}_4^+(aq) + \text{NaOH}(aq) \rightarrow \text{NH}_3(g) + \text{Na}^+(aq) + \text{H}_2\text{O}(l)$
It formed as a gas, since its boiling point at $\text{1 atm}$ is only $- {33}^{\circ} \text{C}$; it would spontaneously vaporize under these conditions even if it were somehow obtained as a liquid.
A bit hard to read, but this is the best phase diagram I could find:
The $P T$ projection (purple) shows that at $\text{1 atm}$ (${\log}_{10} \left(P / \text{MPa}\right) \approx - 1$) and $\text{300 K}$, we are beneath the bottom purple curve, which divides the liquid (above) from the gas (below) regions.
Hence, at $\text{1 atm}$ and $\text{300 K}$, ammonia is certainly a gas.
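The reasoning above is easy to check numerically. A sketch using the values quoted in the answer (the boiling point and the room temperature of $\text{300 K}$ are the inputs):

```python
import math

T_boil = 273.15 - 33.0   # K, normal boiling point of ammonia quoted above
T = 300.0                # K, roughly room temperature
phase = "gas" if T > T_boil else "liquid or solid"

# The phase-diagram reading: 1 atm = 0.101325 MPa, so log10(P/MPa) is about -1.
log10_P = math.log10(0.101325)

print(phase, round(log10_P, 2))  # -> gas -0.99
```

Since 300 K is well above the boiling point at this pressure, the gas phase follows immediately.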
Algebra
Question
2. This fraction has a numerator that is $$3$$ less than its denominator. The sum of its numerator and denominator is $$19$$. The fraction is ...
$$\frac{8}{11}$$
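A small brute-force check of the answer (a sketch; it simply scans candidate denominators):

```python
# numerator is 3 less than the denominator, and their sum is 19
solutions = [(d - 3, d) for d in range(1, 20) if (d - 3) + d == 19]
print(solutions)  # -> [(8, 11)], i.e. the fraction 8/11
```

Equivalently, solving n + d = 19 with n = d - 3 gives 2d = 22, so d = 11 and n = 8.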
# Find all $n\in\mathbb{N}$ for which $\frac{x^n + y^n + z^n}2$ is a perfect square, whenever $x,y,z\in\mathbb{Z}$ such that $x+y+z=0$
Find all positive integers $n$ for which $\dfrac{x^n + y^n + z^n}2$ is a perfect square, whenever $x$, $y$, and $z$ are integers such that $x + y + z = 0$.
I don't even know where to start.
• This question is problem 3 from the USAMTS 2016-17 Round 1 problem set. This question will remain locked with answers temporarily deleted until the submission deadline of 17 October 2016 has passed. – user642796 Sep 13 '16 at 11:42
I have no complete answer, but a start (as you wanted to know where to start). For $n=1$ we have that $(x^1+y^1+z^1)/2=0$ is a perfect square for all $x,y,z$ with $x+y+z=0$. So we may assume $n\ge 2$. Now choose, say, $(x,y,z)=(1,1,-2)$. Then $x+y+z=0$ and $$\frac{x^n+y^n+z^n}{2}=\frac{2+(-2)^n}{2}.$$ This can never be a perfect square for odd $n>1$, because it is negative in this case. Also for even $n$ this is rarely a perfect square, but it can happen, namely for $n=4$. This is clear, because for $n=4$ we have, with $z=-x-y$, $$\frac{x^4+y^4+z^4}{2}=\frac{2x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + 2y^4}{2}=(x^2 + xy + y^2)^2,$$ which is indeed always a perfect square.
• Actually, your example $(1,1,-2)$ shows that you have found all solutions. $9$ is the only perfect square of the form $2^n+1$. – TastyRomeo Sep 13 '16 at 9:40
• Yes, that's a good point. I was hoping that the OP would try this. – Dietrich Burde Sep 13 '16 at 9:43
• See usamts.org/Tests/Problems_28_1.pdf Problem 3. The OP is trying to get others to do their work for them. – Airdish Sep 13 '16 at 10:00
Leading on from Dietrich's answer, let's take $(x,y,z)=(1,1,-2)$ and consider even $n$ of the form $n=2k$, $k \geq 2$.
$$\frac{x^n+y^n+z^n}{2}=\frac{2+(-2)^n}{2}=1+2^{2k-1}$$
Suppose $1+2^{2k-1}$ is a perfect square. It is odd, so
$$1+2^{2k-1}=(2l+1)^2$$
$$\iff 2^{2k-1}=4l^2+4l$$
$$\iff 2^{2k-3}=l(l+1)$$
Since either $l$ or $l+1$ is odd, and the only odd divisor of $2^{2k-3}$ is $1$, it must be that $l=1$, $l+1=2$, and $k=2$.
i.e. the only cases are $n=1,4$ found by Dietrich above, who did all the hard work.
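A brute-force check over small triples supports this conclusion (a sketch; the bound 8 on $|x|,|y|$ and the range of $n$ tested are arbitrary):

```python
import itertools
import math

def is_square(k):
    """True if k is a non-negative perfect square."""
    if k < 0:
        return False
    r = math.isqrt(k)
    return r * r == k

def always_square(n, bound=8):
    """Is (x^n + y^n + z^n)/2 a perfect square for all small x, y, z with x+y+z=0?"""
    for x, y in itertools.product(range(-bound, bound + 1), repeat=2):
        z = -x - y
        s = x**n + y**n + z**n
        if s % 2 != 0 or not is_square(s // 2):
            return False
    return True

print([n for n in range(1, 9) if always_square(n)])  # -> [1, 4]
```

As expected, only $n=1$ (where the sum is always $0$) and $n=4$ (where the half-sum is $(x^2+xy+y^2)^2$) survive; every other $n$ fails already at $(x,y,z)=(1,1,-2)$.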
# Where can I find the old papers of the Math Tripos?
Is there a repository on the Internet which has the old question papers of the tripos? I am specifically interested in the papers during the 1890-1910 era, which was the era before the reforms, although I'm also interested in the other, more recent papers.
Most of the problems which I have come across from the pre-reform era are very interesting, and it would be wonderful if the actual full papers are accessible now.
It looks to me that the Internet Archive has some, e.g. this. – J. M. Sep 10 '11 at 5:15
May I ask why there are sentences like "Shew that ... " instead of "Show that ..." ? – Michel Marcus Oct 26 '11 at 7:48
Old English. – J. M. Oct 26 '11 at 7:54
This doesn't really answer your question, since you asked for online copies; but when I was an undergraduate I occasionally used to play snooker in the Cambridge Union's snooker room, which had an entire wall covered in shelves containing volumes of 19th-century Maths Tripos papers. So if you have a contact who is physically in Cambridge, that would be a place to look. – David Loeffler Oct 26 '11 at 8:57
If you are still interested, I can upload one from $1842$ – Julien Godawatta Jan 6 '14 at 0:55
There are dozens of books which contain papers and solutions. The exam used to be called the senate-house examination a while ago, hence the titles. 'Tripos' is derived from the way the exam was taken, where you had to sit on a 3-legged wooden stool and "wrangle" - argue through problems orally - with the examiners. ('Riders' are just the first few parts of a problem, which are usually simple book-work questions in order to allow the weaker candidates to gain some points on the exam.) To name a few:
1. Cambridge senate-house problems and riders, with solutions, 1875 - Greenhill
2. Cambridge senate-house problems and riders, with solutions, 1878 - Glaisher
3. Mathematical problems for Cambridge Mathematical Tripos - Wolstenholme (I remember seeing a solutions manual to this somewhere, but it might be called something weird like "Key to..." instead of "Solutions to...".)
4. Cambridge senate-house problems and riders, 1848-1851 - Ferrers
5. Cambridge senate-house problems and riders, 1843-1851 - Jameson
6. Cambridge senate-house problems and riders, 1854 - Walton
7. Cambridge senate-house problems and riders, 1857 - Walton
You can easily discover more by searching "cambridge senate house" on archive.org. Other useful terms include: "cambridge problems and riders", "cambridge senate house solutions", "problems cambridge examinations", "cambridge examples", etc.
An example search though they are mostly earlier than 1890 – Henry Aug 15 '15 at 9:25
They seem to be hard to find! Here is a link to one from 1906 which was published in the Bulletin of the AMS. The article begins with a lengthy description, but includes the actual exam papers. (Navigate to the bottom of the page to find a link to the .pdf.)
An interesting book available from Google e-books for the right price (free) can be found here. It overlaps with your specified period, and has exam papers in various subjects, including mathematics.
I think if you search through archive.org you will find some there
EDIT: These are outside your era however..
# If A,B, and C are angles of a triangle , then $e^{iA}.e^{iB}.e^{iC}=$
$\begin{array}{l}(A)\;i \\(B)\;1 \\(C)\;-i \\ (D)\;-1 \end{array}$
If A, B, and C are angles of a triangle, then $A+B+C=\pi$, so $e^{iA}.e^{iB}.e^{iC}=e^{i(A+B+C)}=e^{i\pi}=-1$
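Since the angles of a triangle sum to $\pi$, the product equals $e^{i\pi}$. A quick numerical check (a sketch; the two angles A and B are arbitrary assumed values, and C is forced by the angle sum):

```python
import cmath
import math

A, B = 1.0, 0.7        # any two angles of a triangle (assumed values)
C = math.pi - A - B    # third angle, since A + B + C = pi

prod = cmath.exp(1j * A) * cmath.exp(1j * B) * cmath.exp(1j * C)
print(prod)  # approximately (-1+0j)
```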
# GCC Assembly questions
## Recommended Posts
I've been planning to use GCC inline assembly in my cross-platform engine for speed and brevity, but after reading about many of the fundamental differences (endianness, etc) in processors, I'm not so sure about it anymore. How much alteration would some basic inline asm require for it to run on different processors? If I didn't use any x86-specific instructions and were wary of endianness, would it require a significant amount of work to port the code from x86 to a PPC or ARM?
##### Share on other sites
Well, there are no non-x86-specific instructions, so if all you use is that (i.e. don't use any), you're good to go.
Machine language really is just that: the language the processor accepts as its input. Different processors use different instruction sets. Okay, you're going to use assembly -- that's one level higher, but I don't suppose it's going to help you any. Consider "movl $42, %eax" (in AT&T syntax). What you're assuming here is: the processor has a "move hardcoded value into a register" instruction, and also that the processor has an "%eax" register. (You're furthermore assuming the register can hold the value 42. This looks like a stupid point in this example, but consider "movl $1234567890, %eax" instead.) This piece of assembly is going to produce correct machine code for any x86- or AMD64-based CPU. Maybe someone will point out a CPU for which this piece of assembly would produce a correct machine instruction too. But if you want to do anything even remotely useful with assembly, you're going to go CPU-specific. That's what assembly is: a CPU-specific language.
Oh, and the usual: If you think you're smarter than your compiler, then stop: you most likely aren't.
##### Share on other sites
You are better off looking into compiler intrinsics of one form or another for things like SSE/MMX. It gives the compiler a better idea of what you are trying to do, and it is easier to rewrite (or substitute through wrappers/macros) the C intrinsics to match other systems.
##### Share on other sites
Inline assembler must be rewritten for different processor architectures. Then again, there are only a handful you are likely to be supporting for the same "engine". You can always fall back to C for architectures you don't want to support.
Modern compilers can be coerced into producing very good assembly. Unless you deeply understand your computer at the assembly level, you are unlikely to beat it by enough to make it worth the maintenance issues that assembly brings.
Optimisation is a tricky thing. Before you start, you need to know which parts of your code are your bottleneck. Know, not guess. A profiler is one way of discovering this. 90% of the time is spent in 10% of the code, so optimising the other code will not pay off as much as concentrating on that critical portion.
Next up is to make sure that your high level algorithms are efficient. There is no point writing bubble sort in assembly - you are limited by the efficiency of the algorithm.
There are other things that you can take advantage of to improve the speed of your code without resorting to assembly. One is to learn about caches of your target system. If you can maximise the amount of your working set in cache memory, you will have performance gains.
Another avenue of research is to look into using compiler intrinsics.
After exhausting all the other options, you might think about assembly. Chances are your code has gotten fast enough in the mean time.
It depends on what you are doing, but you might be able to find libraries that have been prewritten and come already optimised. Something like physics would be a good example.
Well, I had been using vectors as a stack for my virtual machine, but I would prefer just to extend the stack and use push and pop on the physical stack. I was under the impression that inline assembly would resolve to the correct instruction for the architecture you assemble for, since practically every system has a 'mov' instruction, for example. I suppose I could just use malloc and a pointer for the VM stack...
##### Share on other sites
There is next to no reason to write assembly for purposes of optimization.
Exceptions are when dealing with special functionality, such as SIMD. Of course, such concepts are typically not portable, and bring more baggage with them, such as alignment.
Then there's other gotchas. IIRC, PPCs fault when accessing non-aligned memory, while x86 just takes access penalty. SIMD may require strict alignment, so one needs to allocate memory properly. Floating point is horror even at best, some calls might change flags. And ARM is a whole different story anyway.
It's possible, it's been done, but it's very recommended to be paid for it. For hobby development, it's not worth it.
Just porting C or C++ code to PPC and ARM is challenging enough.
##### Share on other sites
Well, I had been using vectors as a stack for my virtual machine,
Awesome
but I would prefer just to extend the stack and use push and pop on the physical stack.
Why? The VM is NOT your machine...unless they happen to have the same abi and run in the same address space. "the stack" on your machine should be different from the stack you are emulating.
I was under the impression that inline assembly would resolve to the correct instruction for the architecture you assemble for, since practically every system has a 'mov' instruction, for example.
No. This is totally incorrect. If you do not understand quite why, please feel free to do some research...additionally, are you sure you want to implement your own VM? You may be much better off using something someone else has done, such as lua.
I suppose I could just use malloc and a pointer for the VM stack...
You could, but don't. std::vector<> is a correct API wrapper around the exact same memory allocation types, and it is much better tested and less prone to dumb memory errors than raw pointers are.
##### Share on other sites
Would this be applicable?
##### Share on other sites
The engine, though currently written by a hobbyist (me), will eventually become a commercial engine with games released on Valve's Steam distribution system, so I'm not able to include anything whose license would prevent a proprietary release. However, as a seasoned Garry's Mod modder, I have a lot of experience with Lua and think it's a very wonderful language. My language will take a lot of good ideas from Lua, though hopefully leaving them as separate as possible.
Thanks for the reassurance with Vectors. My previous problem was that I was unable to copy a pointer to the vector... I even tried casting it as an int and copying that. I've been able to push/pop strings, ints, longs, bools, and bytes just fine, but when it came to pointers, I couldn't seem to get it to work properly.
##### Share on other sites
Assembly blocks don't translate between architectures, and I'm honestly not sure how many different architectures have assembler support under GCC. It would be interesting if GCC implemented LLVM assembler as a sort of "cross-platform inline assembly", but honestly there are so many things you do differently on different architectures due to alignment, endianness, and other small details that I'm not sure it would be feasible to support, much less practical to use.
# Homework Help: Electric Potential Energy concepts
1. Sep 23, 2014
### wannabee_engi
1. The problem statement, all variables and given/known data
1. What does the following equation mean and how is it analogous to the relationship between the electric field and the the electrostatic force?
$$V(\vec{r}) = \frac{U(\vec{r})}{q}$$
In general, confused about concepts, meanings and vocabulary. The naming of different terms in this section is annoying to sort out. Voltage, V, is defined as electric potential energy, but is also called electric potential, which also refers to U, which is actually the electric potential. Is this correct?
Questions I'm trying to understand:
- How are V and U different?
- Related to U: Negative charges go from low to high potential in positive E field. How does this work intuitively? They go against the direction of the E field and this brings them closer to the positive charge. Is it because if you keep them apart when they are close it takes a lot of force to do so? (just thought of this, this makes a lot of sense)
It makes sense for two positive or two negative charges, since you have to hold them there or they will want to separate like a compressed spring.
- The difference of Vf and Vi is the integral of E dot dr, dr being tangent to the E field at each point. Work is done by the electric force to move particles and this is equal but opposite to the change in electric potential. Is voltage dangerous because if let go (or circuit is connected) a lot of charge will be released?
- When V is defined from bringing a particle from infinity (i.e. infinite r makes the potential zero), how does what the particle is being brought to effect anything?
2. Relevant equations
V = U / q = kQ/r
U = kQq / r
3. The attempt at a solution
What I'm thinking so far: electric potential created by taking charge q close to another charge Q at distance r. Dividing this by q is V, the electric potential energy, which is the work per charge to bring it to that point. The electric field is what you have to work against because of the electrostatic force it exerts on charged particles.
Thanks for taking the time to read through my confusion if you do
### Konoha
I remember my confusion over U and V back in college! So don't feel bad, you aren't the only one. :)
I will start with E (electric field) then move to V electric potential and U electric potential energy.
3. Sep 24, 2014
### Konoha
As you know, according to Coulomb's law, a charge q, that is a distance r away from another charge Q, exerts the force
$$F = k \frac{q Q}{r^2}\$$, Now we want to know what is the force that q exerts on a unit charge at distance r. To find it we simply need to divide F by Q and call the results E. Meaning, $$E = \frac{F}{Q}$$. So technically Electric Field (E) is just the force on unit charge (we'll leave it here for now)
Now about electric potential, unlike electric field which is a vector quantity, electric potential (V) is a scalar, which means it can be easier to work with. Now what is electric potential? It is the amount of electric potential energy a unit charge will have if we put it at a distance r from a charge q. (you can follow electric field analogy here: E is F per unit charge (E=F/q), V is U per unit charge, so V=U/q)
another way to look at V is to imagine it as the work done in carrying a unit charge from infinity to the point a distance r from a charge q.
So remember that Electric potential energy (U) and electric potential (V) are not the same things. U would be the work done in carrying a charge Q from infinity to the point a distance r from a charge q, and V is the work done in carrying a unit charge from infinity to the point a distance r from a charge q (or the work per unit charge)
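The relation V = U/q is easy to verify numerically. A sketch (the charges and separation below are arbitrary assumed values):

```python
k = 8.9875e9                 # N m^2 / C^2, Coulomb constant
Q, q, r = 2e-6, 5e-9, 0.03   # source charge (C), test charge (C), separation (m) -- assumed

V = k * Q / r        # electric potential of Q at distance r (work per unit charge)
U = k * Q * q / r    # electric potential energy of the pair (work to bring q in)

assert abs(U / q - V) < 1e-9 * abs(V)   # U/q recovers V
print(V, U)
```

Note that V depends only on the source charge Q, while U also depends on the particular charge q being carried in.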
We usually need to know only the difference between electric potentials of two point, which can be calculated using the following equation:
$$\Delta V = V_{b} - V_{a} = - \int_a^b\vec{E}\cdot d\vec{l}$$
Now about the direction negative charges move: Imagine we have a uniform electric field in the x direction (from left to right). What will be the force on a negative charge in this field? As we said before, F = QE; since Q is negative, the force F will be in the opposite direction of E (hence from right to left), which means the charges will move from right to left, in the -x direction.
We can calculate ΔV using the above equation. Here E is in the x direction, and dl is in the -x direction (we just proved why), which means the angle between them is 180°, so E·dl will be negative. Considering the minus sign in the ΔV equation, the difference between electric potentials will be a positive number, which means the negative charges are moving from lower to higher potential.
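Here is that argument as a numerical sketch for a uniform field (the field strength and path endpoints are arbitrary assumed values):

```python
E = 5.0                    # V/m, uniform field pointing along +x (assumed)
x_start, x_end = 1.0, 0.0  # the negative charge is pushed from x = 1 m to x = 0 m

# For a uniform field along +x, the line integral of E . dl over this path is
# E * (x_end - x_start), so  delta_V = V_end - V_start = -E * (x_end - x_start).
delta_V = -E * (x_end - x_start)
print(delta_V)  # +5.0 V: the charge ends up at the higher potential
```

The positive ΔV confirms that a negative charge, moving opposite to E, ends at the higher potential.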
4. Sep 24, 2014
### Konoha
5. Sep 24, 2014
### wannabee_engi
Thank you for the response. I'm good with the directions and dot product; that makes sense. The difference between U and V is similar to that between F and E, so does this mean V is the potential of the field measured by a unit charge that doesn't affect the field? With U being the energy put into moving the specific charge Q.
Also the MIT lectures are very good.
6. Sep 24, 2014
### Konoha
yes, that's right! :)
North Carolina Math 3 EOC Post Test
### North Carolina Math 3 EOC Post Test Sample
Standard: F-BF.4a DOK: 1 1 pt
9.
What is the domain of the inverse of the function, ?
Standard: G-CO.14 DOK: 3 1 pt
15.
Circle B includes $\stackrel{⏜}{XY}$ that measures 4 feet. What is the length of Circle B's diameter, rounded to the nearest tenth?
Standard: F-BF.4b DOK: 2 1 pt
29.
Use the table below to determine if the inverse relation of the given function is also a function.
Standard: G-CO.10 DOK: 2 1 pt
32.
The point where the three perpendicular bisectors of a triangle's sides intersect is called:
Standard: N-CN.9 DOK: 3 1 pt
41.
State the possible number of real zeros for the function. Then factor each and find all zeros.
Paul Alexander Dienes
Quick Info
Born
24 November 1882
Tokaj, Hungary
Died
23 March 1952
Tunbridge Wells, Kent, England
Summary
Paul Dienes was a Hungarian mathematician who, because of his political views, had to escape from Hungary in 1920. He spent most of his career in Wales and England, was a highly effective Ph.D. supervisor, and wrote the influential book The Taylor Series (1931).
Biography
We note first that Paul Alexander Dienes was also known as Pál Sándor Dienes. He seldom used his middle name, however, and is usually known as Paul Dienes. He came from a wealthy family who were Presbyterians. He was the son of Barnabás Dienes (1852-1923), a jurist who owned local vineyards, and Ilona Pusztay (1860-1934), who was of Greek origin. Barna and Ilona Dienes were married in Debrecen on 7 October 1877. They had eight children: Klára Katalin Dienes (1878-1959); Rósza Ilona Etelka Dienes (1880-1971); Kálmán Dienes (1882-1954); Pál Sándor Dienes (1882-1952), the subject of this biography; Lajos Dienes (1885-1974); László Dienes (1889-1953); Barna Dienes (1895-1950); and Katalin Dienes (1900-1979). We note at this point that Kálmán became an engineer, Lajos became a bacteriologist, and Barna became a Presbyterian priest.
Paul Dienes was educated at the Debrecen Reformed College. This important college had been founded in 1538 and provided a good education, particularly strong in philosophy. After graduating from the Debrecen Reformed College, Dienes began studying mathematics and physics at the Pázmány Péter University in Budapest (it was later renamed the Eötvös Lóránd University). There he was taught by Lipót Fejér who also taught the student Valéria Anna Geiger (1879-1978). Valéria Geiger's [5]:-
... university work focused on studying mathematics and physics. Lipót Fejér, the doyen of Hungarian mathematics of that period, was in love with Valéria. He happened to introduce Paul Dienes to her, with the remark that Paul Dienes was a mathematical genius, an introduction that led to their marriage. Valéria began her doctoral studies of philosophy by listening to Bernát Alexander's lectures, the leading philosopher of Budapest those days, just like Pólya and many contemporary intellectuals did. She received her degree in the same ceremony as Paul Dienes in 1905 at Pázmány Péter University, Budapest. They exchanged engagement rings during that ceremony. Paul received his doctorate in mathematics. Valéria's doctorate was in philosophy as a major subject with a first minor in mathematics and a second minor in aesthetics.
While undertaking research, Dienes had spent some time in Paris studying with Émile Borel and Jacques Hadamard at Université Paris IV-Sorbonne. The degree ceremony in Budapest was held on 24 June 1905 and Dienes was awarded a doctorate for his thesis Additions to the theory of analytic functions (Hungarian). Paul and Valéria Dienes were married in December 1905. On 2 August 1907 he began teaching at the 10th District State High School, where he taught mathematics, physics, philosophy, and French. He went to Paris with his wife in 1908 and, after working with Borel and Hadamard, published the 88-page Essai sur les Singularités des Fonctions Analytiques with Gauthier-Villars in Paris in 1909. Valéria worked with the philosopher Henri-Louis Bergson (1859-1941) and became fascinated by the relations between dance and mathematics [4]:-
The couple had a common interest in epistemology and in the philosophy of mathematics and physics at a time when the university life of Paris was loud from the discussion of functional representations of physical quantities, duration, and measurable time. Einstein's definitions on simultaneity and thoughts on the relativity of length and time (1905) began to become more widely known these days in France, while Bergson's public lectures in the Collège de France went on about 'duration' and 'time'. The couple worked together on common analytical functional topics. Paul had a strong interest in the mathematical problems of the theory of relativity and vector space singularities. Reports on their work in this period were published in the 'Comptes Rendus' of the Académie des Sciences, and in several papers in Hungarian.
Their joint papers include Sur les singularités algébrologarithmiques (1909) and three papers on General theorems on algebraic and logarithmic singularity (Hungarian) in 1911 and 1912.
As a school teacher in Hungary he gave lectures aimed at parents of the school children, organised charity events to raise money to support the education of children of poor parents, and arranged field trips for his students in the areas of thermodynamics and electrical engineering. Valéria Dienes was friendly with the poet and writer Mihály Babits (1883-1941) and a close friendship developed between Paul Dienes and Babits. Dienes took part in school groups led by Babits discussing method of education. It was not only school teaching that Dienes undertook at this time for he was appointed as a docent at the University of Budapest in 1908, a docent at the University of Cluj-Napoca in 1912, and at the University of Budapest in 1916. Émile Borel had approached Dienes to see if he would publish his Budapest lectures as a monograph in the series "Collection of Monographs on the Theory of Functions, published under the direction of M Émile Borel," and Dienes' Leçons sur les singularités des fonctions analytiques was published in that collection in 1913.
You can read extracts from reviews of Leçons sur les singularités des fonctions analytiques at THIS LINK.
Paul and Valéria Dienes had two children, Gedeon Dienes (born in Budapest on 16 December 1914) and Zoltan Pál Dienes (born in Budapest on 11 September 1916). Gedeon Dienes learnt English, French, German, Swedish, Italian and Russian and became a secretary at the Foreign Office. He represented Hungary at the peace conference at the end of World War II. He later worked in the Publishing House of the Hungarian Academy of Sciences, then became interested in dance, writing articles in many languages and founding a Budapest dance company. Zoltan Dienes became a mathematician and has a biography in this archive.
Béla Kun was a Hungarian Communist revolutionary and politician who, with Soviet support, led a successful coup d'état and proclaimed the Hungarian Soviet Republic. As People's Commissar of Foreign Affairs he was the effective leader of Hungary from March 1919 until August 1919. Dienes had been a strong supporter of Béla Kun and, together with Babits, had argued for him in lectures to students and in protests in cafes. After the Hungarian Soviet Republic was formed, he became the head of the committee appointed to run the University of Budapest and also took part in the organisation of the Marx-Engels Workers' University. In mid July 1919, the Romanian army attacked Hungary, and when the Red Army failed to come to their aid, Béla Kun fled the country. Counter-revolutionaries then started to hunt down supporters of Béla Kun and execute them. Richard George Cooke (1895-1965) writes [9]:-
During the Government of Bela Kun, Dienes took an active part in educational work in relation to the University. When this Government fell in 1919, Dienes had to leave Hungary in haste, with his life in danger. He has given me entertaining and thrilling accounts of his escape, and of his activities in the first period after his exile; for example, he escaped from Hungary in a cargo boat on the Danube which was supposed to be carrying beer to Vienna, and he occupied one cask instead of the beer. He duly arrived in Vienna ...
Let us give a few more details about his escape. To avoid being captured, Dienes had hidden in a cupboard in a friend's apartment. His food was brought regularly by Sari Chylinska (1898-1992) who had studied dance with Valéria Dienes in Budapest from 1915 to 1918. She and Dienes became romantically involved. Valéria tried to arrange for Dienes to be taken out of the country and was put in touch with the captain of a river boat. He agreed to smuggle Dienes out of the country but only if he received a large sum of money. To raise this sum, all the family possessions had to be sold, including their large and very valuable library. He arrived in Vienna with nothing but the clothes he was wearing.
After Dienes arrived in Vienna in 1920 he failed to find an academic position and only managed a little film work, appearing in crowd scenes. Valéria, Gedeon, Zoltan and Sari Chylinska joined Dienes in Vienna and they lived in a Montessori children's home. Paul and Valéria agreed to divorce; Valéria and the children went to Nice in France, where they lived in a commune run by Raymond Duncan, the brother of the dancer Isadora Duncan. Paul, after contacting Émile Borel and Jacques Hadamard, made his way to Paris with Sari in 1921. He asked Hadamard if he knew of any British or American university that would like to employ him to teach as "a representative of the Paris School of Mathematicians." By coincidence, Hadamard was approached a couple of weeks later by W H Young, who asked if he knew anyone looking for a position who could teach in the Parisian style. Dienes moved to Wales and began teaching in Aberystwyth in October 1921.
In 1922 Dienes' divorce from Valéria became official and he married Sari in Neukematen, Austria on 26 July of that year. He had been looking at the fundamental ideas in the theory of relativity and published papers such as Sur la connection du champ tensoriel (1922). He had made some criticisms and Einstein wrote to him in August 1922:-
Your attack on the mathematical theories by Weyl and Eddington only touches the formulation, not however the content. Weyl had treated the second-order quantities negligently, but in a manner that is easily demonstrated to be innocuous. I am sending you a small book in which you will find on pages 48-49 Levi-Civita's and Weyl's train of thought sketched a little more precisely; there these inaccuracies are avoided, so such objections do not arise anymore.
In 1923 W H Young resigned from Aberystwyth and Dienes left to take up a lectureship in Swansea, Wales. Paul and Sari Dienes set up home in Sketty, on the outskirts of Swansea.
Evan Davies had begun his university studies in 1921 at Aberystwyth and was taught there by both Dienes and W H Young. When both left in 1923, Evan Davies followed Dienes to Swansea, where Dienes became his research advisor and directed him to work on the absolute differential calculus. In 1926 G H Hardy and Archibald Richardson approached Dienes suggesting he write a book on Taylor series. After four years' work, in 1931 The Taylor Series. An Introduction to the Theory of Functions of a Complex Variable was published. Norman Miller writes [16]:-
Professor Dienes has here performed a notable service to the mathematical world in assembling and ordering in a thoroughgoing manner the modern theories relating to Taylor series. From its title and subtitle one might suppose the book to be another elementary text book on complex function theory from the Weierstrass point of view. The title however is too modest. The book is a pioneer in its field and makes no inconsiderable demands on the maturity of its readers.
Joseph Ritt writes [19]:-
This treatise conducts the reader from the elements of real variable theory into some of the furthest reaches of complex analysis. ... It would be difficult to overestimate the value, for advanced students, of these later chapters in Dienes' book. They reduce to didactic form a large section of the recent literature on complex analysis. ... this treatise ... is the work of a distinguished authority and will hold an important place in every mathematical library.
You can read more extensive extracts from reviews of this book at THIS LINK.
In 1929, before Dienes had finished writing his book on Taylor series, he had left Swansea to take up a readership at Birkbeck College in the University of London. Dienes' two sons, Gedeon and Zoltan, spent most of the year with their mother Valéria, but would spend the summers with Paul and Sari Dienes travelling in France, Germany, Hungary, Italy and Transylvania. Dienes kept his interest in music and dance and in 1930 he and his wife became friends with the composer Michael Tippett and with the classical Indian dancer Uday Shankar, brother of Ravi Shankar. In 1937 Paul and Sari Dienes separated.
In addition to Evan Davies, Dienes supervised the Ph.D. studies of a number of outstanding students including: H S Allen, who wrote the thesis Maximum matrix rings (1942); Ralph Henstock (1923-2007), who wrote the thesis Interval Functions and their Integrals (1948); Abraham Robinson, who wrote the thesis The Metamathematics of Algebraic Systems (1949); and Paul Vermes (1897-1968), who wrote the thesis Gamma matrices and their applications to infinite series (1947). Although not formally Reuben Louis Goodstein's advisor, nevertheless he was much involved in helping Goodstein with his thesis An axiom-free equation calculus (1946). He also wrote the joint paper On the effective range of generalised limit processes (1938) with Richard George Cooke and encouraged him to write the book Infinite matrices and sequence spaces (1950) which contains a wealth of original results by Cooke and by Dienes.
Dienes was appointed to the newly created Chair of Mathematics at Birkbeck College in 1945. He became more interested in logic and wrote papers such as On ternary logic (1945) which begins as follows:-
In this paper we give the complete list of the functions of one variable with some of their properties, and lists of functions corresponding to various properties of sum, product, implication, and equivalence. The range of the variables as well as that of functional values will be 0 (false), 1/2, 1 (true). As an application we consider Frege's, Russell's and Heyting's systems in ternary logic. The proofs are mostly omitted as obvious if sometimes laborious.
Wilhelm Ackermann, reviewing this paper, writes [1]:-
It is well known that in a three-valued propositional calculus with the values 0, $\large\frac{1}{2}\normalsize$, 1 there is no unequivocal determination of the matrix system defining the logical connections due to the lack of a generally recognised interpretation. Usually, however, one will proceed in such a way that the logical functions for 0 and 1 retain the same values that they have in two-valued logic. Symmetry will be required for disjunction, conjunction and equivalence. The author now undertakes to list the various possibilities that then remain for negation, disjunction, conjunction, implication and equivalence and to group them according to their main properties. It is stated which types of conjunctions and disjunctions are associative, which disjunctions with regard to which conjunctions satisfy one or the other of the two distributive laws, and for which definition of negation, conjunction and disjunction De Morgan's formulas apply.
Dienes retired in 1948 and turned to writing poetry. The book of his poems, The Maiden And The Unicorn: A Cycle Of Poems, was published in 1954, two years after his death. His friend, the composer Michael Tippett, said of his poetry [9]:-
The first thing to remember about him was that he had a philosophical and mathematical discipline as the core of his mind, but music had an almost Schopenhauerian significance for him .... I think, therefore, that his poetry represented an attempt at an amalgam of these various sensibilities, so that the sound pattern of the verse seems to overbalance the pattern of the sense. It is possible that I, as someone with a musical discipline, and abiding interest, though no aptitude, for philosophy, can savour Dienes's poetry better than most.
Dienes died of a heart attack in March 1952. Cooke writes [9]:-
Dienes had a most charming personality, and was much loved by both his colleagues and his students.
|
# Math Help - Linearization
1. ## Linearization
Find the linearization of the function at the point (0, -1).
__________
Use the linear approximation to estimate the value of = _____________
Thanks!
2. Originally Posted by jffyx
Find the linearization of the function at the point (0, -1).
__________
there is a formula for this: $L(x,y) = f(x_0,y_0) + f_x(x_0,y_0)(x - x_0) + f_y(x_0,y_0)(y - y_0)$
here, your $(x_0,y_0) = (0,-1)$
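For reference, here is how that computation might look in code. Since the original $f$ was posted as an image, the function below is only a hypothetical stand-in ($f(x,y) = e^x y$, chosen arbitrarily); plug in your own $f$, $f_x$, $f_y$:

```python
import math

def linearize(f, fx, fy, x0, y0):
    """Build L(x, y) = f(x0, y0) + f_x(x0, y0)(x - x0) + f_y(x0, y0)(y - y0)."""
    f0, fx0, fy0 = f(x0, y0), fx(x0, y0), fy(x0, y0)
    return lambda x, y: f0 + fx0 * (x - x0) + fy0 * (y - y0)

# Hypothetical stand-in for the problem's function (the real one was an image):
f  = lambda x, y: math.exp(x) * y
fx = lambda x, y: math.exp(x) * y   # partial derivative with respect to x
fy = lambda x, y: math.exp(x)       # partial derivative with respect to y

L = linearize(f, fx, fy, 0, -1)
print(L(-0.1, -0.9))   # close to the true value f(-0.1, -0.9)
```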
Use the linear approximation to estimate the value of = _____________
Thanks!
this is given by $L(-0.1, -0.9)$ |
# Prime Numbers
→ Print-friendly version
It’s pretty simple to multiply two numbers and get another number.
$2 \times 3 = 6$
Here’s a question for you: What happens if we try to go the other way? For instance:
$15 = ? \times ?$
With a little thinking – remembering times tables, experimenting a bit – we can figure out the answer.
$15 = 3 \times 5$
What we just did is called factoring. Instead of taking two little numbers and multiplying them to get a bigger number, we took a bigger number and broke it into two little numbers.
Let’s give this one a try:
$11 = ? \times ?$
No matter how hard we think, we can never come up with two smaller numbers that multiply to $11$. The best we can do is to say $11 = 1 \times 11$, but we didn’t really break it down into anything smaller that way. When a number doesn’t have any factors besides $1$ and itself, we call it a prime number. When we can break it up, we call it a composite number. $11$ is a prime number. $15$ is a composite number.
Here’s another one to try:
$24 = ? \times ?$
With a little thinking, we might come up with this:
$24 = 4 \times 6$
But what happens if we go a step further? What if we tried to factor the factors?
$4 = ? \times ?$
$6 = ? \times ?$
Try to solve this on your own first. I’ll wait.
$4 = 2 \times 2$
$6 = 2 \times 3$
So we can write 24 like so:
$24 = 2 \times 2 \times 2 \times 3$
We can’t break these factors down any more ($2$ and $3$ are prime), so that’s as far as we can go.
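What we just did by hand can be mechanized with trial division. Here's a minimal sketch:

```python
def prime_factors(n):
    """Break n into prime factors by repeatedly dividing out the smallest factor."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains has no smaller factor: it is prime
        factors.append(n)
    return factors

print(prime_factors(24))   # [2, 2, 2, 3]
print(prime_factors(11))   # [11] -- prime: nothing smaller divides it
```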
“Wait a minute,” you might say. “I didn’t get $24 = 4 \times 6$. I got $24 = 8 \times 3$ instead. Am I wrong?”
Good point. $24 = 4 \times 6$ isn’t the only way we could have started factoring $24$. You’re not breaking it apart wrong; you’re just breaking it apart in a different way. So let’s go down that path and see what we find.
$24 = 8 \times 3$
$8 = 4 \times 2$
$4 = 2 \times 2$
We get:
$24 = (4 \times 2) \times 3 = (2 \times 2) \times 2 \times 3$
$24 = 2 \times 2 \times 2 \times 3$
Hm, this is interesting…
$24 = 4 \times 6 = 2 \times 2 \times 2 \times 3$
$24 = 8 \times 3 = 2 \times 2 \times 2 \times 3$
We get the same breakdown both times, even though we started in two different ways. Is this a coincidence?
As it turns out, it’s not. Anytime we break apart a composite number into its prime factors, no matter what path we take to get there, we’ll always arrive at the same result. In other words, every composite number has a unique prime factorization (apart from the order in which we write the factors). This fact is so important it’s called the Fundamental Theorem of Arithmetic.
Prime numbers are arguably the most fundamental building blocks of an area of math called number theory. But even as fundamental as they are, they’re also surprisingly mysterious. They’ve fascinated and puzzled people through the ages, and even today we don’t know everything about them.
Let’s dive in and explore these special numbers. We’ll start by asking: How do we find prime numbers? Can we make a list of them?
Okay, let’s try. We’ll start at the beginning, at the number $1$. $1$ is funny – it’s actually not a prime number. Remember, our definition says that a prime number’s only factors are $1$ and itself, and “$1$ and itself” in this case means “$1$ and $1$,” which doesn’t really fit the definition. It’s not composite, but it’s not prime either. So we leave out $1$ from our prime number list.
Now $2$. $2$ is the first prime number. It’s also the only even prime number. (Can you figure out why?) Then comes $3$, which is also prime. $4$ is not prime, since $4 = 2 \times 2$. But $5$ is a prime number: it’s not divisible by any of the primes smaller than itself, so we can’t break it up any further. $6$ is not prime, as we saw before: $6 = 2 \times 3$. But $7$ is prime; you can’t factor out any $2$, $3$, or $5$ (the primes smaller than $7$). $8$ is not prime; it’s divisible by $2$ as well. $9$ is not prime either, since $9 = 3 \times 3$. Neither is $10$, since it’s also divisible by $2$.
This is getting a little tiring. Every time we test a number to see if it’s prime, we have to check all the prime numbers smaller than it to see if any of them are factors. We’ve only checked the numbers up to $10$ so far, and we only have four primes: $2$, $3$, $5$, $7$. This might take an awfully long time if, for example, we were trying to see if $1,000,003$ is prime. There’s got to be a better way to find prime numbers.
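The tedious procedure we just followed — test every candidate against all smaller numbers — looks like this in code (a deliberately naive sketch; the sieve below is the better way):

```python
def is_prime(n):
    """Trial division: n is prime iff nothing from 2 to n-1 divides it."""
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True

print([n for n in range(2, 11) if is_prime(n)])   # [2, 3, 5, 7]
```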
Fortunately, we can take a couple of tactics to make our search easier. One way we can do this is by using what’s called the Sieve of Eratosthenes, named after a fellow from ancient Greece. Here’s how it works.
We start with a grid of all the numbers we want to test. (We’ll gray out $1$ because we already know it’s not prime.) For now we’ll just go up to $100$, though you can extend the grid as far as you want.
We first circle the first prime number, which we already know is $2$:
Now we count off every other number, shading them because we know they’re divisible by $2$ but they’re bigger than $2$:
Right after $2$ is a number we haven’t shaded: $3$. We circle this prime:
And then we shade every third number, thus eliminating all composite numbers divisible by $3$. We might run into a number that’s already grayed out, and that’s fine – it’s already been marked composite, and composite it shall stay.
The number right after $3$ is grayed out, which means we’ve marked it as composite. (And it’s just as we expected, since $4 = 2 \times 2$.) So we skip over it and head to the next open number: $5$. We do the same thing, circling it and shading all its multiples.
Onward we go. $6$ is grayed out, so we skip it and go to $7$. As before, we circle the prime and shade its multiples.
Now we skip over $8$, $9$, and $10$, and find that the next prime is $11$:
We could keep going like this all the way to $100$. (If we were using a bigger grid, we could go even further.) The composite numbers fall through the Sieve, and what we have left over – the circled numbers – are our primes.
But we can make this process even easier! At some point in our Sieve-making, the step of shading all the multiples of the current prime became trivial. Past $50$, all the next multiples of our primes were past the end of the sieve. So once we hit that halfway point, we could just stop and circle all the surviving numbers. We can do even better than that, though: we only need to check numbers up to the square root of the size of the sieve. (Read that again, slowly, and try to figure out why it’s true. Hint: look what happened when we marked the multiples of $11$.)
Anyway, here’s the resulting list of primes less than $100$:
$2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97$
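The sieve translates directly into code. This minimal sketch reproduces the list above; starting the shading at $n^2$ builds in the square-root shortcut, since that inner range is empty once $n^2$ passes the end of the grid:

```python
def sieve(limit):
    """Sieve of Eratosthenes: return all primes up to limit."""
    composite = [False] * (limit + 1)
    primes = []
    for n in range(2, limit + 1):
        if not composite[n]:
            primes.append(n)                    # n survived the sieve: it's prime
            for multiple in range(n * n, limit + 1, n):
                composite[multiple] = True      # shade its multiples
    return primes

print(sieve(100))   # the 25 primes below 100, ending ..., 83, 89, 97
```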
The list proceeds in skips and hops of irregular length. There’s no clear pattern to the primes – we can only guess where exactly the next one might land.
For that matter, how do we know for certain there’s a “next one” at all? Might the list just stop at some point? Is there a biggest prime? After all, it makes intuitive sense that primes should become scarcer as they get bigger.
It turns out that there are an infinite number of primes: there is no “biggest” one, because if there were, you could always find one that’s bigger. We can prove it, too!
We’ll start by assuming that there’s a biggest prime, so that if we make a list of all the primes we’ll eventually get to the end. Now let’s use this list to build an even bigger number that has to be prime. What we’ll do is multiply all the primes in our list together, and then add $1$. This new (huge) number isn’t divisible by $2$, because it’s $1$ more than a multiple of $2$; it’s not divisible by $3$, because it’s $1$ more than a multiple of $3$; it’s not divisible by $5$, because it’s $1$ more than a multiple of $5$; and so on through all the primes on our list, all the way up to the biggest prime. Therefore, our new huge number must be prime. But that makes no sense! We assumed there were no primes bigger than the last prime on our list, and now we’ve contradicted ourselves by saying there’s something bigger than the biggest. So our assumption must be wrong. There is no biggest prime.
“Okay,” you might say. “So we can just use this method to keep generating more primes, right? We can start with, say, $(2 \times 3 \times 5) + 1$, and calculate it out to $31$, and hey presto, it’s prime!”
Not necessarily. The numbers you’re talking about, where you multiply the first $n$ primes and then add $1$, are called Euclid numbers, and they’re not always prime (though they certainly can be, as in your example).
“Why not? Isn’t that what we did in our proof just now? We built a Euclid number and knew it had to be prime?”
Well, not quite. See, we only knew our new huge number was prime because we assumed that we knew what the biggest prime in the world was. But now we know that our assumption was false. So if there’s another prime between our “biggest prime” and our Euclid number, it could potentially gum up the works. Just say we multiply all the primes up to $13$ and make a Euclid number from that:
$(2 \times 3 \times 5 \times 7 \times 11 \times 13) + 1 = 30030 + 1 = 30031$
But $30031$ is not prime: $30031 = 59 \times 509$. This is an example where two primes ($59$ and $509$) between our “biggest prime” ($13$) and our Euclid number ($30031$) happened to be factors of our Euclid number.
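You can verify this with a few lines of code (a quick sketch reusing trial-division factoring):

```python
def prime_factors(n):
    """Trial-division factorization."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    return factors + ([n] if n > 1 else [])

euclid = 1
for p in [2, 3, 5, 7, 11, 13]:   # all the primes up to 13
    euclid *= p
euclid += 1

print(euclid)                        # 30031
print(prime_factors(euclid))         # [59, 509] -- composite!
print(prime_factors(2 * 3 * 5 + 1))  # [31] -- this Euclid number IS prime
```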
“Okay,” you reply. “So that algorithm doesn’t always give us primes. Is there some other algorithm that will?”
This is a very good question, and one that has baffled mathematicians for years. There have been many valiant attempts at solving this conundrum.
For instance, a mathematician named Pierre de Fermat came up with this formula:
$2^{2^n} + 1$
Fermat thought that this formula would always result in primes, no matter what $n$ you stuck into it. And the first four Fermat numbers, as they’re called, are indeed prime. (This was back in the days before calculators, so Fermat figured that all out on paper, which was a big deal.) But the formula breaks down at $n = 5$:
$2^{2^5} + 1 = 2^{32} + 1 = 4294967297 = 641 \times 6700417$
In fact, all the Fermat numbers bigger than this that we’ve calculated so far have been composite. So that didn’t work.
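A quick check confirms both the small Fermat primes and Euler's factorization of the fifth Fermat number:

```python
# Fermat numbers F_n = 2^(2^n) + 1
fermat = [2 ** (2 ** n) + 1 for n in range(6)]

print(fermat[:5])   # [3, 5, 17, 257, 65537] -- all prime
print(fermat[5])    # 4294967297
print(fermat[5] == 641 * 6700417)   # True: F_5 is composite
```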
Another such formula is this one, invented by a person named Marin Mersenne:
$2^n - 1$
Numbers in this form are called Mersenne numbers, and if a Mersenne number is prime it’s called a Mersenne prime. The first few Mersenne primes are $3$, $7$, $31$, and $127$.
Not every $n$ that we plug into the formula will result in a prime number. But it turns out we can be even more specific: if $n$ is composite, then $2^n - 1$ must also be composite. Does that mean that plugging in a prime $n$ will give us a prime Mersenne number? No, not always. If we let $n = 11$, for example, then we get
$2^n - 1 = 2^{11} - 1 = 2047 = 23 \times 89$
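A brute-force check (a small sketch) makes the pattern visible:

```python
def is_prime(n):
    """Trial division up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Composite n always gives a composite Mersenne number;
# prime n gives a Mersenne prime only sometimes.
for n in [2, 3, 5, 7, 11, 13]:
    m = 2 ** n - 1
    print(n, m, is_prime(m))
# n = 11 yields 2047 = 23 * 89 -- composite despite prime n
```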
So a prime $n$ doesn’t guarantee a Mersenne prime. Nevertheless, Mersenne primes are still very important – most record-breaking prime numbers are Mersenne primes. The biggest number so far that we know to be a prime is
$2^{57,885,161} - 1$
That is an absolutely ginormous Mersenne prime. When you calculate it out, it has over $17$ million digits. Just to get a sense for how ginormous that is, the number of atoms in the observable universe has about $80$ digits.
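You don't have to write the number out to count its digits — a base-10 logarithm does it in one step:

```python
import math

# Digits of 2^p - 1: same as digits of 2^p, since subtracting 1 could only
# change the count if 2^p were a power of 10, which it never is.
p = 57885161
digits = int(p * math.log10(2)) + 1
print(digits)   # 17425170 -- over 17 million digits
```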
The distribution of prime numbers is still at the cutting edge of mathematics today. The most recent development in the quest to find a formula for them is called the Riemann hypothesis, which is one of the greatest unsolved problems in mathematics today. The Riemann hypothesis basically says that the numbers that make a certain function equal to zero all have to be in the form $\frac{1}{2} + t\sqrt{-1}$ (for some $t$); these numbers can then be used in another formula that tells how many prime numbers are less than any given number. It might sound a bit roundabout, but if it’s proven, the Riemann hypothesis would give us a way to predict the distribution of prime numbers with remarkable accuracy. So far it’s been over $150$ years, and nobody’s proven it yet!
Prime numbers, those that can’t be broken down into smaller factors, are simple to start playing with but intriguingly complex to fully master. Who knows – maybe you’ll be the next one to discover something new about these enigmatic numbers! |
# Heron’s Formula for Area of Triangle, Proof and Example
## Heron’s Formula
Heron of Alexandria, also known as Hero, was a Greek geometer and inventor who lived around AD 62 in Alexandria, Egypt, and whose writings preserved knowledge of Babylonian, Egyptian, and Greco-Roman mathematics and engineering for posterity.
## Heron’s Formula for Area of Triangle
Metrica, Heron’s most important geometric work, was not discovered until 1896. It’s a three-volume compilation of geometric principles and formulas on areas and volumes of flat and solid forms that Heron compiled from a range of sources, some of which date back to ancient Babylon. The methods for calculating the area of various plane figures and the surface areas of common solids are listed in Volume I. A derivation of Heron’s (really, Archimedes’) formula for the area A of a triangle is included.
A = √(s(s−a)(s−b)(s−c))
where a, b, and c are the lengths of the sides of the triangle, and s is one-half the triangle’s perimeter or
s = (a + b + c)/2
## Heron’s Formula: Definition
Heron’s formula (also known as Hero’s formula) gives the area of a triangle when the lengths of all three sides are known. It is named after Hero of Alexandria, who is credited with discovering it. Unlike earlier triangle area formulas, no angles or other distances in the triangle need to be calculated first.
## Heron’s Formula: History
Heron of Alexandria penned Heron’s formula about 60 CE. He was a Greek engineer and mathematician who calculated the area of a triangle using only the lengths of its sides and went on to calculate the areas of quadrilaterals using the same method. The formula has been used to prove trigonometric laws such as the Law of Cosines and the Law of Cotangents.
## Heron’s Formula: Example
Heron’s formula is a mathematical formula that may be used to calculate the area of a triangle given its three side lengths. It can be applied to any shape or type of triangle as long as the three side lengths are known. Hero’s Formula is another name for it. Note also that it is not required to know a triangle’s angle measurements to compute its area.
According to Heron, the formula may be used to compute the area of any triangle, whether it is isosceles, equilateral, or scalene, given the sides of the triangle. Consider the triangle ABC, which has sides a, b, and c, respectively.
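As a quick illustration, here is the formula in code (a minimal sketch), checked against the 3-4-5 right triangle, whose area is also (1/2) × 3 × 4 = 6:

```python
import math

def heron_area(a, b, c):
    """Area of a triangle from its three side lengths, via Heron's formula."""
    s = (a + b + c) / 2          # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3, 4, 5))   # 6.0
```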
## Heron’s Formula: FAQs
Ques. What is Heron’s formula?
Ans. Heron’s Formula is used to calculate the area of a triangle. Area of a triangle using Heron’s Formula, A = √{s(s-a)(s-b)(s-c)}, in which a, b and c are the length of the three sides of a triangle and s being the semi-perimeter of the triangle, which is(a + b + c)/2.
Ques. Is Heron’s formula accurate?
Ans. Given the lengths of each side, Heron’s formula calculates the area of a triangle. A direct application of Heron’s formula may not be accurate if you have a very narrow triangle with two sides that are about equal and the third side is substantially shorter.
Ques. What is the formula of Semi perimeter?
Ans. The area of a scalene triangle is calculated using this formula. S (Semi perimeter) = (a + b + c)/2, where a, b, and c are the side lengths.
Ques. Can Heron’s formula be used for all triangles?
Ans. Heron’s formula for the area of a triangle is applicable to all triangles and can be used to find the area of any triangle, no matter what the side lengths are.
Ques. Was Hero of Alexandria, Heron Greek?
Ans. Heron of Alexandria, also known as Hero, was a Greek geometer and inventor who lived around AD 62 in Alexandria, Egypt, and whose writings preserved knowledge of Babylonian, Egyptian, and Greco-Roman mathematics and engineering for posterity. Metrica, Heron’s most important geometric work, was not discovered until 1896.
|
A train is traveling from New Orleans to Memphis at a constant speed of 79 mph. New Orleans and Memphis are 395.1 miles apart. How long will the train take to reach Memphis?
$R \cdot T = D$
$\frac{D}{R} = T$ we are solving for T, time
$\frac{395.1}{79} = T$
$T \approx 5$ hours |
# Wade Not In Unknown Waters: Part Three
General and Gameplay Programming
I'm going on to tell you about how programmers walk on thin ice without even noticing it. Let's speak about the shift operators <<, >>. The working principles of the shift operators are evident, yet many programmers don't even know that using them contrary to the C/C++ standard might cause undefined or unspecified behavior.
You can read the previous articles here: [1], [2].
## Excursus to the history
A bit of history first. The necessity of bit shifting operations is evident to any programmer. Anyone sooner or later faces the need to handle individual bits and bit masks. However, shift operators are much more popular among programmers than they should be. The reason is that you can multiply and divide numbers by powers of two. For example, the "X << 3" operation will multiply X by 8. In the past, the advantage of this number multiplication/division method lay in the speed of its work.
I've just got a book from the dusty shelf with a description of assembler commands for processors from 8086 to 80486. I've found a table with the number of clock cycles necessary to perform various instructions.
Multiplying a 16-bit register by a memory cell using the MUL instruction takes about 124-139 clock cycles on the 8086 processor!
A shift of a 16-bit register by N digits using the SHL instruction takes 8+4*N clock cycles on the 8086 processor. That is, it will take 72 clock cycles at worst.
You could get a noticeable speed gain by using various tricks handling bitwise operations when calculating arithmetic expressions. This is what became the reason for massively using shifts - first in assembler, and then in C and C++. The first C/C++ compilers were simple. You could get a performance gain by explicitly prompting the compiler to use a shift instead of multiplication or division instructions in certain places.
As processors were developing, shift operators were of use for a long time. On the 80486 processor, multiplication now took about 26 clock cycles. Seems like it became much better, doesn't it? But a shift operator took just 3 clock cycles at that time and again appeared to be better than multiplication.
Fortunately, most of these forced optimizations have been forgotten by now. First, compilers have become smarter and now use an optimal instruction set to calculate arithmetic expressions. Second, processors have undergone great changes too. Pipelines, branch prediction, register renaming and many other things have appeared. That's why an ordinary programmer nowadays cannot tell for sure how long the execution of a certain instruction will take. But it's clear that if some fragments of code are not ideal, you may not even notice it. The processor will split instructions into micro-instructions and start executing them in parallel. To be honest, I no longer follow exactly how it all works in there. I've come to the understanding that it's no longer reasonable to know all the subtleties, starting with the Intel Pentium processor. So, I've concluded that one should not assume one knows better than the compiler how to write optimized code using shifts and bitwise operations wherever possible. It's not necessarily true that you can make the code faster than the compiler's optimizer can, but you can tell for sure that the program will become complicated and difficult to understand in that case.
Everything said above doesn't mean that you cannot benefit from bitwise operations anymore. There are many interesting and useful tricks [3]; just don't get too fond of them.
## Undefined behavior
It all began when I decided to create more diagnostics related to undefined behavior [4] and unspecified behavior [5] in PVS-Studio. It took me rather little time and effort to create a rule to detect incorrect use of shift operators. And after that I had to stop and think it over.
It turned out that programmers are very fond of shifts. They use them in every way they can, which often leads to undefined behavior from the viewpoint of the coding standard. But theory is one thing and practice is another. Is there sense in persecuting code that has been faithfully serving you for many decades and gone through many compilers? That's a difficult question. Despite the code being incorrect, compilers adhere to some secret agreement and process it uniformly.
After pondering over it for a long time, I finally decided to leave this diagnostic rule in PVS-Studio without making any exceptions to it. If there are too many complaints from users, maybe I will change my mind. However, perhaps users will be satisfied by the capability of disabling this diagnostic or use other methods of warning suppression.
By the way, it is these painful thoughts that made me write the article. I hope that you will find the information I'm going to show you interesting and useful.
So, let's see what the C++11 standard has to say about shift operators:
The shift operators << and >> group left-to-right.
The operands shall be of integral or unscoped enumeration type and integral promotions are performed.
1. The type of the result is that of the promoted left operand. The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand.
2. The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled. If E1 has an unsigned type, the value of the result is E1 * 2^E2, reduced modulo one more than the maximum value representable in the result type. Otherwise, if E1 has a signed type and non-negative value, and E1 * 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
3. The value of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type or if E1 has a signed type and a non-negative value, the value of the result is the integral part of the quotient of E1/2^E2. If E1 has a signed type and a negative value, the resulting value is implementation-defined.
It is sad to read such texts. But don't worry - now we will study various issues by examples.
The simplest case leading to undefined behavior is the situation when the right operand has a negative value. For example:
```cpp
int A = 10;
int B = A << -5;
```
Thank God, nobody does it that way. Well, at least we haven't seen such errors after analyzing more than 70 open-source projects.
The next case is much more interesting. This is a shift by N bits where N is larger than the number of bits in the left operand. Here is a simple example:
```cpp
int A = 10;
int B = A << 100;
```
Let's see what such an error looks like in practice. The next code fragment was found in the Lib7z library:
```cpp
SZ_RESULT SafeReadDirectUInt64(ISzInStream *inStream, UInt64 *value)
{
  int i;
  *value = 0;
  for (i = 0; i < 8; i++)
  {
    Byte b;
    RINOK(SafeReadDirectByte(inStream, &b));
    *value |= ((UInt32)b << (8 * i));
  }
  return SZ_OK;
}
```
PVS-Studio's diagnostic message: V610 Undefined behavior. Check the shift operator '<<'. The right operand ('(8 * i)' = [0..56]) is greater than or equal to the length in bits of the promoted left operand. lib7z 7zin.c 233
The function tries to read a 64-bit value byte by byte. Unfortunately, it will fail if the number is larger than 0x00000000FFFFFFFF. Note the (UInt32)b << (8 * i) shift. The size of the left operand is 32 bits, while the shift takes from 0 to 56 bits. In practice, it causes the high-order part of the 64-bit value to remain filled with zeroes. Theoretically, it is undefined behavior here, and the result cannot be predicted.
This is the correct code:
```cpp
*value |= ((UInt64)b << (8 * i));
```
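To see the defect concretely, here is a small Python simulation. This is not the library's code: the byte values are made up, and the effect of the 32-bit intermediate is modeled with a mask, since in real C a shift by 32 or more bits is undefined rather than guaranteed to produce zero.

```python
# Simulating the Lib7z defect: in C, (UInt32)b << (8 * i) is computed in
# 32 bits, so for i >= 4 the shifted byte can never reach the high half
# of the 64-bit result.
MASK32 = 0xFFFFFFFF
MASK64 = 0xFFFFFFFFFFFFFFFF

def read_u64(data, wide):
    """Combine 8 bytes little-endian; `wide` picks the intermediate width."""
    value = 0
    for i, b in enumerate(data):
        shifted = b << (8 * i)
        if not wide:
            # What a 32-bit intermediate would keep. (In real C this shift
            # is undefined for i >= 4; zero is just one common outcome.)
            shifted &= MASK32
        value = (value | shifted) & MASK64
    return value

data = [0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88]  # hypothetical input

assert read_u64(data, wide=True)  == 0x8877665544332211   # correct
assert read_u64(data, wide=False) == 0x0000000044332211   # high bytes lost
```

The corrected cast to UInt64 corresponds to `wide=True`: the shifted byte is computed at the full width of the result.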
Now, is the following code correct?

```cpp
char A = 1;
int B = A << 20;
```

Yes, it is. To the left of the << operator is the A variable, which consists of only 8 bits. But the left operand will be promoted to the int type before the shift. Therefore, a value of the 'int' type can be shifted by 20 bits.
And now for the most interesting thing - shifting of negative values. Here is a simple example:
```cpp
int A = (-1) << 5; // undefined behavior
int B = (-1) >> 5; // unspecified behavior
```
We can see undefined or unspecified behavior in this code. There's no difference between them from a practical point of view. Only one conclusion is to be drawn from this case - you should not write such code.
We could finish at this point and cite a couple of examples. But unfortunately, there are two peculiarities that spoil this idealistic picture.
## The peculiarities that spoil the idealistic picture
Peculiarity N1. The old C++ language standard of 1998 avoids these cases: it says only how the << operator behaves when unsigned values are shifted, but it doesn't say anything about signed values. So this is that very case when reading the standard doesn't make the point any clearer to you: the case is simply not considered, and that's it.
So, from the viewpoint of C++ of 1998, the (-1) << 5 construct doesn't cause undefined behavior. However, it doesn't describe how it should work either.
Peculiarity N2. Programmers feel safe to shift negative values in many programs. It's hard to argue with them, as the code does work.
Let's try to find out if we should refuse implementing the new diagnostic because of the above mentioned peculiarities. We believe that we shouldn't.
The old C++ standard doesn't say anything about undefined behavior. But the new one does. It turns out that the old standard simply was not precise enough. By the way, the new C language standard (I checked the draft of June 25, 2010) also says that shifts of negative values cause undefined behavior. The conclusion is: you should eliminate incorrect code.
Now to the subject of a widespread use of dangerous shifts. They are really numerous. For example, in the JPEG library you need to fill an array with the following values:
```
11...11111111111111b
11...11111111111101b
11...11111111111001b
11...11111111110001b
....
```
This is how it is written:
```cpp
/* entry n is (-1 << n) + 1 */
static const int extend_offset[16] = {
  0,
  ((-1)<<1)  + 1, ((-1)<<2)  + 1, ((-1)<<3)  + 1, ((-1)<<4)  + 1,
  ((-1)<<5)  + 1, ((-1)<<6)  + 1, ((-1)<<7)  + 1, ((-1)<<8)  + 1,
  ((-1)<<9)  + 1, ((-1)<<10) + 1, ((-1)<<11) + 1, ((-1)<<12) + 1,
  ((-1)<<13) + 1, ((-1)<<14) + 1, ((-1)<<15) + 1
};
```
We cannot tell that the JPEG library is a bad one. This code is time-proven and has gone through various compilers.
From the standard's viewpoint, it should be rewritten in the following way:
```cpp
static const int extend_offset[16] = {
  0,
  ((~0u)<<1)  | 1, ((~0u)<<2)  | 1, ((~0u)<<3)  | 1, ((~0u)<<4)  | 1,
  ((~0u)<<5)  | 1, ((~0u)<<6)  | 1, ((~0u)<<7)  | 1, ((~0u)<<8)  | 1,
  ((~0u)<<9)  | 1, ((~0u)<<10) | 1, ((~0u)<<11) | 1, ((~0u)<<12) | 1,
  ((~0u)<<13) | 1, ((~0u)<<14) | 1, ((~0u)<<15) | 1
};
```
But it's up to you to decide whether you need such corrections. I can only advise doing so: you never know when the incorrect code may break, or with what consequences.
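A quick cross-check that the rewrite is value-preserving: on a 32-bit two's-complement machine, ((~0u) << n) | 1 produces the same bit pattern as ((-1) << n) + 1. The sketch below models the 32-bit wraparound with a mask, since Python integers are unbounded.

```python
# Verify that the signed table and the unsigned rewrite agree bit-for-bit
# on a 32-bit two's-complement representation.
MASK32 = 0xFFFFFFFF

for n in range(1, 16):
    signed_entry   = (((-1) << n) + 1) & MASK32   # original (-1 << n) + 1
    unsigned_entry = (((~0) << n) & MASK32) | 1   # the (~0u) << n form
    assert signed_entry == unsigned_entry
```

So the table contents are unchanged; the only difference is that the unsigned version has fully defined behavior under the new standards.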
We could give you other examples of negative value shifts, but they are all alike and won't be interesting to read about.
## Conclusions
1. Using bitwise operations and shifts was earlier considered a mark of a programmer's skill and allowed you to write fast code. Now it has almost no relevance. It's much more important that the code is understandable. I advise that you play with bits only when it is really necessary.
2. Expressions of the "(-1) << N" kind are now declared as incorrect and leading to undefined behavior.
3. Expressions of the "(-1) << N" kind have been used for a long time and quite often. That's why we cannot give strong arguments against using such constructs. The only arguments are the new C and C++ language standards.
4. It is up to you to decide if you should fix negative value shifts. But I do recommend doing this. Just in case, at least.
5. Diagnostic messages covering dangerous shifts will be available in PVS-Studio starting with version 4.60 which is to be released soon.
# Painted Purple
##### Stage: 3 Short Challenge Level
A wooden cube has three of its faces painted red and the other three of its faces painted blue, so that opposite faces have different colours. It is then cut into $27$ identical smaller cubes. How many of these new cubes have at least one face of each colour?
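The count can be checked by brute force. Since opposite faces have different colours, the three red faces meet at one corner; the sketch below places them at x=0, y=0, z=0 and the blue faces at x=2, y=2, z=2 (any valid colouring is equivalent to this one by symmetry).

```python
# Enumerate the 27 small cubes by their coordinates in {0, 1, 2}^3 and
# count those carrying at least one red and at least one blue face.
count = 0
for x in range(3):
    for y in range(3):
        for z in range(3):
            red  = (x == 0) or (y == 0) or (z == 0)   # touches a red face
            blue = (x == 2) or (y == 2) or (z == 2)   # touches a blue face
            if red and blue:
                count += 1

assert count == 12
```

The same answer follows from inclusion-exclusion: 19 cubes touch red, 19 touch blue, 26 touch some painted face, so 19 + 19 - 26 = 12.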
Adiabatic quantum computation (AQC) relies on the adiabatic theorem to do calculations[1] and is closely related to, and may be regarded as a subclass of, quantum annealing.[2][3][4][5] First, a complex Hamiltonian is found whose ground state describes the solution to the problem of interest. Next, a system with a simple Hamiltonian is prepared and initialized to the ground state. Finally, the simple Hamiltonian is adiabatically evolved to the complex Hamiltonian. By the adiabatic theorem, the system remains in the ground state, so at the end the state of the system describes the solution to the problem.
AQC is a possible method to get around the problem of energy relaxation. Since the quantum system is in the ground state, interference with the outside world cannot make it move to a lower state. If the energy of the outside world (that is, the "temperature of the bath") is kept lower than the energy gap between the ground state and the next higher energy state, the system has a proportionally lower probability of going to a higher energy state. Thus the system can stay in a single system eigenstate as long as needed.
Universality results in the adiabatic model are tied to quantum complexity and QMA-hard problems. The k-local Hamiltonian is QMA-complete for k ≥ 2.[6] QMA-hardness results are known for physically realistic lattice models of qubits such as [7] $H = \sum_{i} h_i Z_i + \sum_{i<j} J^{ij} Z_i Z_j + \sum_{i} \Delta_i X_i + \sum_{i<j} K^{ij} X_i X_j$ where $Z, X$ represent the Pauli matrices $\sigma_z, \sigma_x$. Such models are used for universal adiabatic quantum computation. The Hamiltonians for the QMA-complete problem can also be restricted to act on a two-dimensional grid of qubits[8] or a line of quantum particles with 12 states per particle,[9] and if such models were found to be physically realisable, they too could be used to form the building blocks of a universal adiabatic quantum computer.
In practice, there are problems during a computation. As the Hamiltonian is gradually changed, the interesting parts (quantum behaviour as opposed to classical) occur when multiple qubits are close to a tipping point. It is exactly at this point when the ground state (one set of qubit orientations) gets very close to a first energy state (a different arrangement of orientations). Adding a slight amount of energy (from the external bath, or as a result of slowly changing the Hamiltonian) could take the system out of the ground state, and ruin the calculation. Trying to perform the calculation more quickly increases the external energy; scaling the number of qubits makes the energy gap at the tipping points smaller.
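The tipping-point picture can be made concrete with a minimal single-qubit sketch (an illustration, not a full AQC simulation): interpolate H(s) = (1-s)(-X) + s(-Z), whose matrix in the {|0>, |1>} basis is [[-s, -(1-s)], [-(1-s), s]] with eigenvalues ±sqrt(s² + (1-s)²), so the gap between the ground and first excited state is smallest exactly halfway through the evolution.

```python
import math

# Spectral gap of H(s) = (1-s)(-X) + s(-Z) for a single qubit.
# Eigenvalues are +/- sqrt(s^2 + (1-s)^2), so:
def gap(s):
    return 2.0 * math.sqrt(s * s + (1.0 - s) ** 2)

# Scan the interpolation parameter and locate the minimum gap.
gaps = [(gap(i / 10.0), i / 10.0) for i in range(11)]
min_gap, s_min = min(gaps)

assert s_min == 0.5                            # the "tipping point"
assert abs(min_gap - math.sqrt(2.0)) < 1e-12   # gap shrinks from 2 to sqrt(2)
assert abs(gap(0.0) - 2.0) < 1e-12 and abs(gap(1.0) - 2.0) < 1e-12
```

For many qubits the analogous minimum gap typically shrinks much faster with system size, which is the scaling problem described above.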
## D-Wave quantum processors
The D-Wave One is a device made by the Canadian company D-Wave Systems, which describes it as performing quantum annealing.[10] In 2011, Lockheed Martin purchased one for about US\$10 million; in May 2013, Google purchased a 512-qubit D-Wave Two.[11] The question of whether the D-Wave processors offer a speedup over a classical processor is still open. Tests performed by researchers at USC, ETH Zurich, and Google have so far found no evidence of a quantum advantage.[12][13]
## Notes
# Bounding a prime number based equation
by Nelphine
Hello all! I realize I am new to online math forums, so I'm probably breaking a few etiquette rules (and possibly more important rules too; if so, please let me know and I'll fix what I can). I am working on a math problem, and I am stuck on bounding a particular equation. I'm trying to bound a series of equations, each based on the first k primes greater than 3:

k = 1: f(x) = x - x*2/5
k = 2: f(x) = x - x*2/5 - x*2/7 + x*4/35
k = 3: f(x) = x - x*2/5 - x*2/7 - x*2/11 + x*4/35 + x*4/55 + x*4/77 - x*8/385

and so on. The first wrinkle comes from the fact that x will always be an integer, and my answer must always be a positive integer. For example, if x were 103, then in the first equation (-x*2/5) would seem to be -41. However, the answers will not be completely evenly spread (we can't just use the floor or ceiling). Specifically, if x were 103, then in the first equation (-x*2/5) could be -40, -41, or -42. The value changes for each value of x.

My concern comes from the third equation (and later equations, since we get more and more terms as we continue adding primes). Continuing the example, if x were 103, then:

-x*2/5 could be -40, -41, or -42.
-x*2/7 could be -28, -29, or -30.
-x*2/11 could be -18, -19, or -20.
x*4/35 could be 8, 9, 10, 11, or 12.
x*4/55 could be 4, 5, 6, 7, or 8.
x*4/77 could be 4, 5, 6, 7, or 8.
-x*8/385 could be 0, -1, -2, -3, -4, -5, -6, -7, or -8.

If I simply assume the largest possible answer for each term, then I end up in a situation where my answer is a negative integer (which I know it cannot be). So how do I put a bound on this equation such that I can ensure I'll always get a positive solution? As an additional note, this equation is equivalent to (at least) one other equation, which is much easier to manipulate and put bounds on.
However, since this is the original equation I am working with, I need to make sure any bounds have their basis within the original equation, and then transfer them to the other equation, in order to actually finish the problem.
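The term ranges quoted above can be reproduced by brute force. The sketch below assumes each term -x*2/p counts integers in [1, x] lying in two residue classes mod p (here {1, p-1}; the post doesn't specify which classes, so this is an illustrative choice), and checks that the fully sieved count stays positive.

```python
# Hypothetical interpretation of the terms: -x*2/p counts the integers in
# [1, x] congruent to 1 or p-1 mod p; f(x) counts integers avoiding every
# such class, which inclusion-exclusion approximates by x * prod(1 - 2/p).
def in_classes(n, p):
    return n % p in (1, p - 1)

x, primes = 103, [5, 7, 11]

# First term: exact count vs the real-valued estimate 2*103/5 = 41.2.
term5 = sum(1 for n in range(1, x + 1) if in_classes(n, 5))
assert term5 == 41  # lands inside the post's quoted range -40..-42 (negated)

# The sieved count itself: integers avoiding the classes for every prime.
count = sum(1 for n in range(1, x + 1)
            if not any(in_classes(n, p) for p in primes))
assert count > 0    # the quantity being bounded is indeed positive
```

Comparing such exact counts against the real-valued terms gives the per-term error that the question is asking to bound.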
I can't make sense of the statement that if x = 103 then (-x*2)/5 could be -40, -41, or -42, since 103*2/5 = 41.2. If you explain the problem in more detail to avoid confusion, you stand a better chance of getting an answer to the question.
## Bounding a prime number based equation
You are looking for a way to determine a more restricted bound on {m,n,o,p,...}. Let's call them m(i). As far as I understand, m(i) appear in the equations f(x) = y - Sum[ for 3 < prime(p) < p(n), 2^j * [Floor[y / Product(p(1), p(2), ..., p(j))] + m(i)], and 0 <= m(i) <= 2^j. You attempt to show how the occurrence of bad results can limit the value for "o" to > 1 in the third-to-last paragraph of your last post, but I can't see how to determine good results from bad results, or whether the result for m = 0, n = 2, o = 0 is any different from the result for m = 0, n = 0, o = 2. Accordingly, I need a better understanding of your problem to help you.
My attempt at a proper formula, although I know it's not quite right: given k, let m = [p(k)^2 - p(k)]/6, where p(k) = the kth prime number; then: f(k) = m + sum [ from i = 1 to k, sum ( from j = 3 to k, (-1)^i * 2^i * floor[m / product ( from L = j to k, p(L) )] + n(i,j) ) ] where n(i,j) is an element of the set {0,1,...,2^i}. I think there's still something wrong with the product part of it (specifically what L should range over), sigh... (L should range over i elements, but how do I formalize which elements? Since, for instance, specific terms could be p(3)*p(9) if i = 2, or p(4)*p(6)*p(7) if i = 3.)
Another thing that has been pointed out to me as wrong with that formula: I forgot to include a set of brackets containing [2^i * floor[m / product ( from L = j to k, p(L) )] + n(i,j)], as both of those terms should be affected by the (-1)^i.
# What is the Spanish Homophonic Group?
From the math.stackexchange question: The homophonic group: a mathematical diversion:
By definition, English words have the same pronunciation if their phonetic spellings in the dictionary are the same. The homophonic group H is generated by the letters of the alphabet, subject to the following relations: English words with the same pronunciation represent equal elements of the group. Thus be=bee, and since H is a group, we can conclude that e=1 (why?). Try to determine the group H.
This is an exercise from Michael Artin's Algebra on, well, abstract algebra.
In this exercise for the English language, words are equal if they are homophones, kind of like a formalisation of the joke that sin(x)/n=6. So in English:
• bee=be → This implies e=1 by cancellation of b and e.
• buy=by → This implies u=1 by cancellation of b and y.
• rase=raze → This implies s=z by cancellation of r, a and e.
• canvass = canvas → This implies s=1 by cancellation of c,a,n,v,a and s. By canvass=canvas and rase=raze, we have s=z=1.
Eventually, all 26 English letters will equal 1. Apparently, this was done for French and Czech.
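The cancellation steps above can be sketched in code. This is only a model of the informal argument in the text (strip the common prefix and suffix of a homophone pair; a lone leftover letter equals the identity, and a one-letter-vs-one-letter leftover identifies the two letters); it is not a general algorithm for computing the quotient group.

```python
# Union-find over the letters plus a synthetic identity element "1".
parent = {c: c for c in "abcdefghijklmnopqrstuvwxyz1"}

def find(c):
    while parent[c] != c:
        c = parent[c]
    return c

def union(a, b):
    parent[find(a)] = find(b)

def cancel(w1, w2):
    """Strip the common prefix and suffix of a homophone pair."""
    while w1 and w2 and w1[0] == w2[0]:
        w1, w2 = w1[1:], w2[1:]
    while w1 and w2 and w1[-1] == w2[-1]:
        w1, w2 = w1[:-1], w2[:-1]
    return w1, w2

for a, b in [("bee", "be"), ("buy", "by"), ("rase", "raze"),
             ("canvass", "canvas")]:
    r1, r2 = cancel(a, b)
    if len(r1) <= 1 and len(r2) <= 1:      # only single-letter leftovers
        union(r1 or "1", r2 or "1")

assert find("e") == find("1")                  # bee = be
assert find("u") == find("1")                  # buy = by
assert find("s") == find("z") == find("1")     # rase = raze, canvass = canvas
```

Feeding in a large enough homophone list collapses all 26 classes into the identity, matching the claim for English.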
What then is the analogous group for Spanish? Equivalently, which Spanish letters won't equal 1?
This question may have different answers depending on a number of factors, including:
• the dialect(s) of Spanish you consider
• which words you include
• which letters you consider distinct
• how narrow a phonemic/phonetic transcription you use for equivalence
Abusing a combination of interpretations of these rules, we may be able to get Spanish down to the trivial group {e}, but let's see how far we can get with a minimal number of assumptions.
Note: all rules appendixed with `dup` duplicate other equivalences and aren't strictly necessary. The overlap is maintained here in case an earlier rule is disputed.
### Assumptions
To start off, let's assume:
• distinción
• no yeísmo
• letters with diacritics are distinct
• only consider words which appear in the DLE
• only consider phonetic transcriptions or equivalent pronunciations described/implied by the RAE in its publications (DLE, DPD etc)
### Relations
0. Silent letters

When it doesn't appear in the digraph ch, h is silent. Thus, for example:
1. ha = a ⇒ h=1
``````{a,á,b,c,d,e,é,f,g,[h,1],i,í,j,k,l,m,n,ñ,o,ó,p,q,r,s,t,u,ú,ü,v,w,x,y,z}
``````
1. Identical sounds

Some letters are pronounced identically (in certain contexts), for example:
1. bote = vote ⇒ b=v
2. agito = ajito ⇒ g=j
3. encima = enzima ⇒ c=z
4. cappa = kappa ⇒ c=k
5. samurái = samuray ⇒ i=y (1.6)
6. Diacritics
• cual=cuál ⇒ a=á
• el=él ⇒ e=é
• si=sí ⇒ i=í
• como=cómo ⇒ o=ó
• tu=tú ⇒ u=ú
``````{[a,á],[b,v],[z,c,k],d,[e,é],f,[g,j],[h,1],[i,í,y],l,m,n,ñ,[o,ó],p,q,r,s,t,[u,ú],ü,w,x}
``````
1. Old orthographies
1. quilo = kilo ⇒ qu=k ⇒ cu=k (3.1) ⇒ u=1 (1.4)
2. México = Méjico ⇒ x=j
``````{[a,á],[b,v],[z,c,k,qu],d,[e,é],f,[g,j,x],[h,1],[i,í,y],l,m,n,ñ,[o,ó],p,q,r,s,t,[u,ú],ü,w}
``````
1. Latinisms
1. cuórum = quorum ⇒ c=q
2. sub iudice = sub judice ⇒ i=j
``````{[a,á],[b,v],[z,c,k,q],d,[e,é],f,[g,x,j,i,í,y],[h,u,ú,1],l,m,n,ñ,[o,ó],p,r,s,t,ü,w}
``````
1. Unnativised orthographies
1. huincha = wincha ⇒ hu=w ⇒ u=w (0.1) ⇒ w=1 (2.1)
2. wolframio = volframio ⇒ w=v ⇒ v=1 (4.1)
3. detall = detal ⇒ ll=l ⇒ l=1
4. sunní = suní ⇒ nn=n ⇒ n=1
5. judo = yudo ⇒ j=y `dup 1.5, 3.2`
``````{[a,á],[z,c,k,q],d,[e,é],f,[g,x,j,i,í,y],[l,h,b,v,w,u,ú,n,1],m,ñ,[o,ó],p,r,s,t,ü}
``````
1. Greek consonant clusters
1. gneis = neis ⇒ gn=n ⇒ g=1
2. psicología = sicología ⇒ ps=s ⇒ p=1
3. cneoráceo = neoráceo ⇒ cn=n ⇒ c=1
4. mnemónica = nemónica ⇒ mn=n ⇒ m=1
``````{[a,á],d,[e,é],f,[z,c,k,q,p,m,g,x,j,i,í,y,l,h,b,v,w,u,ú,n,1],ñ,[o,ó],r,s,t,ü}
``````
1. Reduced consonant clusters (prefixes)
1. substancia = sustancia ⇒ subs=sus ⇒ b=1 `dup`
2. transalpino = trasalpino ⇒ trans=tras ⇒ n=1 `dup`
3. consciencia = conciencia ⇒ cons=con ⇒ s=1
4. postmoderno = posmoderno ⇒ post=pos ⇒ t=1
``````{[a,á],d,[e,é],f,[z,c,k,q,p,m,g,x,j,i,í,y,l,h,b,v,w,u,ú,t,n,1],ñ,[o,ó],r,s,ü}
``````
1. Alophones
1. huaca = guaca ⇒ hu=gu ⇒ h=g ⇒ g=1 (0.1) `dup`
2. huemul = güemul ⇒ hu=gü ⇒ hu=ü (5.1) ⇒ u=ü (0.1) ⇒ ü=1 (2.1)
3. excusa = escusa ⇒ xc=sc ⇒ x=s
4. envasar = embasar ⇒ nv=mb ⇒ n=m (1.1) `dup`
``````{[a,á],d,[e,é],f,[z,c,k,q,p,m,g,x,s,j,i,í,y,l,h,b,v,w,u,ú,ü,t,n,1],ñ,[o,ó],r}
``````
1. Synalepha
1. contraalmirante = contralmirante ⇒ aa=a ⇒ a=1 (see also bezaar > bezar etc)
``````{d,[e,é],f,[a,á,z,c,k,q,p,m,g,x,s,j,i,í,y,l,h,b,v,w,u,ú,ü,t,n,1],ñ,[o,ó],r}
``````
So, with our initial conditions we can reduce the alphabet down to the free group on 6 generators:
``````A: {1,d,e,f,ñ,o,r}
``````
# B. Crude loanwords and abbreviations/acronyms
Including crude loanwords (those that appear in italics in the DLE) which have nativised doublets, we can gain a few more relations:
1. Further loans
1. sioux = siux ⇒ o=1
2. soufflé = suflé ⇒ ouf=u (9.1) ⇒ uf=u ⇒ f=1
3. toffee = tofe ⇒ fe=1 ⇒ e=1 (9.2)
``````{[e,é,o,ó,f,a,á,z,c,k,q,p,m,g,x,s,j,i,í,y,l,h,b,v,w,u,ú,ü,t,n,d,1],ñ,r}
``````
Now, finally we come to rr. First, the following lemma on hyphens:
1. Hyphens
1. fino-ugrio = finoúgrio ⇒ -=1
Now:
1. Acronyms and abbreviations
1. CD-ROM = cederrón (10.1) ⇒ CDROM=cederrón ⇒ r=rr ⇒ r=1
2. Letters
1. r = erre ⇒ r=rr ⇒ r=1 `dup 11.1`
2. c = ce ⇒ e=1 `dup 9.2`
Which leaves us with:
``````A,B: {1,ñ}
``````
The free group on one generator.
Assuming a different dialect and relaxing our restriction on RAE-sanctioned pronunciations (i.e. including pronunciations it recognises, but admonishes as not belonging to 'habla culta'), we can reduce the group without relying on italicised loanwords or abbreviations (B):
# C. Andalusia
We revise our assumptions. Note the following changes only add further equivalences, and do not negate existing ones:
1. ...
1. ~~distinción~~ seseo
• seda = ceda = zeda ⇒ s=c=z `dup`
2. ~~no yeísmo~~ yeísmo
• arrollo = arroyo ⇒ ll=y `dup`
3. elision of intervocal d:
• cantador = cantaór ⇒ d=1 (1.6)
4. elision of terminal r and d:
• comer = `[ko'me]` = comed ⇒ r=d ⇒ r=1 (12.3)
We thus achieve the free group on 4 generators:
``````A,C: {1,e,f,ñ,o}
``````
# D. South America
Although the RAE proscribes pronouncing formal examples of hiatus as diphthongs in 'habla esmerada', it does note that this occurs even in educated speech in Mexico and other South American countries. Thus we might also assume the following equivalences:
1. Hiatus > diphthong
1. noroeste = norueste ⇒ o=u (e.g. toalla > [ˈtwaja]) ⇒ o=1
2. óleo = olio ⇒ i=e (e.g. beatitud > [bʝatiˈtuð]) ⇒ e=1
1. Hypercorrection
1. buganvilla = buganvilia ⇒ ll=li ⇒ l=i ⇒ l=1 `dup`
Thus we could alternatively reduce the group to:
``````A,D: {1,d,f,ñ,r}
``````
# E. Other alophones
As opposed to using italicised loanwords or a specific dialect, we could consider other alophonic equivalences not explicitly stated by the RAE:
1. Further Alophones
1. icnita = ignita ⇒ c=g (/k/ voiced approximant [ɣ] before a voiced consonant) `dup`
2. yezgo = yedgo ⇒ z=d (/θ/ voiced [ð] before a voiced consonant) ⇒ d=1
3. zafra = `[ˈθavɾa]` ~ `[ˈθaβɾa]` = zabra ⇒ f=b (/f/ voiced [v] before a voiced consonant, [v] allophone of /b/) (e.g. afgano = [avˈgano]) ⇒ f=1
4. desrabar = derrabar (p.50) ⇒ sr=rr ⇒ s=r ⇒ r=1. Elision of 's' in the consonant cluster 'sr' and fricative realisation of 'r', i.e. 'r' > 'rr' (the fricative is interpreted by native speakers as allophonic to the trill, never to the tap).
(Note: the RAE itself claims sr=srr)
This would leave us with:
``````A,E: {1,e,o,ñ}
``````
Note: if we assume a seseo dialect we must omit rule 14.2 since the voiced seseo realisation of `/θ/` is `[z]`, not `[ð]`.
# Ñ
The existence of the following pair:
pergeño, pergenio
And given /n/ has a palatalized alophone [ɲ] ("ñ") before palatals ([ʎ], [j], [ʝ], [dʒ]), makes it very tempting to try and find a way to reduce ñ to 1, but so far I haven't been able to find a convincing example.
• @ukemi ah, yes, I didn't think of prefixation. Here the removal of one will modify the pronunciation but with even less of a distinction as cree -> cre, although technically both /b/ are pronounced (consider how tiny the difference between the sequences obio, obvio and obpio. There is a difference, but in rapid speech it may be imperceptible). In that case, we may consider B=V=1 and possibly even P=1, but I don't think further extension is possible. Aug 16, 2018 at 16:17
# Difference between spin and polarization of a photon
I understand how one associates the spin of a quantum particle, e.g. of a photon, with intrinsic angular momentum. And in electromagnetism I have always understood the polarization of an EM wave as the oscillations of the E and M field, not necessarily being aligned with the direction of propagation of the wave.
Questions:
• But when one talks about the polarization of a photon in Quantum Mechanics, how does it really differ from its spin?
• Are they somehow related and what's the physical idea behind photon polarization in contrast to photon-spin? Feel free to use mathematical reasonings as well if you see it fit!
The short answer is that the spin states of a photon come in two kinds, based on helicity, how the circular polarization tracks with the direction of the photons momentum. You can think of them as circularly polarized in the sense that we can define the relative relationship between the different polarizations the same way we do for classical electromagnetic waves (even though a single photon is not a classical electromagnetic wave), but we'll use the same math and the same terminology.
So I'll talk about polarization of classical electromagnetic waves just because you've already seen it. Imagine a wave travelling in the $z$ direction with the electric field always pointing in the same direction, say $\pm x$. This is called a linearly polarized wave. Same if the wave traveled in the $z$ direction and the electric field was in the plus or minus y direction. If those two waves were in phase and had the same magnitude, then their superposition would be a wave that oscillates at the same frequency/wavelength as the previous waves, and is still linearly polarized but this time not in the $x$ or $y$ direction but instead in the direction $45$ degrees (halfway) between them. Basically if the electric field always points in plus or minus the same direction, then that's linear polarization, and it could in theory be in any direction by adjusting the relative magnitude of an $x$ polarized one and a $y$ polarized one (that are in phase with each other).
OK, what if they aren't in phase? Say they are a quarter of a period out of phase: then when the x component is big the y component is zero, so the field points entirely in the x direction; later it points entirely in the y direction, and so its direction moves in a circle (if the out-of-phase fields in the x and y directions have the same magnitude, the head moves in a circle; otherwise it traces an ellipse). If instead you put them three quarters of a period out of phase, the field will go around the circle in the opposite direction. Waves where the head of the electric field moves in a circle are called circularly polarized waves.
OK, that's it for classical waves. You could discuss how photons make up classical waves, but that's not really what the question is about. The question is about spin for photons. And spin states for the photon come in two kinds, and the names for the positive spin $|+\hbar\rangle$ and the negative spin $|-\hbar\rangle$ are plus $|+\rangle$ and minus $|-\rangle$ and you can treat them just like the circularly polarized states.
Now we're going to steal some math and some terminology. Think of multiplying by $i$ as changing the phase of the wave by a quarter period. Then we build up one circular polarization as $X+iY$ and the other as $X+i^3Y=X-iY$, so given the two circular polarizations you see that we can add them to get linearly polarized states: $|+\rangle + |-\rangle$ gives one linearly polarized state and $-i(|+\rangle - |-\rangle)$ gives a linearly polarized state orthogonal to the first. We can borrow all the math and terminology from the classical waves, and associate the spin states of the photon with the right and left circularly polarized waves.
We are stealing the math and stealing the terminology, but the fact is that we have two vectors $|+\rangle$ and $|-\rangle$ and they span a (complex) two space of possibilities and the basis $$\left\{(|+\rangle + |-\rangle), -i(|+\rangle - |-\rangle) \right\}$$ would work just as well. We could also use $$\left\{((|+\rangle + |-\rangle) - i(|+\rangle - |-\rangle)),((|+\rangle + |-\rangle) +i(|+\rangle - |-\rangle))\right\}$$ which are two more linearly polarized states. Mathematically the spin states are like the left and right circularly polarized waves, so their sum and difference are like the $x$ and $y$ polarized waves but one of them shifted by a phase, and the $45$ degrees tilted ones really are literal sums and differences of the $x$ and $y$ (in phase) waves.
So $\{ |+\rangle , |-\rangle \}$ is one basis,
$\left\{(|+\rangle + |-\rangle), -i(|+\rangle - |-\rangle) \right\}$ is another basis and
$\left\{((|+\rangle + |-\rangle) - i(|+\rangle - |-\rangle)),((|+\rangle + |-\rangle) +i(|+\rangle - |-\rangle))\right\}$ is a third basis.
Each basis has the property that each of its elements is an equal mixture of the elements of either of the other two bases. And that's what the key distribution is based on: just having multiple bases for a two-dimensional set of states. All I've done above is write everything in terms of the spin states. Mathematically any basis is fine, and all three of these are equally nice in that within a basis the two states are orthogonal to each other, and if you pick one from one basis it has equal-sized overlaps with either of the states from the other bases.
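That "equal mixture" property can be checked numerically: the three bases listed are mutually unbiased, meaning any vector from one basis has overlap probability |⟨a|b⟩|² = 1/2 with every vector of the other two. A minimal sketch, representing the spin states as |+⟩ = (1, 0) and |-⟩ = (0, 1):

```python
import math

def combo(ca, cb):
    """ca*|+> + cb*|->, normalized."""
    v = (ca, cb)
    n = math.sqrt(sum(abs(c) ** 2 for c in v))
    return tuple(c / n for c in v)

def inner(a, b):
    return sum(x.conjugate() * y for x, y in zip(a, b))

basis1 = [combo(1, 0), combo(0, 1)]                   # |+>, |->
basis2 = [combo(1, 1), combo(-1j, 1j)]                # |+>+|->, -i(|+>-|->)
basis3 = [combo(1 - 1j, 1 + 1j), combo(1 + 1j, 1 - 1j)]

# Every cross-basis overlap probability is exactly 1/2.
for b1, b2 in [(basis1, basis2), (basis1, basis3), (basis2, basis3)]:
    for a in b1:
        for b in b2:
            assert abs(abs(inner(a, b)) ** 2 - 0.5) < 1e-12
```

This is exactly the property BB84-style key distribution exploits: measuring in the "wrong" basis yields a uniformly random outcome.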
Worrying about how these relate to classical waves is a distraction since it is the borrowing of the math and the terminology that is going on.
• I don't know why you refer to quantum superposition as stealing some math... So long story short, when one talks of linearly polarised photons, it is implied that the photon's spin state is in a quantum superposition of two right and left circular spin states, i.e. $S_{\pm}=S_x \pm iS_y$ with the inversion $S_x=1/2(S_+ + S_-)$, $S_y=(1/2i)(S_+ - S_-)$ right? – Ellie Jan 4 '15 at 13:19
• Yes. I gave three basis sets, the first is made up of the spin eigenstates, and both of the other basis sets are linearly polarized states. The example you give is one of the linearly polarized sets. Any real linear combination of the $S_x$ and $S_y$ in combinations that are mutually orthogonal would be equally deserving to be called linearly polarized. But the three sets I gave have the property you want for the quantum keys in that you want two sets of basis each of which is equal mixtures of the other basis elements. Basically to get that at least one basis needs to be linearly polarized. – Timaeus Jan 4 '15 at 17:31
• What was stealing was that we used the same math as for the classical waves, and so used the same terminology for the results. But that doesn't mean a single photon state is an electromagnetic wave, for instance $S_+$ and $iS_+$ are a quarter phase out of alignment from each other, but they don't literally have an $\vec{E}$ pointing in some direction, that phase for the photon is merely a relative phase whereas a classical wave really has an $\vec{E}$ pointing somewhere. – Timaeus Jan 4 '15 at 17:38
• @Timaeus Dear Timaeus, the last half of your answer is kind of hard to follow for me because of the terse notation, would there be other sources (books/papers) that you'd recommend, discussing the same issue more elaborately? thanks a lot. – user929304 Mar 16 '15 at 9:31
Photons, as quantum mechanical entities , are described by the solution of their quantum mechanical equation, a wave function. This equation, if you can follow the link, is a quantized version of Maxwell's equations in their potential form, acting on the photon wave function.
The state function of each photon is described by a complex number: there exists an amplitude whose square gives the probability of finding the photon at (x,y,z) at time t, and a given phase. In an ensemble of photons the phases will build up the electric and magnetic fields that are seen macroscopically.
Polarisation of classical light means that the electric and magnetic fields are built up in a specific way, linear or circular. An innumerable number of photons contribute to the build-up. Each individual photon will have its spin either along the direction of motion or against it; the synergistically built-up electric field which defines macroscopic polarization is not a simple addition. This wiki link gives the mathematics of how this happens, and it needs second quantization.
Left and right handed circular polarization, and their associated angular momenta.
Please note that the individual photons have spin either along or against their direction of motion, while the electric fields are perpendicular. These are built up non-trivially, it is the handedness of the electric field vector ( which defines polarization classically) as it progresses in space and time that connects the electric fields to the spin direction.
What we call spin really has little to do with quantum mechanics and more to do with group theory and representations of the Lorentz group. Even before quantizing, the Dirac field and the EM field transform in a certain way under Lorentz transformations, and their transformation properties are captured by their spin. The reason these things are quantized is the compactness of the rotation group in 3D, the same reason sound waves in a tube are quantized, and again has nothing to do with quantum mechanics, Hilbert spaces, etc.
It is important to realize that what folks usually think of as undergraduate single-particle quantum mechanics is actually classical field theory with a bit of Hilbert-space machinery bolted on. It's only through quantum field theory that quantization is taken all the way. The single-particle Schrödinger equation with spin is actually an approximation to the non-quantum relativistic Dirac equation, and the spin comes from this field being a spinor field. It's only once you quantize this field that you can claim to be doing quantum mechanics. But to reduce the mental burden in undergraduate physics, we restrict ourselves to the 1-particle states of this quantum field, and these 1-particle states obey the classical Dirac equation (or, at low energy, the Schrödinger equation). When you talk about Stern-Gerlach experiments, you should isolate the quantum parts (measurement, probabilities, projection) from the not-strictly-quantum part (spin), just as you can for spin-less particles. There we can measure position, but we don't claim that position is an inherently quantum idea with no classical analog. (I should stress that when physicists say "classical", they often mean "not quantum", not necessarily pre-1900s.)
Now, it's a historical accident that we discovered the "classical" Dirac field a bit after, or at the same time as, quantum mechanics, so people tend to confuse what is quantum and what is not. However, the same thing happens with E&M. There, the classical field needs to be quantized, and we end up with multi-photon states. But historically we discovered the E&M field first, long before quantum mechanics. The EM field, being a vector, transforms as spin 1, but because we can't go into the photon rest frame, and because of gauge invariance, only 2 possible components of spin can be measured. It's instructive to look up Wigner's classification and little groups.
# Show that the differential equation is homogeneous and solve $\left\{x\cos\bigg(\large\frac{y}{x}\bigg)\normalsize+y\sin\bigg(\large\frac{y}{x}\bigg)\right\}ydx=\left\{y\sin\bigg(\large\frac{y}{x}\bigg)-\normalsize x\cos\bigg(\large\frac{y}{x}\bigg)\right\}\normalsize xdy$
$\begin{array}{l} C=\large\frac{1}{xy}\sec\big(\large\frac{y}{x}\big) \\ C=\large\frac{1}{xy}\tan \big(\large\frac{y}{x}\big) \\C=\large\frac{1}{xy}\sin \big(\large\frac{y}{x}\big) \\C=\large\frac{1}{xy}\cos \big(\large\frac{y}{x}\big) \end{array}$
Toolbox:
• A differential equation of the form $\large\frac{dy}{dx}\normalsize = F(x,y)$ is said to be homogeneous if $F(x,y)$ is a homogeneous function of degree zero.
• To solve this type of equation, substitute $y = vx$ and $\large\frac{dy}{dx}\normalsize = v + x\large\frac{dv}{dx}$
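The substitution identity in the toolbox can be checked symbolically; here is a small sympy sketch (our own addition, not part of the original solution):

```python
# Sketch: verify the toolbox identity dy/dx = v + x*dv/dx under y = v(x)*x.
import sympy as sp

x = sp.Symbol('x')
v = sp.Function('v')

y = v(x) * x                              # the substitution y = vx
lhs = sp.diff(y, x)                       # dy/dx by the product rule
rhs = v(x) + x * sp.diff(v(x), x)         # v + x dv/dx, as quoted in the toolbox
assert sp.simplify(lhs - rhs) == 0
```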
Step 1:
The equation can be rearranged and written as $\large\frac{dy}{dx} = \large\frac{y[x\cos(y/x) + y\sin(y/x)]}{x[y\sin(y/x) - x\cos(y/x)]}$
$F(x,y) = \large\frac{y[x\cos(y/x) + y\sin(y/x)]}{x[y\sin(y/x) - x\cos(y/x)]}$
$F(kx,ky) = \large\frac{ky[kx\cos(ky/kx) + ky\sin(ky/kx)]}{kx[ky\sin(ky/kx) - kx\cos(ky/kx)]}\normalsize = k^0 \cdot F(x,y)$
Hence this is a homogeneous equation of degree zero.
Step 2:
Using the information in the toolbox, let us substitute for $y$ and $\large\frac{dy}{dx}$:
$v + x\large\frac{dv}{dx}\normalsize = \large\frac{vx(x\cos v + vx\sin v)}{x(vx\sin v - x\cos v)}$
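The degree-zero homogeneity established in Step 1 can also be confirmed with sympy; a small sketch (variable names are our own):

```python
# Sketch: confirm F(k*x, k*y) == F(x, y), i.e. homogeneity of degree zero.
import sympy as sp

x, y, k = sp.symbols('x y k', positive=True)
F = y*(x*sp.cos(y/x) + y*sp.sin(y/x)) / (x*(y*sp.sin(y/x) - x*sp.cos(y/x)))

# Scale both arguments by k simultaneously; the k's cancel throughout
scaled = F.subs({x: k*x, y: k*y}, simultaneous=True)
assert sp.simplify(scaled - F) == 0
```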
cancelling $x$ throughout we get
$v + x\large\frac{dv}{dx} =\frac{ (v\cos v + v^2\sin v)}{(v\sin v - \cos v)}$
Bringing $v$ from LHS to the RHS we get
$x\large\frac{dv}{dx}\normalsize = \large\frac{v\cos v + v^2\sin v - v^2\sin v + v\cos v}{v\sin v - \cos v}$
$x\large\frac{dv}{dx}\normalsize = \large\frac{2v\cos v}{v\sin v - \cos v}$
Separating the variables we get,
$\large\frac{v\sin v - \cos v}{v\cos v}\normalsize dv = \large\frac{2dx}{x}$
Step 3:
Integrating on both sides we get,
$\int\large\frac{v\sin v}{v\cos v}\normalsize dv - \int\large\frac{\cos v}{v\cos v}\normalsize dv = 2\int\large\frac{dx}{x}$
$\int \tan v\, dv - \int\large\frac{1}{v}\normalsize dv = 2\int\large\frac{dx}{x}$
$\log(\sec v) - \log v = 2 \log x + \log C$
$\log\big(\large\frac{\sec v}{v}\big)\normalsize = \log Cx^2$
$\large\frac{\sec v}{v}\normalsize = Cx^2$
$\sec v = Cx^2v$
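The antiderivatives used in Step 3 can be verified by differentiation; a sympy sketch (our own addition, not part of the original solution):

```python
# Sketch: check that log(sec v) - log v differentiates back to tan v - 1/v,
# the integrand obtained after separating variables.
import sympy as sp

v = sp.Symbol('v', positive=True)
antideriv = sp.log(1/sp.cos(v)) - sp.log(v)    # log(sec v) - log v
integrand = sp.tan(v) - 1/v
assert sp.simplify(sp.diff(antideriv, v) - integrand) == 0
```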
Step 4:
Writing $v = \large\frac{y}{x}$ we get,
$\sec\big(\large\frac{y}{x}\big)\normalsize = Cx^2\big(\large\frac{y}{x}\big) = Cxy$
$C = \large\frac{1}{xy}\normalsize\sec\big(\large\frac{y}{x}\big)$
This is the required solution.
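As a final sanity check (our own addition, with names of our choosing), one can differentiate the implicit solution $\sec(y/x) = Cxy$ and confirm that the resulting $dy/dx$ matches the right-hand side of the original equation:

```python
# Sketch: implicit differentiation of sec(y/x) = C*x*y, compared against the ODE.
import sympy as sp

x, C, dydx = sp.symbols('x C dydx')
y = sp.Function('y')

# Implicit solution from Step 4: sec(y/x) - C*x*y = 0
rel = 1/sp.cos(y(x)/x) - C*x*y(x)

# Differentiate the relation and solve for dy/dx (it enters linearly)
drel = rel.diff(x).subs(y(x).diff(x), dydx)
yprime = sp.solve(drel, dydx)[0]

# Right-hand side of the ODE as rearranged in Step 1
rhs = y(x)*(x*sp.cos(y(x)/x) + y(x)*sp.sin(y(x)/x)) \
      / (x*(y(x)*sp.sin(y(x)/x) - x*sp.cos(y(x)/x)))

# Eliminate C via the solution itself: C = sec(y/x)/(x*y),
# then check the difference vanishes at a sample point
diff_expr = yprime.subs(C, 1/(sp.cos(y(x)/x)*x*y(x))) - rhs
val = diff_expr.subs(y(x), sp.Rational(7, 10)).subs(x, sp.Rational(13, 10))
assert abs(float(val.evalf())) < 1e-12
```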