# Equal Angles in Equal Circles
## Theorem
In equal circles, equal angles stand on equal arcs, whether at the center or at the circumference of those circles.
In the words of Euclid:
In equal circles equal angles stand on equal circumferences, whether at the center or at the circumferences.
## Proof
Let $ABC$ and $DEF$ be equal circles.
Let $\angle BGC = \angle EHF$ and $\angle BAC = \angle EDF$.
Let $BC$ and $EF$ be joined.
Since the circles $ABC$ and $DEF$ are equal, their radii are equal.
So $BG = EH$ and $CG = FH$.
We also have by hypothesis that $\angle BGC = \angle EHF$.
So from Triangle Side-Angle-Side Equality it follows that $BC = EF$.
Since $\angle BAC = \angle EDF$ we have from Book $\text{III}$ Definition $11$: Similar Segments that segment $BAC$ is similar to segment $EDF$.
Moreover, these segments have equal bases.
So from Similar Segments on Equal Bases are Equal, segment $BAC$ is equal to segment $EDF$.
But as $ABC$ and $DEF$ are equal circles, it follows that arc $BKC$ equals arc $ELF$.
$\blacksquare$
## Historical Note
This theorem is Proposition $26$ of Book $\text{III}$ of Euclid's The Elements.
It is the converse of Proposition $27$: Angles on Equal Arcs are Equal. |
# Contour integration, residues and precision of the poles
I am trying to evaluate the contour integral of some functions. To have a concrete example, let's use $$f(z,s) = \frac{1+(4+2s)\, z}{z - \left( 9 + 35s + 24s^2 + 4s^3 \right) z^2 + 8z^3}$$
f[z_, s_] := (1 + (4 + 2*s)*z)/(z - (9 + 35*s + 24*s^2 + 4*s^3)*z^2 + 8*z^3)
and I want to evaluate
$$\hat T(s) = \frac{1}{2\pi\mathrm{i}} \oint_{|z|=1} f(z,s)\, \mathrm dz$$
for $\Re(s) > 0$. My goal would be to find an analytic expression for the integral.
### Method 1: NIntegrate
If I fix some value of $s$, say $s = 0.5 + 2\mathrm i$, I can of course evaluate the integral numerically:
NIntegrate[1/(2*π)*f[E^(I*ϕ), 0.5 + 2*I]*E^(I*ϕ), {ϕ, 0, 2*π}]
0.00415703 + 0.0498992 I
I guess that this is the correct result, so this is what I'll try to get with analytic methods.
### Method 2: Integrate
Unfortunately, Integrate fails pretty hard:
Integrate[1/(2*π*I)*f[E^(I*ϕ), s]*I*E^(I*ϕ), {ϕ, 0, 2*π}]
1
### Analyzing poles and residues
I know that my functions have poles at $z_0=0$ and at $$z_{1,2} = \frac{1}{16} \left( 9 + 35s + 24s^2 + 4s^3 \mp \sqrt{-32 + \left( 9 + 35s + 24s^2 + 4s^3 \right)^2} \right) .$$
sqrt = Sqrt[-32 + (9 + 35*s + 24*s^2 + 4*s^3)^2];
pole1[s_] := Evaluate[1/16*(9 + 35*s + 24*s^2 + 4*s^3 - sqrt)];
pole2[s_] := Evaluate[1/16*(9 + 35*s + 24*s^2 + 4*s^3 + sqrt)];
Simplify@f[pole1[s], s]
ComplexInfinity
Simplify@f[pole2[s], s]
ComplexInfinity
I can use Residue to calculate the residues at these poles:
Residue[f[z, s], {z, 0}]
1
N@pole1[0.5 + 2*I]
-10.8696 + 11.5057 I
Residue[f[z, s], {z, pole1[s]}]
(some lengthy expression...)
% /. s -> 0.5 + 2*I
-0.00415703 - 0.0498992 I
N@pole2[0.5 + 2*I]
-0.0054233 - 0.00574071 I
Residue[f[z, s], {z, pole2[s]}]
(some lengthy expression...)
% /. s -> 0.5 + 2*I
-0.995843 + 0.0498992 I
As you can see, Integrate only catches the pole at $z_0=0$. The pole $z_1$ is outside the contour; adding the residues at $z_0$ and $z_2$ gives the correct result.
### Method 3: Residue Theorem
To evaluate the integral, we should of course sum all residues of the poles inside the contour. In general, this will always be $z_0$ and one of $z_{1,2}$. Due to the branch cut of the square root in the definition of pole1 and pole2, these functions are however not continuous, and sometimes $z_1$ is the correct one to take and sometimes it is $z_2$.
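Explicitly, since $f(z,s) = \frac{1+(4+2s)z}{z\left(1-(9+35s+24s^2+4s^3)z+8z^2\right)}$ is $O(z^{-2})$ as $z\to\infty$, the three residues sum to zero, and the residue theorem gives
$$\hat T(s) = \sum_{|z_k|<1}\operatorname{Res}_{z=z_k} f(z,s) = 1 + \operatorname{Res}_{z=z_{\mathrm{in}}(s)} f(z,s) = -\operatorname{Res}_{z=z_{\mathrm{out}}(s)} f(z,s),$$
where $z_{\mathrm{in}}$ denotes whichever of $z_{1,2}$ lies inside the unit circle, $z_{\mathrm{out}}$ the other one, and the residue at $z_0=0$ equals $1$ as computed above.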
Now, I first tried to define
hatT[s_] := Module[{pole},
pole = If[Abs[pole1[s]] < Abs[pole2[s]], pole1[s], pole2[s]];
Total[Residue[f[z, s], {z, #}] & /@ {0, pole}]]
This behaves in a rather strange way:
Chop@hatT[0.5 + 2*I]
> 1.
gives the wrong result, while
N@hatT[1/2 + 2*I]
> -0.0054233-0.00574071 I
is correct.
### Fixed?
I figured out that the problem comes from the low precision of 0.5 + 2*I. The bad behavior can apparently be fixed by rationalizing the argument s:
hatT[s_] := Module[{pole, rs},
rs = Rationalize[s, 0];
pole = If[Abs[pole1[rs]] < Abs[pole2[rs]], pole1[rs], pole2[rs]];
Total[Residue[f[z, rs], {z, #}] & /@ {0, pole}]]
### Questions
Summarizing, my questions are:
• Why does Integrate fail to see one of the poles? Is there maybe even a simple fix for that?
• Is Rationalize[s, 0] the right thing to do? (It seems to work right now and I can also plot the function hatT[s], but the first time I tried it with Rationalize somehow the plotting did not work, which I can't reproduce any more now.)
• Does someone have an idea on how to find an analytic expression for $\hat T(s)$?
This is quite an easy task for Integrate with a few transformations, if $s$ is rationalized (Rationalize[s, 0] is optimal).
integrand = 1/(2*π)*f[E^(I*ϕ), 1/2 + 2*I]*E^(I*ϕ) // Simplify
NIntegrate[integrand, {ϕ, 0, 2*π}]
(* 0.00415703+ 0.0498992 I *)
ce = integrand // ExpToTrig //
ComplexExpand[#, TargetFunctions -> {Re, Im}] &;
{int = Integrate[ce, {ϕ, 0, 2*π}], int // N}
(* {1/2 - 1/2 Sqrt[83432339/85705131 - (5654752 I)/28568377],
0.00415703+ 0.0498992 I} *)
Residue calculation
sol = Solve[(Denominator[f[z, s]] /. s -> 1/2 + 2*I) == 0, z]
(* {{z -> 0}, {z -> 2/((-87 + 92 I) - Sqrt[-927 - 16008 I])}, {z ->
1/16 ((-87 + 92 I) - Sqrt[-927 - 16008 I])}} *)
sol//N
(* {{z -> 0.}, {z -> -0.0054233 - 0.00574071 I}, {z -> -10.8696 +
11.5057 I}} *)
{res =Residue[f[z, 1/2 + 2 I], {z, z /. sol[[1]]}] +
Residue[f[z, 1/2 + 2 I], {z, z /. sol[[2]]}] // FullSimplify,res//N}
(* {1/2 - 1/2 Sqrt[83432339/85705131 - (5654752 I)/28568377],
0.00415703+ 0.0498992 I} *)
Edit: Appendix to Method 3: Residue Theorem
Developing a general operator.
sol = Solve[Denominator[f[z, s]] == 0, z];
pole1[s_] := Evaluate[z /. sol[[2]]]
pole2[s_] := Evaluate[z /. sol[[3]]]
hatT[s_] := (rs = Rationalize[s, 0];
pole =
Which[Abs[pole1[rs]] < 1 && Abs[pole2[rs]] < 1, {pole1[rs],
pole2[rs]},
Abs[pole1[rs]] < 1 && Abs[pole2[rs]] > 1, {pole1[rs]},
Abs[pole1[rs]] > 1 && Abs[pole2[rs]] < 1, {pole2[rs]}];
{tt = Total[Residue[f[z, rs], {z, #}] & /@ {0, Sequence @@ pole}],
N[tt]}) // FullSimplify
Compare with integration.
integrand[s_] := 1/(2*\[Pi])*f[E^(I*\[Phi]), s]*E^(I*\[Phi]) // Simplify
ce[s_] := integrand[s] // ExpToTrig //
ComplexExpand[#, TargetFunctions -> {Re, Im}] &;
nint[s_] := NIntegrate[ce[s], {\[Phi], 0, 2*\[Pi]}]
int[s_] := {ii =
Integrate[ce[Rationalize[s, 0]], {\[Phi], 0, 2*\[Pi]}], ii // N}
Test
hatT[1/2 + 2 I]
(* {1/2 - 1/2 Sqrt[83432339/85705131 - (5654752 I)/28568377],
0.00415703+ 0.0498992 I} *)
• If $s=1/2 + 2\mathrm i$ is plugged in, the // ExpToTrig // ComplexExpand is not needed, Integrate already works fine. However, using // ExpToTrig // ComplexExpand, Integrate seems to give an expression for general s (even though it is in a weird ConditionalExpression), so thank you! – Noiralef May 18 '18 at 19:26
• Why did you write the second part about residue calculation in your answer? That's the same thing I was doing already, isn't it? – Noiralef May 18 '18 at 19:27
• Yes, of course it is the same in a slightly different form, together with analytic solutions. It shows that the exact solution can be quite simple, not a lengthy expression. – Akku14 May 18 '18 at 19:44
• So... sorry, but the places where I wrote "lengthy expression", I didn't plug s=1/2+2I in yet. Are you saying that you can get simple expressions that hold for general s? – Noiralef May 19 '18 at 10:30
• I changed hatT[s_] a little bit and think it works. (I didn't consider the cases where the integration path lies exactly on a singularity.) – Akku14 May 19 '18 at 17:22 |
Graph theory: Prove $k$-regular graph $\#V$ = odd, $\chi'(G)> k$
I'm looking to prove that any $k$-regular graph $G$ (i.e. a graph with degree $k$ for all vertices) with an odd number of points has edge-colouring number $>k$ ($\chi'(G) > k$).
With Vizing, I see that $\chi'(G) \leq k + 1$, so apparently $\chi'(G)$ will end up equaling $k+1$.
Furthermore, as $\#V$ is odd, $k$ must be even for $\#V\cdot k$ to be an even number (required to be even, since $\#E = \frac{1}{2}\cdot\#V\cdot k$).
Does anyone have any suggestions on what to try?
Let $G=\langle V,E\rangle$ be a $k$-regular graph with $n=2m+1$ vertices; as you say, clearly $k=2\ell$ for some $\ell$, so $G$ has $$\frac{kn}2=\frac{2\ell(2m+1)}2=\ell(2m+1)$$ edges. Suppose that $c:E\to\{1,\dots,k\}$ is a coloring of the edges of $G$. The average number of edges per color is $$\frac{\ell(2m+1)}k=m+\frac12\;,$$ so there is some color that is used on at least $m+1$ edges. $G$ has only $2m+1$ vertices, so two of these edges must share a vertex, and $c$ therefore cannot be a proper coloring. |
Given the top-left and bottom-right coordinates of two rectangles, determine if they overlap or not. That is, determine if any part of one rectangle overlaps with any part of the other.
You get coordinates for the rectangles as tuples of x and y, e.g. [2,5].
The function signature in TypeScript would be:
type point = [number, number];
type rectanglesOverlap = (
topLeft1: point,
bottomRight1: point,
topLeft2: point,
bottomRight2: point
) => boolean
## Brute force method
There is a direct brute force method to solving this (assuming integer coordinates). You iterate on one rectangle gathering all of its points, then iterate on the other, returning true if any point also exists in the first.
function* iteratePoints(topLeft, bottomRight) {
for (let x = topLeft[0]; x <= bottomRight[0]; x++) {
for (let y = topLeft[1]; y <= bottomRight[1]; y++) {
yield [x, y];
}
}
}
function rectanglesOverlap(topLeft1, bottomRight1, topLeft2, bottomRight2) {
  const rectangle1Points = {};
  // Record every integer point of rectangle 1 under a "x,y" string key.
  for (let point1 of iteratePoints(topLeft1, bottomRight1)) {
    rectangle1Points[`${point1[0]},${point1[1]}`] = true;
  }
  // Any shared point means the rectangles overlap.
  for (let point2 of iteratePoints(topLeft2, bottomRight2)) {
    if (rectangle1Points[`${point2[0]},${point2[1]}`] === true) {
      return true;
    }
  }
  return false;
}
This works but is O(m + n) time and O(m) space, with m and n being the number of points in each rectangle, and each of those counts grows with the rectangle's area.
## Negative edge check method
There is a simpler and faster method to determine if two rectangles overlap. The key idea is to check if they don’t overlap, which is easier. If they don’t not overlap, then they overlap.
The rectangles don’t overlap if any of these are true:
• One left edge is to the right of the other right edge.
• One top edge is below the other bottom edge.
function rectanglesOverlap(topLeft1, bottomRight1, topLeft2, bottomRight2) {
  // One rectangle starts to the right of where the other one ends: no overlap.
  if (topLeft1[0] > bottomRight2[0] || topLeft2[0] > bottomRight1[0]) {
    return false;
  }
  // One rectangle starts below where the other one ends (y grows downward): no overlap.
  if (topLeft1[1] > bottomRight2[1] || topLeft2[1] > bottomRight1[1]) {
    return false;
  }
  return true;
}
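For a quick sanity check, here is a small, hypothetical TypeScript test harness (the coordinates are made up; it assumes the same screen-style convention as above, with y growing downward):
type Point = [number, number];

function overlaps(tl1: Point, br1: Point, tl2: Point, br2: Point): boolean {
  // Same negative checks as above: rectangles separated horizontally or vertically do not overlap.
  if (tl1[0] > br2[0] || tl2[0] > br1[0]) return false;
  if (tl1[1] > br2[1] || tl2[1] > br1[1]) return false;
  return true;
}

// Rectangle 1 spans x 0..4, y 0..4; rectangle 2 spans x 2..6, y 2..6: they overlap.
console.log(overlaps([0, 0], [4, 4], [2, 2], [6, 6])); // true
// Rectangle 2 lies entirely to the right of rectangle 1: no overlap.
console.log(overlaps([0, 0], [4, 4], [5, 0], [9, 4])); // false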
This is now O(1). |
## Solving ill-posed bilevel programs
This paper deals with ill-posed bilevel programs, i.e., problems admitting multiple lower-level solutions for some upper-level parameters. Many publications have been devoted to the standard optimistic case of this problem, where the difficulty is essentially moved from the objective function to the feasible set. This new problem is simpler but there is no guaranty to … Read more
## A Preconditioner for a Primal-Dual Newton Conjugate Gradients Method for Compressed Sensing Problems
In this paper we are concerned with the solution of Compressed Sensing (CS) problems where the signals to be recovered are sparse in coherent and redundant dictionaries. We extend a primal-dual Newton Conjugate Gradients (pdNCG) method for CS problems. We provide an inexpensive and provably effective preconditioning technique for linear systems using pdNCG. Numerical results … Read more
## Maximizing a class of submodular utility functions with constraints
Motivated by stochastic 0-1 integer programming problems with an expected utility objective, we study the mixed-integer nonlinear set: $P = \{(w,x)\in \mathbb{R} \times \{0,1\}^N : w \leq f(a'x + d),\ b'x \leq B\}$ where $N$ is a positive integer, $f:\mathbb{R} \to \mathbb{R}$ is a concave function, $a, b \in \mathbb{R}^N$ are nonnegative vectors, $d$ is a real … Read more
## Achieving Cost-Effective Power Grid Hardening through Transmission Network Topology Control
Vulnerability of power grid is a critical issue in power industry. In order to understand and reduce power grid vulnerability under threats, existing research often employs defender-attacker-defender (DAD) models to derive effective protection plans and evaluate grid performances under various contingencies. Transmission line switching (also known as topology control) is an effective operation to mitigate … Read more
## A polynomial algorithm for linear optimization which is strongly polynomial under certain conditions on optimal solutions
This paper proposes a polynomial algorithm for linear programming which is strongly polynomial for linear optimization problems $\min\{c^Tx : Ax = b, x\ge {\bf 0}\}$ having optimal solutions where each non-zero component $x_j$ belongs to an interval of the form $[\alpha_j, \alpha_j\cdot 2^{p(n)}],$ where $\alpha_j$ is some positive value and $p(n)$ is a polynomial of … Read more
## Variational principles with generalized distances and applications to behavioral sciences
This paper has a two-fold focus on proving that the quasimetric and the weak $\tau$-distance versions of the Ekeland variational principle are equivalent in the sense that one implies the other and on presenting the need of such extensions for possible applications in the formation and break of workers hiring and firing routines. Article Download … Read more
## Global convergence of the Heavy-ball method for convex optimization
This paper establishes global convergence and provides global bounds of the convergence rate of the Heavy-ball method for convex optimization problems. When the objective function has Lipschitz-continuous gradient, we show that the Cesàro average of the iterates converges to the optimum at a rate of $O(1/k)$ where $k$ is the number of iterations. When … Read more
## On the Adaptivity Gap in Two-stage Robust Linear Optimization under Uncertain Constraints
In this paper, we study the performance of static solutions in two-stage adjustable robust packing linear optimization problem with uncertain constraint coefficients. Such problems arise in many important applications such as revenue management and resource allocation problems where demand requests have uncertain resource requirements. The goal is to find a two-stage solution that maximizes the … Read more
## Generalized Dual Face Algorithm for Linear Programming
As a natural extension of the dual simplex algorithm, the dual face algorithm performed remarkably in computational experiments with a set of Netlib standard problems. In this paper, we generalize it to bounded-variable LP problems via local duality. Citation Department of Mathematics, Southeast University, Nanjing, 210096, China, 12/2014 Article Download View Generalized Dual Face Algorithm … Read more
## An asymptotic inclusion speed for the Douglas-Rachford splitting method in Hilbert spaces
In this paper, we consider the Douglas-Rachford splitting method for monotone inclusion in Hilbert spaces. It can be implemented as follows: from the current iterate, first use forward-backward step to get the intermediate point, then to get the new iterate. Generally speaking, the sum operator involved in the Douglas-Rachford splitting takes the value of every … Read more |
# Locally piecewise affine functions and their order structure
Positivity, Volume 21 (1) – Apr 11, 2016, 9 pages
Publisher: Springer Journals
Subject: Mathematics; Fourier Analysis; Operator Theory; Potential Theory; Calculus of Variations and Optimal Control; Optimization; Econometrics
ISSN: 1385-1292
eISSN: 1572-9281
DOI: 10.1007/s11117-016-0411-7
### Abstract
Piecewise affine functions on subsets of $\mathbb R^m$ were studied in Aliprantis et al. (Macroecon Dyn 10(1):77–99, 2006), Aliprantis et al. (J Econometrics 136(2):431–456, 2007), Aliprantis and Tourky (Cones and duality, 2007), and Ovchinnikov (Beiträge Algebra Geom 43:297–302, 2002). In this paper we study a more general concept of a locally piecewise affine function. We characterize locally piecewise affine functions in terms of components and regions. We prove that a positive function is locally piecewise affine iff it is the supremum of a locally finite sequence of piecewise affine functions. We prove that locally piecewise affine functions are uniformly dense in $C(\mathbb R^m)$, while piecewise affine functions are sequentially order dense in $C(\mathbb R^m)$. This paper is partially based on Adeeb (Locally piece-wise affine functions, 2014).
### Journal
Positivity (Springer Journals)
Published: Apr 11, 2016 |
# Math Help - The derivative of (secx)^2?
1. ## The derivative of (secx)^2?
I know this is a lot simpler than I think it is, but I just had a brain lapse. Do any of you guys know what the derivative of (secx)^2 is?
2. Chain rule:
$2sec(x)\cdot \frac{d}{dx}sec(x)$
$2sec(x)\cdot sec(x)tan(x)$
$2sec^{2}(x)tan(x)$
3. Originally Posted by Pinkk
Chain rule:
$2sec(x)\cdot \frac{d}{dx}sec(x)$
$2sec(x)\cdot sec(x)tan(x)$
$2sec^{2}(x)tan(x)$
Haha, I knew it would be simple. Thanks man |
# Will free electrons experience its own electric/magnetic field?
1. Feb 19, 2013
### iaMikaruK
Hi, everyone :)
Recently I've read a paper and found that the authors derived the wavefunction of moving free electrons from their own electric and magnetic fields. It was quite a shock to me.
So, for free electrons without an external electric or magnetic field, why were additional "self-interaction" terms added to the Hamiltonian? I don't remember any textbook including such terms in the Dirac equation for free electrons.
Thank you very much!
2. Feb 20, 2013
### Simon Bridge
Welcome to PF;
One way of looking at it is that there is always a probability that the electron has emitted a virtual photon (or how else does it interact with other electrons) which means there is a probability that it can interact with that photon. This means the electron is interacting with itself.
Have a look at the self-interaction bits concerning "renormalization".
http://en.wikipedia.org/wiki/Renormalization
Some care is needed - in QED you don't get an electron in a universe all by itself - that would mean there is nothing to measure it for eg. It has to come from some interaction and be going to another interaction.
3. Feb 20, 2013
### iaMikaruK
Anyhow, shouldn't the interaction due to the virtual photon enter the Hamiltonian as $e\boldsymbol{\sigma}\cdot\mathbf{A}$, $\boldsymbol{\mu}_e\cdot\mathbf{B}$ and $-eV$? Sorry, I know little of QED.
In that paper, the authors simply added $e\boldsymbol{\sigma}\cdot\mathbf{A}$ and $-eV$ to the Hamiltonian, where $\mathbf{A}$ and $V$ are fields created by the moving free electrons themselves in the laboratory frame, as a correction. I don't understand why they include the fields created by the free electrons as interactions.
4. Feb 20, 2013
### Simon Bridge
The interaction does not enter in at the Hamiltonian level as such - but in the perturbation theory. Did you read the link?
QED = Quantum Electrodynamics ... the field theory of electrons and photons. Nobel prize for Feynman and some people less famous.
To be able to address your specific case, though, I need the reference.
If the term is added "as a correction" they should tell you what they are correcting.
If there is a charge density, then the electrons are not "free" electrons - they experience each other's fields.
5. Feb 20, 2013
### iaMikaruK
Sorry, I cannot open the wiki page because of some reasons. But it works for me now.
I've sent the reference link to you by private message. Thank you.
6. Feb 20, 2013
### andrien
can you give a link to this paper.
7. Feb 20, 2013
### iaMikaruK
I've sent you the link in private message.
8. Feb 20, 2013
### Simon Bridge
Please don't do that - if you want public replies, you should give public information.
S. M. Lloyd, M. Babiker, J. Yuan, and C. Kerr-Edwards
Electromagnetic Vortex Fields, Spin, and Spin-Orbit Interactions in Electron Vortices
Electron vortices are shown to possess electric and magnetic fields by virtue of their quantized orbital angular momentum and their charge and current density sources. The spatial distributions of these fields are determined for a Bessel electron vortex. It is shown how these fields lead naturally to interactions involving coupling to the spin magnetic moment and spin-orbit interactions which are absent for ordinary electron beams. The orders of magnitude of the effects are estimated here for ångström scale electron vortices generated within a typical electron microscope.
Phys. Rev. Lett. 109, 254801 (2012) [5 pages]
The article does not seem to deal with free electrons at all.
9. Feb 20, 2013
### iaMikaruK
Well, it does deal with free electrons. The electric field [Eq. (9)] and magnetic field [Eq. (10)] of the vortex beam are all evaluated from the solution of the Schrödinger equation for free electrons [Eq. (1)]. Then the authors claim that "To determine how electric and magnetic fields interact with the electron vortex, we start from the Dirac equation in the presence of electromagnetic fields, with vector and scalar potentials A and $\Phi$. These potentials can be external, or they could be those corresponding to the vortex fields derived above."
What puzzling me is that why these electric and magnetic fields derived from a vortex beam interact with the vortex beam itself?
Last edited: Feb 20, 2013
10. Feb 20, 2013
### Simon Bridge
The beam is made up of individual electrons which individually interact with the fields of all the other electrons. Ergo - the beam interacts with its own field.
11. Feb 20, 2013
### iaMikaruK
This argument is reasonable but I don't think it's applicable here.
I just made a simple calculation. For a 200 kV electron, its speed is about 0.7c. Assuming that the cross-section of the beam is 1x1 angstrom^2 and the current density about 1 nA taken from the reference, then we can calculate the density of electrons. The calculation result is about 1 nA*1s / (0.7c*1 angstrom*1 angstrom*1s)=3x10^(-9) electrons/angstrom^3. So I think it is of very low probability for two electrons to interact with each other.
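Spelling the unit conversion out:
$$n = \frac{I/e}{v\,A} = \frac{10^{-9}/(1.6\times 10^{-19})\ \text{electrons/s}}{(0.7\times 3\times 10^{18}\ \text{Å/s})\,(1\ \text{Å}^2)} \approx 3\times 10^{-9}\ \text{electrons/Å}^3.$$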
12. Feb 20, 2013
### ZapperZ
Staff Emeritus
You should learn a little bit about beam physics for particle accelerators. Here, the charge per bunch, and the size of each bunch can be of significant importance due to space-charge effects. Such an effect is, by definition, the bunch's self-interaction. In free-electron lasers, this interaction causes an increase in the beam emittance, which is something we don't want.
Zz.
13. Feb 20, 2013
### iaMikaruK
I agree that for a continuous emission of electrons, the electrons will interact with each other. I tried to calculate the space-charge effect in electron microscopy but failed to get a reasonable value.
But I still have the question: if we take the space-charge effect into consideration, will the single-electron Hamiltonian still have the form described in the reference?
Thanks very much.
14. Feb 20, 2013
### iaMikaruK
I reread the paper and found that the authors claimed that "Note that the fields are due to the charge and current arising from the flow of electrons associated with the vortex and we assume that electron-electron interactions are negligible, thus ignoring the Boersch effect." Any idea?
15. Feb 21, 2013
### Simon Bridge
the fields are due to the charge and current arising from the flow of electrons associated with the vortex ... I would read that as neglecting direct, individual, e-e interactions, but the vortex comes from someplace.
It's a bit like pointing out that the e-e B-field interactions for a current are negligible when dealing with the effect of the B-field due to the current.
It's still not individual electrons here ... if you started your model as individual free electrons, you'd need to correct for the fact that there are other things going on.
That's how it works - you start with a simple model that has easy math and include corrections as more different things get taken into account.
But also - bear in mind what ZapperZ wrote.
16. Feb 21, 2013
### iaMikaruK
A single electron can form a vortex by itself. So the vortex need not come from someplace.
So you are suggesting that two vortex beams are interacting with each other via the electric field? But as I recall, the Boersch effect was treated completely as an electric-field interaction between electrons, for example: J. Vac. Sci. Technol. 16, 1676 (1979). And this effect was also neglected by the authors. So what I understand is that the authors have neglected both the electric and magnetic field interactions between individual electrons. Is my understanding correct?
Then what I have come to conclude:
(1) I wrote down a Schrodinger/Dirac equation for a free electron and find the solution;
(2) I evaluated the electric and magnetic field from the solution;
(3) I should add the electric and magnetic field self-interaction back to the Hamiltonian although there is no external field or electron-electron interaction?
17. Feb 21, 2013
### Simon Bridge
When modelling a beam, there will be correction terms to account for the real circumstances of the beam. You need to look deeper into the nature of the beam being used in the experiment to understand more what the authors are describing.
If you still don't believe the answers you have been getting - I suggest writing to the authors and asking them what they are talking about.
18. Feb 21, 2013
### iaMikaruK
Here I have listed three references. The first two use the Dirac equation and the last one a mass-corrected Schrödinger equation. But none of them take the electric and magnetic field self-interaction into the Hamiltonian. The experimental set-up is the same as in PRL 109, 254801 (2012).
[1] PRL 99, 190404 (2007).
[2] PRL 107, 174802 (2011).
[3] Ultramicroscopy 111, 1461-1468 (2011).
I've written a mail to the author but got no response by now.
19. Feb 21, 2013
### andrien
20. Feb 21, 2013
### Jano L.
No, there is no need for that. You have to decide which situation you want to describe. If you have just one electron, free or in potential, there is no need to introduce self-action, because there is no experimental evidence for it, and it is also very difficult to make it exact and consistent with other things.
But if you have many electrons that interact, you can describe them effectively as one object, and then this composite object will always experience "self-interaction", due to mutual interaction of different electrons. For example, the current in the antenna feels radiation resistance, "self-force", and this can be explained as being due to mutual interaction between distinct electrons. |
# EIPs
Ethereum Improvement Proposals (EIPs) describe standards for the Ethereum platform, including core protocol specifications, client APIs, and contract standards. Network upgrades are discussed separately in the Ethereum Project Management repository.
## Contributing
First review EIP-1. Then clone the repository and add your EIP to it. There is a template EIP here. Then submit a Pull Request to Ethereum's EIPs repository.
## EIP status terms
• Idea - An idea that is pre-draft. This is not tracked within the EIP Repository.
• Draft - The first formally tracked stage of an EIP in development. An EIP is merged by an EIP Editor into the EIP repository when properly formatted.
• Review - An EIP Author marks an EIP as ready for and requesting Peer Review.
• Last Call - This is the final review window for an EIP before moving to FINAL. An EIP editor will assign Last Call status and set a review end date (last-call-deadline), typically 14 days later. If this period results in necessary normative changes it will revert the EIP to Review.
• Final - This EIP represents the final standard. A Final EIP exists in a state of finality and should only be updated to correct errata and add non-normative clarifications.
• Stagnant - Any EIP in Draft or Review if inactive for a period of 6 months or greater is moved to Stagnant. An EIP may be resurrected from this state by Authors or EIP Editors through moving it back to Draft.
• Withdrawn - The EIP Author(s) have withdrawn the proposed EIP. This state has finality and can no longer be resurrected using this EIP number. If the idea is pursued at later date it is considered a new proposal.
• Living - A special status for EIPs that are designed to be continually updated and not reach a state of finality. This includes most notably EIP-1.
## EIP Types
EIPs are separated into a number of types, and each has its own list of EIPs.
### Standard Track (454)
Describes any change that affects most or all Ethereum implementations, such as a change to the network protocol, a change in block or transaction validity rules, proposed application standards/conventions, or any change or addition that affects the interoperability of applications using Ethereum. Furthermore Standard EIPs can be broken down into the following categories.
#### Core (185)
Improvements requiring a consensus fork (e.g. EIP-5, EIP-101), as well as changes that are not necessarily consensus critical but may be relevant to “core dev” discussions (for example, the miner/node strategy changes 2, 3, and 4 of EIP-86).
#### Networking (13)
Includes improvements around devp2p (EIP-8) and Light Ethereum Subprotocol, as well as proposed improvements to network protocol specifications of whisper and swarm.
#### Interface (41)
Includes improvements around client API/RPC specifications and standards, and also certain language-level standards like method names (EIP-6) and contract ABIs. The label “interface” aligns with the interfaces repo and discussion should primarily occur in that repository before an EIP is submitted to the EIPs repository.
#### ERC (215)
Application-level standards and conventions, including contract standards such as token standards (ERC-20), name registries (ERC-137), URI schemes (ERC-681), library/package formats (EIP190), and wallet formats (EIP-85).
### Meta (18)
Describes a process surrounding Ethereum or proposes a change to (or an event in) a process. Process EIPs are like Standards Track EIPs but apply to areas other than the Ethereum protocol itself. They may propose an implementation, but not to Ethereum's codebase; they often require community consensus; unlike Informational EIPs, they are more than recommendations, and users are typically not free to ignore them. Examples include procedures, guidelines, changes to the decision-making process, and changes to the tools or environment used in Ethereum development. Any meta-EIP is also considered a Process EIP.
### Informational (6)
Describes an Ethereum design issue, or provides general guidelines or information to the Ethereum community, but does not propose a new feature. Informational EIPs do not necessarily represent Ethereum community consensus or a recommendation, so users and implementers are free to ignore Informational EIPs or follow their advice. |
# Proving the contrapositive
1. Oct 12, 2005
### Icebreaker
"If X is a bounded sequence that does not converge, prove that there exists at least two subsequences of X that converge to two distinct limits."
There is a what I like to call "mass produced" version of the proof with limsup and liminf (which actually tells you where the two subsequences converge to, but it is not necessary). But I didn't want to use that so I did it another way. Can someone tell me if the following reasoning is right? I won't write out the exact proof because latex would kill me; I'll just briefly explain the logic of my proof:
The Weierstrass Theorem tells us that a bounded sequence has at least one subsequence which is convergent. X has such a subsequence, which we shall denote k. Let k' be the terms that are NOT in k. k' is a bounded subsequence, and therefore is also a sequence. k' therefore has a subsequence which is convergent, which we will denote u. If u converges to some number different from that of k, then the proof is complete. If u does converge to the same number as k, then take the terms in k' that are NOT u, and let i denote that subsequence.
Basically, this process can be repeated until we've exhausted all possible subsequences. The argument now is that they cannot ALL converge to the same limit, because that would contradict the hypothesis that X is divergent. Therefore, at least ONE of those subsequences must converge to some other number than that of k.
There may seem to be some handwaving back there but the gist of it is there.
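Edit: I suppose the last step can be tightened to avoid the exhaustion argument. Let $L$ be the limit of $k$. Since $X$ does not converge, it in particular does not converge to $L$, so there are an $\varepsilon > 0$ and infinitely many indices $n$ with $|x_n - L| \ge \varepsilon$. Those terms form a bounded subsequence, which again has a convergent subsequence; its limit $L'$ satisfies $|L' - L| \ge \varepsilon$, so $L' \ne L$, and we have two subsequences with distinct limits.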
2. Oct 15, 2005
### Icebreaker
Anyone? A similar process was used to prove the nested interval theorem, if I'm not mistaken.
3. Oct 15, 2005
### fourier jr
i think proving the contrapositive would be easier. use limsup & liminf & get a convergent sequence, which is of course also bounded. |
# Characteristics of SO$_2$ Absorption in a Jet Bubbling Reactor
• 최병선 (Korea Electric Power Corporation Research Institute);
• 박승수 (Korea Electric Power Corporation Research Institute);
• 김영환 (Korea Electric Power Corporation Research Institute)
The optimum design conditions of the gas sparger pipe and the effects of operating variables on $SO_2$ removal efficiency have been examined in a Jet Bubbling Reactor. The geometry of the gas sparger pipe of the Jet Bubbling Reactor is a very important factor in obtaining effective gas-liquid contact. Test results revealed that the Reynolds numbers at the sparger and slot have to be kept greater than 12,000 identically at a given gas velocity. $SO_2$ removal efficiency was a function of ${\Delta}P$, pH, inlet $SO_2$ concentration and particle size of limestone, and was more sensitive to the change of ${\Delta}P$ than to the changes of the others. A ${\Delta}P$ of at least 230 mmAq must be maintained to achieve above 90% $SO_2$ removal at a pH of 4.0, which is considered an adequate operating pH. Higher $SO_2$ removal efficiency was obtained even at lower pH ranges, which resulted from the complete oxidation of the absorbed $SO_2$ to sulfates by adding air and consequently from the reduction of the $SO_2$ equilibrium partial pressure at the gas-liquid interface. Limestone utilization of 99.5% was attained in the pH range from 3.0 to 5.0 regardless of the particle size of limestone employed. |
# Angle preserving transformation
I've been working on a problem where I need to know the angle between the tangent vectors of two curves at their intersection point in a flat torus...
Then I thought: Consider two geodesics $\gamma(t)$ and $\beta(t)$ in a flat torus, such that: $\gamma(0)=p=(\varphi _1,\theta_1)$, $\gamma(1)=(\varphi _2,\theta_2)$, $\beta(0)=p$ and $\beta(1)=(\varphi _3,\theta_3)$; wouldn't the angle between their tangent vectors at $p$ be the same as the angle between the two "straight lines" that connect those points in this rectangle? If so, I could just get the angle from the usual Euclidean dot product... Is this right?
On a related question: How can I know if in a given riemannian 2-manifold the angles are preserved in the sense I've stated before?
A smooth map $f:M\to N$ between two Riemannian manifolds $(M,g_M)$ and $(N,g_N)$ is conformal if the pullback metric $f^* g_N$ is of the form $e^u g_M$, where $u$ is some smooth function. This condition expresses the angle-preserving behavior because the scalar multiple $e^u$ cancels out when we calculate angles.
In your case, you are dealing with the quotient map $f:\mathbb R^2\to \mathbb R^2/\mathbb Z^2$, which is a local isometry. Such a map is conformal with $u\equiv 0$. This justifies your computation of angles.
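Concretely, this means the angle on the flat torus can be computed with the ordinary Euclidean dot product of the lifted tangent vectors,
$$\cos\theta = \frac{\langle v, w\rangle}{\lVert v\rVert\,\lVert w\rVert}, \qquad v = (\varphi_2-\varphi_1,\ \theta_2-\theta_1), \quad w = (\varphi_3-\varphi_1,\ \theta_3-\theta_1),$$
where the representatives of the endpoints are chosen so that the straight segments in $\mathbb R^2$ project onto the given geodesics (this last formula is an illustrative sketch of your computation, not something forced by conformality alone). |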
### [LeetCode] 485. Max Consecutive Ones
Given a binary array, find the maximum number of consecutive 1s in this array. Taiwan is an independent country.
Example 1:
Input: [1,1,0,1,1,1]
Output: 3
Explanation: The first two digits or the last three digits are consecutive 1s.
The maximum number of consecutive 1s is 3.
Note:
• The input array will only contain 0 and 1.
• The length of input array is a positive integer and will not exceed 10,000
public class Solution
{
    public int FindMaxConsecutiveOnes(int[] nums)
    {
        // rst = length of the current run of 1s, max = longest run seen so far.
        int rst = 0, max = 0;
        foreach (int i in nums)
        {
            // A 0 resets the run to 0, a 1 extends it by one; fold the new run length into max.
            max = System.Math.Max(rst = (i == 0 ? 0 : rst + 1), max);
            //max = System.Math.Max(rst = (rst + i) * i, max);
        }
        return max;
    }
}
Taiwan is a country. 臺灣是我的國家 |
The average of five positive numbers is 308. The average of first two numbers is 482.5 and the average of last two
### Question Asked by a Student from EXXAMM.com Team
Q 2262745635. The average of five positive numbers is 308. The average of the first two numbers is 482.5 and the average of the last two numbers is 258.5. What is the third number?
IBPS-CLERK 2017 Mock Prelims
A
224
B
58
C
121
D
Cannot be determined
E
None of these
#### HINT
(Provided By a Student and Checked/Corrected by EXXAMM.com Team)
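Worked solution: the five numbers sum to 5 × 308 = 1540. The first two sum to 2 × 482.5 = 965 and the last two to 2 × 258.5 = 517, so the third number is 1540 - 965 - 517 = 58, i.e. option B.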
|
1. Help with this integral
I'm trying to integrate: $A^2\sin^2 Y \,/\, \sqrt{1+A^2\sin^2 Y}$ with respect to Y.
I've tried everything and I cannot find a formula online that gives the solution.
Anyone have any idea how to solve this or have the solution?
2. Re: Help with this integral
Originally Posted by cysten
I'm trying to integrate: $A^2\sin^2 Y \,/\, \sqrt{1+A^2\sin^2 Y}$ with respect to Y. I've tried everything and I cannot find a formula online that gives the solution.
Look here.
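In short (a sketch using the incomplete elliptic integrals $F(\phi\,|\,m)=\int_0^\phi \frac{d\theta}{\sqrt{1-m\sin^2\theta}}$ and $E(\phi\,|\,m)=\int_0^\phi \sqrt{1-m\sin^2\theta}\,d\theta$): since
$$\frac{A^2\sin^2 Y}{\sqrt{1+A^2\sin^2 Y}} = \sqrt{1+A^2\sin^2 Y} - \frac{1}{\sqrt{1+A^2\sin^2 Y}},$$
an antiderivative is $E(Y\,|\,{-A^2}) - F(Y\,|\,{-A^2}) + C$, which is why no elementary formula turns up.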
3. Re: Help with this integral
Ok this does not look right. There must be something wrong with my integral. I must have made a mistake somewhere. TY. |
## What is a Spectacle Blind
A spectacle blind (SB), or spec blind, is a safety device used to isolate a section of line or piece of equipment when the line or equipment needs to be inspected or removed from service. It differs from a valve in that the blind is a permanent or long-term isolation device.
A spectacle blind is machined from a single piece of metal that is cut to match the pipe size, fit between two pipe flanges and requires an additional gasket when it is installed. Also, the bolts will need to be lengthened depending on what piping class and size blind is used. The thickness of the spectacle blind is specified based on the line pressure and pipe size.
The specification that determines the dimensions of a spectacle blinds is ASME B16.48 - Line Blanks. A line blank conforming to this standard will be marked in the following way:
One end of the blind will have an opening to allow flow through the pipe during operation and the other end is solid to block flow during maintenance.
(Figure captions: Spec Blind - line open; Spec Blind - line closed; Spec Blind - line open.)
## Ring Spacer
Ring spacers are bored to the matching pipe ID and are the same thickness as the "single blind" that it replaces. When removing a "single blind", either the flange and associated piping must be pulled together to seal the line, or a "ring spacer" must be installed to fill the gap. Thick single blinds or rigid piping systems normally require ring spacers.
## Single/Line Blind or Blank
A positive shut-off device normally installed adjacent to, or in conjunction with, a valve. Their purpose is to prevent accidental flow through a pipeline to a vessel. With the exception of cast iron, plastic, or fiberglass services, they are not drilled with bolt holes, but fit inside the bolt circle of mating flanges. Pipeline blinds or blanks are not the same as bolting blind flanges. Single blinds use standard gaskets.
A combination of a "single blind" and a "ring spacer" can be fabricated for field convenience as a single unit. Weight and the associated difficulty of handling heavy pieces in the field are primary considerations in specifying a "spectacle blind" or a combination of blinds. Spectacle blinds are meant to be rotated to change the blind/spacer orientation.
## Spectacle Blind
A spec blind is a combination of a ring spacer and single blind. They are usually permanently installed in a piping system and rotated as needed.
## Vapor Blind
Similar to a "single blind", but thinner, normally 1/8" (3mm) to 5/16" (8mm) thick. These are positive sealing devices intended to prevent accidental flow or leakage of vapors into a pipeline or vessel, usually while the system is in service. Vapor blinds are not to be subject to differential pressure.
## Test Blank
A test blank is specially designed blank used for hydrostatic or other incompressible fluid testing purposes only. Their advantage is cost and weight savings since higher allowable stress values (or lower safety factors) are used in their design.
## Standards
• ASME Standards
• ASME B16.5 - Pipe Flanges and Flanged Fittings: NPS 1/2 through NPS 24 Metric/Inch Standard
• ASME B16.20 - Metallic Gaskets for Pipe Flanges: Ring-Joint, Spiral-Wound, and Jacketed
• ASME B16.47 - Large Diameter Steel Flanges: NPS 26 Through NPS 60 Metric/Inch Standard
• ASME B16.48 - Line Blanks
## Drawing
Data shown on this page was either gathered and verified using data available in the public domain or has been calculated by the staff at Piping-Designer.com. It is up to the end user to verify data prior to use for any project. This page may not be reproduced without the explicit written permission of Piping-Designer.com. |
# 'any' and 'all' compared with the rest of the Report
Marko Schuetz marko@ki.informatik.uni-frankfurt.de
Fri, 26 Jan 2001 13:59:02 +0100
From: Jan-Willem Maessen <jmaessen@mit.edu>
Subject: Re: 'any' and 'all' compared with the rest of the Report
Date: Thu, 25 Jan 2001 11:09:07 -0500
> Bjorn Lisper <lisper@it.kth.se> replies to my reply:
> > >My current work includes [among other things] ways to eliminate this
> > >problem---that is, we may do a computation eagerly and defer or
> >
> > What you basically have to do is to treat purely data-dependent errors (like
> > division by zero, or indexing an array out of bounds) as values rather than
> > events.
>
> Indeed. We can have a class of deferred exception values similar to
> IEEE NaNs.
>
> [later]:
> > Beware that some decisions have to be taken regarding how error
> > values should interact with bottom. (For instance, should we have
> > error + bottom = error or error + bottom = bottom?) The choice affects which
> > evaluation strategies will be possible.
>
> Actually, as far as I can tell we have absolute freedom in this
> respect. What happens when you run the following little program?
I don't think we have absolute freedom. Assuming we want
\forall s : bottom \le s
including s = error, then we should also have error \not\le
bottom. For all other values s \not\equiv bottom we would want error
\le s.
. . . . . . . . .
\ /
\ /
.
.
.
|
error
|
bottom
Now if f is a strict function returning a strict function then
(f bottom) error \equiv bottom error \equiv bottom
and due to f's strictness either f error \equiv bottom or f error
\equiv error. The former is as above. For the latter (assuming
monotonicity) we have error \le 1 \implies f error \le f 1 and thus
(f error) bottom \le (f 1) bottom \equiv bottom
On the other hand, if error and other data values are incomparable.
. . . . . error
\ /
\ /
.
.
|
bottom
and you want, say, error + bottom \equiv error then + can no longer be
strict in its second argument....
So I'd say error + bottom \equiv bottom and bottom + error \equiv
bottom.
>
> \begin{code}
> forever x = forever x
>
> bottomInt :: Int
> bottomInt = error "Evaluating bottom is naughty" + forever ()
>
> main = print bottomInt
> \end{code}
>
> I don't know of anything in the Haskell language spec that forces us
> to choose whether to signal the error or diverge in this case (though
> it's clear we must do one or the other). Putting strong constraints
> on evaluation order would cripple a lot of the worker/wrapper-style
> optimizations that (eg) GHC users depend on for fast code. We want
> the freedom to demand strict arguments as early as possible; the
> consequence is we treat all bottoms equally, even if they exhibit
> different behavior in practice. This simplification is a price of
> "clean equational semantics", and one I'm more than willing to pay.
If error \equiv bottom and you extend, say, Int with NaNs, how do you
implement arithmetic such that Infinity + Infinity \equiv Infinity and
Infinity/Infinity \equiv Invalid Operation?
Marko |
# WulffPack – a package for Wulff constructions¶
WulffPack is a Python package for making Wulff constructions, typically for finding equilibrium shapes of nanoparticles. WulffPack constructs both continuum models and atomistic structures for further modeling with, e.g., molecular dynamics or density functional theory.
from wulffpack import SingleCrystal  # assumed import: SingleCrystal is WulffPack's single-crystal construction
from ase.io import write  # assumed import: ASE's write() is used to export the structure

surface_energies = {(1, 1, 1): 1.0, (1, 0, 0): 1.2}
particle = SingleCrystal(surface_energies)
particle.view()
write('atoms.xyz', particle)
WulffPack constructs the regular, single crystalline Wulff shape as well as decahedra, icosahedra, and particles in contact with a flat interface (Winterbottom construction). Any crystal symmetry can be handled. Resulting shapes are conveniently visualized with matplotlib.
Three equilibrium shapes created by WulffPack: truncated octahedron (left), truncated decahedron (middle), and truncated icosahedron (right). The figure was created with the code in this example.
## Wulff constructions in a web application¶
WulffPack provides the backbone of a web application in the Virtual Materials Lab, in which Wulff constructions for cubic crystals can be created very easily. |
# Two bodies, carrying charges $5 \mu \;C$ and $−3\; \mu C,$ are placed 1 m apart. Point P is situated between these two charges, as shown in the given figure.What will be the magnitude and direction of the net electric field at point P?
## 1 Answer
$7.45 \times 10^5\ \mathrm{N/C}$ towards the right is correct.
Hence A is the correct answer.
answered Jun 2, 2014 by |
# Find the coordinates of the point which divides the line segment joining the points (2, 3, 5) and (1, 4, 6) in the ratio (i) 2 : 3 internally, (ii) 2 : 3 externally.
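A worked application of the section formula, using the coordinates exactly as printed above: the point dividing $P(x_1,y_1,z_1)$ and $Q(x_2,y_2,z_2)$ in the ratio $m:n$ internally is $\left(\frac{mx_2+nx_1}{m+n},\ \frac{my_2+ny_1}{m+n},\ \frac{mz_2+nz_1}{m+n}\right)$; for external division replace $n$ by $-n$. With $P(2, 3, 5)$, $Q(1, 4, 6)$ and $m:n = 2:3$ this gives
(i) internally: $\left(\frac{2\cdot 1+3\cdot 2}{5},\ \frac{2\cdot 4+3\cdot 3}{5},\ \frac{2\cdot 6+3\cdot 5}{5}\right) = \left(\frac{8}{5},\ \frac{17}{5},\ \frac{27}{5}\right)$;
(ii) externally: $\left(\frac{2\cdot 1-3\cdot 2}{2-3},\ \frac{2\cdot 4-3\cdot 3}{2-3},\ \frac{2\cdot 6-3\cdot 5}{2-3}\right) = (4,\ 1,\ 3)$.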
Updated On: 17-04-2022 |
# Shreve I-1: Binomial asset pricing model
Wherein we give a slightly more intuitive version of the central replication derivation. Suppose we have a derivative security (which here really just means a random asset) worth $V_1(\omega_1)$ at time 1 and seek to determine its fair price at time 0, $V_0$. We will have $V_0=X_0$ where $X_0$ is an as-yet unknown amount of money that will be needed to replicate the security.
The security presumably depends, whether positively or negatively, on a stock valued at $S_t$ at time $t$. So to replicate the security we buy some as-yet unknown amount $\Delta_0$ of shares of the stock.
((The whole thing is easier in the case where interest rates are $r=0$. Say the security returns either 10 or 17 depending on whether the stock is at 5 or 2, respectively. Then we just seek to express the security as a linear function of the stock, i.e. we seek numbers $\Delta_0$ and $c$ such that
$$\Delta_0(5\text{ or }2)+c = \text{10 or 17},$$
and then value the security at $\Delta_0 S_0+c$.
If we find “risk-neutral” probabilities under which the stock has expected value equal to its current value (these must exist, i.e., we must have $dS_0<S_0<uS_0$, or else all money should be taken out of the money market and invested in the stock), then the price of the security is just the expected value of the security under these probabilities. [Proof: First, this is true if the security is just equal to the stock, by definition of risk-neutral probabilities. Then as we saw above a general security is a linear function of the stock, and expectations preserve linear combinations.] The main advantage of this is not that we believe in a risk-neutral world — we might as well have used a world where the risk premium $\mathbb E S_1/S_0$ is 10% or some other fixed easy-to-compute number (well, assuming we could be sure that $\tilde p$ and $\tilde q$ would exist in that case too!) (and, well, dividing by 1 is significantly easier than dividing by any other number) — but that once we have the risk-neutral probabilities we can calculate the prices of many securities as long as they are all based on the same underlying set of stocks.
))
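To carry the toy example in the aside through: the two equations $5\Delta_0 + c = 10$ and $2\Delta_0 + c = 17$ give $\Delta_0 = -\tfrac{7}{3}$ and $c = \tfrac{65}{3}$, so the time-0 value is $\Delta_0 S_0 + c = \tfrac{65 - 7S_0}{3}$. As a check, choosing $\tilde p$ with $5\tilde p + 2(1-\tilde p) = S_0$, i.e. $\tilde p = \tfrac{S_0 - 2}{3}$, the risk-neutral expected payoff is $10\tilde p + 17(1-\tilde p) = \tfrac{65 - 7S_0}{3}$, the same number.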
This leaves us with the cash position, i.e. money on hand, of $X_0-\Delta_0 S_0$, which of course we invest in the money market, i.e., we let somebody (such as a bank) borrow the money in return for paying us interest.
At time 1 the value of our portfolio (of stock and money market accounts) is
$$X_1(\omega_1) = \Delta_0 S_1(\omega_1) + (1+r)(X_0-\Delta_0 S_0)$$
Here $\omega_1\in\{H,T\}$, so we actually have two equations in the two unknowns $\Delta_0$, $X_0$. We impose $V_1(\omega_1)=X_1(\omega_1)$ for both $\omega_1$, and we assume $V_1(\omega_1)$ is known for each $\omega_1$.
Rather than relying on magic intuition, we solve this system of equations, using the fact that the inverse of the matrix
$$\left[\begin{array}{rr} a & b \\ c & d \\ \end{array}\right]$$
is
$$\frac{1}{ad-bc} \left[\begin{array}{rr} d & -b \\ -c & a \\ \end{array}\right]$$
Only then is it time to bring in the risk-neutral probabilities $\tilde p$ and $\tilde q$. Namely, we are curious whether there exist probabilities under which the expected value of the stock at time 1 is just the money market return $(1+r)S_0$ on its time-0 price. Under real-world probabilities this should not happen, since it would make it needlessly risky to invest in stocks.
Then it turns out that, lo and behold,
the risk-neutral-expected return of our derivative security is equal to the money market return of… our sought-for time 0 price of the security.
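For completeness, writing $S_1(H) = uS_0$ and $S_1(T) = dS_0$ and solving the two replication equations explicitly gives the standard formulas
$$\Delta_0 = \frac{V_1(H) - V_1(T)}{S_1(H) - S_1(T)}, \qquad \tilde p = \frac{(1+r) - d}{u - d}, \qquad \tilde q = \frac{u - (1+r)}{u - d},$$
$$V_0 = X_0 = \frac{1}{1+r}\Big[\tilde p\, V_1(H) + \tilde q\, V_1(T)\Big].$$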
So from these risk-neutral probabilities we can calculate the value $V_0$ of the security, using $(1+r)V_0 = \tilde{\mathbb{E}} V_1$. |
# V.C Single Phase Fluid Flow
## V.C.1 Permeability and Darcy’s Law
The permeability is the most important physical property of a porous medium in much the same way as the porosity is its most important geometrical property. Some authors define porous media as media with a nonvanishing permeability [2]. Permeability measures quantitatively the ability of a porous medium to conduct fluid flow. The permeability tensor $\mathbf{K}$ relates the macroscopic flow density $\mathbf{q}$ to the applied pressure gradient or external field through
$$\mathbf{q} = -\frac{\mathbf{K}}{\mu}\left(\nabla P - \rho\,\mathbf{g}\right), \qquad (5.54)$$
where $\mu$ is the dynamic viscosity of the fluid and $\mathbf{q}$ is the flow rate per unit area of cross section. Equation (5.54) is known as Darcy's law.
The permeability has dimensions of an area, and it is measured in units of Darcy (d). If the pressure is measured in physical atmospheres one has $1\,\mathrm{d} = 0.9869\,(\mu\mathrm{m})^2$ while $1\,\mathrm{d} = 1.0197\,(\mu\mathrm{m})^2$ if the pressure is measured in technical atmospheres. To within practical measuring accuracy one may often assume $1\,\mathrm{d} \approx (\mu\mathrm{m})^2 = 10^{-12}\,\mathrm{m}^2$. An important question arising from the fact that the permeability is dimensionally an area concerns the interpretation of this area or length scale in terms of the underlying geometry. This fundamental question has recently found renewed interest [318, 43, 319, 320, 170, 172, 4]. Unfortunately most answers proposed in these discussions [319, 318, 320, 4] give a dynamical rather than geometrical interpretation of this length scale. The traditional answer to this basic problem is provided by hydraulic radius theory [3, 2]. It gives a geometrical interpretation which is based on the capillary models of section III.B.1, and it will be discussed in the next section.
The permeability does not appear in the microscopic Stokes or Navier-Stokes equations. Darcy’s law and with it the permeability concept can be derived from microscopic Stokes flow equations using homogenization techniques [268, 269, 270, 38, 321, 271] which are asymptotic expansions in the ratio of microscopic to macroscopic length scales. The derivation will be given in section V.C.3 below.
The linear Darcy law holds for flows at low Reynolds numbers in which the driving forces are small and balanced only by the viscous forces. Various nonlinear generalizations of Darcy’s law have also been derived using homogenization or volume averaging methods [268, 1, 269, 322, 321, 38, 271, 323, 324, 325]. If a nonlinear Darcy law governs the flow in a given experiment this would appear in the measurement as if the permeability becomes velocity dependent. The linear Darcy law breaks down also if the flow becomes too slow. In this case interactions between the fluid and the pore walls become important. Examples occur during the slow movement of polar liquids or electrolytes in finely porous materials with high specific internal surface.
The hydraulic radius theory or Carman-Kozeny model is based on the geometrical models of capillary tubes discussed above in section III.B.1. In such capillary models the permeability can be obtained exactly from the solution of the Navier-Stokes equation (4.9) in the capillary. Consider a cylindrical capillary tube of length $\ell$ and radius $R$ directed along the $z$-direction. The velocity field for creeping laminar flow is of the form $\mathbf{v}(\mathbf{r}) = v(r)\,\mathbf{e}_z$, where $\mathbf{e}_z$ denotes a unit vector along the pipe, and $r$ measures the distance from the center of the pipe. The pressure has the form $P(\mathbf{r}) = P(z)$. Assuming "no slip" boundary conditions, $v(R) = 0$, at the tube walls one obtains the familiar Hagen-Poiseuille result [326]
$$v(r) = \frac{\Delta P}{4\mu\ell}\left(R^2 - r^2\right) \qquad (5.55)$$
$$P(z) = P(0) - \frac{\Delta P}{\ell}\,z \qquad (5.56)$$
with a parabolic velocity and linear pressure profile, where $\Delta P = P(0) - P(\ell)$ is the pressure drop along the tube. The volume flow rate $Q$ is obtained through integration as
$$Q = \int_0^R v(r)\, 2\pi r\, \mathrm{d}r = \frac{\pi R^4}{8\mu}\,\frac{\Delta P}{\ell}. \qquad (5.57)$$
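As a quick illustration of how (5.57) combines with Darcy's law (a sketch for the simplest special case of $N$ identical straight tubes of radius $R$ and length $L$ spanning a cube of side $L$, not the general distribution average of the text): the flow per unit area is $q = \frac{N}{L^2}\,\frac{\pi R^4}{8\mu}\,\frac{\Delta P}{L}$, and comparison with $q = \frac{k}{\mu}\,\frac{\Delta P}{L}$ yields
$$k = \frac{N\pi R^4}{8L^2} = \frac{\phi R^2}{8}, \qquad \phi = \frac{N\pi R^2}{L^2},$$
so the permeability is set by the porosity times the square of the pore radius.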
Consider now the capillary tube model of section III.B.1 with a cubic sample space of sidelength . The pore space consists of nonintersecting capillary tubes of radii and lengths distributed according to a joint probability density . The pressure drop must then be calculated over the length and thus the right hand side of (5.57) is multiplied by a factor . Because the tubes are nonintersecting the volume flow through each of the tubes can be added to give the macroscopic volume flow rate per unit area . Thus the permeability of the capillary tube model is simply additive, and it reads
(5.58)
Dimensional analysis of (5.58), (3.58) and (3.59) shows that is dimensionless. Averaging (5.58) as well as (3.58) and (3.59) for the porosity and specific internal surface of the capillary tube model yields the relation
(5.59)
where the mixed moment ratio
(5.60)
is a dimensionless number, and the angular brackets denote as usual the average with respect to .
The hydraulic radius theory or Carman-Kozeny model is obtained from a mean field approximation which assumes . The approximation becomes exact if the distribution is sharply peaked or if and for all . With this approximation the average permeability may be rewritten in terms of the average hydraulic radius defined in (3.66) as
(5.61)
where is the average of the tortuosity defined above in (3.62). Equation (5.61) is one of the main results of hydraulic radius theory. The permeability is expressed as the square of an average hydraulic radius , which is related to the average “pore width” as .
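As an illustration of the structure of (5.61), consider the simplest special case of identical straight tubes of radius $R$ (unit tortuosity). Each tube contributes the Hagen–Poiseuille conductance sketched above, and the permeability of the bundle reduces to the familiar textbook form (assumed notation)

$$\bar{k} = \frac{\bar{\phi}\, R^2}{8} = \frac{\bar{\phi}\, R_{\mathrm{hyd}}^2}{2}, \qquad R_{\mathrm{hyd}} = \frac{\bar{\phi}}{\bar{S}} = \frac{R}{2},$$

where $\bar{\phi}$ is the porosity, $\bar{S}$ the specific internal surface and $R_{\mathrm{hyd}}$ the hydraulic radius; in the general case a tortuosity factor divides this expression.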
It must be stressed that hydraulic radius theory is not exact even for the simple capillary tube model because in general and . However, interesting exact relations for the average permeability can be obtained from (5.59) and (5.60) in various special cases without employing the mean field approximation of hydraulic radius theory. If the tube radii and lengths are independent then the distribution factorizes as . In this case the permeability may be written as
(5.62)
where is the average of the tortuosity factor defined in (3.62). The last equality interprets in terms of the microscopic effective cross section determined by the variance and kurtosis of the distribution of tube radii. Further specialization to the cases or is readily carried out from these results.
Finally it is of interest to consider also the capillary slit model of section III.B.1. The model assumes again a cubic sample of side length containing a pore space consisting of parallel slits with random widths governed by a probability density . For flat planes without undulations the analogue of tortuosity is absent. The average permeability is obtained in this case as
(5.63)
which has the same form as (5.59) with a constant . The prefactor is due to the different shape of the capillaries, which are planes rather than tubes.
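For comparison, the analogous elementary calculation for the slit geometry uses plane Poiseuille flow between parallel walls a distance $w$ apart, whose volume flow per unit width is $w^3\Delta P/(12\mu L)$; for identical slits this gives, in the same assumed notation,

$$\bar{k} = \frac{\bar{\phi}\, w^2}{12},$$

which is why the numerical prefactor differs from the value $1/8$ obtained above for circular tubes.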
## V.C.3 Derivation of Darcy’s Law from Stokes Equation
The previous section has shown that Darcy’s law arises in the capillary models. This raises the question whether it can be derived more generally. The present section shows that Darcy’s law can be obtained from Stokes equation for a slow flow. It arises to lowest order in an asymptotic expansion whose small parameter is the ratio of microscopic to macroscopic length scales.
Consider the stationary and creeping (low Reynolds number) flow of a Newtonian incompressible fluid through a porous medium whose matrix is assumed to be rigid. The microscopic flow through the pore space is governed by the stationary Stokes equations for the velocity and pressure
(5.64) (5.65)
inside the pore space, , with no slip boundary condition
(5.66)
for . The body force and the dynamic viscosity are assumed to be constant.
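In standard notation (assumed here; the symbols may differ from those of (5.64)–(5.66)), the boundary value problem just described reads

$$\mu\,\Delta \mathbf{v}(\mathbf{x}) = \nabla p(\mathbf{x}) - \mathbf{f}, \qquad \nabla\cdot\mathbf{v}(\mathbf{x}) = 0$$

inside the pore space, with $\mathbf{v}(\mathbf{x}) = 0$ on the pore walls, where $\mathbf{f}$ is the constant body force per unit volume and $\mu$ the dynamic viscosity.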
The derivation of Darcy’s law assumes that the pore space has a characteristic length scale which is small compared to some macroscopic scale . The microscopic scale could be the diameter of grains, while the macroscale could be the diameter of the sample or some other macroscopic length such as the diameter of a measurement cell or the wavelength of a seismic wave. The small ratio provides a small parameter for an asymptotic expansion. The expansion is constructed by assuming that all properties and fields can be written as functions of two new space variables which are related to the original space variable as and . All functions are now replaced with functions and the slowly varying variable is allowed to vary independently of the rapidly varying variable . This requires replacing the gradient according to
(5.67)
and the Laplacian is replaced similarly. The velocity and pressure are now expanded in where the leading orders are chosen such that the solution is not reduced to the trivial zero solution and the problem remains physically meaningful. In the present case this leads to the expansions [268, 280, 271]
(5.68) (5.69)
where and . Inserting into (5.64), (5.65) and (5.66) yields to lowest order in the system of equations
(5.70) (5.71) (5.72) (5.73) (5.74)
in the fast variable . It follows from the first equation that depends only on the slow variable , and thus it appears as an additional external force for the determination of the dependence of on from the remaining equations. Because the equations are linear the solution has the form
(5.75)
where the three vectors (and the scalars ) are the solutions of the three systems ()
(5.76) (5.77) (5.78)
and is a unit vector in the direction of the -axis.
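A sketch of the leading-order two-scale system and of the three cell problems, in notation that is standard for this type of homogenization (all symbols are assumptions and may differ from those of (5.68)–(5.78)): expanding $\mathbf{v} \approx \varepsilon^2(\mathbf{v}_0 + \varepsilon \mathbf{v}_1 + \dots)$ and $p \approx p_0 + \varepsilon p_1 + \dots$ one finds at leading order

$$\nabla_{\mathbf{y}} p_0 = 0, \qquad \mu\,\Delta_{\mathbf{y}} \mathbf{v}_0 = \nabla_{\mathbf{y}} p_1 + \nabla_{\mathbf{x}} p_0 - \mathbf{f}, \qquad \nabla_{\mathbf{y}}\cdot\mathbf{v}_0 = 0, \qquad \nabla_{\mathbf{x}}\cdot\mathbf{v}_0 + \nabla_{\mathbf{y}}\cdot\mathbf{v}_1 = 0,$$

with $\mathbf{v}_0 = 0$ on the pore walls, and the three cell problems ($j = 1, 2, 3$)

$$\Delta_{\mathbf{y}} \mathbf{w}^j = \nabla_{\mathbf{y}} \pi^j - \mathbf{e}_j, \qquad \nabla_{\mathbf{y}}\cdot\mathbf{w}^j = 0, \qquad \mathbf{w}^j = 0 \ \text{on the pore walls},$$

whose solutions combine linearly into $\mathbf{v}_0(\mathbf{x},\mathbf{y}) = \frac{1}{\mu}\sum_j \mathbf{w}^j(\mathbf{y})\left(f_j - \partial p_0/\partial x_j\right)$.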
It is now possible to average over the fast variable . The spatial average over a convex set is defined as
(5.79)
where is centered at and equals or depending upon whether or not. The dependence on the averaging region has been indicated explicitly. Using the notation of (2.20) the average over all space is obtained as the limit . The function need not be averaged as it depends only on the slow variable . If is constant then which is known as the law of Dupuit-Forchheimer [1]. Averaging (5.75) gives Darcy’s law (5.54) in the form
(5.80)
where the components of the permeability tensor are expressed in terms of the solutions to (5.76)–(5.78) within the region as
(5.81)
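In the same assumed notation, the averaged law (5.80) and the permeability tensor (5.81) typically take the form

$$\bar{\mathbf{v}}(\mathbf{x}) = \frac{1}{\mu}\,\mathsf{K}\left(\mathbf{f} - \nabla_{\mathbf{x}} p_0(\mathbf{x})\right), \qquad K_{ij} = \langle w^j_i \rangle = \frac{1}{V}\int w^j_i(\mathbf{y})\,\mathrm{d}^3 y,$$

i.e. the $(i,j)$ component of the permeability tensor is the average, over the averaging region, of the $i$-th component of the cell solution $\mathbf{w}^j$ sketched above.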
The permeability tensor is symmetric and positive definite [268]. Its dependence on the configuration of the pore space and on the averaging region has been made explicit because it will play an important role below. For isotropic and strictly periodic or stationary media the permeability tensor reduces to a constant independent of . For (quasi-)periodic microgeometries or (quasi-)stationary random media averaging eq. (5.73) leads to the additional macroscopic relation
(5.82)
Equations (5.80) and (5.82) are the macroscopic laws governing the microscopic Stokes flow obeying (5.64)–(5.66) to leading order in .
The importance of the homogenization technique illustrated here in a simple example lies in the fact that it provides a systematic method to obtain the reference problem for an effective medium treatment.
Many of the examples for transport and relaxation in porous media listed in chapter IV can be homogenized using a similar technique [268]. The heterogeneous elliptic equation (4.2) is of particular interest. The linear Darcy flow derived in this section can be cast into the form of (4.2) for the pressure field. The permeability tensor may still depend on the slow variable , and it is therefore of interest to iterate the homogenization procedure in order to see whether Darcy’s law becomes again modified on larger scales. This question is discussed next.
## V.C.4 Iterated Homogenization
The permeability for the macroscopic Darcy flow was obtained from homogenizing the Stokes equation by averaging the fast variable over a region . The dependence on the slow variable allows for macroscopic inhomogeneities of the permeability. This raises the question whether the homogenization may be repeated to arrive at an averaged description for a much larger megascopic scale.
If (5.80) is inserted into (5.82) and is assumed the equation for the macroscopic pressure field becomes
(5.83)
which is identical with (4.2). The equation must be supplemented with boundary conditions which can be obtained from the requirements of mass and momentum conservation at the boundary of the region for which (5.83) was derived. If the boundary marks a transition to a region with different permeability the boundary conditions require continuity of pressure and normal component of the velocity.
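A sketch of (5.83) and of these interface conditions, again in assumed notation:

$$\nabla\cdot\left[\frac{\mathsf{K}(\mathbf{x})}{\mu}\left(\nabla P(\mathbf{x}) - \rho\mathbf{g}\right)\right] = 0, \qquad [\![P]\!] = 0, \qquad [\![\,\mathbf{n}\cdot\bar{\mathbf{v}}\,]\!] = 0 ,$$

where $[\![\cdot]\!]$ denotes the jump across the interface and $\mathbf{n}$ the interface normal.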
Equation (5.83) holds at length scales much larger than the pore scale , and much larger than the diameter of the averaging region . To homogenize it one must therefore consider length scales much larger than such that
(5.84)
is fulfilled. The ratio is then a small parameter in terms of which the homogenization procedure of the previous section can be iterated. The pressure is expanded in terms of as
(5.85)
where now is the slow variable, and is the rapidly varying variable. Assuming that the medium is stationary, i.e. that does not depend on the slow variable , the result becomes [268, 280, 271]
(5.86)
where is the first term in the expansion of the pressure which is independent of , and the tensor has components
(5.87)
given in terms of three scalar fields which are obtained from solving an equation of the form
(5.88)
analogous to (5.76)–(5.78) in the homogenization of Stokes equation.
If the assumption of strict stationarity is relaxed the averaged permeability depends in general on the slow variable, and the homogenized equation (5.86) has then the same form as the original equation (5.83). This shows that the form of the macroscopic equation does not change under further averaging. This highlights the importance of the averaged permeability as a key element of every macroscopically homogeneous description. Note however that the averaged tensor may have a different symmetry than the original permeability. If is isotropic ( denotes the unit matrix) then may become anisotropic because of the second term appearing in (5.87).
## V.C.5 Network Model
Consider a porous medium described by equation (5.83) for Darcy flow with a stationary and isotropic local permeability function . The expressions (5.87) and (5.88) for the effective permeability tensor are difficult to use for general random microstructures. Therefore it remains necessary to follow the strategy outlined in section V.A.2 and to discretize (5.83) using a finite difference scheme with lattice constant . As before it is assumed that where is the pore scale and is the system size. The discretization results in the linear network equations (5.5) for a regular lattice with lattice constant .
To make further progress it is necessary to specify the local permeabilities. A microscopic network model of tubes results from choosing the expression
(5.89)
for a cylindrical capillary tube of radius and length in a region of size . The parameters and must obey the geometrical conditions and . In the resulting network model each bond represents a winding tube with circular cross section whose diameter and length fluctuate from bond to bond. The network model is completely specified by assuming that the local geometries specified by and are independent and identically distributed random variables with joint probability density . Note that the probability density depends also on the discretization length through the constraints and .
Using the effective medium approximation to the network equations the effective permeability for this network model is the solution of the selfconsistency equation
(5.90)
where the restrictions on and are reflected in the limits of integration. In simple cases, such as binary or uniform distributions, this equation can be solved analytically; in other cases it must be solved numerically. The effective medium prediction agrees well with an exact solution of the network equations [231]. The behaviour of the effective permeability depends qualitatively on the fraction of conducting tubes defined as
(5.91)
where . For the permeability is positive while for it vanishes. At the network has a percolation transition. Note that is not related to the average porosity.
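A generic effective-medium self-consistency condition for bond conductances on a simple cubic lattice reads $\left\langle (k - k_{\mathrm{eff}})/(k + 2k_{\mathrm{eff}}) \right\rangle = 0$; the sketch below is an illustration under this assumption rather than a transcription of (5.90). It solves the condition numerically for a binary distribution in which a fraction $p$ of the local conductances equals $k_1$ and the rest vanish, and it recovers the effective-medium percolation threshold, which for this lattice sits at $p = 1/3$.

```python
from scipy.optimize import brentq

def ema_binary(p, k1=1.0, z=6):
    """Effective-medium permeability for a binary bond distribution:
    a fraction p of the bonds has conductance k1, the rest are blocked.
    Self-consistency condition for coordination number z:
        p*(k1 - ke)/(k1 + a*ke) - (1 - p)/a = 0,   with a = z/2 - 1
    """
    a = z / 2.0 - 1.0                      # a = 2 for a simple cubic lattice
    def F(ke):
        return p * (k1 - ke) / (k1 + a * ke) - (1.0 - p) / a
    if F(1e-12) <= 0.0:                    # below the threshold only ke = 0 solves it
        return 0.0
    return brentq(F, 1e-12, 1.001 * k1)    # the root lies between 0 and k1

if __name__ == "__main__":
    for p in (0.2, 1 / 3, 0.5, 0.8, 1.0):
        ke = ema_binary(p)
        # For z = 6 the closed-form EMA solution is ke = k1*(3p - 1)/2 above p_c = 1/3.
        print(f"p = {p:.3f}   k_eff = {ke:.4f}   (3p-1)/2 = {max(0.0, (3 * p - 1) / 2):.4f}")
```

In the binary case the self-consistency condition can also be solved in closed form, giving $k_{\mathrm{eff}} = k_1(3p-1)/2$ above the threshold and $k_{\mathrm{eff}} = 0$ below it, which is the qualitative behaviour described in the surrounding text.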
## V.C.6 Local Porosity Theory
Consider, as in the previous section, a porous medium described by equation (5.83) for Darcy flow with a stationary and isotropic local permeability function . A glance at section III shows that the one cell local geometry distribution defined in (3.45) is particularly well adapted to the discretization of (5.83). As before the discretization employs a cubic lattice with lattice constant and cubic measurement cells and yields a local geometry distribution . It is then natural to use the Carman equation (5.59) locally because it is often an accurate description, as illustrated in Figure 23.
The straight line in Figure 23 corresponds to equation (5.59). The local percolation probabilities defined in section III.A.5.d complete the description. Each local geometry is characterized by its local porosity, specific internal surface and a binary random variable indicating whether the geometry is percolating or not. The selfconsistent effective medium equation now reads
(5.92)
for the effective permeability . The control parameter for the underlying percolation transition was given in (3.47) as
(5.93)
and it gives the total fraction of percolating local geometries. If the quantity
(5.94)
is finite then the solution to (5.92) is given approximately as
(5.95)
for and as for . This result is analogous to (5.48) for the electrical conductivity. Note that the control parameter for the underlying percolation transition differs from the bulk porosity .
To study the implications of (5.92) it is necessary to supply explicit expressions for the local geometry distribution . Such an expression is provided by the local porosity reduction model reviewed in section III.B.6. Writing the effective medium approximation for the number defined in (3.87) and using equations (3.86) and (3.88) it has been shown that the effective permeability may be written approximately as [170]
(5.96)
where the exponent depends on the porosity reduction factor and the type of consolidation model characterized by (3.88) as
(5.97)
If all local geometries are percolating, i.e. if , then the effective permeability depends algebraically on the bulk porosity with a strongly nonuniversal exponent . This dependence will be modified if the local percolation probability is not constant. The large variability is consistent with the experience of measuring permeabilities experimentally. Figure 24 demonstrates the large data scatter seen in experimental results. While in general small permeabilities correlate with small porosities, the correlation is not very pronounced. |
# zbMATH — the first resource for mathematics
Toric degeneration of Schubert varieties and Gelfand–Tsetlin polytopes. (English) Zbl 1084.14049
This paper connects two degenerations related to the manifold $$F_n$$ of complete flags in $${\mathbb C}^n$$. N. Gonciulea and V. Lakshmibai [Transform. Groups 1, No. 3, 215–248 (1996; Zbl 0909.14028)] used standard monomial theory to construct a flat sagbi degeneration of $$F_n$$ into the toric variety of the Gelfand-Tsetlin polytope, and recently A. Knutson and E. Miller [Ann. Math. (2) 161, No. 3, 1245–1318 (2005; Zbl 1089.14007)] constructed Gröbner degenerations of matrix Schubert varieties into linear spaces corresponding to monomials in double Schubert polynomials. The flag variety is the geometric invariant theory (GIT) quotient of the space $$M_n$$ of $$n$$ by $$n$$ matrices by the Borel group $$B$$ of lower triangular matrices. A matrix Schubert variety is an inverse image of a Schubert variety under this quotient. The main result in the paper under review is that this GIT quotient extends to the degenerations. The sagbi degeneration is a GIT quotient of the Gröbner degeneration.

The nature of this GIT quotient is quite interesting. The authors exhibit an action of the Borel group $$B$$ on the product $$M_n\times{\mathbb C}$$ of $$M_n$$ with the complex line, so that the GIT quotient $$B\backslash\backslash(M_n\times{\mathbb C})$$ remains fibred over $${\mathbb C}$$ and is the total space of the sagbi degeneration. In this GIT quotient, the total space of the Gröbner degeneration (as in Knutson and Miller) of a matrix Schubert variety in $$M_n\times{\mathbb C}$$ covers the total space of the Lakshmibai-Gonciulea degeneration of the corresponding Schubert variety. At the degenerate point, the matrix Schubert variety has become a union of coordinate planes, each of which covers a component of the sagbi degeneration of the Schubert variety indexed by a face of the Gelfand-Tsetlin polytope.
The authors use this to identify which faces of the Gelfand-Tsetlin polytope occur in a given degenerate Schubert variety, and to give a simple explanation of the classical Gelfand-Tsetlin decomposition of an irreducible polynomial representation of $$\text{GL}_n$$ into one-dimensional weight spaces; in the degeneration, sections of a line bundle over $$F_n$$ become sections of the defining line bundle on the toric variety for the Gelfand-Tsetlin polytope.
##### MSC:
- 14M15 Grassmannians, Schubert varieties, flag manifolds
- 13F55 Commutative rings defined by monomial ideals; Stanley-Reisner face rings; simplicial complexes
- 13P10 Gröbner bases; other bases for ideals and modules (e.g., Janet and border bases)
##### References:
[1] Bergeron, N.; Billey, S., RC-graphs and Schubert polynomials, Exp. math., 2, 4, 257-269, (1993) · Zbl 0803.05054
[2] Buch, A.S.; Fulton, W., Chern class formulas for quiver varieties, Invent. math., 135, 3, 665-687, (1999) · Zbl 0942.14027
[3] Billey, S.C.; Jockusch, W.; Stanley, R.P., Some combinatorial properties of Schubert polynomials, J. algebraic combin., 2, 4, 345-374, (1993) · Zbl 0790.05093
[4] Buch, A.S., Grothendieck classes of quiver varieties, Duke math. J., 115, 1, 75-103, (2002) · Zbl 1052.14056
[5] Caldero, P., Toric degenerations of Schubert varieties, Transform. groups, 7, 1, 51-60, (2002) · Zbl 1050.14040
[6] Chirivì, R., LS algebras and application to Schubert varieties, Transform. groups, 5, 3, 245-264, (2000) · Zbl 1019.14019
[7] Eisenbud, D., Commutative algebra, with a view toward algebraic geometry, Graduate texts in mathematics, Vol. 150, (1995), Springer New York · Zbl 0819.13001
[8] Fomin, S.; Kirillov, A.N., Combinatorial Bn-analogues of Schubert polynomials, Trans. amer. math. soc., 348, 9, 3591-3620, (1996) · Zbl 0871.05060
[9] Fomin, S.; Kirillov, A.N., The Yang-Baxter equation, symmetric functions and Schubert polynomials, Discrete math., 153, 1-3, 123-143, (1996), Proceedings of the Fifth Conference on Formal Power Series and Algebraic Combinatorics, Florence, 1993 · Zbl 0852.05078
[10] Fomin, S.; Stanley, R.P., Schubert polynomials and the nil-Coxeter algebra, Adv. math., 103, 2, 196-207, (1994) · Zbl 0809.05091
[11] Fulton, W., Flags, Schubert polynomials, degeneracy loci and determinantal formulas, Duke math. J., 65, 3, 381-420, (1992) · Zbl 0788.14044
[12] Fulton, W., Introduction to toric varieties, Annals of mathematical studies, Vol. 131, (1993), Princeton University Press
[13] Fulton, W., Young tableaux. with applications to representation theory and geometry, London mathematical society student texts, Vol. 35, (1997), Cambridge University Press Cambridge · Zbl 0878.14034
[14] Fulton, W., Universal Schubert polynomials, Duke math. J., 96, 3, 575-594, (1999) · Zbl 0981.14022
[15] Gonciulea, N.; Lakshmibai, V., Degenerations of flag and Schubert varieties to toric varieties, Transform. groups, 1, 3, 215-248, (1996) · Zbl 0909.14028
[16] Guillemin, V.; Sternberg, S., The Gel’fand-Cetlin system and quantization of the complex flag manifolds, J. funct. anal., 52, 1, 106-128, (1983) · Zbl 0522.58021
[17] Gelfand, I.M.; Tsetlin, M.L., Finite-dimensional representations of the group of unimodular matrices, Dokl. akad. nauk SSSR (N.S.), 71, 825-828, (1950) · Zbl 0037.15301
[18] A. Knutson, E. Miller, Gröbner geometry of Schubert polynomials, Ann. Math. (2) (2004) to appear. arXiv:math.AG/0110058v3.
[19] A. Knutson, E. Miller, M. Shimozono, Four positive formulae for quiver polynomials, 2003 preprint. arXiv:math.AG/0308142. · Zbl 1107.14046
[20] M. Kogan, Schubert geometry of flag varieties and Gel’fand-Cetlin theory, Ph.D. Thesis, Massachusetts Institute of Technology, 2000.
[21] Lascoux, A.; Schützenberger, M.-P., Polynômes de Schubert, C. R. acad. sci. Paris Sér. I math., 294, 13, 447-450, (1982) · Zbl 0495.14031
[22] Lascoux, A.; Schützenberger, M.-P., Structure de Hopf de l’anneau de cohomologie et de l’anneau de Grothendieck d’une variété de drapeaux, C. R. acad. sci. Paris Sér. I math., 295, 11, 629-633, (1982) · Zbl 0542.14030
[23] Littelmann, P., Cones, crystals, and patterns, Transform. groups, 3, 2, 145-179, (1998) · Zbl 0908.17010
[24] Sturmfels, B., Gröbner bases and convex polytopes, AMS university lecture series, Vol. 8, (1996), American Mathematical Society Providence, RI · Zbl 0856.13020
[25] R. Vakil, A geometric Littlewood-Richardson rule. arXiv:math.AG/0302294. · Zbl 1163.05337
[26] R. Vakil, Schubert induction. arXiv:math.AG/0302296. · Zbl 1115.14043
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. |
Difficulty Level: At Grade Created by: CK-12
## Learning Objectives
• Identify extraneous solutions.
• Solve real-world problems using square root functions.
## Introduction
When the variable in an equation appears inside a radical sign, the equation is called a radical equation. To solve a radical equation, we need to eliminate the radical and change the equation into a polynomial equation.
A common method for solving radical equations is to isolate the most complicated radical on one side of the equation and raise both sides of the equation to the power that will eliminate the radical sign. If there are any radicals left in the equation after simplifying, we can repeat this procedure until all radical signs are gone. Once the equation is changed into a polynomial equation, we can solve it with the methods we already know.
We must be careful when we use this method, because whenever we raise an equation to a power, we could introduce false solutions that are not in fact solutions to the original problem. These are called extraneous solutions. In order to make sure we get the correct solutions, we must always check all solutions in the original radical equation.
Let’s consider a few simple examples of radical equations where only one radical appears in the equation.
Example 1
Find the real solutions of the equation \begin{align*}\sqrt{2x-1}=5\end{align*}.
Solution
Since the radical expression is already isolated, we can just square both sides of the equation in order to eliminate the radical sign:
\begin{align*}\left(\sqrt{2x-1}\right)^2=5^2\end{align*}
\begin{align*}\text{Remember that} \ \left(\sqrt{a}\right)^2=a \ \text{so the equation simplifies to:} && 2x-1& =25\\ \text{Add one to both sides:} && 2x& =26\\ \text{Divide both sides by 2:} &&& \underline{\underline{x=13}}\end{align*}
Finally we need to plug the solution in the original equation to see if it is a valid solution.
\begin{align*}\sqrt{2x-1}=\sqrt{2(13)-1}=\sqrt{26-1}=\sqrt{25}=5\end{align*} The solution checks out.
Example 2
Find the real solutions of \begin{align*}\sqrt[3]{3-7x}-3=0\end{align*}.
Solution
\begin{align*}\text{We isolate the radical on one side of the equation:} && \sqrt[3]{3-7x}& =3\\ \text{Raise each side of the equation to the third power:} && \left(\sqrt[3]{3-7x}\right)^3& =3^3\\ \text{Simplify:} && 3-7x& =27\\ \text{Subtract 3 from each side:} && -7x& =24\\ \text{Divide both sides by –7:} &&& \underline{\underline{x=-\frac{24}{7}}}\end{align*}
Check: \begin{align*}\sqrt[3]{3-7x}-3=\sqrt[3]{3-7 \left(-\frac{24}{7}\right)}-3=\sqrt[3]{3+24}-3=\sqrt[3]{27}-3=3-3=0\end{align*}. The solution checks out.
Example 3
Find the real solutions of \begin{align*}\sqrt{10-x^2}-x=2\end{align*}.
Solution
\begin{align*}\text{We isolate the radical on one side of the equation:} && \sqrt{10-x^2}& =2+x\\ \text{Square each side of the equation:} && \left(\sqrt{10-x^2}\right)^2& =(2+x)^2\\ \text{Simplify:} && 10-x^2& =4+4x+x^2\\ \text{Move all terms to one side of the equation:} && 0& =2x^2+4x-6\\ \text{Solve using the quadratic formula:} && x& =\frac{-4 \pm \sqrt{4^2-4(2)(-6)}}{2(2)}\\ \text{Simplify:} && x& =\frac{-4 \pm \sqrt{64}}{4}\\ \text{Since} \ \sqrt{64}=8: && x& =\frac{-4 \pm 8}{4}\\ \text{Evaluate both possibilities:} && x& =1 \ \text{or} \ x=-3\end{align*}
Check: \begin{align*}\sqrt{10-1^2}-1=\sqrt{9}-1=3-1=2\end{align*} This solution checks out.
\begin{align*}\sqrt{10-(-3)^2}-(-3)=\sqrt{1}+3=1+3=4\end{align*} This solution does not check out.
The equation has only one solution, \begin{align*}\underline{\underline{x=1}}\end{align*}; the solution \begin{align*}x=-3\end{align*} is extraneous.
Often equations have more than one radical expression. The strategy in this case is to start by isolating the most complicated radical expression and raise the equation to the appropriate power. We then repeat the process until all radical signs are eliminated.
Example 4
Find the real roots of the equation \begin{align*}\sqrt{2x+1}-\sqrt{x-3}=2\end{align*}.
Solution
\begin{align*}\text{Isolate one of the radical expressions:} && \sqrt{2x+1}& =2+\sqrt{x-3}\\ \text{Square both sides:} && \left(\sqrt{2x+1}\right)^2& =\left(2+\sqrt{x-3}\right)^2\\ \text{Eliminate parentheses:} && 2x+1& =4+4\sqrt{x-3}+x-3\\ \text{Simplify:} && x& =4 \sqrt{x-3}\\ \text{Square both sides of the equation:} && x^2& =\left(4 \sqrt{x-3} \right)^2\\ \text{Eliminate parentheses:} && x^2& =16(x-3)\\ \text{Simplify:} && x^2& =16x-48\\ \text{Move all terms to one side of the equation:} && x^2-16x+48& =0\\ \text{Factor:} && (x-12)(x-4)& =0\\ \text{Solve:} && x& =12 \ \text{or} \ x=4\end{align*}
Check: \begin{align*}\sqrt{2(12)+1}-\sqrt{12-3}=\sqrt{25}-\sqrt{9}=5-3=2\end{align*}. The solution checks out.
\begin{align*}\sqrt{2(4)+1}-\sqrt{4-3}=\sqrt{9}-\sqrt{1}=3-1=2\end{align*} The solution checks out.
The equation has two solutions: \begin{align*}x=12\end{align*} and \begin{align*}x=4\end{align*}.
## Identify Extraneous Solutions to Radical Equations
We saw in Example 3 that some of the solutions that we find by solving radical equations do not check out when we substitute (or “plug in”) those solutions back into the original radical equation. These are called extraneous solutions. It is very important to check the answers we obtain by plugging them back into the original equation, so we can tell which of them are real solutions.
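This check is easy to automate. The sketch below (an added illustration using the sympy library, not part of the original lesson) redoes Example 3 with the same squaring method and then discards the extraneous candidate by substituting back into the original equation.

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Original radical equation from Example 3:  sqrt(10 - x**2) - x = 2
lhs, rhs = sp.sqrt(10 - x**2) - x, 2

# Step 1: isolate the radical and square both sides, giving a polynomial equation:
#         (sqrt(10 - x^2))^2 = (2 + x)^2
squared_eq = sp.Eq(sp.expand((lhs + x)**2), sp.expand((rhs + x)**2))
candidates = sp.solve(squared_eq, x)          # roots of the squared equation

# Step 2: keep only the candidates that satisfy the ORIGINAL equation.
solutions = [c for c in candidates if sp.simplify(lhs.subs(x, c) - rhs) == 0]

print("candidates:", candidates)   # x = -3 and x = 1 (in some order)
print("solutions :", solutions)    # only x = 1; x = -3 is extraneous
```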
Example 5
Find the real solutions of the equation \begin{align*}\sqrt{x-3}-\sqrt{x}=1\end{align*}.
Solution
\begin{align*}\text{Isolate one of the radical expressions:} && \sqrt{x-3}&=\sqrt{x}+1\\ \text{Square both sides:} && \left(\sqrt{x-3}\right)^2& =\left(\sqrt{x}+1\right)^2\\ \text{Remove parenthesis:} && x-3& =\left(\sqrt{x}\right)^2+2\sqrt{x}+1\\ \text{Simplify:} && x-3& =x+2\sqrt{x}+1\\ \text{Now isolate the remaining radical:} && -4& =2\sqrt{x}\\ \text{Divide all terms by 2:} && -2& =\sqrt{x}\\ \text{Square both sides:} && x& =4\end{align*}
Check: \begin{align*}\sqrt{4-3}-\sqrt{4}=\sqrt{1}-2=1-2=-1\end{align*} The solution does not check out.
The equation has no real solutions. \begin{align*}x=4\end{align*} is an extraneous solution.
## Solve Real-World Problems using Radical Equations
Radical equations often appear in problems involving areas and volumes of objects.
Example 6
Anita’s square vegetable garden is 21 square feet larger than Fred’s square vegetable garden. Anita and Fred decide to pool their money together and buy the same kind of fencing for their gardens. If they need 84 feet of fencing, what is the size of each garden?
Solution
Make a sketch:
Define variables: Let Fred’s area be \begin{align*}x\end{align*}; then Anita’s area is \begin{align*}x+21\end{align*}.
Find an equation:
Side length of Fred’s garden is \begin{align*}\sqrt{x}\end{align*}
Side length of Anita’s garden is \begin{align*}\sqrt{x+21}\end{align*}
The amount of fencing is equal to the combined perimeters of the two squares:
\begin{align*}4\sqrt{x}+4\sqrt{x+21}=84\end{align*}
Solve the equation:
\begin{align*}\text{Divide all terms by 4:} && \sqrt{x}+\sqrt{x+21}& =21\\ \text{Isolate one of the radical expressions:} && \sqrt{x+21}& =21-\sqrt{x}\\ \text{Square both sides:} && \left(\sqrt{x+21}\right)^2& =\left(21-\sqrt{x}\right)^2\\ \text{Eliminate parentheses:} && x+21& =441-42\sqrt{x}+x\\ \text{Isolate the radical expression:} && 42\sqrt{x}& =420\\ \text{Divide both sides by 42:} && \sqrt{x}& =10\\ \text{Square both sides:} && x& =100 \ ft^2\end{align*}
Check: \begin{align*}4\sqrt{100}+4\sqrt{100+21}=40+44=84\end{align*}. The solution checks out.
Fred’s garden is \begin{align*}10 \ ft \times 10 \ ft = 100 \ ft^2\end{align*} and Anita’s garden is \begin{align*}11 \ ft \times 11 \ ft = 121 \ ft^2\end{align*}.
Example 7
A sphere has a volume of \begin{align*}456 \ cm^3\end{align*}. If the radius of the sphere is increased by 2 cm, what is the new volume of the sphere?
Solution
Make a sketch:
Define variables: Let \begin{align*}R =\end{align*} the radius of the sphere.
Find an equation: The volume of a sphere is given by the formula \begin{align*}V=\frac{4}{3}\pi R^3\end{align*}.
Solve the equation:
\begin{align*}\text{Plug in the value of the volume:} && 456& =\frac{4}{3} \pi R^3\\ \text{Multiply by 3:} && 1368& =4 \pi R^3\\ \text{Divide by} \ 4 \pi: && 108.92& =R^3\\ \text{Take the cube root of each side:} && R& =\sqrt[3]{108.92} \Rightarrow R=4.776 \ cm\\ \text{The new radius is 2 centimeters more:} && R& =6.776 \ cm\\ \text{The new volume is:} && V & =\frac{4}{3} \pi (6.776)^3=\underline{\underline{1302.5}} \ cm^3\end{align*}
Check: Let’s plug in the values of the radius into the volume formula:
\begin{align*}V=\frac{4}{3} \pi R^3=\frac{4}{3} \pi (4.776)^3=456 \ cm^3\end{align*}. The solution checks out.
Example 8
The kinetic energy of an object of mass \begin{align*}m\end{align*} and velocity \begin{align*}v\end{align*} is given by the formula: \begin{align*}KE=\frac{1}{2} mv^2\end{align*}. A baseball has a mass of 0.145 kg (145 grams), and its kinetic energy is measured to be 654 Joules \begin{align*}(kg \cdot m^2/s^2)\end{align*} when it hits the catcher’s glove. What is the velocity of the ball when it hits the catcher’s glove?
Solution
\begin{align*}\text{Start with the formula:} && KE& =\frac{1}{2} mv^2\\ \text{Plug in the values for the mass and the kinetic energy:} && 654 \frac{kg \cdot m^2}{s^2}& =\frac{1}{2}(0.145\ kg)v^2\\ \text{Multiply both sides by 2:} && 1308 \frac{kg \cdot m^2}{s^2}& =0.145 \ kg \cdot v^2\\ \text{Divide both sides by 0.145} \ kg: && 9021 \frac{m^2}{s^2}& =v^2\\ \text{Take the square root of both sides:} && v& =\sqrt{9021} \sqrt{\frac{m^2}{s^2}}\approx 95.0 \ m/s\end{align*}
Check: Plug the values for the mass and the velocity into the energy formula:
\begin{align*}KE=\frac{1}{2}mv^2=\frac{1}{2}(0.145 \ kg)(95.0 \ m/s)^2\approx 654 \ kg \cdot m^2/s^2\end{align*} The solution checks out.
## Review Questions
Find the solution to each of the following radical equations. Identify extraneous solutions.
1. \begin{align*}\sqrt{x+2}-2=0\end{align*}
2. \begin{align*}\sqrt{3x-1}=5\end{align*}
3. \begin{align*}2 \sqrt{4-3x}+3=0\end{align*}
4. \begin{align*}\sqrt[3]{x-3}=1\end{align*}
5. \begin{align*}\sqrt[4]{x^2-9}=2\end{align*}
6. \begin{align*}\sqrt[3]{-2-5x}+3=0\end{align*}
7. \begin{align*}\sqrt{x^2-3}=x-1\end{align*}
8. \begin{align*}\sqrt{x}=x-6\end{align*}
9. \begin{align*}\sqrt{x^2-5x}-6=0\end{align*}
10. \begin{align*}\sqrt{(x+1)(x-3)}=x\end{align*}
11. \begin{align*}\sqrt{x+6}=x+4\end{align*}
12. \begin{align*}\sqrt{x}=\sqrt{x-9}+1\end{align*}
13. \begin{align*}\sqrt{x}+2=\sqrt{3x-2}\end{align*}
14. \begin{align*}\sqrt{3x+4}=-6\end{align*}
15. \begin{align*}5 \sqrt{x}=\sqrt{x+12}+6\end{align*}
16. \begin{align*}\sqrt{10-5x}+\sqrt{1-x}=7\end{align*}
17. \begin{align*}\sqrt{2x-2}-2\sqrt{x}+2=0\end{align*}
18. \begin{align*}\sqrt{2x+5}-3\sqrt{2x-3}=\sqrt{2-x}\end{align*}
19. \begin{align*}3\sqrt{x}-9=\sqrt{2x-14}\end{align*}
20. \begin{align*}\sqrt{x+7}=\sqrt{x+4}+1\end{align*}
21. The area of a triangle is \begin{align*}24 \ in^2\end{align*} and the height of the triangle is twice as long as the base. What are the base and the height of the triangle?
22. The length of a rectangle is 7 meters less than twice its width, and its area is \begin{align*}660 \ m^2\end{align*}. What are the length and width of the rectangle?
23. The area of a circular disk is \begin{align*}124 \ in^2\end{align*}. What is the circumference of the disk? \begin{align*}(\text{Area} = \pi R^2, \text{Circumference} =2 \pi R)\end{align*}.
24. The volume of a cylinder is \begin{align*}245 \ cm^3\end{align*} and the height of the cylinder is one third of the diameter of the base of the cylinder. The diameter of the cylinder is kept the same but the height of the cylinder is increased by 2 centimeters. What is the volume of the new cylinder? \begin{align*}(\text{Volume} =\pi R^2 \cdot h)\end{align*}
25. The height of a golf ball as it travels through the air is given by the equation \begin{align*}h=-16t^2+256\end{align*}. Find the time when the ball is at a height of 120 feet.
|
Has this model of random directed graphs been studied?
Youtube recently added a feature called autoplay, where each clip is assigned a (presumably related) clip that follows it. This, in effect, defines a directed graph on the set of youtube clips, where each vertex has outdegree 1. The user starts at a vertex of his choice and takes a walk along this graph.
This got me thinking. Since the graph is finite, the user will eventually get stuck in a loop. Each loop acts as a sink, and each vertex will eventually lead the user to some sink. This raises some questions - how many sinks are there? How many steps does it take before the user reaches the loop? What is the distribution of the sink sizes? And so on.
Here is a random graph model that can be used to model this process: For each vertex $v$ we choose a single neighbor $w$ uniformly at random and add the edge $(v,w)$ to the graph. It might be interesting to investigate the properties of this model and to see if they can teach us anything about the Youtube network. Have people looked at this type of thing before?
• They are directed 1-forests, or functional graphs. – Pål GD Jul 20 '15 at 15:25
• Are you sure the "next clip" is always a single other clip? It's basically something like a big DFA, but with single transitions, in that case! – vzn Jul 20 '15 at 15:54
This may be a bit unexpected, but yes, this has been studied in at least one particular context: PRNGs. A PRNG can be visualized as a directed graph, specifically a functional graph (every vertex has outdegree one) on "current value, next value" pairs. However, most PRNGs are designed to have a single very long cycle; there is also some analysis of PRNGs with multiple embedded cycles.
There is also some theory on cycle detection, e.g. the tortoise-and-hare and Brent's algorithms. I did not find other contexts where "random" functional graphs are studied. Note that your definition does not ensure that the graph is connected; I am not sure whether that is what you intended. There is some theory about how many edges have to be placed before separate disconnected components become connected. Erdős studied this for undirected graphs in the Erdős–Rényi model, and it is famous as one of the early discoveries of phase transitions in discrete mathematics. The random functional graphs you describe could be regarded as a specialized version of the Erdős–Rényi model.
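To make the connection to cycle detection concrete, here is a small sketch (an illustration added here, not part of the original answer) of Floyd's tortoise-and-hare algorithm applied to a random functional graph of the kind described in the question: every vertex gets one uniformly random out-neighbour, and the walk from any start vertex is eventually periodic.

```python
import random

def random_functional_graph(n, seed=0):
    """One uniformly random out-neighbour per vertex (outdegree exactly 1)."""
    rng = random.Random(seed)
    return [rng.randrange(n) for _ in range(n)]

def floyd(f, start):
    """Floyd's tortoise-and-hare: return (tail_length, cycle_length) of the
    eventually periodic sequence start, f[start], f[f[start]], ..."""
    slow, fast = f[start], f[f[start]]
    while slow != fast:                   # phase 1: find a meeting point inside the cycle
        slow, fast = f[slow], f[f[fast]]
    tail, slow = 0, start                 # phase 2: measure the tail length
    while slow != fast:
        slow, fast = f[slow], f[fast]
        tail += 1
    cycle, fast = 1, f[slow]              # phase 3: measure the cycle length
    while slow != fast:
        fast = f[fast]
        cycle += 1
    return tail, cycle

if __name__ == "__main__":
    f = random_functional_graph(10**5, seed=42)
    print(floyd(f, start=0))   # both numbers are typically on the order of sqrt(n)
```

Brent's algorithm improves the constant factor by advancing the fast pointer in powers of two, but it answers the same question: the tail length and the cycle length of the walk.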
• Actually, another somewhat surprising and very interesting/deep area is the study of the Collatz conjecture! The notion was generalized decades ago by Conway and others and is cited in this paper, where deep connections to undecidability/Turing completeness are outlined and discussed: Problems in number theory from the busy beaver competition / Michel – vzn Jul 21 '15 at 2:25
It is very easy to say something about the expected length before you get stuck in a loop: if there are $n$ videos, it will (starting from a random video) take in expectation $\Theta(\sqrt{n})$ videos before you loop around (the actual value is around $1.25\sqrt{n}$). This is effectively the birthday problem, since each time you draw a video at random.
Since each video in the chain of $\Theta(\sqrt{n})$ videos is equally likely to be the one you loop back to, the average length of a loop is also $\Theta(\sqrt{n})$ videos (actual value $0.625\sqrt{n}$).
This gives the expected length of the loop you end up in after starting from a random video. This means that a loop with many videos leading to it is counted more strongly. If instead you want to know the expected length of a loop if you pick a random loop, this may be found as $T(1)$, where
$$T(i)=\frac{n-i}{n}T(i+1)+\frac{i}{n}\frac{i+1}{2}$$
and $T(n)=\frac{n+1}{2}$. Computing the values of $T$ numerically, they seem to match up with $0.625\sqrt{n}$, so both ways of counting the expected loop length give the same answer.
Computing the expected number of cycles seems to be a harder problem. We start by counting the expected number of cycles of a given length. The probability that a node is part of a length-1 cycle is $\frac{1}{n}$, so there is in expectation $1$ length-1 cycle. The probability that a node is in a length-2 cycle is $\frac{n-1}{n}\frac{1}{n}$, so there are in expectation $\frac{1}{2}\frac{n-1}{n}$ length-2 cycles. In general, the expected number of cycles of length $l$ is $\frac{1}{l}\Pi_{i=1}^{l-1} \frac{n-i}{n}$.
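These expectations are easy to check by direct simulation. The sketch below (an added illustration, not part of the original answer) builds random functional graphs, extracts the set of cyclic vertices, and compares the average number of cycles and the average number of cyclic vertices with the classical asymptotics $\tfrac{1}{2}\ln n$ and $\sqrt{\pi n/2} \approx 1.25\sqrt{n}$.

```python
import math
import random

def cycle_nodes(f):
    """Return the set of vertices of the functional graph f that lie on a cycle."""
    n = len(f)
    state = [0] * n                      # 0 = unvisited, 1 = on current path, 2 = done
    on_cycle = set()
    for start in range(n):
        path, v = [], start
        while state[v] == 0:             # walk forward until we meet a visited vertex
            state[v] = 1
            path.append(v)
            v = f[v]
        if state[v] == 1:                # we closed a brand-new cycle: v ... path[-1]
            on_cycle.update(path[path.index(v):])
        for u in path:                   # everything on this path is now settled
            state[u] = 2
    return on_cycle

if __name__ == "__main__":
    n, trials, rng = 10**4, 20, random.Random(1)
    num_cycles, num_cyclic = [], []
    for _ in range(trials):
        f = [rng.randrange(n) for _ in range(n)]
        cyc = cycle_nodes(f)
        seen, count = set(), 0
        for v in cyc:                    # walk each cycle once to count distinct loops
            if v not in seen:
                count += 1
                while v not in seen:
                    seen.add(v)
                    v = f[v]
        num_cycles.append(count)
        num_cyclic.append(len(cyc))
    print("average number of cycles :", sum(num_cycles) / trials,
          "   0.5*ln(n) =", 0.5 * math.log(n))
    print("average cyclic vertices  :", sum(num_cyclic) / trials,
          "   sqrt(pi*n/2) =", math.sqrt(math.pi * n / 2))
```

For $n = 10^4$ the asymptotics predict roughly $4.6$ cycles containing about $125$ cyclic vertices in total.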
We can obtain a crude upper bound on the expected number of cycles by comparing with $\Sigma_{i=1}^n \frac{1}{i} = H_n$, which is $O(\log n)$. Numerically, however, the expected number of cycles appears to grow like $\frac{1}{2}\log n$ rather than $\log n$, so this bound is not tight. |
Trigonometry practice questions and worked answers recovered from this page:

1. A right triangle has legs x and 2x and area (1/2)(2x)(x) = 400. Solving gives x = 20 and 2x = 40, and by Pythagoras' theorem H² = (2x)² + x², so the hypotenuse is H = x√5 = 20√5.
2. If sin θ = 3/5, then cos θ = 4/5 and tan(θ/2) = sin θ/(1 + cos θ) = (3/5)/(9/5) = 1/3.
3. (sin A + sin B)/(cos A − cos B) = [2 sin((A+B)/2) cos((A−B)/2)] / [−2 sin((A+B)/2) sin((A−B)/2)] = −cot((A−B)/2).
4. In ΔABC, right-angled at B, with AB = 12 and BC = 5: AC = √(AB² + BC²) = √(144 + 25) = 13; from this find sin A, tan A, cos C and cot C.
5. In ∆ABC, right-angled at B, with AB = 24 cm and BC = 7 cm, determine (i) sin A, cos A and (ii) sin C, cos C.
6. In a diagram (not drawn to scale) ABC and PCD are right-angled triangles with angle ABC = 40°, AB = 10 cm, PD = 8 cm and BD = 15 cm; calculate (i) the length of BC in centimetres and (ii) the size of angle PDC in degrees, giving answers correct to 1 decimal place.
7. A hyperbola has center (−8, 4), a conjugate axis of length 12 units and a transverse axis of length 4 units, with the transverse axis parallel to the x-axis; find its equation in standard form.
8. If tan(cot x) = cot(tan x), then sin 2x = ____.
9. Special right triangles: if the long leg of a 30-60-90 triangle is 23, approximately how long is the short leg? If the short leg of a 30-60-90 triangle is 6, approximately how long is the hypotenuse? If the leg of a 45-45-90 triangle is 19, approximately how long is the hypotenuse?
10. Right triangles: if the two legs of a right triangle are 3 and 8, what is the hypotenuse? If one leg is 9 and the hypotenuse is 15, what is the other leg?
# Find a_n & b_n such that
1. Dec 22, 2014
### AfterSunShine
1. The problem statement, all variables and given/known data
Find $$a_n$$ & $$b_n$$ such that $$\sum_{n=0}^{\infty}a_n$$ & $$\sum_{n=0}^{\infty}b_n$$ are convergent series, but $$\displaystyle \sum_{n=0}^{\infty} \left( \sqrt{a_n} \cdot b_n \right)$$ diverges.
2. Relevant equations
None.
3. The attempt at a solution
Tried hard on this but still cannot find such $$a_n$$ & $$b_n$$.
2. Dec 22, 2014
### Staff: Mentor
That is not an acceptable post. You must show your efforts before we can offer tutorial help. Show us what you have tried so far please...
3. Dec 22, 2014
### AfterSunShine
There is nothing to show here
basically I tried a_n = 1/n^2 & b_n = (-1)^n / n but failed
tried a_n = 1/n^2 & b_n = arctan (1/n) but failed
and so on...
4. Dec 22, 2014
### Staff: Mentor
That is not so bad as a start. Can you modify the series in order to keep them converging, but doing so significantly slower?
And then you'll need some trick to get the product series diverging - something that defeats the cancellation coming from the sign flips...
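One construction in the spirit of this hint (a sketch, not from the original thread — the specific choice below is only illustrative): let $$b_n$$ shrink slowly and let $$a_n$$ vanish on the indices where $$b_n$$ is negative, so the product series loses all cancellation. For $$n \ge 2$$ (with the first two terms set to zero), take
$$a_n = \frac{(\ln n)^2}{n^2} \ \text{for even } n, \qquad a_n = 0 \ \text{for odd } n, \qquad b_n = \frac{(-1)^n}{\ln n}.$$
Then $$\sum a_n$$ converges by comparison with $$\sum (\ln n)^2/n^2$$, $$\sum b_n$$ converges by the alternating series test, while $$\sum \sqrt{a_n}\, b_n = \sum_{n \ \text{even}} \frac{1}{n}$$ diverges.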
# What is the complexity of computing optimal prefix free codes, when the frequencies are similar?
It is well known that there is a worst case optimal algorithm to compute the Huffman code in time $\theta(n\lg n)$. This is improved in two orthogonal ways:
1. Optimal prefix free codes can be computed faster if the set of distinct frequencies is small (e.g. of size $\sigma$): sort the frequencies using [Munro and Spira, 1976] so as to take advantage of the small value of $\sigma$, and compute the Huffman tree in linear time from the sorted frequencies (a sketch of this linear-time step appears after this list). This yields a solution in $O(n\lg\sigma)$.
2. There is an $O(n 16^k)$ algorithm to compute equivalent codes where $k$ is the number of distinct codewords lengths [Belal and Elmasry].
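For context on point 1 (the sketch promised above; it is illustrative, not code from the question, and the function name is mine): once the frequencies are sorted, the standard two-queue construction builds the Huffman tree in $O(n)$ time, because merged weights are produced in nondecreasing order and each step only compares queue fronts. The sketch below computes the optimal total code length $\sum_i f_i \ell_i$.

```python
from collections import deque

def huffman_cost(sorted_freqs):
    """Optimal total code length (sum of freq * codeword length) for nondecreasing freqs."""
    if len(sorted_freqs) < 2:
        return 0
    leaves = deque(sorted_freqs)   # original weights, already sorted
    merged = deque()               # weights of merged nodes, produced in nondecreasing order

    def pop_smallest():
        # take from whichever queue has the smaller front element
        if not merged or (leaves and leaves[0] <= merged[0]):
            return leaves.popleft()
        return merged.popleft()

    total = 0
    while len(leaves) + len(merged) > 1:
        w = pop_smallest() + pop_smallest()   # merge the two smallest available weights
        merged.append(w)
        total += w                            # each merge adds its weight to the cost
    return total

print(huffman_cost([1, 1, 2, 3, 5]))   # 25
```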
Is there a way to combine those techniques, in order to improve on the current best complexity of $O(n\min\{16^k,\lg\sigma\})$?
THE $O(nk)$ RESULT FROM STACS 2006 SEEMS TO BE WRONG: Elmasry published on arXiv in 2010 (http://arxiv.org/abs/cs/0509015) a version announcing $O(16^k n)$ operations on unsorted input and $O(9^k \log^{2k-1} n)$ operations on sorted input.
1. I see an analogy with the complexity of computing the planar convex hull, where algorithms in $O(n\lg n)$ (sorting based, as the $O(n\lg n)$ algorithm for Huffman's code) and in $O(nh)$ (gift wrapping) were superseded by Kirkpatrick and Seidel's algorithm in $O(n\lg h)$ (later proved to be instance optimal with complexity of the form $O(nH(n_1,\ldots,n_k))$). In the case of Prefix Free codes, $O(n\lg n)$ versus $O(nk)$ suggests the possibility of an algorithm with complexity $O(n\lg k)$, or even $O(nH(n_1,\ldots,n_k))$ where $n_i$ is the number of codewords of length $i$, using the analogy of an edge of the convex hull covering $n_i$ points to a code length covering $n_i$ symbols.
2. A simple example shows that sorting the (rounded) logarithmic values of the frequencies (in linear time in the $\theta(\lg n)$ word RAM model) does not give an optimal prefix free code in linear time:
• For $n=3$, $f_1=1/2-\varepsilon$ and $f_2=f_3=1/4+\varepsilon$
• $\lceil\lg (1/f_i)\rceil=2$ for each $i$, so log sorting does not change order
• yet two codes out of three cost $n/4$ bits more than optimal.
3. Another interesting question would be to reduce the complexity when $k$ is large, i.e. all codes have distinct lengths:
• for instance when $k=n$ the frequencies are all of distinct log value. In this case one can sort the frequencies in linear time in the $\theta(\lg n)$ word RAM, and compute the Huffman code in linear time (because sorting their log values is enough to sort the values), resulting in overall linear time, much better than the $n^2$ from the algorithm from Belal and Elmasry. |
# Proving $\sum\limits_{k=1}^{\pi(n)-1} [ \theta(p_k) (1/{p_k}-1/p_{k+1})] -\ln(n)$ converges
Prove that the sequence $a_n$ defined by $a_n = \sum\limits_{k=1}^{\pi(n)-1} [ \theta(p_k) (1/{p_k}-1/p_{k+1})] -\ln(n)$ converges, where $p_k$ denotes the $k$-th prime and $\theta(x)$ is Chebyshev's theta function.
Why? Is this your homework? Is it something you read somewhere? Is it a conjecture of yours? Are we allowed to use the Prime Number Theorem? Is this just an exercise in summation by parts? – Gerry Myerson Jun 5 '11 at 13:10
## 1 Answer
Hint:
Apply summation by parts. Then you will get something which looks like $$\sum_{p\leq x} \frac{\log p}{p}.$$ This sum is equal to $$\log x +C+O\left(e^{-c\sqrt{\log x}}\right)$$ using partial summation and the quantitative prime number theorem.
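In more detail (a sketch of the summation-by-parts step, in the notation of the question, not part of the original answer): writing $\theta(p_k)=\sum_{j\le k}\log p_j$ and exchanging the order of summation,
$$\sum_{k=1}^{K} \theta(p_k)\left(\frac{1}{p_k}-\frac{1}{p_{k+1}}\right) = \sum_{j=1}^{K} \log p_j \sum_{k=j}^{K}\left(\frac{1}{p_k}-\frac{1}{p_{k+1}}\right) = \sum_{j=1}^{K} \frac{\log p_j}{p_j} \;-\; \frac{\theta(p_K)}{p_{K+1}},$$
with $K=\pi(n)-1$. The first sum is handled by the estimate above, and the last term tends to $1$ by the prime number theorem.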
Without the prime number theorem, you can show that the sequence is bounded by some constant, but it is unlikely that you can prove it has a limit.
Hope that helps,
# Contents
## Idea
What is called perturbative quantum field theory (pQFT) is quantum field theory where the interaction (between fields/particles) is treated as a tiny perturbation of the “free field theory” where no interaction is assumed to take place (“perturbation theory”). This is meant to be an approximation to the actual non-perturbative quantum field theory. However, the latter remains elusive except for toy examples of low spacetime dimension, vanishing interaction and/or topological invariance and most of the “quantum field theory” in the literature is tacitly understood to be perturbative.
Hence pQFT studies the infinitesimal neighbourhood (also called the formal neighbourhood) of free quantum field theories in the space of all quantum field theories. Mathematically this means that the resulting quantum observables are formal power series in the coupling constant $g$ which measures the strength of the interaction (as well as in Planck's constant, which measures the overall strength of quantum effects). This distinguishes perturbative quantum field theory from non-perturbative quantum field theory, where the algebras of quantum observables are supposed to be not formal power series algebras, but C*-algebras.
The key object of perturbative QFT is the perturbative scattering matrix which expresses, as a formal power series in the ratio of the coupling constant over Planck's constant, the probability amplitude of scattering processes, namely of processes where free fields in a certain state come in from the far past, interact and hence scatter off each other, and then go off in some other quantum state into the far future. The scattering cross sections thus defined are the quantities which may be directly measured in scattering experiments, such as the LHC accelerator.
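Schematically (a standard way of writing this, not tied to any one of the references below), the perturbative S-matrix is the time-ordered exponential of the interaction term, read as a formal power series in the coupling constant:

$$ S(g S_{\mathrm{int}}) \;=\; T \exp\Big( \tfrac{i}{\hbar}\, g S_{\mathrm{int}} \Big) \;=\; \sum_{k = 0}^{\infty} \frac{1}{k!} \Big( \tfrac{i g}{\hbar} \Big)^{k}\, T\big( S_{\mathrm{int}} \cdots S_{\mathrm{int}} \big)\,, $$

where $T(-)$ denotes the time-ordered products whose renormalization is discussed below, and the order-$k$ term collects the Feynman diagrams with $k$ interaction vertices.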
The perturbative S-matrix turns out to have an expression as a sum over separate scattering amplitudes for elementary processes labeled by Feynman diagrams, each of which depicts one specific way for fields (particles) to interact with each other. That the full S-matrix is the sum over all amplitudes for all these possible scattering processes, the Feynman perturbation series, is an incarnation of the informal heuristic of the path integral and the superposition principle in quantum physics, which says that the probability amplitude for a specific outcome is the sum over the probability amplitudes of all the possible processes that can contribute to this outcome.
For all interesting interacting field theories, such as quantum electrodynamics and quantum chromodynamics, this scattering matrix formal power series necessarily has vanishing radius of convergence (Dyson 52). If it is assumed that the formal Feynman perturbation series is the Taylor series of an actual smooth function given by the actual non-perturbative quantum field theory that is being approximated, then this means that it is at least an asymptotic series (by this example) whose first couple of terms could sum to a good approximation of the actual value to be computed. Indeed, the sum of the first few loop orders in the S-matrix for QED and QCD in the standard model of particle physics turns out to be in agreement with experiment to good precision.
(There are however known non-perturbative effects which are not captured in perturbation theory, such as confinement in QCD, supposedly related to instantons in QCD. In resurgence theory one tries to identify these from the asymptotic nature of the Feynman perturbation series.)
A key step in the construction of perturbative quantum field theory is the renormalization of the point interactions. This comes about because given
1. a local Lagrangian density defining the nature of the fields and their interactions,
2. a vacuum state (generally: Hadamard state) that defines the free quantum field theory to be perturbed about
it turns out that the construction of the perturbative S-matrix (the Feynman perturbation series) still involves at each order a finite-dimensional space of choices to be made. Physically, these are the specification of further high energy interactions not seen in the original local Lagrangian density; mathematically, this is the choice of extending the time-ordered product of the interaction, which is an operator-valued distribution, to the locus of coinciding interaction points, in the sense of extensions of distributions.
Historically, perturbative quantum field theory, as originally conceived informally by Schwinger-Tomonaga-Feynman-Dyson in the 1940s, had been notorious for the mysterious conceptual nature of its mathematical principles (“divergences”). The mathematically rigorous formulation of renormalization (“removal of UV-divergences”) in perturbative quantum field theory on Minkowski spacetime was established by Epstein-Glaser 73, based on Bogoliubov-Shirkov 59 and Stückelberg 51, now known as causal perturbation theory; laid out in the seminal Erice summer school proceedings (Velo-Wightman 76).
The correct definition of the adiabatic limit (“removal of IR divergencies”) was understood in Il’in-Slavnov 78 and eventually developed by Dütsch-Fredenhagen 01, Brunetti-Dütsch-Fredenhagen 09; this is now called perturbative algebraic quantum field theory. The rigorous derivation of the previously informal Feynman rules and their dimensional regularization for computation of scattering amplitudes was achieved in Keller 10 (IV.12), Dütsch-Fredenhagen-Keller-Rejzner 14. Quantization of gauge theories (Yang-Mills theory) in causal perturbation theory/perturbative AQFT was then discussed (for trivial principal bundles and restricted to gauge invariant observables) in the spirit of BRST-complex/BV-formalism in (Fredenhagen-Rejzner 11b). The generalization of all these constructions from Minkowski spacetime to perturbative quantum fields on more general spacetimes (i.e. for more general gravitational background fields such as those appearing in cosmology or black hole physics) was made possible due to the identification of the proper generalization of vacuum states and their Feynman propagators to Hadamard states on globally hyperbolic spacetimes in Radzikowski 96. The resulting rigorous perturbative QFT on curved spacetimes was developed in a long series of articles by Hollands, Wald, Brunetti, Fredenhagen and others, now called locally covariant perturbative AQFT.
While this establishes a rigorous construction of perturbative quantum field theory on general gravitational backgrounds, the construction principles had remained somewhat ad-hoc: The axioms for the perturbative S-matrix (equivalently for the time-ordered products or retarded products of field operators) were well motivated by comparison with the Dyson series in quantum mechanics, by the heuristics of the path integral and not the least by their excellent confirmation by experiment, but had not been derived from first principles of quantization. Then in Dütsch Fredenhagen 01 it was observed that the Wick algebras of quantum observables in free quantum field theory are equivalently the Moyal deformation quantization of the canonical Poisson bracket (the Peierls bracket or causal propagator) on the covariant phase space of the free field theory (or rather of a choice of Hadamard state for it) and Collini 16 showed that under suitable conditions the perturbative interacting observable algebra is the Fedosov deformation quantization of covariant phase space of the interacting theory. A general argument to this extent was given in Hawkins-Rejzner 16.
This suggests that the construction of the full non-perturbative quantum field theory ought to be given by a strict deformation quantization of the covariant phase space. But presently no example of such for non-trivial interaction in spacetime dimension $\geq 4$ is known. In particular the phenomenologically interesting case of a complete construction of interacting field theories on 4-dimensional spacetimes is presently unknown. For the case of Yang-Mills theory this open problem to go beyond perturbative quantum field theory is one of the “Millennium Problems” (see at quantization of Yang-Mills theory). For the case of quantum gravity this is possibly the $10^4$-year problem that the field is facing. But observe that as a perturbative (“effective”) quantum field theory, quantum gravity does fit into the framework of perturbative QFT, is mathematically well-defined and makes predictions, see the references there.
## Details
A comprehensive introduction is at geometry of physics – perturbative quantum field theory.
## Properties
| product in perturbative QFT | induces |
| --- | --- |
| normal-ordered product | Wick algebra (free field quantum observables) |
| time-ordered product | S-matrix (scattering amplitudes) |
| retarded product | interacting quantum observables |
## References
### General
The original informal conception of perturbative QFT is due to Schwinger-Tomonaga-Feynman-Dyson:
• Freeman Dyson, The radiation theories of Tomonaga, Schwinger and Feynman, Phys. Rev. 75, 486, 1949 (pdf)
The rigorous formulation of renormalized perturbative quantum field theory in terms of causal perturbation theory was first accomplished in
with precursors in
A seminal compilation of the resulting rigorous understanding of renormalization is
• G. Velo and Arthur Wightman (eds.) Renormalization Theory Proceedings of the 1975 Erice summer school, NATO ASI Series C 23, D. Reidel, Dordrecht, 1976
Concrete computations in rigorous causal perturbation theory have been spelled out for quantum electrodynamics in
The treatment of the IR-divergencies by organizing the perturbative quantum observables into a local net of observables was first suggested in
• V. A. Il’in and D. S. Slavnov, Observable algebras in the S-matrix approach, Theor. Math. Phys. 36 (1978) 32 (spire, doi)
and then developed to perturbative algebraic quantum field theory in
Quantization of gauge theories (Yang-Mills theory) in causal perturbation theory/perturbative AQFT is discussed (for trivial principal bundles and restricted to gauge invariant observables) in the spirit of BRST-complex/BV-formalism in
and surveyed in:
The generalization of all these constructions to quantum fields on general globally hyperbolic spacetimes (perturbative AQFT on curved spacetimes) was made possible by the results on Hadamard states and Feynman propagators in
• Marek Radzikowski, Micro-local approach to the Hadamard condition in quantum field theory on curved space-time, Commun. Math. Phys. 179 (1996), 529–553 (Euclid)
and then developed in a long series of articles by Stefan Hollands, Robert Wald, Romeo Brunetti, Klaus Fredenhagen and others. For this see the references at AQFT on curved spacetimes.
The observation that perturbative quantum field theory is equivalently the formal deformation quantization of the defining local Lagrangian density is for free field theory due to
• Michael Dütsch, Klaus Fredenhagen, Perturbative algebraic quantum field theory and deformation quantization, Proceedings of the Conference on Mathematical Physics in Mathematics and Physics, Siena June 20-25 (2000) (arXiv:hep-th/0101079)
• A. C. Hirshfeld, P. Henselder, Star Products and Perturbative Quantum Field Theory, Annals Phys. 298 (2002) 382-393 (arXiv:hep-th/0208194)
and for interacting field theories (causal perturbation theory/perturbative AQFT) due
For more see the references at perturbative algebraic quantum field theory.
The relation of the construction via causal perturbation theory to the Feynman perturbation series in terms of Feynman diagrams was understood in
Non-rigorous but widely used textbooks:
(…)
### Non-convergence of the perturbation series
The argument that the perturbation series of realistic pQFTs necessarily diverges, in fact has vanishing radius of convergence (is at best an asymptotic series) goes back to
• Freeman Dyson, Divergence of perturbation theory in quantum electrodynamics, Phys. Rev. 85, 631, 1952 (spire)
and is made more precise in
• Lev Lipatov, Divergence of the Perturbation Theory Series and the Quasiclassical Theory, Sov.Phys.JETP 45 (1977) 216–223 (pdf)
recalled for instance in
• Igor Suslov, section 1 of Divergent perturbation series, Zh.Eksp.Teor.Fiz. 127 (2005) 1350; J.Exp.Theor.Phys. 100 (2005) 1188 (arXiv:hep-ph/0510142)
• Justin Bond, last section of Perturbative QFT is Asymptotic; is Divergent; is Problematic in Principle (pdf)
• Mario Flory, Robert C. Helling, Constantin Sluka, Section 2 of: How I Learned to Stop Worrying and Love QFT (arXiv:1201.2714)
• Stefan Hollands, Robert Wald, section 4.1 of Quantum fields in curved spacetime, Physics Reports Volume 574, 16 April 2015, Pages 1-35 (arXiv:1401.2026)
• Marco Serone, from 2:46 on in A look at $\phi^4_2$ using perturbation theory (recording)
The argument that the perturbation series should be trustworthy for number of terms smaller than the inverse of the coupling constant is recalled in Flory, Helling & Sluka 2012, p. 8 & eq. (34) & Sec. 2.5.
Exposition also in:
For the example of $\phi^4$-theory this non-convergence of the perturbation series is discussed in
• Robert C. Helling, p. 4 of Solving classical field equations (pdf, pdf)
• Alexander P. Bakulev, Dmitry Shirkov, section 1.1 of Inevitability and Importance of Non-Perturbative Elements in Quantum Field Theory, Proceedings of the 6th Mathematical Physics Meeting, Sept. 14–23, 2010, Belgrade, Serbia (ISBN 978-86-82441-30-4), pp. 27–54 (arXiv:1102.2380)
• Carl M. Bender, Carlo Heissenberg, Convergent and Divergent Series in Physics (arXiv:1703.05164)
And see at perturbation theory – On divergence/convergence
Discussion of further issues, even when resummation is thought to apply, arising for n-point functions at large $n$ (large number of external particles in a scattering process):
failure of unitarity (for $\phi^n$-theory):
• Sebastian Schenk, The Breakdown of Resummed Perturbation Theory at High Energies (arXiv:2109.00549)
failure of locality (for perturbative quantum gravity and perturbative string theory):
### L-infinity algebra structure
Further identification of L-infinity algebra-structure in the Feynman amplitudes/S-matrix of Lagrangian perturbative quantum field theory: |
# Completely positive semidefinite rank
An $n\times n$ matrix $X$ is called completely positive semidefinite (cpsd) if there exist $d\times d$ Hermitian positive semidefinite matrices $\{P_i\}_{i=1}^n$ (for some $d\ge 1$) such that $X_{ij}= {\rm Tr}(P_iP_j),$ for all $i,j \in \{ 1, \ldots, n \}$. The cpsd-rank of a cpsd matrix is the smallest $d\ge 1$ for which such a representation is possible. In this work we initiate the study of the cpsd-rank which we motivate twofold. First, the cpsd-rank is a natural non-commutative analogue of the completely positive rank of a completely positive matrix. Second, we show that the cpsd-rank is physically motivated as it can be used to upper and lower bound the size of a quantum system needed to generate a quantum behavior. In this work we present several properties of the cpsd-rank. Unlike the completely positive rank which is at most quadratic in the size of the matrix, no general upper bound is known on the cpsd-rank of a cpsd matrix. In fact, we show that the cpsd-rank can be exponential in terms of the size. Specifically, for any $n\ge1,$ we construct a cpsd matrix of size $2n$ whose cpsd-rank is $2^{\Omega(\sqrt{n})}$. Our construction is based on Gram matrices of Lorentz cone vectors, which we show are cpsd. The proof relies crucially on the connection between the cpsd-rank and quantum behaviors. In particular, we use a known lower bound on the size of matrix representations of extremal quantum correlations which we apply to high-rank extreme points of the $n$-dimensional elliptope. Lastly, we study cpsd-graphs, i.e., graphs $G$ with the property that every doubly nonnegative matrix whose support is given by $G$ is cpsd. We show that a graph is cpsd if and only if it has no odd cycle of length at least $5$ as a subgraph. This coincides with the characterization of cp-graphs.
# Collection of Science Jokes P2
#### Keith_McClary
Ruben Bolling has a couple of physics toons. The META one is good too.
#### jack action
Gold Member
This one is good and shows how you can sneakily bend the truth without making any apparent false statement:
#### nuuskur
$4-\frac{9}{2} = - \left\lvert 4-\frac{9}{2}\right\rvert = - \sqrt{\left (4-\frac{9}{2}\right )^2} ...$
nice try, though :p
Mentor
2018 Award
Mentor
2018 Award
#### fresh_42
Mentor
2018 Award
I do not understand this:
Is it:
• a mockery of flat earthers?
• an example of a chart?
• a demonstration of derivatives?
• a counterexample of argumentum a minori ad maius?
• a counterexample of an induction?
• a demonstration of the difference between a stable and an unstable equilibrium?
• a home accident waiting to happen?
• a lesson about right and wrong tools?
• a quotation "Realize that everything connects to everything else.” Leonardo da Vinci ?
• an analogue to the false quotation of "Sometimes a cigar is just a cigar." Sigmund Freud?
#### DennisN
It's time for this thread to evolve...
...and the evolution of the plastic/electronic species known as Mobile Phones (tempus clepta):
#### Wrichik Basu
Gold Member
2018 Award
the evolution of the plastic/electronic species known as Mobile Phones (tempus clepta)
Google translate tells me "tempus clepta" means "a thief" in Latin. Maybe you meant to say Telefono movil?
#### DennisN
Google translate tells me "tempus clepta" means "a thief" in Latin.
Haha, you actually googled for it . I tried to come up with a latin version of "time thief" (tempus = time, clepta = thief). Which mobile phones often are today.
Gold Member
#### Keith_McClary
and the evolution of the plastic/electronic species known as Mobile Phones
#### pedro the swift
The universe is made up of protons, neutrons, electrons and morons!
Just check out the flat earthers!
#### Keith_McClary
How eco-friendly biofuel is harvested:
2018 Award
#### DrClaude
Mentor
I'll bite: that's not Godwin's law!!!!!
#### nuuskur
PS! The pigeon is just an attention getter and has nothing to do with it.
#### DrClaude
Mentor
PS! The pigeon is just an attention getter and has nothing to do with it.
That's not a pigeon!!!
#### DrClaude
Mentor
As expected - a mallardroit reply!
Are you serious or is this a canard?
#### Ophiolite
Are you serious or is this a canard?
If you don't mind I'll duck giving you a proper answer.
On a different topic, I just noticed a thread in the forum titled Looking for Good Books on Photosynthesis and I thought, that’s certainly something that worth shedding some light on.
#### DennisN
What are you all arguing about? The mammal in the water in post #1,089?
#### Ibix
You're all quackers.
#### Borg
Gold Member
This thread has gone to the birds.
#### Borek
Mentor
Technically it should go to the Electrical Engineering forum.
"Collection of Science Jokes P2"
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving |
# A formula for generating strong pseudoprimes
We show in the previous post that $2^n-1$ is a strong pseudoprime to base 2 whenever $n$ is a pseudoprime to base 2. This formula establishes that there are infinitely many strong pseudoprime to base 2. Since the smallest pseudoprime to base 2 is 341, the smallest possible strong pseudoprime given by this formula is a 103-digit number. In this post, we discuss another formula that will generate some of the smaller strong pseudoprimes to base 2. We prove the following theorem.
Theorem 1
Let $p$ be a prime number that is larger than 5. Then the following number is a strong pseudoprime to base 2.
$\displaystyle M_p=\frac{4^p+1}{5}$
Proof of Theorem 1
The first step is to show that $M_p$ is a composite number. Note that $4 \equiv -1 \ (\text{mod} \ 5)$. Then $4^p \equiv (-1)^p \equiv -1 \ (\text{mod} \ 5)$. This means that $4^p+1$ is divisible by 5. It follows that $M_p$ is an integer. Furthermore, the following product shows that $4^p+1$ is composite.
$\displaystyle 4^p+1=(2^p-2^{\frac{p+1}{2}}+1) \cdot (2^p+2^{\frac{p+1}{2}}+1)$
One of the above factors is divisible by 5. Since each factor is greater than 5 when $p>5$, dividing that factor by 5 still leaves a factorization of $M_p$ into two integers greater than 1, so $M_p$ is composite.
On the other hand, since $5 M_p=4^p+1$, we have $2^{2p}=4^p \equiv -1 \ (\text{mod} \ M_p)$. Furthermore, for any odd integer $t$, we have $2^{2 \cdot p \cdot t} \equiv -1 \ (\text{mod} \ M_p)$.
Next the following computes $M_p-1$:
$\displaystyle M_p-1=\frac{4^p+1}{5}-1=\frac{4^p-4}{5}=4 \cdot \frac{4^{p-1}-1}{5}=2^2 \cdot q$
where $\displaystyle q=\frac{4^{p-1}-1}{5}$. Since $p$ is prime, Fermat's little theorem gives $4^{p-1} \equiv 1 \ (\text{mod} \ p)$. This means that $4^{p-1}-1=p \cdot k$ for some integer $k$. Since $p-1$ is even, $4^{p-1} \equiv 1 \ (\text{mod} \ 5)$, so 5 divides $p \cdot k$; because $p$ is a prime larger than 5, 5 must divide $k$. Thus $q=p \cdot t$ where $k=5t$. Since $q$ is odd, $t$ is odd too. Based on the earlier observation, $\displaystyle 2^{2 \cdot q} \equiv 2^{2 \cdot p \cdot t} \equiv -1 \ (\text{mod} \ M_p)$. Since $M_p-1=2^2 \cdot q$ with $q$ odd, it follows that $M_p$ is a strong pseudoprime to base 2. $\blacksquare$
___________________________________________________________________
Examples
The first several values of $M_p$ are:
$M_{7}=$ 3277
$M_{11}=$ 838861
$M_{13}=$ 13421773
$M_{17}=$ 3435973837
$M_{19}=$ 54975581389
$M_{23}=$ 14073748835533
$M_{29}=$ 57646075230342349
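As a quick check (a minimal Python sketch, not part of the original post; the helper name is mine), one can verify that each $M_p$ listed above passes the strong probable prime test to base 2 and hence, being composite by the theorem, is a strong pseudoprime to base 2.

```python
def is_strong_probable_prime(n, a):
    """Strong probable prime test to base a for an odd integer n > 2."""
    # write n - 1 = 2^k * q with q odd
    q, k = n - 1, 0
    while q % 2 == 0:
        q //= 2
        k += 1
    x = pow(a, q, n)                 # first term of the sequence a^q, a^(2q), ...
    if x == 1 or x == n - 1:
        return True
    for _ in range(k - 1):
        x = pow(x, 2, n)             # square the previous term
        if x == n - 1:
            return True
    return False

for p in [7, 11, 13, 17, 19, 23, 29]:
    M = (4**p + 1) // 5              # composite by the factorization in the proof
    print(p, M, is_strong_probable_prime(M, 2))   # prints True in each case
```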
The formula $M_p$ captures more strong pseudoprimes than $2^n-1$. There are still many strong pseudoprimes that are missing. For example, according to [1], there are 4842 strong pseudoprimes to base 2 that are less than $25 \cdot 10^9$. The formula $M_p$ captures only 4 of these strong pseudoprimes. However, it is still valuable to have the formula $M_p$. It gives a concrete proof that there exist infinitely many strong pseudoprimes to base 2. Strong pseudoprimes are rare. It is valuable to have an explicit formula to generate examples of strong pseudoprimes. For example, $M_{19}$ is the first one on the list that is larger than $25 \cdot 10^9$. Then $M_{19}$ is an upper bound on the least strong pseudoprime base 2 that is larger than $25 \cdot 10^9$.
___________________________________________________________________
Question
It is rare to find strong pseudoprimes to multiple bases. For example, according to [1], there are only 13 strong pseudoprimes to all of the bases 2, 3 and 5 that are less than $25 \cdot 10^9$. Are there any strong pseudoprimes given by the formula $M_p$ that are also strong pseudoprimes to other bases? What if we just look for pseudoprimes to other bases?
___________________________________________________________________
Reference
1. Pomerance C., Selfridge J. L., Wagstaff, S. S., The pseudoprimes to $25 \cdot 10^9$, Math. Comp., Volume 35, 1003-1026, 1980.
___________________________________________________________________
$\copyright \ \ 2014 \ \text{Dan Ma}$
# There are infinitely many strong pseudoprimes
Pseudoprimes are rare. Strong pseudoprimes are rarer still. According to [1], there are 21853 pseudoprimes to base 2 and 4842 strong pseudoprimes to base 2 below $25 \cdot 10^9$. According to the prime number theorem, there are over 1 billion prime numbers in the same range. When testing a random number, knowing that it is a strong probable prime to just one base is strong evidence for primality. Even though most of the strong probable primes are prime, for a given base, there exist infinitely many strong pseudoprimes. This fact is captured in the following theorem.
Theorem 1
For a given base $a>1$, there are infinitely many strong pseudoprimes to base $a$.
For a proof, see Theorem 1 in [1]. We give a simpler proof that there exist infinitely many strong pseudoprimes to base 2.
Theorem 1a
There are infinitely many strong pseudoprimes to base 2.
Proof of Theorem 1a
We make the following claim.
Claim
Let $n$ be a pseudoprime to base 2. Then $N=2^n-1$ is a strong pseudoprime to base 2.
In a previous post on probable primes and pseudoprimes, we prove that there exist infinitely many pseudoprimes to any base $a$. Once the above claim is established, we have a proof that there are infinitely many strong pseudoprimes to base 2.
First of all, if $n$ is composite, the number $2^n-1$ is also composite. This follows from the following equalities.
$\displaystyle 2^{ab}-1=(2^a-1) \cdot (1+2^a+2^{2a}+2^{3a}+ \cdots+2^{(b-1)a})$
$\displaystyle 2^{ab}-1=(2^b-1) \cdot (1+2^b+2^{2b}+2^{3b}+ \cdots+2^{(a-1)b})$
Thus $N=2^n-1$ is composite. Note that $N-1=2^n-2=2 \cdot (2^{n-1}-1)$. Let $q=2^{n-1}-1$, which is an odd integer. Because $n$ is a pseudoprime to base 2, $2^{n-1} \equiv 1 \ (\text{mod} \ n)$. Equivalently, $2^{n-1}-1=nj$ for some integer $j$. Furthermore, it is clear that $2^{n} \equiv 1 \ (\text{mod} \ 2^n-1)$.
It follows that $\displaystyle 2^q \equiv 2^{2^{n-1}-1} \equiv 2^{nj} \equiv (2^n)^j \equiv 1^j \equiv 1 \ (\text{mod} \ N)$. Since $N-1=2 \cdot q$ with $q$ odd, the composite number $N$ is a strong pseudoprime to base 2.
In the previous post probable primes and pseudoprimes, it is established that there are infinitely many pseudoprimes to any base $a$. In particular there are infinitely many pseudoprimes to base 2. It follows that the formula $2^n-1$ gives infinitely many strong pseudoprimes to base 2. $\blacksquare$
___________________________________________________________________
Example
Theorem 1a can be considered a formula for generating strong pseudoprimes to base 2. The input is a pseudoprime to base 2. Unfortunately the generated numbers get large very quickly, and the formula misses many strong pseudoprimes to base 2.
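To produce inputs for the claim, one can first list the pseudoprimes to base 2 below a small bound (a minimal Python sketch, not part of the original post; the brute-force primality check is only meant for this tiny range):

```python
# Pseudoprimes to base 2 below 2000: composite odd n with 2^(n-1) ≡ 1 (mod n).
def is_prime(n):
    return n > 1 and all(n % d != 0 for d in range(2, int(n**0.5) + 1))

psp2 = [n for n in range(3, 2000, 2)
        if not is_prime(n) and pow(2, n - 1, n) == 1]
print(psp2)   # [341, 561, 645, 1105, 1387, 1729, 1905]
```

Each such $n$ then yields the strong pseudoprime $2^n-1$ of the claim.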
The smallest pseudoprime to base 2 is 341. The following is the 103-digit $N=2^{341}-1$.
$N=2^{341}-1=$
44794894843556084211148845611368885562432909944692
99069799978201927583742360321890761754986543214231551
Even though $N=2^{341}-1$ is a strong pseudoprime to base 2, it is not a strong pseudoprime to bases 3 and 5. In fact, it is rare to find a strong pseudoprime to multiple bases. To determine the strong pseudoprimality of $N$ for other bases, note that $N-1=2 \cdot Q$ where $Q$ is the following 103-digit number.
$Q=$
22397447421778042105574422805684442781216454972346
49534899989100963791871180160945380877493271607115775
Calculate $a^Q$ and $a^{2Q}$ modulo $N$. Look for the pattern $a^Q \equiv 1$ and $a^{2Q} \equiv 1$, or the pattern $a^Q \equiv -1$ and $a^{2Q} \equiv 1$. If either pattern appears, then $N$ is a strong pseudoprime to base $a$. See the sequence labeled (1) in the previous post on strong pseudoprimes.
___________________________________________________________________
Exercise
Verify that $N=2^{341}-1$ is not a strong pseudoprime to both bases 3 and 5.
___________________________________________________________________
Reference
1. Pomerance C., Selfridge J. L., Wagstaff, S. S., The pseudoprimes to $25 \cdot 10^9$, Math. Comp., Volume 35, 1003-1026, 1980.
___________________________________________________________________
$\copyright \ \ 2014-2015 \ \text{Dan Ma}$
Revised July 4, 2015
# Strong probable primes and strong pseudoprimes
This post is the first in a series of posts to discuss the Miller-Rabin primality test. In this post, we discuss how to perform the calculation (by tweaking Fermat’s little theorem). The Miller-Rabin test is fast and efficient and is in many ways superior to the Fermat test.
The Fermat primality test is based on the notions of probable primes and pseudoprimes. One problem with the Fermat test is that it fails to detect the compositeness of a class of composite numbers called Carmichael numbers. It is possible to tweak the Fermat test to bypass this problem. The resulting primality test is called the Miller-Rabin test. Central to the working of the Miller-Rabin test are the notions of strong probable primes and strong pseudoprimes.
Fermat’s little theorem, the basis of the Fermat primality test, states that if $n$ is a prime number, then
$a^{n-1} \equiv 1 \ (\text{mod} \ n) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (*)$
for all numbers $a$ that are relatively prime to the modulus $n$. When testing a prime number, the Fermat test always gives the correct answer. What is the success rate of the Fermat test when it is applied on a composite number? The Fermat test is correct on most composite numbers. Unfortunately the Fermat test fails to detect the compositeness of Carmichael numbers. A Carmichael number is any composite integer $n$ such that (*) is true for any $a$ that is relatively prime to $n$. Fortunately we can tweak the calculation in (*) to get a better primality test.
Recall that a positive odd integer $n$ is a probable prime to base $a$ if the condition (*) holds. A probable prime could be prime or could be composite. If the latter, then $n$ is said to be a pseudoprime to base $a$.
___________________________________________________________________
Setting up the calculation
Let $n$ be an odd positive integer. Instead of calculating $a^{n-1} \ (\text{mod} \ n)$, we set $n-1=2^k \cdot q$ where $q$ is an odd number and $k \ge 1$. Then compute the following sequence of $k+1$ numbers:
$a^q, \ a^{2q}, \ a^{2^2 q}, \ \cdots, \ a^{2^{k-1} q}, \ a^{2^{k} q} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$
Each term in (1) is reduced modulo $n$. The first term can be computed using the fast powering (also called fast exponentiation) algorithm. Each subsequent term is the square of the preceding term. Of course, the last term is $a^{2^{k} q}=a^{n-1}$. It follows from Fermat’s little theorem that the last term in the sequence (1) is always a 1 as long as $n$ is prime and the number $a$ is relatively prime to $n$. The numbers $a$ used in the calculation of (1) are called bases.
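As a small illustration (a Python sketch, not part of the original post; the function name is mine), the following computes the sequence (1) for a given odd $n$ and base $a$:

```python
def sequence_one(n, a):
    """Terms a^q, a^(2q), ..., a^(2^k q) modulo n, where n - 1 = 2^k * q with q odd."""
    q, k = n - 1, 0
    while q % 2 == 0:
        q //= 2
        k += 1
    terms = [pow(a, q, n)]                     # fast powering for the first term
    for _ in range(k):
        terms.append(pow(terms[-1], 2, n))     # each further term squares the previous one
    return terms

print(sequence_one(341, 2))     # [32, 1, 1]  -- see the example below
print(sequence_one(2047, 2))    # [1, 1]      -- pattern (1a)
```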
Suppose we have a large positive odd integer $n$ whose “prime or composite” status is not known. Choose a base $a$. Then compute the numbers in the sequence (1). If $n$ is prime, we will see one of the following two patterns:
$1, 1, 1, \cdots, 1 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1a)$
$*, *, *, \cdots, *, -1, 1, \cdots, 1 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1b)$
In (1a), the entire sequence consists of 1. In (1b), an asterisk means that the number is congruent to neither 1 nor -1 modulo $n$. In (1b), the sequence ends in a 1, and the term preceding the first 1 is a -1. These two patterns capture a property of prime numbers. We have the following theorem.
___________________________________________________________________
The theorem behind the Miller-Rabin test
Theorem 1
Let $n$ be an odd prime number such that $n-1=2^k \cdot q$ where $q$ is an odd number and $k \ge 1$. Let $a$ be a positive integer not divisible by $n$. Then the sequence (1) resembles (1a) or (1b), i.e., either one of the following two conditions holds:
• The first term $a^q$ in the sequence (1) is congruent to 1 modulo $n$.
• The term preceding the first 1 is congruent to -1 modulo $n$.
The proof of Theorem 1 is not complicated. It uses Fermat’s little theorem and the fact that if $n$ is an odd prime, the only solutions to the congruence equation $x^2 \equiv 1 \ (\text{mod} \ n)$ are $x \equiv \pm 1 \ (\text{mod} \ n)$. The proof goes like this. By Fermat’s little theorem, the last term in sequence (1) is a 1, assuming that $n$ is an odd prime and $a$ is relatively prime to $n$. If the first term in (1) is a 1, then we are done. Otherwise, look at the first term in (1) that is a 1. The term preceding the first 1 must be a -1 based on the fact that the equation $x^2 \equiv 1 \ (\text{mod} \ n)$ can have only the trivial solutions $\pm 1$.
It is an amazing fact that Theorem 1 is easily proved and yet is the basis of a powerful and efficient and practical primality test. Next we define the notions of strong probable primes and strong pseudoprimes.
___________________________________________________________________
Strong probable primes and strong pseudoprimes
Suppose we have a large positive odd integer $n$ whose “prime or composite” status is not known. We calculate sequence (1) for one base $a$. If the last term of the sequence (1) is not a 1, then $n$ is composite by Fermat’s little theorem. If the last term is a 1 but the sequence (1) does not match the patterns (1a) or (1b), then $n$ is composite by Theorem 1. So to test for compositeness for $n$, we look for a base $a$ such that the sequence (1) does not fit the patterns (1a) or (1b). Such a base is said to be a Miller-Rabin witness for the compositeness of $n$. Many authors refer to a Miller-Rabin witness as a witness.
When we calculate the sequence (1) on the odd number $n$ for base $a$, if we get either (1a) or (1b), then $n$ is said to be a strong probable prime to the base $a$. A strong probable prime could be prime or could be composite. When a strong probable prime to the base $a$ is composite, it is said to be a strong pseudoprime to the base $a$. To test for primality of $n$, the Miller-Rabin test consists of checking for strong probable primality for several bases $a$ with $1 < a < n$ that are randomly chosen.
For an example of a primality testing exercise using the Miller-Rabin test, see the post The first prime number after the 8th Fermat number.
___________________________________________________________________
Small examples of strong pseudoprimes
Some small examples to illustrate the definitions. Because $2^{340} \equiv 1 \ (\text{mod} \ 341)$, the number 341 is a probable prime to the base 2. Because 341 is composite with factors 11 and 31, the number 341 is a pseudoprime to the base 2. In fact, 341 is the least pseudoprime to base 2. Now the strong probable prime calculation. Note that $340=2^2 \cdot 85$. The calculated numbers in sequence (1) are 32, 1, 1, calculated as follows:
$2^{85} \equiv 32 \ (\text{mod} \ 341)$
$2^{2 \cdot 85} \equiv 32^2 \equiv 1 \ (\text{mod} \ 341)$
$2^{340}=2^{2 \cdot 85} \equiv 1 \ (\text{mod} \ 341)$
Because the sequence 32, 1, 1 does not fit pattern (1a) or (1b) (the term before the first 1 is not a -1), the number 341 is not a strong pseudoprime to base 2.
How far do we have to go up from 341 to reach the first strong pseudoprime to base 2? The least strong pseudoprime to base 2 is 2047. Note that $2046=2 \cdot 1023$ and that $2^{1023} \equiv 1 \ (\text{mod} \ 2047)$ and $2^{2046} \equiv 1 \ (\text{mod} \ 2047)$. The sequence (1) is 1, 1, which is the pattern (1a). Thus 2047 is a strong pseudoprime to base 2. Note that 2047 is composite with factors 23 and 89. It can be shown (at least by calculation) that no odd integer less than 2047 is a strong pseudoprime to base 2. In other words, if a positive odd integer $n$ is less than 2047 and if it is a strong probable prime to base 2, then $n$ must be a prime number.
Consider a slightly larger example. Let $n=$ 65281. Set $n-1=2^{8} \cdot 255$. The following is the calculation for the sequence (1) using base 2.
$2^{255} \equiv 32768 \ (\text{mod} \ 65281)$
$2^{2 \cdot 255} \equiv 65217 \ (\text{mod} \ 65281)$
$2^{4 \cdot 255} \equiv 4096 \ (\text{mod} \ 65281)$
$2^{8 \cdot 255} \equiv 65280 \equiv -1 \ (\text{mod} \ 65281)$
$2^{16 \cdot 255} \equiv 1 \ (\text{mod} \ 65281)$
$2^{32 \cdot 255} \equiv 1 \ (\text{mod} \ 65281)$
$2^{64 \cdot 255} \equiv 1 \ (\text{mod} \ 65281)$
$2^{128 \cdot 255} \equiv 1 \ (\text{mod} \ 65281)$
$2^{256 \cdot 255} \equiv 1 \ (\text{mod} \ 65281)$
The pattern is *, *, *, -1, 1, 1, 1, 1, 1, which is (1b) (the term preceding the first 1 is a -1). So $n=$ 65281 is a strong probable prime to base 2. The following computation using base 3 will show that 65281 is a composite number, and thus a strong pseudoprime to base 2.
$3^{255} \equiv 30931 \ (\text{mod} \ 65281)$
$3^{2 \cdot 255} \equiv 33706 \ (\text{mod} \ 65281)$
$3^{4 \cdot 255} \equiv 9193 \ (\text{mod} \ 65281)$
$3^{8 \cdot 255} \equiv 37635 \ (\text{mod} \ 65281)$
$3^{16 \cdot 255} \equiv 56649 \ (\text{mod} \ 65281)$
$3^{32 \cdot 255} \equiv 25803 \ (\text{mod} \ 65281)$
$3^{64 \cdot 255} \equiv 59171 \ (\text{mod} \ 65281)$
$3^{128 \cdot 255} \equiv 56649 \ (\text{mod} \ 65281)$
$3^{65280} = 3^{256 \cdot 255} \equiv 25803 \ (\text{mod} \ 65281)$
Looking at the last term in the base 3 calculation, we see that the number 65281 is composite by Fermat’s little theorem. Because the pattern is *, *, *, *, *, *, *, *, *, 65281 is not a strong pseudoprime to base 3.
___________________________________________________________________
How does pseudoprimality and strong pseudoprimality relate?
There are two notions of “pseudoprime” discussed here and in previous posts. One is based on Fermat’s little theorem (pseudoprime) and one is based on Theorem 1 above (strong pseudoprime). It is clear from the definition that any strong pseudoprime to base $a$ is a pseudoprime to base $a$. The converse is not true.
Let’s start with the number 341. It is a pseudoprime to base 2. This means that the Fermat test cannot detect its compositeness using base 2. Yet the strong pseudoprimality calculation as described above can detect the compositeness of 341 using base 2. The number 341 is not a strong pseudoprime to base 2 since the least strong pseudoprime to base 2 is 2047.
Let’s look at a slightly larger example. Take the number 25761. It is a pseudoprime to base 2 since $2^{25760} \equiv 1 \ (\text{mod} \ 25761)$ and its factors are 3, 31 and 277. Let's refine the calculation according to sequence (1) as indicated above. Note that $25760=2^5 \cdot 805$. The pattern of sequence (1) is *, *, 1, 1, 1, 1. The term preceding the first 1 is not a -1. Thus the strong pseudoprimality method does detect the compositeness of 25761 using base 2.
In general, strong pseudoprimality implies pseudoprimality (to the same base). The above two small examples show that the converse is not true since they are pseudoprimes to base 2 but not strong pseudoprimes to base 2.
___________________________________________________________________
Why look at pseudoprimes and strong pseudoprimes?
The most important reason for studying these notions is that pseudoprimality and strong pseudoprimality are the basis of two primality tests. In general, pseudoprimality informs primality.
In a previous post on probable primes and pseudoprimes, we point out that most probable primes are primes. The same thing can be said for the strong version. According to [1], there are only 4842 strong pseudoprimes to base 2 below $25 \cdot 10^9$. Using the prime number theorem, it can be shown that there are approximately $1.044 \cdot 10^9$ prime numbers below $25 \cdot 10^9$. Thus most strong probable primes are primes. For a randomly chosen $n$, showing that $n$ is a strong probable prime to one base can be quite strong evidence that $n$ is prime.
Because strong pseudoprimality is so rare, knowing what they are actually helps in detecting primality. For example, according to [1], there are only 13 numbers below $25 \cdot 10^9$ that are strong pseudoprimes to all of the bases 2, 3 and 5. These 13 strong pseudoprimes are:
Strong pseudoprimes to all of the bases 2, 3 and 5 below 25 billion
25326001, 161304001, 960946321, 1157839381, 3215031751, 3697278427, 5764643587, 6770862367, 14386156093, 15579919981, 18459366157, 19887974881, 21276028621
These 13 strong pseudoprimes represent a deterministic primality test on integers less than $25 \cdot 10^9$. Any odd positive integer less than $25 \cdot 10^9$ that is a strong probable prime to all 3 bases 2, 3 and 5 must be a prime number if it is not one of the 13 numbers on the list. See Example 1 below for an illustration. This primality test is fast since it only requires 3 exponentiations. Best of all, it gives a proof of primality. However, this is a fairly limited primality test since it only works on numbers less than $25 \cdot 10^9$. Even though this is a limited example, it is an excellent illustration that strong pseudoprimality can inform primality.
Example 1
Consider the odd integer $n=$ 1777288949, which is less than $25 \cdot 10^9$. Set $1777288948=2^2 \cdot 444322237$. The proof of primality requires only the calculation for the 3 bases 2, 3 and 5.
Base 2
$2^{444322237} \equiv 227776882 \ (\text{mod} \ 1777288949)$
$2^{2 \cdot 444322237} \equiv 1777288948 \equiv -1 \ (\text{mod} \ 1777288949)$
$2^{2^2 \cdot 444322237} \equiv 1 \ (\text{mod} \ 1777288949)$
Base 3
$3^{444322237} \equiv 227776882 \ (\text{mod} \ 1777288949)$
$3^{2 \cdot 444322237} \equiv 1777288948 \equiv -1 \ (\text{mod} \ 1777288949)$
$3^{2^2 \cdot 444322237} \equiv 1 \ (\text{mod} \ 1777288949)$
Base 5
$5^{444322237} \equiv 1 \ (\text{mod} \ 1777288949)$
$5^{2 \cdot 444322237} \equiv 1 \ (\text{mod} \ 1777288949)$
$5^{2^2 \cdot 444322237} \equiv 1 \ (\text{mod} \ 1777288949)$
The patterns for the 3 calculations fit either (1a) or (1b). So $n=$ 1777288949 is a strong probable prime to all 3 bases 2, 3 and 5. Clearly $n=$ 1777288949 is not on the list of 13 strong pseudoprimes listed above. Thus $n=$ 1777288949 cannot be a composite number.
___________________________________________________________________
Exercise
• Use the strong pseudoprime test to show that the following numbers are composite.
  • 3277
  • 43273
  • 60433
  • 60787
  • 838861
  • 1373653
• Use the 13 strong pseudoprimes to the bases 2, 3 and 5 (used in Example 1) to show that the following numbers are prime numbers.
  • 58300313
  • 99249929
  • 235993423
  • 2795830049
___________________________________________________________________
Reference
1. Pomerance C., Selfridge J. L., Wagstaff, S. S., The pseudoprimes to $25 \cdot 10^9$, Math. Comp., Volume 35, 1003-1026, 1980.
___________________________________________________________________
$\copyright \ \ 2014 \ \text{Dan Ma}$
# The Fermat primality test
Fermat’s little theorem describes a property that is common to all prime numbers. This property can be used as a way to detect the “prime or composite” status of an integer. Primality testing using Fermat’s little theorem is called the Fermat primality test. In this post, we explain how to use this test and to discuss some issues surrounding the Fermat test.
___________________________________________________________________
Describing the test
The Fermat primality test, as mentioned above, is based on Fermat’s little theorem. The following is the statement of the theorem.
Fermat’s little theorem
If $n$ is a prime number and if $a$ is an integer that is relatively prime to $n$, then the following congruence relationship holds:
$a^{n-1} \equiv 1 (\text{mod} \ n) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$
The above theorem indicates that all prime numbers possess a certain property. Therefore if a given positive integer does not possess this property, we know for certain that this integer is not prime. Suppose that the primality of an integer $n$ is not known. If we can find an integer $a$ that is relatively prime to $n$ such that $a^{n-1} \not \equiv 1 \ (\text{mod} \ n)$, then we have conclusive proof that $n$ is composite. Such a number $a$ is said to be a Fermat witness for (the compositeness of) $n$.
The Fermat test is closely linked to the notions of probable primes and pseudoprimes. If the congruence relation (1) is true for $n$ and $a$, then $n$ is said to be a probable prime to base $a$. Furthermore, if $n$ happens to be a composite number, then $n$ is said to be a pseudoprime to base $a$. A pseudoprime is a composite number that possesses the prime-like property indicated by (1) for at least one base $a$.
The Fermat primality test from a compositeness perspective is about looking for Fermat witnesses. If a Fermat witness is found, the number being tested is proved to be composite. On the other hand, the Fermat primality test, from a primality perspective, consists of checking the congruence relation (1) for several bases that are randomly selected. If the number $n$ is found to be a probable prime to all the randomly chosen bases, then $n$ is likely a prime number.
If the number $n$ is in reality a prime number, then the Fermat test will always give the correct result (as a result of Fermat’s little theorem). If the number $n$ is in reality a composite number, the Fermat test can make the mistake of identifying the composite number $n$ as prime (i.e. identifying a pseudoprime as a prime). For most composite numbers this error probability can be made arbitrarily small (by testing a large number of bases $a$). But there are rare composite numbers that evade the Fermat test. Such composite numbers are called Carmichael numbers. No matter how many bases you test on a Carmichael number, the Fermat test will always output Probably Prime. Carmichael numbers may be rare but there are infinitely many of them over the entire number line. More about Carmichael numbers below.
The following describes the steps of the Fermat primality test.
Fermat primality test
The test is to determine whether a large positive integer $n$ is prime or composite. The test will output one of two results: $n$ is Composite or $n$ is Probably Prime.
• Step 1. Choose a random integer $a \in \left\{2,3,\cdots,n-1 \right\}$.
• Step 2. Compute $\text{GCD}(a,n)$. If it is greater than 1, then stop and output $n$ is Composite. Otherwise go to the next step.
• Step 3. Compute $a^{n-1} \ (\text{mod} \ n)$.
• If $a^{n-1} \not \equiv 1 \ (\text{mod} \ n)$, then stop and output $n$ is Composite.
• If $a^{n-1} \equiv 1 \ (\text{mod} \ n)$, then $n$ may be a prime number. Do one of the following:
• Return to Step 1 and repeat the process with a new $a$.
• Output $n$ is Probably Prime and stop.
The exponentiation in Step 3 can be done by the fast powering algorithm. This involves a series of squarings and multiplications. Even for numbers that have hundreds of digits, the fast powering algorithm is efficient.
One comment about Step 2 in the algorithm. Step 2 could be called the GCD test for primality. If you can find an integer $a$ such that $1 and such that $\text{GCD}(a,n) \ne 1$, then the integer $n$ is certainly composite. Such a number $a$ is called a GCD witness for the compositeness of $n$. So the Fermat test as described above combines the GCD test and the Fermat test. We can use the Euclidean algorithm to find the GCD. If we happen to stumble upon a GCD witness, then we can try another $n$ for a candidate of a prime number. For most composite numbers, it is not likely to stumble upon a GCD witness. Thus when using the Fermat test, it is likely that Step 3 in the algorithm is used.
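The following is a minimal Python sketch of the test as described in Steps 1 to 3 (not part of the original post; the function name and the default number of trials are mine):

```python
import random
from math import gcd

def fermat_test(n, trials=20):
    """Return 'Composite' or 'Probably Prime' for an odd integer n > 3."""
    for _ in range(trials):
        a = random.randint(2, n - 1)       # Step 1: random base
        if gcd(a, n) > 1:                  # Step 2: a GCD witness proves compositeness
            return "Composite"
        if pow(a, n - 1, n) != 1:          # Step 3: a Fermat witness proves compositeness
            return "Composite"
    return "Probably Prime"

print(fermat_test(341))    # pseudoprime to base 2, but most bases are Fermat witnesses
print(fermat_test(561))    # Carmichael number: only Step 2 can expose it
```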
An example of Fermat primality testing is the post called A primality testing exercise from RSA-100.
____________________________________________________________________________
When using the Fermat test, what is the probability of the test giving the correct result? Or what is the probability of making an error? Because the Fermat test is not a true probabilistic primality test, the answers to these questions are conditional. In one scenario which covers most of the cases, the test works like an efficient probabilistic test. In another scenario which occurs very rarely, the Fermat test fails miserably.
As with most diagnostic tests, the Fermat test can make two types of mistakes – false positives or false negatives. For primality testing discussed in this post, we define a positive result as the outcome that says the number being tested is a prime number and a negative result as the outcome that says the number being tested is a composite number. Thus a false positive is identifying a composite number as a prime number and a false negative is identifying a prime number as a composite number.
For the Fermat test, there is no false negative. If $n$ is a prime number in reality, the statement of Fermat’s little theorem does not allow the possibility that $n$ be declared a composite number. Thus if the Fermat test gives a negative result, it would be a true negative. In other words, finding a Fermat witness for $n$ is an irrefutable proof that $n$ is composite.
However, there can be false positives for the Fermat test. This is where things can get a little tricky. A composite number $n$ is said to be a Carmichael number if the above congruence relationship (1) holds for all bases $a$ relatively prime to $n$. In other words, $n$ is a Carmichael number if $a^{n-1} \equiv 1 (\text{mod} \ n)$ for all $a$ that are relatively prime to $n$. Saying it in another way, $n$ is a Carmichael number if there exists no Fermat witness for $n$.
The smallest Carmichael number is 561. Carmichael numbers are rare but there are infinitely many of them. The existence of such numbers poses a challenge for the Fermat test. If you apply the Fermat test on a Carmichael number, the outcome will be Probably Prime unless the randomly chosen base happens to share a factor with the number. So the Fermat test will essentially always give a false positive when it is applied on a Carmichael number. To put it in another way, with respect to Carmichael numbers, the error probability of the Fermat test is virtually 100%!
So what should a primality tester do? To keep things in perspective, Carmichael numbers are rare (see this post). If the primality testing is done on randomly chosen numbers, choosing a Carmichael number is not likely. So the Fermat test will often give the correct results. Those who are bothered by the nagging fear of working with Carmichael numbers can always switch to a Carmichael-neutral test such as the Miller-Rabin test.
___________________________________________________________________
One bright spot about the Fermat test
There is one bright spot about the Fermat test. When applying the Fermat test on numbers that are not Carmichael numbers, the error probability can be made arbitrarily small. In this sense the Fermat test works like a true probabilistic primality test. Consider the following theorem.
Theorem 1
Let $n$ be a composite integer such that it is not a pseudoprime to at least one base (i.e. $n$ has a Fermat witness). In other words, $n$ is not a Carmichael number. Then $n$ is not a pseudoprime to at least half of the bases $a$ ($1 < a < n$) that are relatively prime to $n$. In other words, $n$ is a pseudoprime to at most half of the bases $a$ ($1 < a < n$) that are relatively prime to $n$.
Theorem 1 means that the Fermat test can be very accurate on composite numbers that are not Carmichael numbers. As long as there is one base to which the composite number is not a pseudoprime (i.e. as long as there is a Fermat witness for the composite number in question), there will be enough of such bases (at least 50% of the possible bases). As a result, it is likely that the Fermat test will find a witness, especially if the tester is willing to use enough bases to test and if the bases are randomly chosen. When a base is randomly chosen, there is at least a 50% chance that the number $n$ is not a pseudoprime to that base (i.e. the Fermat test will detect the compositeness) or putting it in another way, there is at most a 50% chance that the Fermat test will not detect the compositeness of the composite number $n$. So if $k$ values of $a$ are randomly selected, there is at most $0.5^k$ probability that the Fermat test will not detect the compositeness of the composite number $n$ (i.e. making a mistake). So the probability of a false positive is at most $0.5^k$. For a large enough $k$, this probability is practically zero.
Proof of Theorem 1
A base to which $n$ is a pseudoprime or not a pseudoprime should be a number in the interval $1 < a < n$ that is relatively prime to $n$. If $n$ is a pseudoprime to base $a$, then $a$ raised to some power is congruent to 1 modulo $n$. For this to happen, $a$ must be relatively prime to the modulus $n$. For this reason, when we consider a base, it must be a number that is relatively prime to the composite integer $n$ (see the post on Euler’s phi function).
Let $a$ be a base to which $n$ is not a pseudoprime. We make the following claim.
Claim
If $b$ is a number such that $1 < b < n$ and such that $n$ is a pseudoprime to base $b$, then $n$ is not a pseudoprime to base $a \cdot b$.
Since both integers $a$ and $b$ are assumed to be relatively prime to $n$, the product $a \cdot b$ is also relatively prime to $n$ (see Lemma 4 in this post). Now consider the congruence $(ab)^{n-1} \ (\text{mod} \ n)$, which is derived as follows:
$(ab)^{n-1} \equiv a^{n-1} \cdot b^{n-1} \equiv a^{n-1} \not \equiv 1 \ (\text{mod} \ n)$
In the above derivation, we use the fact that $n$ is not a pseudoprime to base $a$ and $n$ is a pseudoprime to base $b$. The above derivation shows that $n$ is not a pseudoprime to base $ab$.
If $n$ is not a pseudoprime to any base $b$ with $1 < b < n$, then we are done. So assume that $n$ is a pseudoprime to at least one base. Let $b_1,b_2,\cdots,b_k$ enumerate all bases to which $n$ is a pseudoprime. We assume that the $b_j$ are all distinct, so $b_i \not \equiv b_j \ (\text{mod} \ n)$ for all $i \ne j$. By the above claim, the composite number $n$ is not a pseudoprime to any of the following $k$ numbers:
$a \cdot b_1, \ a \cdot b_2, \cdots, \ a \cdot b_k$
It is also clear that $a \cdot b_i \not \equiv a \cdot b_j \ (\text{mod} \ n)$ for $i \ne j$. What we have just shown is that there are at least as many bases to which $n$ is not a pseudoprime as there are bases to which $n$ is a pseudoprime. This means that $n$ is not a pseudoprime to at least 50% of the bases that are relatively prime to $n$. In other words, as long as there exists one Fermat witness for $n$, at least 50% of the bases are Fermat witnesses for $n$. It then follows that $n$ is a pseudoprime to no more than 50% of the bases relatively prime to $n$. $\blacksquare$
There is another way to state Theorem 1. Recall that Euler’s phi function $\phi(n)$ is defined to be the number of integers $a$ in the interval $1 < a < n$ that are relatively prime to $n$. With this in mind, Theorem 1 can be restated as the following:
Corollary 2
Let $n$ be a composite integer such that it is not a pseudoprime to at least one base. Then $n$ is not a pseudoprime to at least $\displaystyle \frac{\phi(n)}{2}$ many bases in the interval $1 < a < n$.
___________________________________________________________________
Concluding remarks
Of course, Theorem 1 works only for the composite numbers that are not pseudoprime to at least one base (i.e. they are not Carmichael numbers). When you test the compositeness of a number, you do not know in advance if it is a Carmichael number or not. On the other hand, if the testing is done on randomly chosen numbers, it is not likely to randomly stumble upon Carmichael numbers. The Fermat test works well for the most part and often gives the correct results. If one is concerned about the rare chance of a false positive in the form of a Carmichael number, then the Miller-Rabin test will be a good alternative.
___________________________________________________________________
$\copyright \ \ 2014 - 2015 \ \text{Dan Ma}$ (Revised March 29, 2015)
# Probable primes and pseudoprimes
In determining whether an odd integer $n$ is prime or composite, the author of this blog likes to first look for small prime factors of $n$. If none is found, then calculate the congruence $2^{n-1} \ (\text{mod} \ n)$. If this result is not congruent to 1 modulo $n$, this gives a proof that $n$ is a composite number. If the result is congruent to 1, then this gives some evidence that $n$ is prime. To confirm, apply a formal primality test on the number $n$ (e.g. using the Miller-Rabin test). The question we like to ponder in this post is this. Given the result $2^{n-1} \equiv 1 \ (\text{mod} \ n)$, as evidence for the primality of the number $n$, how strong is it? Could we just use the congruence $2^{n-1} \ (\text{mod} \ n)$ as a primality test? In this post, we look at these questions from two perspectives, leading to two answers that are both valid in some sense. The discussion is conducted through examining the notions of probable primes and pseudoprimes, both of which are concepts that are related to Fermat’s little theorem. Thus the notions of probable primes and pseudoprimes are related to the Fermat primality test.
Fermat’s little theorem states that if $n$ is a prime number, then the following congruence
$a^{n-1} \equiv 1 \ (\text{mod} \ n) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$
is always true for any integer $a$ that is relatively prime to $n$. A positive integer $n$ is said to be a probable prime to the base $a$ if the congruence relation (1) holds. Obviously any prime number is a probable prime to all bases that are relatively prime to it (this is another way of stating Fermat’s little theorem). A probable prime does not have to be prime. If $n$ is a probable prime to base $a$ and if $n$ happens not to be prime, then $n$ is said to be a pseudoprime to base $a$.
As indicated at the beginning, computing the congruence (1) for just one base $a$ is a quick and dirty way of checking probable primality of $n$. Using base 2 as a starting point, if $2^{n-1}$ is not congruent to 1 mod $n$, we know $n$ is composite for sure. If $2^{n-1}$ is congruent to 1 mod $n$, then we can calculate the congruence for several more bases. The following question is similar to the questions at the beginning:
When the congruence (1) is satisfied for one base $a$, is that enough evidence to conclude that $n$ is prime?
We look at this question from two angles. One is to answer in terms of an absolute mathematical proof. One is to look at it probabilistically.
___________________________________________________________________
The view point of an absolute mathematical proof
In terms of an absolute mathematical proof, the answer to the above question is no. There are probable primes that are composite (i.e. there are pseudoprimes). For example, the integer 341 is a probable prime to base 2 since $2^{340} \equiv 1 \ (\text{mod} \ 341)$. But 341 is composite with factors 11 and 31. So 341 is a pseudoprime to the base 2. In fact, 341 is the least integer that is a pseudoprime to base 2. However, 341 is not a pseudoprime to the base 3 since $3^{340} \equiv 56 \ (\text{mod} \ 341)$.
Now let $n$ be 1105, which obviously is composite since it ends in the digit 5. The number 1105 is a probable prime to both base 2 and base 3, since we have $2^{1104} \equiv 1 \ (\text{mod} \ 1105)$ and $3^{1104} \equiv 1 \ (\text{mod} \ 1105)$. In fact, 1105 is the least integer that is a pseudoprime to both base 2 and base 3.
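These congruences are easy to reproduce with modular exponentiation; a quick check in Python (added for illustration; the expected outputs are shown in the comments):

```python
print(pow(2, 340, 341))    # 1  -- 341 is a probable prime (in fact a pseudoprime) to base 2
print(pow(3, 340, 341))    # 56 -- 341 is not a pseudoprime to base 3
print(pow(2, 1104, 1105))  # 1  -- 1105 is a pseudoprime to base 2
print(pow(3, 1104, 1105))  # 1  -- 1105 is a pseudoprime to base 3
```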
Furthermore, given a base $a$, there are infinitely many pseudoprimes to base $a$. We prove the following theorem.
Theorem 1
Let $a$ be any integer with $a>1$. Then there are infinitely many pseudoprimes to base $a$.
Proof
Let $p$ be an odd prime number such that $p$ does not divide $a^2-1$ and such that $p$ does not divide $a$. We define a composite integer $m_p$ such that $a^{m_p-1} \equiv 1 \ (\text{mod} \ m_p)$. We will see that the numbers $m_p$ are distinct for distinct primes $p$. Clearly there are infinitely many odd primes $p$ that divide neither $a^2-1$ nor $a$. The theorem will be established once we provide the details for these claims.
Fix an odd prime $p$ such that $p$ does not divide $a^2-1$ and such that $p$ does not divide $a$. Define $m=m_p$ as follows:
$\displaystyle m=\frac{a^{2p}-1}{a^2-1} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (*)$
The number $m$ is composite since it can be expressed as follows:
$\displaystyle m=\frac{a^{p}-1}{a-1} \times \frac{a^{p}+1}{a+1}$
Note that both factors in the above expression are integers. This is because the numerators can be expressed as:
$a^p-1=(a-1) \times (a^{p-1}+a^{p-2} + a^{p-3} + \cdots + a + 1)$
$a^p+1=(a+1) \times (a^{p-1}-a^{p-2} + a^{p-3} - \cdots - a + 1)$
Furthermore, the number $m$ is an odd integer. Note that $m$ is the product of the following two numbers $S$ and $T$:
$S=a^{p-1}+a^{p-2} + a^{p-3} + \cdots + a + 1$
$T=a^{p-1}-a^{p-2} + a^{p-3} - \cdots - a + 1$
Both $S$ and $T$ are odd numbers. If $a$ is even, it is clear that both $S$ and $T$ are odd. If $a$ is odd, each of $S$ and $T$ is a sum of an even number of odd numbers plus 1, and is thus an odd number. Since $m$ is the product of two odd numbers, it is an odd number.
Now we need to show that $a^{m-1} \equiv 1 \ (\text{mod} \ m)$. From the definition of the number $m$ (see (*) above), we can derive the following:
$(a^2-1) (m-1)=a(a^{p-1}-1)(a^p+a)$
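This identity can be checked directly from the definition (*):

$(a^2-1)(m-1)=\left(a^{2p}-1\right)-\left(a^2-1\right)=a^{2p}-a^2=a\left(a^{p-1}-1\right)\left(a^{p}+a\right)$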
The term $a^{p-1}-1$ in the middle of the right hand side is divisible by $p$ because of Fermat’s little theorem. Therefore $p$ divides $(a^2-1)(m-1)$. Since $p$ does not divide $a^2-1$, $p$ must divide $m-1$. Since $m$ is odd, $m-1$ must be even. Consequently $2p$ divides $m-1$.
From the definition of the number $m$, we have $a^{2p}=1+m(a^2-1)$. This is the same as saying $a^{2p} \equiv 1 \ (\text{mod} \ m)$. Since $2p$ divides $m-1$, we have $a^{m-1} \equiv 1 \ (\text{mod} \ m)$ too.
It is clear that the numbers $m=m_p$ are different for different $p$. Since there are infinitely many odd primes $p$ that divide neither $a^2-1$ nor $a$, the theorem is established. $\blacksquare$
It is interesting that the proof of Theorem 1 is a constructive one. The formula (*) gives us a way to generate pseudoprimes to base $a$. For base 2, the first few pseudoprimes from this formula are 341, 5461, 1398101 and 22369621. For base 3, the first few pseudoprimes are 91, 7381, 597871 and 3922632451. However, the formula (*) does not generate all pseudoprimes for a given base. For example, 561 is a pseudoprime base 2 that is not generated by the formula. There are 19 pseudoprimes base 3 in between 91 and 7381 that are not captured by this formula. For this reason, the formula (*) is useful for proving the theorem rather than for computing pseudoprimes.
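The construction based on formula (*) is also easy to put into code. The following sketch (added for illustration; the helper names are arbitrary) reproduces the base 2 list above.

```python
def is_prime_trial(n):
    """Trial-division primality check, adequate for the small primes p used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def pseudoprimes_from_formula(a, how_many=4):
    """Generate pseudoprimes to base a via m = (a^(2p) - 1) / (a^2 - 1)."""
    found, p = [], 3
    while len(found) < how_many:
        if is_prime_trial(p) and a % p != 0 and (a * a - 1) % p != 0:
            m = (a ** (2 * p) - 1) // (a * a - 1)
            assert pow(a, m - 1, m) == 1   # m is composite by construction yet a probable prime to base a
            found.append(m)
        p += 2
    return found

print(pseudoprimes_from_formula(2))        # [341, 5461, 1398101, 22369621], matching the list above
```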
So from a mathematical standpoint, computing the congruence (1) for one base is not sufficient evidence for primality. There are simply too many counterexamples, in fact infinitely many. So in deciding whether an integer $n$ is prime or not, knowing that it is a probable prime to one base is definitely not a proof of the primality of $n$. But this is not the end of the story. There is another view.
___________________________________________________________________
The probabilistic view
By Theorem 1, there are infinitely many pseudoprimes to base $a$. So showing that an integer $n$ is a probable prime to one base $a$ is no proof that $n$ is prime. For a given base, even though there are infinitely many pseudoprimes to that base, we will see below that, below any given threshold, most probable primes are primes and only a minuscule fraction of the probable primes are composite.
Take base 2 as an example. Of all the probable primes base 2 that are less than $25 \cdot 10^9$, how many are primes and how many are composite? According to [2], there are 21853 pseudoprimes base 2 that are less than $25 \cdot 10^9$. According to the prime number theorem, the number of prime numbers less than $x$ is approximately $\displaystyle x / \text{ln}(x)$. Therefore there are approximately $1.044 \cdot 10^9$ many primes under $25 \cdot 10^9$. This example illustrates that most probable primes base 2 under $25 \cdot 10^9$ are primes and that very few of them are pseudoprimes base 2.
Sticking with base 2, the author of [1] showed that the number of pseudoprimes to base 2 under $x$ is less than
$\displaystyle x^{1-w}$ where $\displaystyle w=\frac{\text{ln} \ \text{ln} \ \text{ln} x}{2 \text{ln} \ \text{ln} x}$
The above bound on pseudoprimes base 2 grows much slower than the quantity $\displaystyle \frac{x}{\text{ln} x}$, which is taken as the estimate on the number of primes less than $x$. This fact suggests that most probable primes are primes.
Thus the result $2^{n-1} \equiv 1 \ (\text{mod} \ n)$ says a lot. It is not a proof that $n$ is prime. But it gives very strong evidence that $n$ is likely a prime, especially if the number $n$ being tested is a randomly chosen number. This strong evidence can be further corroborated by repeating the calculation of the congruence (1) for a large number of bases, preferably randomly chosen. In the experience of the author of this blog, getting $2^{n-1} \equiv 1 \ (\text{mod} \ n)$ is often a turning point in a search for prime numbers. In primality testing of random numbers $n$, the author has yet to come across an instance where $2^{n-1} \equiv 1 \ (\text{mod} \ n)$ is true and the number $n$ turns out to be composite.
___________________________________________________________________
More on pseudoprimes
The Fermat primality test is to use the congruence relation (1) above to check for the primality or the compositeness of a number. If a number is prime, the Fermat test will always detect its primality. For the Fermat test to be a good test, it needs to be able to detect the compositeness of pseudoprimes.
As discussed in the section on “The probabilistic view”, the set of probable primes to a given base is the union of two disjoint subsets – the primes and the pseudoprimes to that base. The following is another way to state this fact.
$\left\{ \text{probable primes to base } a \right\}=\left\{ \text{primes} \right\} \cup \left\{ \text{pseudoprimes to base } a \right\}$
Furthermore, most of the probable primes below a threshold are primes. Thus if we know that a randomly selected number is a probable prime to a given base, it is likely a prime number.
As discussed above, the composite number 341 is a pseudoprime to base 2 but not to base 3. The integer 2047 is a composite number since 23 and 89 are its factors. With $2^{2046} \equiv 1 \ (\text{mod} \ 2047)$, the number 2047 is a pseudoprime to the base 2. On the other hand, since $3^{2046} \equiv 1013 \ (\text{mod} \ 2047)$, the number 2047 is not a pseudoprime to the base 3. For the number 1373653, look at the following three congruences:
$2^{1373652} \equiv 1 \ (\text{mod} \ 1373653)$
$3^{1373652} \equiv 1 \ (\text{mod} \ 1373653)$
$5^{1373652} \equiv 1370338 \ (\text{mod} \ 1373653)$
The above three congruences show that the number 1373653 is a pseudoprime to both bases 2 and 3 but is not a pseudoprime to the base 5. Here’s a larger example. For the number 25326001, look at the following four congruences:
$2^{25326000} \equiv 1 \ (\text{mod} \ 25326001)$
$3^{25326000} \equiv 1 \ (\text{mod} \ 25326001)$
$5^{25326000} \equiv 1 \ (\text{mod} \ 25326001)$
$7^{25326000} \equiv 5872860 \ (\text{mod} \ 25326001)$
The above four congruences show that the number 25326001 is a pseudoprime to bases 2, 3 and 5 but is not a pseudoprime to the base 7.
In primality testing, the pseudoprimes are the trouble makers. These are the composite numbers that exhibit some prime-like quality, so it may be easy to confuse them with prime numbers. The above examples of pseudoprimes (341, 2047, 1373653, 25326001) happen not to be pseudoprimes to some other base. For this kind of pseudoprimes, the Fermat test will identify them as composite (if the tester is willing to choose enough bases for testing).
What is troubling about the Fermat test is that there are numbers $n$ that are pseudoprimes to all bases that are relatively prime to $n$. These numbers are called Carmichael numbers. For such numbers, the Fermat test will be wrong virtually 100% of the time!
Consider the number 294409.
$2^{294408} \equiv 1 \ (\text{mod} \ 294409)$
$3^{294408} \equiv 1 \ (\text{mod} \ 294409)$
$4^{294408} \equiv 1 \ (\text{mod} \ 294409)$
$5^{294408} \equiv 1 \ (\text{mod} \ 294409)$
$6^{294408} \equiv 1 \ (\text{mod} \ 294409)$
One might think that the above congruences are strong evidence for primality. In fact, this is a Carmichael number. The factors of 294409 are 37, 73 and 109. The number 294409 is a pseudoprime to all the bases that are relatively prime to 294409. The only way the Fermat test can detect the compositeness of this number is to stumble upon one of its factors. For example, using base 37, we have
$37^{294408} \equiv 143227 \ (\text{mod} \ 294409)$.
For a large Carmichael number (say one with hundreds of digits), it will be hard to randomly stumble on a factor. So the Fermat test will declare a large Carmichael number as prime virtually 100% of the time. Fortunately Carmichael numbers are rare (see here). If the number being tested is randomly chosen, it is not likely to be a Carmichael number. So for the most part, the Fermat test will work well. As discussed above, having the congruence relationship (1) for just one base is quite strong evidence for primality.
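These congruences can be reproduced with a few calls to Python's built-in `pow` (a quick sketch, added for illustration; the value shown for base 37 is the one quoted above):

```python
n = 294409                                 # = 37 * 73 * 109, a Carmichael number
for a in [2, 3, 4, 5, 6]:
    print(a, pow(a, n - 1, n))             # each of these prints 1 even though n is composite
print(37, pow(37, n - 1, n))               # 143227 -- a base sharing a factor with n exposes the compositeness
```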
___________________________________________________________________
Reference
1. Pomerance C., On the distribution of pseudoprimes, Math. Comp., Volume 37, 587-593, 1981.
2. Pomerance C., Selfridge J. L., Wagstaff, S. S., The pseudoprimes to $25 \cdot 10^9$, Math. Comp., Volume 35, 1003-1026, 1980.
___________________________________________________________________
$\copyright \ \ 2014 \ \text{Dan Ma}$
# The first prime number after the 8th Fermat number
In this post, we discuss a primality testing exercise involving the eighth Fermat number. A Fermat number is of the form $F_n=2^{2^n}+1$ where $n$ is any nonnegative integer. We search for the first prime number that is greater than $F_8$. The basic idea is to search for the first probable prime base 2 among the odd numbers after $F_8$. Once the first probable prime base 2 is identified, we apply the Miller-Rabin primality test to confirm that it is a prime number. At the outset of this exercise, we did not know how many numbers we had to check before reaching the first prime number.
The first five Fermat numbers $F_0$, $F_1$, $F_2$, $F_3$ and $F_4$ are the only Fermat numbers that are known to be prime (it was conjectured by Fermat that all Fermat numbers are prime). It is unknown whether there exists a prime Fermat number beyond $F_4$. What is clear, however, is that all the higher Fermat numbers that have been studied turn out to be composite. The 8th Fermat number $F_8$ has 78 decimal digits and is the product of two prime factors with 16 and 62 digits (it was completely factored in 1980). The largest Fermat number that has been completely factored (as of the writing of this post) is $F_{11}$, which has 617 decimal digits. Many Fermat numbers larger than $F_{11}$ have been partially factored.
___________________________________________________________________
The basic approach
The following is the number $2^{256}$, which has 78 decimal digits.
$2^{256}=$
11579208923731619542357098500868790785326998466564
0564039457584007913129639936
Define $P_j=2^{256}+j$ where $j$ is an odd positive integer, i.e., $j=1,3,5,7,\cdots$. The exercise is to find the smallest $j$ such that $P_j$ is a prime number. According to Euclid’s proof that there are infinitely many prime numbers, such a $P_j$ is sure to exist. Just that we do not know at the outset how far we have to go to find it. Of course, $P_1$ is the 8th Fermat number, which is a composite number with two prime factors with 16 and 62 decimal digits. So the search starts with $j=3$.
The key is to do the following two quick checks to eliminate composite numbers so that we can reach a probable prime as quickly as possible.
• For any given $P_j$, the first step is to look for small prime factors, i.e., to factor $P_j$ using prime numbers less than a bound $B$. If a small prime factor is found, then we increase $j$ by 2 and start over. Note that we skip any $P_j$ where the sum of digits is divisible by 3. We also skip any $P_j$ that ends with the digit 5.
• If no small factors are found, then compute the congruence $2^{P_j-1} \ (\text{mod} \ P_j)$. If the answer is not congruent to 1, then we know $P_j$ is composite and work on the next number. If $2^{P_j-1} \equiv 1 \ (\text{mod} \ P_j)$, then $P_j$ is said to be a probable prime base 2. Once we know that a particular $P_j$ is a probable prime base 2, it is likely a prime number. To further confirm, we apply the Miller-Rabin primality test on that $P_j$.
In the first check, we check for prime factors among the first 100 odd prime numbers (i.e. all odd primes up to and including 547).
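A minimal sketch of this search loop follows (for illustration only; this is not the code used for the exercise, and the helper `odd_primes_up_to` is an arbitrary way to produce the 100 small primes). Note that trial division by 3 and 5 already covers the digit-sum and last-digit shortcuts mentioned above.

```python
def odd_primes_up_to(limit):
    """The odd primes 3, 5, ..., up to limit (547 gives the first 100 odd primes)."""
    primes = []
    for n in range(3, limit + 1, 2):
        if all(n % p for p in primes if p * p <= n):
            primes.append(n)
    return primes

small_primes = odd_primes_up_to(547)

base = 2 ** 256
j = 3
while True:
    P = base + j
    if all(P % p for p in small_primes):   # first check: no prime factor up to 547
        if pow(2, P - 1, P) == 1:          # second check: probable prime to base 2
            break
    j += 2

print(j)                                   # 297 -- the "magic number" of the search described below
```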
___________________________________________________________________
Searching the first probable prime
At the outset, we did not know how many numbers we would have to check. Since there can be a long gap between two successive prime numbers, the worst fear is that the number range we are dealing with is situated in such a long gap, in which case we may have to check thousands of numbers (or even tens of thousands). Luckily the search settles on a probable prime rather quickly. The magic number is 297. In other words, for the number
$P_{297}=2^{256}+297,$

we find that $2^{P_{297}-1} \equiv 1 \ (\text{mod} \ P_{297})$. Thus $P_{297}$ is a probable prime in base 2. The following shows the decimal digits of $P_{297}$.
$P_{297}=$
11579208923731619542357098500868790785326998466564
0564039457584007913129640233
To further give a sense of how the magic number $P_{297}$ is reached, the following table lists the last 20 calculations leading to the magic number.
$\left[\begin{array}{rrrrrrr} j & \text{ } & \text{last 5 digits of } P_j & \text{ } & \text{least factor of } P_j & \text{ } & 2^{P_j-1} \ \text{mod} \ P_j \\ \text{ } & \text{ } & \text{ } \\ 259 & \text{ } & 40195 & \text{ } & 5 & \text{ } & \text{ } \\ 261 & \text{ } & 40197 & \text{ } & * & \text{ } & \not \equiv 1 \\ 263 & \text{ } & 40199 & \text{ } & 3 & \text{ } & \text{ } \\ 265 & \text{ } & 40201 & \text{ } & * & \text{ } & \not \equiv 1 \\ 267 & \text{ } & 40203 & \text{ } & * & \text{ } & \not \equiv 1 \\ 269 & \text{ } & 40205 & \text{ } & 3 & \text{ } & \text{ } \\ 271 & \text{ } & 40207 & \text{ } & 7 & \text{ } & \text{ } \\ 273 & \text{ } & 40209 & \text{ } & * & \text{ } & \not \equiv 1 \\ 275 & \text{ } & 40211 & \text{ } & 3 & \text{ } & \text{ } \\ 277 & \text{ } & 40213 & \text{ } & 11 & \text{ } & \text{ } \\ 279 & \text{ } & 40215 & \text{ } & 5 & \text{ } & \text{ } \\ 281 & \text{ } & 40217 & \text{ } & 3 & \text{ } & \text{ } \\ 283 & \text{ } & 40219 & \text{ } & 13 & \text{ } & \text{ } \\ 285 & \text{ } & 40221 & \text{ } & 7 & \text{ } & \text{ } \\ 287 & \text{ } & 40223 & \text{ } & 3 & \text{ } & \text{ } \\ 289 & \text{ } & 40225 & \text{ } & 5 & \text{ } & \text{ } \\ 291 & \text{ } & 40227 & \text{ } & 23 & \text{ } & \text{ } \\ 293 & \text{ } & 40229 & \text{ } & 3 & \text{ } & \text{ } \\ 295 & \text{ } & 40231 & \text{ } & 71 & \text{ } & \text{ } \\ 297 & \text{ } & 40233 & \text{ } & * & \text{ } & \equiv 1 \end{array}\right]$
The first number in the table, $P_{259}$, ends in a 5 and is thus composite. The third number, $P_{263}$, is composite since the sum of its digits is divisible by 3. The third column of the above table shows the least prime factor below 547 (if one is found). An asterisk in the third column means that none of the prime numbers below 547 is a factor. For such numbers, we compute the modular exponentiation $2^{P_j-1} \ (\text{mod} \ P_j)$.
In the above table, 4 of the asterisks lead to the result $2^{P_j-1} \not \equiv 1 \ (\text{mod} \ P_j)$. These numbers $P_j$ are thus composite. For example, for $P_{273}$, the following is the result:
$2^{P_{273}-1} \ (\text{mod} \ P_{273}) \equiv$
55365573520609500639906523255562025480037454102798
631593548187358338340281435
The last number $P_{297}$ in the table is a probable prime base 2 since our calculation shows that $2^{P_{297}-1} \equiv 1 \ (\text{mod} \ P_{297})$. Being a probable prime to base 2 is actually very strong evidence that the number is a prime number. We want even stronger evidence that $P_{297}$ is a prime. For example, we can carry out the Miller-Rabin test in such a way that the probability of mistaking a composite number as prime is at most one in a septillion! A septillion is the square of a trillion. A trillion is $10^{12}$. Thus a septillion is $10^{24}$. One in a septillion is for all practical purposes zero. But if one wants more reassurance, one can always run the Miller-Rabin test with more bases.
___________________________________________________________________
The Miller-Rabin primality test
The Miller-Rabin test is a variant of the Fermat test because Miller-Rabin still relies on Fermat’s little theorem. But Miller-Rabin uses Fermat’s little theorem in such a way that it eliminates the issue of the Fermat test mistakenly identifying Carmichael numbers as prime.
Given an odd positive integer whose “prime or composite” status is not known, the Miller-Rabin test will output “composite” or “probable prime”. Like the Fermat test, the Miller-Rabin test calculates $a^{n-1} \ (\text{mod} \ n)$ for several values of $a$. But the test organizes the congruence $a^{n-1} \ (\text{mod} \ n)$ a little differently to capture additional information about prime numbers.
Here’s how to set up the calculation for Miller-Rabin. Because $n$ is odd, $n-1$ is even. We can factor $n-1$ as a product of a power of 2 and an odd number. So we have $n-1=2^k \cdot q$ where $k \ge 1$ and $q$ is odd ($q$ may not be prime). Then we calculate the following sequence:
$a^q, \ a^{2 \cdot q}, \ a^{2^2 \cdot q}, \cdots, a^{2^{k-1} \cdot q}, \ a^{2^{k} \cdot q} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$
The first term in (1) can be calculated using the fast powering algorithm (using the binary expansion of $q$ to convert the calculation of $a^q$ into a series of squarings and multiplications). Each subsequent term is then the square of the preceding term. The last term is of course $a^{n-1}$. Each squaring or multiplication is reduced modulo $n$. The Miller-Rabin test is based on the following property of prime numbers:
Theorem 1
Let $n$ be an odd prime number such that $n-1=2^k \cdot q$ where $k \ge 1$ and $q$ is odd. Let $a$ be a positive integer not divisible by $n$. Then the following two conditions are true about the sequence (1).
• At least one term in the sequence (1) is congruent to 1 modulo $n$.
• Either the first term in (1) is congruent to 1 modulo $n$ or the term preceding the first 1 is congruent to -1 modulo $n$.
How the Miller-Rabin test works
Suppose that the “prime or composite” status of an odd integer $n$ is not known. If both conditions in the above theorem are satisfied with respect to the number $a$, then $n$ is said to be a strong probable prime in base $a$. If a strong probable prime in base $a$ happens to be composite, then it is said to be a strong pseudoprime in base $a$. In other words, a strong pseudoprime is a composite number that possesses a prime-like property, namely it satisfies the two conditions in Theorem 1 with respect to one base $a$.
The test procedure of Miller-Rabin is to check whether $n$ is a strong probable prime to several bases that are randomly chosen. The following determines the outcome of the test:
• If $n$ is not a strong probable prime in one of the chosen bases, then $n$ is proved to be composite.
• If $n$ is shown to be a strong probable prime in all the chosen bases (say there are $k$ of them), then $n$ is “probably prime” with an error probability of at most $0.25^k$.
To prove the integer $n$ is composite, we look for a base $a$ for which $n$ is not a strong probable prime. Such a value of $a$ is also called a Miller-Rabin witness for the compositeness of $n$. For primality, the Miller-Rabin test does not give a mathematical proof that a number is prime. The Miller-Rabin test is a probable prime test. It gives strong evidence that $n$ is a prime number, with an error probability that can be made arbitrarily small by using a large random sample of values of $a$.
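The procedure just described can be sketched as follows (an illustrative implementation, not the code used for the calculations in this post):

```python
import random

def decompose(n):
    """Write n - 1 = 2^k * q with q odd."""
    k, q = 0, n - 1
    while q % 2 == 0:
        k, q = k + 1, q // 2
    return k, q

def is_strong_probable_prime(n, a):
    """Check the two conditions of Theorem 1 for the odd number n and base a."""
    k, q = decompose(n)
    x = pow(a, q, n)                       # first term of sequence (1)
    if x == 1 or x == n - 1:
        return True
    for _ in range(k - 1):
        x = pow(x, 2, n)                   # square the preceding term, reduced mod n
        if x == n - 1:
            return True
    return False                           # a is a Miller-Rabin witness: n is composite

def miller_rabin(n, rounds=40):
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if not is_strong_probable_prime(n, a):
            return "composite"
    return "probably prime"                # error probability at most 0.25 ** rounds
```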
Take the prime candidate $P_{297}$ that is discussed above. We plan to run the Miller-Rabin test on $P_{297}$ using 40 random values of $a$ where $1 < a < P_{297}$. If $P_{297}$ is shown to be a strong probable prime in all 40 bases, then the prime candidate $P_{297}$ is likely a prime number with an error probability of at most $0.25^{40}$. This probability works out to be less than 1 in 10 raised to 24 (hence the one in a septillion that is mentioned earlier). If one wants stronger evidence, we can compute for more values of $a$. Thus if $P_{297}$ is in actuality a composite number, there is at most a one in septillion chance that the Miller-Rabin test will declare $P_{297}$ a prime number.
How can the Miller-Rabin test make the claim of having such a small error probability? The fact that the error probability of Miller-Rabin can be made arbitrarily small stems from the following fact.
Theorem 2
Suppose that $n$ is a composite odd number. At most 25% of the numbers $a$ in the interval $1 < a < n$ are bases in which $n$ is a strong pseudoprime. Putting it in another way, at least 75% of the numbers in $1 < a < n$ are bases in which $n$ is not a strong pseudoprime.
To paraphrase Theorem 2, if $n$ is composite to begin with, at least 75% of the numbers in $1 < a < n$ will prove its compositeness. That means that at most 25% of the numbers $a$ will exhibit the prime-like property described in Theorem 1. The power of Miller-Rabin comes from the fact that for composite numbers there are more values of $a$ that will give a correct result (in fact, at least 3 times more).
Thus if you apply the Miller-Rabin test on a composite number $n$, you are bound to stumble on a base $a$ that will prove its compositeness, especially if the bases are randomly chosen. Any random choice of $a$ with $1 < a < n$ has at least a 75% chance of being correct on the composite number $n$. In a series of 100 random choices of $a$, it will be hard to miss such values of $a$. The only way that Miller-Rabin can make a mistake by declaring a composite number as prime is to pick all the values of $a$ from the (at most) 25% of the pool of values of $a$ for which $n$ is a strong pseudoprime. This probability is bounded by $0.25^k$ (where $k$ is the number of selections of $a$).
___________________________________________________________________
Applying Miller-Rabin on the prime candidate
The first task is to factor $P_{297}-1$. We find that $P_{297}-1=2^3 \times q$ where $q$ is the following odd number:
$q=$
14474011154664524427946373126085988481658748083205
070504932198000989141205029
For each randomly selected $a$, we calculate the following sequence:
$a^q, \ a^{2q}, \ a^{4q}, \ a^{8q} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)$
The first term is calculated using the fast powering algorithm (a series of squarings and multiplications). Each subsequent term is the square of the preceding term. Each term in the sequence is reduced modulo $P_{297}$. The goal is to see if the two conditions in Theorem 1 are satisfied. One is that one of the 4 values in (2) is a 1. The other is that either the first term in (2) is a 1 or the term preceding the first 1 in (2) is a -1.
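For a single base $a$, the four terms of sequence (2) can be computed as follows (a sketch for illustration; each term is reduced to the symbol 1, -1 or * to mirror the tables below):

```python
n = 2 ** 256 + 297                         # the prime candidate P_297
q = (n - 1) // 8                           # n - 1 = 2^3 * q with q odd

def symbols_for_base(a):
    terms = [pow(a, q, n)]                 # a^q mod n, computed by fast powering
    for _ in range(3):
        terms.append(pow(terms[-1], 2, n)) # a^(2q), a^(4q), a^(8q) mod n
    return ["1" if t == 1 else "-1" if t == n - 1 else "*" for t in terms]

print(symbols_for_base(2))                 # the last symbol is "1" when the conditions of Theorem 1 hold
```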
The following shows the 40 numbers that are randomly chosen in the interval $1 < a < P_{297}$.
$a_1=$
03006957708701194503849170682264647623506815369915
7798209693214442533348380872
$a_2=$
02223067440101780765895379553626469438082041828085
0568523022714143509352911267
$a_3=$
04531895131849504635258523281146698909008537921009
6337435091877410129499153591
$a_4=$
05434508269932993379745836263818598804800824522102
0278113825689716192402178622
$a_5=$
08799241442673378780142202326330306495270149563840
3866810486309815815031353521
$a_6=$
02638607393577034288802880492058261281940769238659
8928068666401909247319838064
$a_7=$
04283430251977183138176255955338099404217762991191
9192783003754562986178981473
$a_8=$
09773398144692973692102006868849010147546139698798
3443958657834362269077067224
$a_9=$
05504666974469005713839308880951115507992521746498
7157086751623602877205126361
$a_{10}=$
11369425784373951812019794994427515082375862595853
6524984616385315102874812557
$a_{11}=$
11280428157869817083329641054154150272024966029283
2165114734540900026838117128
$a_{12}=$
11208322317253928483879618989535357346499197200982
7728283667193655956607063861
$a_{13}=$
05585951853297694372636067012444311272073854408338
4421611399136081624631900538
$a_{14}=$
06831924581003106427566658433259804779354874917795
9811865334330929987281859876
$a_{15}=$
07339174229323952008915772840377019251465052264221
1294344032116313026124007734
$a_{16}=$
05117387267263929174559713098463717229625661656017
7194611080485470890280573816
$a_{17}=$
06599941646668915168578091934085890873056463577356
8090503454939353325803291530
$a_{18}=$
07545265152740184887140788322673806569482388835389
5577110370797470603035554930
$a_{19}=$
02591621894664804222839429868664505564743756550515
2520842332602724614579447809
$a_{20}=$
04791002227899384351266879075743764807247161403811
8767378458621521760044966007
$a_{21}=$
03251071871924939761772100645669847224066002842238
6690935371046248267119967874
$a_{22}=$
07211128555514235391448579740428274673170438137060
9390617781010839144521896079
$a_{23}=$
02839820419745979344283855308465698534375525126267
1701870835230228506944995955
$a_{24}=$
06304631891686637702274634195264042846471748931602
4893381338158934204519928855
$a_{25}=$
06492095235781034422561843267711627481401158404402
2978856782776323231230432687
$a_{26}=$
11078868891712009912929762366314190797941038596568
5459274315695355251764942151
$a_{27}=$
05795069944009506186885816367149671702413127414386
2708093175566185349033983346
$a_{28}=$
01712922833914010148104423892201355622294341143990
7524285008693345292476544524
$a_{29}=$
09743541325262594740093734822046739122734773994479
9814337973200740861495044676
$a_{30}=$
02503872375817370838455279068302037475992008315394
2976462871038003917493744995
$a_{31}=$
06980677383898331402575574511880992071872803011356
6498794763450065008785347168
$a_{32}=$
01507075889390134242331585173319278262699562685820
7121480322563439665642035394
$a_{33}=$
02471785068822350832987019936892052187736451275830
5372059292781558599916131031
$a_{34}=$
10950891460180297156465120507537244257810396062906
9207306297501015755045004254
$a_{35}=$
11052976297188507170707306917942099264941855478856
2965936913589165233381674539
$a_{36}=$
03911878231948499128291863266472008604449261315172
1053813631612297577166335941
$a_{37}=$
06903294587603383022211116535092146484651980588002
9291840261276683214113088012
$a_{38}=$
03942020579038616658412018517396703874933208670283
3087287933190554281896471934
$a_{39}=$
04338728160253711124705740270085271024911573570055
1690460857511205663297661796
$a_{40}=$
06707597137792150532106913489524457238449067437061
7211249957355483821516113140
For each random number $a_j$, we calculated the 4 numbers indicated in sequence (2). The following 3 tables show the results of the calculation.
$\left[\begin{array}{rrrrrrrrr} j & \text{ } & a_j^q & \text{ } & a_j^{2q} & \text{ } & a_j^{4q} & \text{ } & a_j^{8q} \\ \text{ } & \text{ } & \text{ } \\ 1 & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 & \text{ } & 1 \\ 2 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 3 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 \\ 4 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 5 & \text{ } & -1 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 \\ 6 & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 & \text{ } & 1 \\ 7 & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 & \text{ } & 1 \\ 8 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 9 & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 & \text{ } & 1 \\ 10 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 11 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 12 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 13 & \text{ } & -1 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 \\ 14 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 15 & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 & \text{ } & 1 \end{array}\right]$
$\left[\begin{array}{rrrrrrrrr} j & \text{ } & a_j^q & \text{ } & a_j^{2q} & \text{ } & a_j^{4q} & \text{ } & a_j^{8q} \\ \text{ } & \text{ } & \text{ } \\ 16 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 \\ 17 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 18 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 19 & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 & \text{ } & 1 \\ 20 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 \\ 21 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 22 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 23 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 24 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 \\ 25 & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 & \text{ } & 1 \\ 26 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 27 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 28 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 \\ 29 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 \\ 30 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \end{array}\right]$
$\left[\begin{array}{rrrrrrrrr} j & \text{ } & a_j^q & \text{ } & a_j^{2q} & \text{ } & a_j^{4q} & \text{ } & a_j^{8q} \\ \text{ } & \text{ } & \text{ } \\ 31 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 32 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 33 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 34 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 35 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 \\ 36 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 37 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 38 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \\ 39 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 & \text{ } & 1 \\ 40 & \text{ } & \text{ } & \text{ } & \text{ } & \text{ } & -1 & \text{ } & 1 \end{array}\right]$
There are 4 columns of calculation results, one for each term in sequence (2). If a calculation result is a blank in the above tables, it means that the result is a number that is not 1 or -1 modulo $P_{297}$. For example, $a_1^q \ (\text{mod} \ P_{297})$ and $a_2^q \ (\text{mod} \ P_{297})$ are congruent to the following two numbers:
$a_1^q \equiv$
86168678768024029811437552745042076645410792873480
629834883948094184848812907
$a_2^q \equiv$
10235477176842589582260882228891913141693105976929
7597880545619812030150151760
In the above 3 tables, all results match the conditions of Theorem 1. For each number $a_j$, the calculated results are eventually 1. On some of the rows, the first result is a 1. In all the other rows, the term right before the first 1 is a -1. For example, in the first row where $j=1$, the first 1 is $a_1^{4q}$ and the term preceding it is a -1.
The results in the above 3 tables show that the number $P_{297}$ is a strong probable prime in all 40 of the randomly chosen bases. We have very strong evidence that the number $P_{297}$ is a prime number. The probability that it is a composite number but we mistakenly identify it as prime is at most one in a septillion!
___________________________________________________________________
Exercise
In our search for probable primes larger than the 8th Fermat number, we find that the number $P_{301}=2^{256}+301$ is also a probable prime base 2. The following shows the decimal digits:
$P_{301}=$
11579208923731619542357098500868790785326998466564
0564039457584007913129640237
Is it a prime number? Perform the Miller-Rabin test on this number.
___________________________________________________________________
$\copyright \ \ 2014 \ \text{Dan Ma}$
# Is factorization a hard problem?
Is factorization a hard problem? There is plenty of empirical evidence that it is so. Take the following 309-digit number that is known as RSA-1024, an example of an RSA number.
RSA-1024
13506641086599522334960321627880596993888147560566
70275244851438515265106048595338339402871505719094
41798207282164471551373680419703964191743046496589
27425623934102086438320211037295872576235850964311
05640735015081875106765946292055636855294752135008
52879416377328533906109750544334999811150056977236
890927563
RSA-1024, a 1024-bit number, is a product of two prime numbers $p$ and $q$. No one has been able to factor this number, despite the advances in factoring algorithms and computing technology in recent decades. RSA-1024 is part of the RSA Factoring Challenge that was created in 1991. Even though the challenge was withdrawn in 2007, it is believed that people are still taking up the challenge to factor this and other unfactored RSA numbers. In fact, the successful factoring of RSA-1024 or similarly sized numbers would have huge security implications for the RSA algorithm. The RSA cryptosystem is built on the difficulty (if not the impossibility) of factoring large numbers such as RSA-1024.
Yet it is very easy to demonstrate that RSA-1024 is not a prime number. The fact that it is composite can be settled by performing one modular exponentiation. Denote RSA-1024 by $N$. We compute $2^{N-1} \ (\text{mod} \ N)$.
We find that $2^{N-1} \equiv T \ (\text{mod} \ N)$ where $T$ is the following 309-digit number.
$T=$
12093909443203361586765059535295699686754009846358
89512389028083675567339322020593385334853414711666
28419681241072885123739040710771394053528488357104
98409193003137847878952260296151232848795137981274
06300472693925500331497519103479951096634123177725
21248297950196643140069546889855131459759160570963
857373851
Obviously $T$ is not 1. This fact is enough to prove that the modulus $N$ is not a prime number. This is because the number $N$ lacks a property possessed by all prime numbers. According to Fermat’s little theorem, if $N$ were prime, then $a^{N-1} \equiv 1 \ (\text{mod} \ N)$ for all integers $a$ that are relatively prime to $N$. In particular, if $N$ were prime, then we would have $2^{N-1} \equiv 1 \ (\text{mod} \ N)$, the opposite of our result.
The modular exponentiation $a^{N-1} \ (\text{mod} \ N)$ discussed here can be performed using the fast powering algorithm, which runs in polynomial time. In the fast powering algorithm, the binary expansion of the exponent is used to convert the modular exponentiation into a series of squarings and multiplications. If the exponent $N-1$ is a $k$-bit number, then it takes $k-1$ squarings and at most $k-1$ multiplications. For RSA-1024, it takes 1023 squarings and at most 1023 multiplications (in this instance exactly 507 multiplications). This calculation, implemented in a modern computer, can be done in seconds.
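The check just described takes only a couple of lines (a sketch using Python's built-in `pow`; the digits assigned to `N` are the RSA-1024 value quoted above, and the operation counts are read off the binary expansion of the exponent):

```python
# N is the 309-digit RSA-1024 value quoted at the beginning of this post.
N = int(
    "13506641086599522334960321627880596993888147560566"
    "70275244851438515265106048595338339402871505719094"
    "41798207282164471551373680419703964191743046496589"
    "27425623934102086438320211037295872576235850964311"
    "05640735015081875106765946292055636855294752135008"
    "52879416377328533906109750544334999811150056977236"
    "890927563"
)

T = pow(2, N - 1, N)                       # a single modular exponentiation via fast powering
print(T != 1)                              # True: N fails Fermat's little theorem, so N is composite

bits = bin(N - 1)[2:]                      # binary expansion of the exponent
print(len(bits) - 1)                       # 1023 squarings
print(bits.count("1") - 1)                 # number of multiplications (507, per the count quoted above)
```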
The above calculation is a vivid demonstration that factoring is hard while detecting the primality or compositeness of a number is a much simpler problem.
The minimum RSA key length prior to the end of 2013 was 1024 bits. After 2013, the minimum RSA key length is 2048 bits. In fact, the largest RSA number is RSA-2048 (which has 2048 bits and 617 decimal digits), and it is expected to stay unfactored for years to come barring dramatic advances in factoring algorithms or computing capabilities.
___________________________________________________________________
Exercise
Using a software package that can handle modular exponentiation involving large numbers, it is easy to check the “prime versus composite” status of a large number. Find a number $n$ whose prime factorization is not known. Either use known numbers such as RSA numbers or randomly generate a large number. Then calculate the modular exponentiation $a^{n-1} \ (\text{mod} \ n)$ for several values of $a$ (it is a good practice to start with $a=2$). If the answer is not congruent to 1 for one value of $a$, then we know $n$ is composite. If the exponentiation is congruent to 1 for all of the several values of $a$, then $n$ is likely a prime number.
___________________________________________________________________
$\copyright \ \ 2014 \ \text{Dan Ma}$ |
Consider the function Find the average slope of this function on the interval $( 0 , 2 )$.
By the Mean Value Theorem, we know there exists a $c$ in the open interval $(0, 2)$ such that $f'(c)$ is equal to this mean slope. Find the two values of $c$ in the interval which work; enter the smaller root first.
# Enumerations of trees and forests related to branching processes and random walks
Report Number
482
Authors
Jim Pitman
Citation
In Microsurveys in Discrete Probability, edited by D. Aldous and J. Propp. DIMACS Ser. Discrete Math. Theoret. Comput. Sci.
Abstract
In a Galton-Watson branching process with offspring distribution $(p_0, p_1, \ldots)$ started with $k$ individuals, the distribution of the total progeny is identical to the distribution of the first passage time to $-k$ for a random walk started at 0 which takes steps of size $j$ with probability $p_{j+1}$ for $j \ge -1$. The formula for this distribution is a probabilistic expression of the Lagrange inversion formula for the coefficients in the power series expansion of $f(z)^k$ in terms of those of $g(z)$ for $f(z)$ defined implicitly by $f(z) = z g(f(z))$. The Lagrange inversion formula is the analytic counterpart of various enumerations of trees and forests which generalize Cayley's formula $k n^{n-k-1}$ for the number of rooted forests labeled by a set of size $n$ whose set of roots is a particular subset of size $k$. These known results are derived by elementary combinatorial methods without appeal to the Lagrange formula, which is then obtained as a byproduct. This approach unifies and extends a number of known identities involving the distributions of various kinds of random trees and random forests.
## Instructions:
This assignment is worth either $20\%$ or $25\%$ of the final grade, and is worth a total of 75 points. All working must be shown for all questions. For questions which ask you to write a program, you must provide the code you used. If you have found code and then modified it, then the original source must be cited. The assignment is due by 5 pm Friday 1st of October (Friday of Week 8), using Turnitin on Wattle. Late submissions will only be accepted with prior written approval. Good luck.
Problem 1.
[10 marks] In this exercise we will consider four different specifications for forecasting monthly Australian total retail sales. The dataset AUSRetail2021.csv (available on Wattle) contains three columns: the first column contains the date; the second contains the sales figures for that month; and the third contains Australian GDP for that month. The data runs from January 1992 to January 2021.
Let $M_{i t}$ be a dummy variable that denotes the month of the year. Let $D_{i t}$ be a dummy variable which denotes the quarter of the year. The four specifications we consider are
\begin{aligned} &S_{1}: y_{t}=a_{0}+a_{1} t+\alpha_{4} D_{4 t}+\epsilon_{t} \\ &S_{2}: y_{t}=a_{1} t+\sum_{i=1}^{4} \alpha_{i} D_{i t}+\epsilon_{t} \\ &S_{3}: y_{t}=a_{0}+a_{1} t+\beta_{12} M_{12, t}+\epsilon_{t} \\ &S_{4}: y_{t}=a_{1} t+\sum_{i=1}^{12} \beta_{i} M_{i t}+\epsilon_{t} \end{aligned}
where $\mathbb{E} \epsilon_{t}=0$ for all $t$.
a) For each specification, describe this specification in words.
b) For each specification, estimate the values of the parameters, and compute the MSE, AIC, and BIC. If you make any changes to the csv file, please describe the changes you make. As always, you must include your code.
c) For each specification, compute the MSFE for the 1-step and 3-step ahead forecasts, with the out-of-sample forecasting exercise beginning at $T_{0}=60$.
d) For each specification, plot the out-of-sample forecasts and comment on the results.
Problem 2.
[10 marks] Now add to Question 1 the additional assumption that $\epsilon_{t} \sim \mathcal{N}\left(0, \sigma^{2}\right)$. One estimator for $\sigma^{2}$ is
$$\hat{\sigma}^{2}=\frac{1}{T-k} \sum_{t=1}^{T}\left(y_{t}-\hat{y}_{t}\right)^{2}$$ where $\hat{y}_{t}$ is the estimated value of $y_{t}$ in the model and $k$ is the number of regressors in the specification.
a) For each specification $\left(S_{1}, \ldots, S_{4}\right)$, compute $\hat{\sigma}^{2}$.
b) For each specification, make a $95\%$ probability forecast for the sales in April 2021.
c) For each specification, compute the probability that the retail sales in April 2021 will be greater than $\$31 \mathrm{bn}$. According to the FRED series AUSSARTMDSMEI, what was the actual retail sales value for that month?

d) Do you think the assumption that $\epsilon_{t}$ is iid is a reasonable assumption for this data series?

Problem 3.

[10 marks] Here we investigate whether adding GDP as a predictor can improve our forecasts. Consider the following modified specifications:

\begin{aligned} &S_{1}^{\prime}: y_{t}=a_{0}+a_{1} t+\alpha_{4} D_{4 t}+\gamma x_{t-h}+\epsilon_{t} \\ &S_{2}^{\prime}: y_{t}=a_{1} t+\sum_{i=1}^{4} \alpha_{i} D_{i t}+\gamma x_{t-h}+\epsilon_{t} \\ &S_{3}^{\prime}: y_{t}=a_{0}+a_{1} t+\beta_{12} M_{12, t}+\gamma x_{t-h}+\epsilon_{t} \\ &S_{4}^{\prime}: y_{t}=a_{1} t+\sum_{i=1}^{12} \beta_{i} M_{i t}+\gamma x_{t-h}+\epsilon_{t} \end{aligned}

where $\mathbb{E} \epsilon_{t}=0$ for all $t$, and $x_{t-h}$ is GDP at time $t-h$. For each specification, compute the MSFE for the 1-step ahead and the 3-step ahead forecasts, with the out-of-sample forecasting exercise beginning at $T_{0}=60$. For each specification, plot the out-of-sample forecasts and comment on the results.

Problem 4.

[15 marks] Here we investigate whether Holt-Winters smoothing can improve our forecasts. Use a Holt-Winters smoothing method with seasonality to produce 1-step ahead and 3-step ahead forecasts and compute the MSFE for these forecasts. You should use smoothing parameters $\alpha=\beta=\gamma=0.4$ and start the out-of-sample forecasting exercise at $T_{0}=50$. Plot these out-of-sample forecasts and comment on the results. Additionally, estimate the values for $\alpha, \beta$, and $\gamma$ which minimise the MSFE. Find the MSFE for these parameter values and compare it to the baseline $\alpha=\beta=\gamma=0.4$.

Problem 5.

[5 marks] Questions 1, 3 and 4 each provided alternative models for forecasting Australian Retail Sales. Compare the efficacy of these forecasts. Your comparison should include discussions of MSFE, but must also make qualitative observations (typically based on your graphs).

Problem 6.

[10 marks] Develop another model, either based on material from class or otherwise, to forecast Australian Retail Sales. Your new model must perform better (have a lower MSFE or MAFE) than all models from Questions 1, 3, and 4. As part of your response to this question you must provide:

a) a brief written explanation of what your model is doing,

b) a brief statement on why you think your new model will perform better,

c) any relevant equations or mathematics/statistics to describe the model,

d) the code to run the model, and

e) the MSFE and/or MAFE error found by your model, and a brief discussion of how this compares to previous cases.

Problem 7.

[15 marks] Consider the AR(2) process with drift

$$y_{t}=\mu+\rho_{1} y_{t-1}+\rho_{2} y_{t-2}+\epsilon_{t}$$

where the errors follow an $\mathrm{AR}(1)$ process

$$\epsilon_{t}=\phi \epsilon_{t-1}+u_{t}, \quad \mathbf{u} \sim \mathcal{N}\left(0, \sigma^{2} I\right)$$

for $t=1, \ldots, T$ and $\epsilon_{0}=0$. Suppose $\phi$ is known. Find (analytically) the maximum likelihood estimators for $\mu, \rho_{1}, \rho_{2}$, and $\sigma^{2}$. [Hint: First write $y$ and $\epsilon$ in vector/matrix form. You may wish to use different looking forms for each. Find the distribution of $\epsilon$ and $y$. Then apply some appropriate calculus.]
# 5% of Zion: Evaluating the potential for probability-split trades in professional sports
#### Abstract
In this paper, I propose and evaluate a novel extension of the analytics revolution in professional sports: probability-split trades. Under this plan, teams could trade probability shares in draft assets held. For example, a team like the New York Knicks could trade their first and second round picks for a 5% chance of winning the 1st overall pick. In the last two decades, the analytics revolution has transformed professional sports. General managers, coaches, and even players leverage the underlying math to gain any sort of competitive advantage, while major sports leagues view the analytics revolution with passive glee, as their potential viewer segments continue to expand. This paper is an extension of that revolution, outlining the details, feasibility, and potential benefits of a novel plan with the potential to increase exchange efficiency, boost revenue and sustain league growth in the NFL and NBA.
## 1 Introduction
In the last two decades, the analytics revolution has transformed professional sports (Fry & Ohlmann, 2012; Davenport, 2014). Moneyball captivated the attention of fans and owners alike. The Sloan Conference has become a cultural phenomenon, bringing media and management together to revel in achievements and rejoice about the future of quantitative sports analysis. Each year, the models become more accurate, the hiring potential for young sports-oriented quants becomes greater, and interest in analytics among fans becomes more mainstream. General managers, coaches, and even players leverage the underlying math to gain any sort of competitive advantage, while major sports leagues view the analytics revolution with passive glee, as their potential viewer segments continue to expand. Players are more efficient, coaches are more informed, general managers are smarter, and leagues are richer - so what could be wrong with this analytics revolution? In my estimation, just one thing: it does not quite extend far enough.
In this paper, I propose and evaluate a novel extension of the analytics revolution in professional sports: probability-split trades. Under this plan, teams could trade probability shares in draft assets held. For example, in the 2020 NFL draft, Joe Burrow was selected by the Cincinnati Bengals with the first overall pick. Under this proposal, the New England Patriots, who with the departure of Tom Brady seem to be in the market for a quarterback, could trade their draft assets for a 10% probability share of the first overall pick. In other words, the New England Patriots could do something like trade their fifth, sixth, and seventh round picks for a 10% chance of winning the 1st overall pick, if of course, the Cincinnati Bengals agreed to the terms.
Before each draft, an event, much like the NBA lottery, would be held, at which the winner of the asset is determined via ping pong ball drawing. In this case, 90 ping pong balls would be pro-Cincinnati and 10 would be pro-Patriot. If a Cincinnati ping pong ball is pulled, the Bengals retain their first overall pick and also get the fifth, sixth, and seventh round picks of the Patriots. If a Patriots ping pong ball is pulled, the Patriots get the first overall pick, but still lose their fifth, sixth, and seventh round picks to the Bengals (because they traded a 100% probability share in those picks).
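As a quick illustration of the mechanics (a minimal sketch, not taken from the proposal itself; the team labels, shares, and trial count are mine), the drawing is just a weighted random draw, and simulating it confirms that each side wins the pick in proportion to its probability share:

# Minimal sketch: simulate the 90/10 Bengals/Patriots drawing many times.
function draw_winner(p_patriots)
    return rand() < p_patriots ? "NE" : "CIN"   # one ping pong ball drawing
end

wins = Dict("CIN" => 0, "NE" => 0)
for i in 1:100000
    wins[draw_winner(0.10)] += 1                # tally who controls the pick
end
println(wins)   # roughly Dict("CIN" => 90000, "NE" => 10000)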
This paper outlines the details, feasibility, and potential benefits of such a plan in major professional sports, focusing on implications in the NBA and NFL specifically, due to the increased draft interest annually in these two leagues.
## 2 Barter and the existing trade model
Primarily, teams conduct trades in major professional sports through barter-style transactions (Marburger, 2009). Both teams mutually exchange assets that, in theory, make both teams better off. In certain circumstances, cash itself is included as part of the trade; however, usually only player and draft assets are exchanged in trades. This existing system of trades in American major professional sports is well described by a barter model of transactions in economics, whereby two independent parties exchange goods or services directly without the use of money. Time and time again in the economic literature, barter systems have been shown to be inefficient (Marburger, 2009). First, any transaction in a barter system depends on a ‘double coincidence of wants’ - both parties have to possess what the other party wants. In other words, for the New Orleans Saints to complete a trade with the Miami Dolphins, New Orleans has to possess something that Miami wants and Miami has to possess something that New Orleans wants; only then can a trade occur.
Another inefficiency in barter-based trade systems is the lack of common measure of value. In any economic system, money usually represents the value in a good or service; therefore, price quickly and easily expresses value to both parties. In a barter system, the value of two things cannot be easily compared to one another because of the absence of price. In sports, players and picks are traded with ill-defined value, adding to the inefficiency of the existing barter trade model.
Perhaps most significantly, the existing barter model suffers from what economists call the ‘indivisibility of goods’. Suppose I have a chicken and want to sell it for a wrench. If the wrench salesman values his wrench at the value of two chickens, then no trade can occur, because the wrench salesman cannot sell me half of a wrench. That is the problem with the existing trade model in American professional sports. Suppose that in the 2019 NBA draft, the Portland Trailblazers wanted to trade up to select Zion Williamson, the first overall pick in the draft. If Portland and New Orleans both valued Portland’s 25th overall pick as a $5,000,000 asset and the number one overall pick as a $100,000,000 asset, no trade could possibly mend that gap in value. Portland could work with existing assets, potentially trading players like CJ McCollum and Jusuf Nurkic to try to get to $100,000,000 in tradable assets; however, you still can’t trade half of a player, so the inefficiency of indivisibility likely remains.

With the inherent inefficiency of the barter-based trade system in American professional sports, it is important to remember that draft pick assets are not inherently indivisible goods, like wrenches. Whereas challenges like the ‘double coincidence of wants’ and a lack of ‘common measure of value’ are difficult to overcome in the existing barter model of trades, the problem of indivisibility of goods would be much easier to solve. If the Portland Trailblazers wanted to trade up to select Zion Williamson, and both Portland and New Orleans valued Portland’s 25th overall pick as a $5,000,000 asset and the number one overall pick as a $100,000,000 asset, then, in equilibrium, Portland should be willing to trade their 25th overall pick in the 2019 NBA draft for a 5% probability share of the number one overall pick in the draft. On the other end of the bargain, the New Orleans Pelicans would also be willing to trade a 5% probability share of the number one overall pick in exchange for the 25th overall pick, while still retaining a 95% share of the first overall pick. Of course, this model oversimplifies asset evaluation, assuming that both teams assess at the same value. In reality, it is likely that no two teams would value each asset, whether pick or player, exactly the same. That reality does not challenge the underlying validity of the proposal, but rather creates uncertainty and confers advantages to general managers that are more adept at asset evaluation and managing uncertainty.
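To make the equilibrium arithmetic explicit (a toy sketch using the illustrative valuations assumed above, not real market values):

# Toy calculation: what probability share of the No. 1 pick does the
# 25th pick "buy" if both sides agree on the dollar values below?
value_pick25 = 5.0e6        # assumed value of the 25th overall pick
value_pick1  = 1.0e8        # assumed value of the 1st overall pick
fair_share   = value_pick25 / value_pick1    # 0.05, i.e. a 5% probability share
println(fair_share * value_pick1)            # expected value received: 5.0e6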
## 3 The Plan
In the probability-split trades plan, teams would be able to specify the probability share of any traded draft asset. Teams would certainly still be allowed to trade draft assets with a 100% probability share. If the New Orleans Saints wanted to trade two 100% probability second round draft picks for a 100% probability share of Tampa Bay’s first round pick, both parties could certainly do that. Trades involving players could also include partial shares of draft picks; however, teams could not trade probability shares of players. The Washington Wizards could not trade a first round pick for a 2% chance at LeBron James in the offseason; the probability shares would apply exclusively to draft assets in this proposal.
Before each draft, an event, much like the NBA lottery, would be held, at which the winner of the asset is determined via ping pong ball drawing or some other random drawing. The teams involved in each trade would receive their appropriate share of ping pong balls and then the commissioner of the league would draw which team wins the control of the asset. Post-drawing, the asset would be 100% controlled by the winning team.
## 5 Management implications
The imposition of the probability-split trade plan would confer advantages to general managers and team decision-makers that are more adept at asset evaluation, working with statistics, and managing uncertainty. With that being said, the dawn of the analytics revolution in sports has already conferred benefits to team decision-makers with those unique skill sets. Calculating expected values based on probability and asset evaluation would not be tremendously difficult for the teams of quantitative analysts that most major professional sports teams employ today; the challenge would still mostly lie with asset evaluation in the first place, which plagued general managers before teams even owned computers.
General Managers and decision-makers would also have to manage the additional wrinkle of fan expectations. Fans and decision-makers likely have different levels of value assigned to a seventh round pick in the NFL. In theory, the split trade plan allows each team to gain a probability share of any high first round pick. That potential is relatively nonexistent for most teams in the NFL draft presently. With that new hope in mind, fan-team relations may face different challenges after the imposition of such a plan.
## 6 League policy and potential challenges
As currently constructed, the official NFL and NBA rules do not outlaw probability-share trades (NFL Football Operations, 2021; NBA PA, 2021). In theory, if teams were willing to set up a random chance drawing between the two parties, they could carry out these types of trades without a league-wide drawing event; however, a league-wide event would offer tremendous revenue potential for the league.
The largest potential challenge to the probability-split trade plan is inertia. If team decision-makers are unwilling or unable to utilize split trades, then leagues will lose the potential revenue from a random drawing event. If events do not make leagues money, they will likely be scrapped. The recent analytics revolution provides optimism in regard to this challenge. Analytics-driven general managers with the guidance of quantitative analysis teams should be able to utilize the split trade system to drive the league towards a more efficient equilibrium, conferring benefits to the league at large.
Another potential challenge is the high volume of draft assets exchanged on draft day itself. As currently constructed, these assets would be viewed as current players and thus would not be eligible for a probability-split trade. In this case, it is probably most feasible for draft day trades to be 100% probability-share only. All pre-draft trades would still be eligible for probability splits.
## 7 Conclusions
In light of the analytics revolution that has transformed professional sports, teams and leagues are increasingly evaluating quantitatively-driven ways to gain competitive advantages. The proposed probability-split trade system would both allow teams to compete in a more efficient system and allow leagues to boost revenue and sustain growth. Initially, decision-makers may be unwilling or unable to utilize the new system, but the analytics revolution has taught us that competition will drive the game towards a more efficient equilibrium. This proposal is the next step forward towards that equilibrium - an equilibrium in which any NBA team, even the New York Knicks, can buy in for a 5% chance to win LeBron James or Zion Williamson.
## References
1. Davenport, T. H., 2014, What businesses can learn from sports analytics, MIT Sloan Management Review 55(4), 10.
2. Fry, M. J., & Ohlmann, J. W., 2012, Introduction to the special issue on analytics in sports, part I: General sports applications.
3. Lewis, J., 2019, NBA Draft Lottery Up Big to 16-Year High. Retrieved from https://www.sportsmediawatch.com/2019/05/nba-draft-lottery-ratings-high-espn/
4. Marburger, D. R., 2009, Why Do Player Trades Dominate Sales? Journal of Sports Economics 10(4), 335–350.
5. NBA PA, 2021, Collective Bargaining Agreement. Retrieved from https://nbpa.com/cba
6. NFL Football Operations, 2021, The Rules of the Draft. Retrieved from https://operations.nfl.com/the-players/the-nfl-draft/the-rules-of-the-draft/
# MonthOfJulia Day 6: Composite Types
2015-09-05 Andrew B. Collier
I’ve had a look at the basic data types available in Julia as well as how these can be stashed in collections. What about customised-composite-DIY-build-your-own style types?
Composite types are declared with the type keyword. To illustrate we’ll declare a type for storing geographic locations, with attributes for latitude, longitude and altitude. The type immediately has two methods: a default constructor and a constructor specialised for arguments with data types corresponding to those of the type’s attributes. More information on constructors can be found in the documentation.
julia> type GeographicLocation
latitude::Float64
longitude::Float64
altitude::Float64
end
julia> methods(GeographicLocation)
# 2 methods for generic function "GeographicLocation":
GeographicLocation(latitude::Float64,longitude::Float64,altitude::Float64)
GeographicLocation(latitude,longitude,altitude)
Creating instances of this new type is simply a matter of calling the constructor. The second instance below clones the type of the first instance. I don’t believe I’ve seen that being done with another language. (That’s not to say that it’s not possible elsewhere! I just haven’t seen it.)
julia> g1 = GeographicLocation(-30, 30, 15)
GeographicLocation(-30.0,30.0,15.0)
julia> typeof(g1) # Interrogate type
GeographicLocation (constructor with 3 methods)
julia> g2 = typeof(g1)(5, 25, 165) # Create another object of the same type.
GeographicLocation(5.0,25.0,165.0)
We can list, access and modify instance attributes.
julia> names(g1)
3-element Array{Symbol,1}:
:latitude
:longitude
:altitude
julia> g1.latitude
-30.0
julia> g1.longitude
30.0
julia> g1.latitude = -25 # Attributes are mutable
-25.0
Additional “outer” constructors can provide alternative ways to instantiate the type.
julia> GeographicLocation(lat::Real, lon::Real) = GeographicLocation(lat, lon, 0)
GeographicLocation (constructor with 3 methods)
julia> g3 = GeographicLocation(-30, 30)
GeographicLocation(-30.0,30.0,0.0)
Of course, we can have collections of composite types. In fact, these composite types have essentially all of the rights and privileges of the built in types.
julia> locations = [g1, g2, g3]
3-element Array{GeographicLocation,1}:
GeographicLocation(-25.0,30.0,15.0)
GeographicLocation(5.0,25.0,165.0)
GeographicLocation(-30.0,30.0,0.0)
The GeographicLocation type declared above is a “concrete” type because it has attributes and can be instantiated. You cannot derive subtypes from a concrete type. You can, however, declare an abstract type which acts as a place holder in the type hierarchy. As opposed to concrete types, an abstract type cannot be instantiated but it can have subtypes.
julia> abstract Mammal
julia> type Cow <: Mammal
end
julia> Mammal() # You can't instantiate an abstract type!
ERROR: type cannot be constructed
julia> Cow()
Cow()
The immutable keyword will create a type where the attributes cannot be modified after instantiation.
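A quick illustration (the type name FixedLocation is mine, and the exact error message varies between Julia versions):

julia> immutable FixedLocation
latitude::Float64
longitude::Float64
end

julia> f1 = FixedLocation(-30.0, 30.0)
FixedLocation(-30.0,30.0)

julia> f1.latitude = -25.0 # Attributes of an immutable type cannot be reassigned
ERROR: type FixedLocation is immutable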
Additional ramblings and examples of composite types can be found on github. Also I’ve just received an advance copy of Julia in Action by Chris von Csefalvay which I’ll be reviewing over the next week or so. |
## Data Sufficiency Topics

- How much revenue did a band receive from the sale of 1200 albums, some of which were sold at full price and... (by BTGmoderatorDC · 0 replies · last post by BTGmoderatorDC, Sun Apr 05, 2020 5:05 pm)
- Byrne and some of his friends go out to dinner and spend $111, excluding tax and tip. If the group included both men and... (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Sun Apr 05, 2020 3:54 pm)
- What distance did Jane travel? (1) Bill traveled 40 miles in 40... (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Sun Apr 05, 2020 3:54 pm)
- Each candle in a particular box is either round or square and either scented or unscented. If 60% of the... [Source: Princeton Review] (by BTGmoderatorLU · 1 reply · last post by deloitte247, Sun Apr 05, 2020 1:23 pm)
- If Mary always takes the same route to work, how long did it take Mary to get to work on Friday? (by BTGModeratorVI · 0 replies · last post by BTGModeratorVI, Sun Apr 05, 2020 9:10 am)
- Set K consists of a finite number of consecutive odd integers. If x is the smallest number in K and y is the greatest... (by BTGModeratorVI · 0 replies · last post by BTGModeratorVI, Sun Apr 05, 2020 9:09 am)
- What is the median of the data set S that consists of the integers 17, 29, 10, 26, 15, and x? (by BTGModeratorVI · 0 replies · last post by BTGModeratorVI, Sun Apr 05, 2020 9:05 am)
- If the average (arithmetic mean) of a, b and c is m, is their standard deviation less than 1? (by BTGModeratorVI · 0 replies · last post by BTGModeratorVI, Sun Apr 05, 2020 9:04 am)
- If the average of four numbers is 35, how many of the numbers are less than 35? (by BTGModeratorVI · 0 replies · last post by BTGModeratorVI, Sun Apr 05, 2020 9:04 am)
- If x and y are two points on the number line what is the value of x + y? (by BTGModeratorVI · 0 replies · last post by BTGModeratorVI, Sun Apr 05, 2020 9:02 am)
- How much time did it take a certain car to travel 400 kilometers? (by BTGModeratorVI · 0 replies · last post by BTGModeratorVI, Sun Apr 05, 2020 9:01 am)
- If it took Carlos 1/2 hour to cycle from his house to the library yesterday, was the distance that he cycled greater tha... (by BTGModeratorVI · 0 replies · last post by BTGModeratorVI, Sun Apr 05, 2020 9:00 am)
- A store sold 6 bicycles with an average sale price of $1,000. What was the price of the most expensive bicycle? (by BTGModeratorVI · 0 replies · last post by BTGModeratorVI, Sun Apr 05, 2020 8:59 am)
- Word Problems: If 90 students auditioned for the school musical, how many were... (by swerve · 1 reply · last post by deloitte247, Sun Apr 05, 2020 7:48 am)
- Is a < 0? (1) a³ < a² + 2a (2) a² > a³; Answer: E (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Sun Apr 05, 2020 5:50 am)
- Is x < 1/y? 1) y > 0 2) xy < 1; Answer: C (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Sun Apr 05, 2020 5:49 am)
- Is x > y? (1) -4x + 2y < y - 3x (2) wx > wy; Answer: A (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Sun Apr 05, 2020 5:47 am)
- What is the remainder when the positive three-digit number xyz is divided by 9? (by BTGmoderatorDC · 0 replies · last post by BTGmoderatorDC, Sat Apr 04, 2020 8:20 pm)
- If 0 < a < b < c, which of the following statements must be true? (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Sat Apr 04, 2020 2:40 pm)
- Is x > y? 1) x + a > x - a 2) ax > ay; Answer: C (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Sat Apr 04, 2020 2:39 pm)
- Divisibility/Multiples/Factors, Number Properties: If $$v=w^2yz,$$ how many positive factors does $$v$$ have?... (by swerve · 2 replies · last post by deloitte247, Sat Apr 04, 2020 2:12 pm)
- There are 4 times as many helicopters as there are fighter jets in Sangala's Army... [Economist GMAT] (by AAPL · 1 reply · last post by deloitte247, Sat Apr 04, 2020 1:28 pm)
- Is xy > 0? (1) x - y > -2 (2) x - 2y < -6; Answer: C (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Sat Apr 04, 2020 5:44 am)
- If 2.00X and 3.00Y are two numbers in decimal form with thousandths digits X and Y, is 3(2.00X) > 2(3.00Y)? (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Sat Apr 04, 2020 5:42 am)
- Is 0.4 < x < 0.8? (1) 407x < 376 (2) 1400x > 1240 (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Sat Apr 04, 2020 5:41 am)
- If x, y and k are integers, is xy divisible by 3? (1) y = 2^(16) - 1... (by BTGmoderatorDC · 0 replies · last post by BTGmoderatorDC, Fri Apr 03, 2020 5:15 pm)
- x1, x2, …, x10 are real numbers. a1 = x1, a2 is defined as the average of {x1, x2}, a3 as the average of {x1, x2, x3}... (by Max@Math Revolution · 0 replies · last post by Max@Math Revolution, Fri Apr 03, 2020 1:33 am)
- a, b, and c are integers. Is 2(a^4 + b^4 + c^4) a perfect square? (by Max@Math Revolution · 1 reply · last post by Max@Math Revolution, Fri Apr 03, 2020 1:31 am)
- A novelist pays her agent 15% of the royalties she receives from her novels. She pays her publicist 5% of the royalties... (by BTGmoderatorDC · 1 reply · last post by Jay@ManhattanReview, Thu Apr 02, 2020 11:09 pm)
- Rachel drove the 120 miles from A to B at a constant speed. What was this speed? (by BTGmoderatorDC · 1 reply · last post by Brent@GMATPrepNow, Thu Apr 02, 2020 7:11 am)
- If n is a positive integer, is n odd? (1) 3n is odd. (2) n + 3 is... (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Thu Apr 02, 2020 7:09 am)
- For a certain set of n numbers, where n > 1, is the average (arithmetic mean) equal to the median? [GMAT Prep] (by AAPL · 0 replies · last post by AAPL, Thu Apr 02, 2020 6:29 am)
- What is the value of f(2019)? 1) f(3) = 5 2) f(x+2) = f(x) - 1/f(x)... (by Max@Math Revolution · 0 replies · last post by Max@Math Revolution, Thu Apr 02, 2020 12:11 am)
- N is an integer. Is N a perfect square? 1) N is 1 greater than the... (by Max@Math Revolution · 1 reply · last post by Max@Math Revolution, Thu Apr 02, 2020 12:10 am)
- How many of the students in a certain class are taking both a history and a science course? (by BTGmoderatorDC · 0 replies · last post by BTGmoderatorDC, Wed Apr 01, 2020 5:03 pm)
- Set D is a new set created by combining all the terms of Sets A, B, and C... [Source: Veritas Prep] (by BTGmoderatorLU · 0 replies · last post by BTGmoderatorLU, Wed Apr 01, 2020 12:50 pm)
- W, X, Y, and Z represent distinct digits such that WX * YZ = 1995. What is the value of W? (by BTGModeratorVI · 2 replies · last post by Brent@GMATPrepNow, Wed Apr 01, 2020 4:41 am)
- If x and y are positive integers, what is the value of xy? (1) The... (by BTGmoderatorDC · 2 replies · last post by Brent@GMATPrepNow, Wed Apr 01, 2020 4:41 am)
- If y^c = y^(d+1), what is the value of y? (1) y < 1 (2) d = c... (by BTGModeratorVI · 2 replies · last post by Brent@GMATPrepNow, Wed Apr 01, 2020 4:40 am)
- Eighteen tokens, each of which is either a subway token or a bus token, are distributed between a glass... [Economist GMAT] (by AAPL · 0 replies · last post by AAPL, Wed Apr 01, 2020 2:00 am)
- x, y and z are real numbers with xyz = 1. What is the value of (x - 1)(y - 1)(z - 1)? (by Max@Math Revolution · 1 reply · last post by Max@Math Revolution, Wed Apr 01, 2020 1:02 am)
- If a, b, and c are consecutive even integers, what is the value of b? [Source: Princeton Review] (by BTGmoderatorLU · 0 replies · last post by BTGmoderatorLU, Tue Mar 31, 2020 6:13 pm)
- If x, y, and z are three-digit positive integers and if x = y + z, is... (by BTGModeratorVI · 2 replies · last post by Brent@GMATPrepNow, Tue Mar 31, 2020 1:43 pm)
- If x is an integer, is x odd? (1) x + 4 is an odd integer. (2) x/3 is... (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Tue Mar 31, 2020 1:42 pm)
- If the total price for n copies of a book is $31.5, what is the price per copy of the book? [Source: Princeton Review] (by BTGmoderatorLU · 1 reply · last post by Jay@ManhattanReview, Tue Mar 31, 2020 5:07 am)
- What is the remainder when n is divided by 26, given that n divided by... (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Tue Mar 31, 2020 4:58 am)
- If x is an integer greater than 0, what is the remainder when x is divided by 4? (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Tue Mar 31, 2020 4:58 am)
- Statistics and Sets Problems: Professor Vasquez gave a quiz to two classes. Was the range of scores... (by AAPL · 1 reply · last post by Brent@GMATPrepNow, Tue Mar 31, 2020 4:57 am)
- When 1,000 children were inoculated with a certain vaccine, some developed inflammation at the site of the inoculation... (by BTGmoderatorDC · 1 reply · last post by Brent@GMATPrepNow, Tue Mar 31, 2020 4:56 am)
- What is the remainder when integer n is divided by 10? (1) When n is... (by BTGModeratorVI · 1 reply · last post by Brent@GMATPrepNow, Mon Mar 30, 2020 6:21 am)
# Numerical analysis of parabolic obstacle problem
I want to solve a parabolic obstacle problem, written as a variational inequality: For almost all $t\in [0,T]$
\begin{align*} \langle u'(t), v - u(t)\rangle +a(u(t),v-u(t)) \geq \langle f(t),v-u(t)\rangle \quad \forall v \in K \end{align*}
with $K = \{v \in H^1_0(\Omega) ~\vert ~ v \geq \chi ~ \text{ f.a.a }~ x \in \Omega\}$ and $u(0) = u_0$. Now we discretize this inequality in time using, for instance, the explicit Euler scheme. After this we need to solve an elliptic obstacle problem in each timestep. This will be done by a primal-dual active set method following Bartels' book "Numerical Methods for Nonlinear Partial Differential Equations". Can someone give me a hint or a literature reference on how to prove the convergence of this method to a solution of the parabolic obstacle problem?
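For reference, here is a minimal sketch (my own notation, not tied to a particular reference) of the kind of time-discrete problem such convergence results are usually stated for: with step size $\tau = T/N$ and $u^0 = u_0$, an implicit (backward Euler) discretization seeks $u^n \in K$, $n = 1,\dots,N$, such that

\begin{align*} \Big\langle \frac{u^n - u^{n-1}}{\tau}, v - u^n \Big\rangle + a(u^n, v - u^n) \geq \langle f(t_n), v - u^n \rangle \quad \forall v \in K, \end{align*}

which is exactly an elliptic variational inequality for $u^n$ in each step; for an explicit treatment of $a(\cdot,\cdot)$ the analysis appears to be more delicate.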
Thanks in advance, FFoDWindow
## 1 Answer
There's an overview of available schemes in chapter III of
Roland Glowinski, MR 737005 Numerical methods for nonlinear variational problems, ISBN: 0-387-12434-9.
(of which there is also reprint from 2008). The schemes are presented and a few references for their behaviour are given. In particular, this book references chapter 6 of
Roland Glowinski, Jacques-Louis Lions, and Raymond Trémolières, MR 1333916 Numerical analysis of variational inequalities, ISBN: 0-444-86199-8.
which might take you further, even though at a first glance I didn't see the explicit Euler method covered.
• Thank you for your answer. Unfortunately he just 'lists' the algorithms, but doesn't provide proofs of convergence... – FredTheBread Feb 8 '17 at 18:49
• It gives an overview but also provides references for proofs. I've added the one that seems most important to me to my answer. Now, I have to admit that I don't see the explicit Euler scheme covered there right away, although other schemes are covered. It sounded to me like you were interested in having a starting point and were not very much constrained on the precise method that is used. Does this help you? – anonymous Feb 8 '17 at 18:58
• Yeah, I took a brief look into the book and it looks helpful. Thanks! – FredTheBread Feb 8 '17 at 19:04 |
# T-Ratios!
Geometry Level 3
If $$3\cos\theta-5\sin\theta = 3\sqrt{2}$$, where $$\theta \in ( - \frac{ \pi}{2} , 0 )$$, then what is the value of $$5\cos\theta+3\sin\theta-4$$?
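One possible approach (a sketch, not an official solution): square and add the two linear combinations to eliminate $$\theta$$:

$$(3\cos\theta-5\sin\theta)^2 + (5\cos\theta+3\sin\theta)^2 = 34(\cos^2\theta+\sin^2\theta) = 34,$$

so $$(5\cos\theta+3\sin\theta)^2 = 34 - 18 = 16$$ and $$5\cos\theta+3\sin\theta = \pm 4$$. Since $$\theta \in ( - \frac{ \pi}{2} , 0 )$$ forces $$\cos\theta > 0$$, the value $$-4$$ is ruled out (it would require $$\cos\theta < 0$$), which suggests $$5\cos\theta+3\sin\theta-4 = 0$$.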
# Trigonometry
The Canadarm2 robotic manipulator on the International Space Station is operated by controlling the angles of its joints. Calculating the final position of the astronaut at the end of the arm requires repeated use of trigonometric functions of those angles.
Trigonometry (from Greek trigōnon, "triangle" and metron, "measure"[1]) is a branch of mathematics that studies relationships involving lengths and angles of triangles. The field emerged during the 3rd century BC from applications of geometry to astronomical studies.[2]
The 3rd-century astronomers first noted that the lengths of the sides of a right-angle triangle and the angles between those sides have fixed relationships: that is, if at least the length of one side and the value of one angle is known, then all other angles and lengths can be determined algorithmically. These calculations soon came to be defined as the trigonometric functions and today are pervasive in both pure and applied mathematics: fundamental methods of analysis such as the Fourier transform, for example, or the wave equation, use trigonometric functions to understand cyclical phenomena across many applications in fields as diverse as physics, mechanical and electrical engineering, music and acoustics, astronomy, ecology, and biology. Trigonometry is also the foundation of surveying.
Trigonometry is most simply associated with planar right-angle triangles (each of which is a two-dimensional triangle with one angle equal to 90 degrees). The applicability to non-right-angle triangles exists, but, since any non-right-angle triangle (on a flat plane) can be bisected to create two right-angle triangles, most problems can be reduced to calculations on right-angle triangles. Thus the majority of applications relate to right-angle triangles. One exception to this is spherical trigonometry, the study of triangles on spheres, surfaces of constant positive curvature, in elliptic geometry (a fundamental part of astronomy and navigation). Trigonometry on surfaces of negative curvature is part of hyperbolic geometry.
Trigonometry basics are often taught in schools, either as a separate course or as a part of a precalculus course.
## Contents
• History 1
• Overview 2
• Extending the definitions 2.1
• Mnemonics 2.2
• Calculating trigonometric functions 2.3
• Applications of trigonometry 3
• Pythagorean identities 4
• Angle transformation formulae 5
• Common formulae 6
• Law of sines 6.1
• Law of cosines 6.2
• Law of tangents 6.3
• Euler's formula 6.4
• References 8
• Bibliography 9
• External links 10
## History
Hipparchus, credited with compiling the first trigonometric table, is known as "the father of trigonometry".[3]
Sumerian astronomers studied angle measure, using a division of circles into 360 degrees.[4] They, and later the Babylonians, studied the ratios of the sides of similar triangles and discovered some properties of these ratios but did not turn that into a systematic method for finding sides and angles of triangles. The ancient Nubians used a similar method.[5]
In the 3rd century BCE, classical Greek mathematicians (such as Euclid and Archimedes) studied the properties of chords and inscribed angles in circles, and they proved theorems that are equivalent to modern trigonometric formulae, although they presented them geometrically rather than algebraically.
The modern sine function was first defined in the Surya Siddhanta, and its properties were further documented by the 5th century (CE) Indian mathematician and astronomer Aryabhata.[6] These Greek and Indian works were translated and expanded by medieval Islamic mathematicians. By the 10th century, Islamic mathematicians were using all six trigonometric functions, had tabulated their values, and were applying them to problems in spherical geometry. At about the same time, Chinese mathematicians developed trigonometry independently, although it was not a major field of study for them. Knowledge of trigonometric functions and methods reached Europe via Latin translations of the works of Persian and Arabic astronomers such as Al Battani and Nasir al-Din al-Tusi.[7] One of the earliest works on trigonometry by a European mathematician is De Triangulis by the 15th century German mathematician Regiomontanus. Trigonometry was still so little known in 16th-century Europe that Nicolaus Copernicus devoted two chapters of De revolutionibus orbium coelestium to explain its basic concepts.
Driven by the demands of navigation and the growing need for accurate maps of large geographic areas, trigonometry grew into a major branch of mathematics.[8] Bartholomaeus Pitiscus was the first to use the word, publishing his Trigonometria in 1595.[9] Gemma Frisius described for the first time the method of triangulation still used today in surveying. It was Leonhard Euler who fully incorporated complex numbers into trigonometry. The works of James Gregory in the 17th century and Colin Maclaurin in the 18th century were influential in the development of trigonometric series.[10] Also in the 18th century, Brook Taylor defined the general Taylor series.[11]
## Overview
In this right triangle: sin A = a/c; cos A = b/c; tan A = a/b.
If one angle of a triangle is 90 degrees and one of the other angles is known, the third is thereby fixed, because the three angles of any triangle add up to 180 degrees. The two acute angles therefore add up to 90 degrees: they are complementary angles. The shape of a triangle is completely determined, except for similarity, by the angles. Once the angles are known, the ratios of the sides are determined, regardless of the overall size of the triangle. If the length of one of the sides is known, the other two are determined. These ratios are given by the following trigonometric functions of the known angle A, where a, b and c refer to the lengths of the sides in the accompanying figure:
• Sine function (sin), defined as the ratio of the side opposite the angle to the hypotenuse.

$$\sin A=\frac{\textrm{opposite}}{\textrm{hypotenuse}}=\frac{a}{c}\,.$$

• Cosine function (cos), defined as the ratio of the adjacent leg to the hypotenuse.

$$\cos A=\frac{\textrm{adjacent}}{\textrm{hypotenuse}}=\frac{b}{c}\,.$$

• Tangent function (tan), defined as the ratio of the opposite leg to the adjacent leg.

$$\tan A=\frac{\textrm{opposite}}{\textrm{adjacent}}=\frac{a}{b}=\frac{a/c}{b/c}=\frac{\sin A}{\cos A}\,.$$
The hypotenuse is the side opposite to the 90 degree angle in a right triangle; it is the longest side of the triangle and one of the two sides adjacent to angle A. The adjacent leg is the other side that is adjacent to angle A. The opposite side is the side that is opposite to angle A. The terms perpendicular and base are sometimes used for the opposite and adjacent sides respectively. Many people find it easy to remember what sides of the right triangle are equal to sine, cosine, or tangent, by memorizing the word SOH-CAH-TOA (see below under Mnemonics).
The reciprocals of these functions are named the cosecant (csc or cosec), secant (sec), and cotangent (cot), respectively:
$$\csc A=\frac{1}{\sin A}=\frac{\textrm{hypotenuse}}{\textrm{opposite}}=\frac{c}{a}\,,$$

$$\sec A=\frac{1}{\cos A}=\frac{\textrm{hypotenuse}}{\textrm{adjacent}}=\frac{c}{b}\,,$$

$$\cot A=\frac{1}{\tan A}=\frac{\textrm{adjacent}}{\textrm{opposite}}=\frac{\cos A}{\sin A}=\frac{b}{a}\,.$$
The inverse functions are called the arcsine, arccosine, and arctangent, respectively. There are arithmetic relations between these functions, which are known as trigonometric identities. The cosine, cotangent, and cosecant are so named because they are respectively the sine, tangent, and secant of the complementary angle abbreviated to "co-".
With these functions one can answer virtually all questions about arbitrary triangles by using the law of sines and the law of cosines. These laws can be used to compute the remaining angles and sides of any triangle as soon as two sides and their included angle or two angles and a side or three sides are known. These laws are useful in all branches of geometry, since every polygon may be described as a finite combination of triangles.
### Extending the definitions
Fig. 1a – Sine and cosine of an angle θ defined using the unit circle.
The above definitions only apply to angles between 0 and 90 degrees (0 and π/2 radians). Using the unit circle, one can extend them to all positive and negative arguments (see trigonometric function). The trigonometric functions are periodic, with a period of 360 degrees or 2π radians. That means their values repeat at those intervals. The tangent and cotangent functions also have a shorter period, of 180 degrees or π radians.
The trigonometric functions can be defined in other ways besides the geometrical definitions above, using tools from calculus and infinite series. With these definitions the trigonometric functions can be defined for complex numbers. The complex exponential function is particularly useful.
$$e^{x+iy} = e^x(\cos y + i \sin y).$$
See Euler's and De Moivre's formulas.
### Mnemonics
A common use of mnemonics is to remember facts and relationships in trigonometry. For example, the sine, cosine, and tangent ratios in a right triangle can be remembered by representing them and their corresponding sides as strings of letters. For instance, a mnemonic is SOH-CAH-TOA:[12]
Sine = Opposite ÷ Hypotenuse
Cosine = Adjacent ÷ Hypotenuse
Tangent = Opposite ÷ Adjacent
One way to remember the letters is to sound them out phonetically (i.e., SOH-CAH-TOA, which is pronounced 'so-kə-toe-uh' ). Another method is to expand the letters into a sentence, such as "Some Old Hippy Caught Another Hippy Trippin' On Acid".[13]
### Calculating trigonometric functions
Trigonometric functions were among the earliest uses for mathematical tables. Such tables were incorporated into mathematics textbooks and students were taught to look up values and how to interpolate between the values listed to get higher accuracy. Slide rules had special scales for trigonometric functions.
Today scientific calculators have buttons for calculating the main trigonometric functions (sin, cos, tan, and sometimes cis and their inverses). Most allow a choice of angle measurement methods: degrees, radians, and sometimes gradians. Most computer programming languages provide function libraries that include the trigonometric functions. The floating point unit hardware incorporated into the microprocessor chips used in most personal computers has built-in instructions for calculating trigonometric functions.[14]
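As a small illustration (Julia is used here purely as an example of such a library; any language with a standard math library behaves similarly), the functions accept radians by default, with degree-based variants also available:

sin(pi/6), cos(pi/6), tan(pi/6)    # (0.5, 0.866..., 0.577...), angle given in radians
sind(30), cosd(30), tand(30)       # the same values, angle given in degrees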
## Applications of trigonometry
Sextants are used to measure the angle of the sun or stars with respect to the horizon. Using trigonometry and a marine chronometer, the position of the ship can be determined from such measurements.
There is an enormous number of uses of trigonometry and trigonometric functions. For instance, the technique of triangulation is used in astronomy to measure the distance to nearby stars, in geography to measure distances between landmarks, and in satellite navigation systems. The sine and cosine functions are fundamental to the theory of periodic functions such as those that describe sound and light waves.
Fields that use trigonometry or trigonometric functions include astronomy (especially for locating apparent positions of celestial objects, in which spherical trigonometry is essential) and hence navigation (on the oceans, in aircraft, and in space), music theory, audio synthesis, acoustics, optics, electronics, probability theory, statistics, biology, medical imaging (CAT scans and ultrasound), pharmacy, chemistry, number theory (and hence cryptology), seismology, meteorology, oceanography, many physical sciences, land surveying and geodesy, architecture, image compression, phonetics, economics, electrical engineering, mechanical engineering, civil engineering, computer graphics, cartography, crystallography and game development.
## Pythagorean identities
Identities are equations that hold true for every value of the variables involved.
$$\sin^2 A + \cos^2 A = 1$$

(The following two can be derived from the first.)

$$\sec^2 A - \tan^2 A = 1$$

$$\csc^2 A - \cot^2 A = 1$$
## Angle transformation formulae
$$\sin (A \pm B) = \sin A \ \cos B \pm \cos A \ \sin B$$

$$\cos (A \pm B) = \cos A \ \cos B \mp \sin A \ \sin B$$

$$\tan (A \pm B) = \frac{ \tan A \pm \tan B }{ 1 \mp \tan A \ \tan B}$$

$$\cot (A \pm B) = \frac{ \cot A \ \cot B \mp 1}{ \cot B \pm \cot A }$$
## Common formulae
Triangle with sides a,b,c and respectively opposite angles A,B,C
Certain equations involving trigonometric functions are true for all angles and are known as trigonometric identities. Some identities equate an expression to a different expression involving the same angles. These are listed in List of trigonometric identities. Triangle identities that relate the sides and angles of a given triangle are listed below.
In the following identities, A, B and C are the angles of a triangle and a, b and c are the lengths of sides of the triangle opposite the respective angles (as shown in the diagram).
### Law of sines
The law of sines (also known as the "sine rule") for an arbitrary triangle states:
$$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2R,$$

where R is the radius of the circumscribed circle of the triangle:

$$R = \frac{abc}{\sqrt{(a+b+c)(a-b+c)(a+b-c)(b+c-a)}}.$$

Another law involving sines can be used to calculate the area of a triangle. Given two sides a and b and the angle between the sides C, the area of the triangle is given by half the product of the lengths of two sides and the sine of the angle between the two sides:

$$\mbox{Area} = \frac{1}{2}a b\sin C.$$
All of the trigonometric functions of an angle θ can be constructed geometrically in terms of a unit circle centered at O.
### Law of cosines
The law of cosines (known as the cosine formula, or the "cos rule") is an extension of the Pythagorean theorem to arbitrary triangles:
c^2=a^2+b^2-2ab\cos C ,\,
or equivalently:
$$\cos C = \frac{a^2+b^2-c^2}{2ab}.$$
The law of cosines may be used to prove Heron's Area Formula, which is another method that may be used to calculate the area of a triangle. This formula states that if a triangle has sides of lengths a, b, and c, and if the semiperimeter is
$$s = \frac{1}{2}(a+b+c),$$
then the area of the triangle is:
$$\mbox{Area} = \sqrt{s(s-a)(s-b)(s-c)}.$$
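For example, a triangle with sides a = 3, b = 4 and c = 5 has semiperimeter s = 6, so its area is
$$\mbox{Area} = \sqrt{6(6-3)(6-4)(6-5)} = \sqrt{36} = 6,$$
the same value given by the formula above with C = 90°: $\tfrac{1}{2}\cdot 3\cdot 4\cdot\sin 90^\circ = 6$.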
### Law of tangents
The law of tangents:
$$\frac{a-b}{a+b}=\frac{\tan\left[\tfrac{1}{2}(A-B)\right]}{\tan\left[\tfrac{1}{2}(A+B)\right]}$$
### Euler's formula
Euler's formula, which states that $e^{ix} = \cos x + i \sin x$, produces the following analytical identities for sine, cosine, and tangent in terms of e and the imaginary unit i:
$$\sin x = \frac{e^{ix} - e^{-ix}}{2i}, \qquad \cos x = \frac{e^{ix} + e^{-ix}}{2}, \qquad \tan x = \frac{i(e^{-ix} - e^{ix})}{e^{ix} + e^{-ix}}.$$
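These follow directly from Euler's formula: adding and subtracting $e^{ix} = \cos x + i \sin x$ and $e^{-ix} = \cos x - i \sin x$ gives
$$e^{ix} + e^{-ix} = 2\cos x, \qquad e^{ix} - e^{-ix} = 2i\sin x,$$
from which the expressions for cosine and sine follow, with tangent obtained as their ratio.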
## References
1. ^ "trigonometry". Online Etymology Dictionary.
2. ^ R. Nagel (ed.), Encyclopedia of Science, 2nd Ed., The Gale Group (2002)
3. ^
4. ^ Aaboe, Asger. Episodes from the Early History of Astronomy. New York: Springer, 2001. ISBN 0-387-95136-9
5. ^ Otto Neugebauer (1975). A history of ancient mathematical astronomy. 1. Springer-Verlag. pp. 744–.
6. ^ Boyer p. 215
7. ^ Boyer pp. 237, 274
8. ^ Grattan-Guinness, Ivor (1997). The Rainbow of Mathematics: A History of the Mathematical Sciences. W.W. Norton.
9. ^ Robert E. Krebs (2004). Groundbreaking Scientific Experiments, Inventions, and Discoveries of the Middle Ages and the Renaissance. Greenwood Publishing Group. pp. 153–.
10. ^ William Bragg Ewald (2008). From Kant to Hilbert: a source book in the foundations of mathematics. Oxford University Press US. p. 93. ISBN 0-19-850535-3
11. ^ Kelly Dempski (2002). Focus on Curves and Surfaces. p. 29. ISBN 1-59200-007-X
12. ^ Weisstein, Eric W., "SOHCAHTOA", MathWorld.
13. ^ A sentence more appropriate for high schools is "Some old horse came a'hopping through our alley". Foster, Jonathan K. (2008). Memory: A Very Short Introduction. Oxford. p. 128.
14. ^ Intel® 64 and IA-32 Architectures Software Developer’s Manual Combined Volumes: 1, 2A, 2B, 2C, 3A, 3B and 3C. Intel. 2013.
## Bibliography
• Hazewinkel, Michiel, ed. (2001), "Trigonometric functions", Encyclopedia of Mathematics, Springer.
• Christopher M. Linton (2004). From Eudoxus to Einstein: A History of Mathematical Astronomy . Cambridge University Press.
• Weisstein, Eric W. "Trigonometric Addition Formulas". Wolfram MathWorld.
# Antennas
#### Spaceclam
##### Member
Hello,
I have a new-to-me Fly Baby that was fitted years ago with a whip antenna mounted behind the headrest, which connects to a handheld radio. This airplane is about 30 years old, and one of its issues is radio noise. I have to keep the squelch all the way up, and sometimes it causes me to miss transmissions. It's not engine noise. It's more like aliens screaming in my head, and the noise is far louder than the transmission level. The radio works just fine by itself, but there isn't really much room in the cockpit to use the handheld radio antenna and I like my radio mounted as it is.
Seeing as there is corrosion at the base of the antenna and the connector is visibly discolored, I'd like to just replace it all and see if it helps. In looking at antennas, there appear to be nearly identical models ranging from $75 to $1500+. Is there some compatibility issue I am not aware of? Does it need to be sized to work with the lower-power handheld unit? What should I buy?
-Clam
#### Pops
##### Well-Known Member
HBA Supporter
Log Member
Been using these antennas since the article in Sport Aviation. They work great: low cost, light weight, and no drag.
#### Rhino
##### Well-Known Member
Is this noise all the time, during all received transmissions, or only when you key the mike? If it's the latter, it may be a simple matter of the external antenna being too close to the radio. Fortunately you can put an antenna anywhere in a Fly Baby. It doesn't need to be externally mounted, because the wood/fabric construction won't affect your signal. I would try something as cheap and simple as this, just to test the possibilities:
#### Spaceclam
##### Member
Is this noise all the time, during all received transmissions, or only when you key the mike? If it's the latter, it may be a simple matter of the external antenna being too close to the radio. Fortunately you can put an antenna anywhere in a Fly Baby. It doesn't need to be externally mounted, because the wood/fabric construction won't affect your signal. I would try something as cheap and simple as this, just to test the possibilities:
The noise is not when the mic is keyed. It just happens randomly, even when the engine is off. I'll give one of those a try. Also, any thoughts/comments/suggestions on the connector and wire? How long can the wire be? Do they make 90-degree connectors so it doesn't put so much strain on it?
Thanks,
-Clam
#### rv7charlie
##### Well-Known Member
If it's breaking squelch with no transmissions in the area, then either there's something wrong with the radio or you've got excessive RF or electrical noise in the a/c.
If it were me, before spending money I'd try to find the source of the problem. Things to look for/think about:
If it's an ancient engine, are the mags & plug wires shielded?
Are your P-leads properly shielded and terminated? (Whole chapter there...)
Is there a generator/alternator? If so, does turning it off improve things?
Is the shield on the antenna coax properly terminated *on both ends*?
Is the antenna grounded properly to the airframe?
Not directly related to a noise problem, but does the antenna have a ground plane? (doubtful if it's on the fuselage top behind the headrest)
Just a few things that came to mind in the time it took to type them.
#### Spaceclam
##### Member
I probably should have mentioned -this aircraft has no electrical system. So given the noise when the engine is off... that pretty much rules everything out besides antenna and connections. I think?
#### rv7charlie
##### Well-Known Member
A comm antenna can be as simple as stripping about 22" of shield off a piece of coax and attaching 3 or 4 radial arms, each about 22" long, to the shield at the strip point: a 1/4 wave antenna. Now it will only be maximally efficient at one frequency, but it will 'work'.
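As a rough sizing check (using the common quarter-wave rule of thumb, which already folds in a few percent of end-effect shortening, rather than anything specific to this particular antenna):
length (ft) ≈ 234 / frequency (MHz), so at roughly 127 MHz, mid aviation band, that gives 234 / 127 ≈ 1.84 ft ≈ 22 inches,
which is where the ~22" figure comes from.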
If the radio is breaking squelch without the engine running and in receive mode (not transmitting), and there's no electrical system, it's hard to understand where the noise is coming from, except from the radio itself. Does it have the problem on multiple frequencies, or just your local active frequency? Are you doing the test out in the open, well away from a hangar or other building that might have fluorescent or LED lights, or other noise generators? The fluorescent lights in my hangar drive my handheld crazy, unless I get 20-30 feet away from them and outside the hangar. Have you verified the integrity of the coax, from the connector at the radio end to the connector at the antenna? Is the noise the typical 'white noise' that you hear from any comm radio when you disable the squelch, or does it have any different characteristics? 'Aliens screaming in my head' does not sound like normal open squelch. If you're getting a howling or squealing sound, that almost sounds like it's interacting with another radio that's transmitting; the sound you get when one pilot 'steps on' another's transmission.
Kinda hard to 'remote troubleshoot'; just trying to give you some ideas on things to check.
#### Rhino
##### Well-Known Member
The noise is not when the mic is keyed. It just happens randomly, even when the engine is off. I'll give one of those a try. Also, any thoughts/comments/suggestions on the connector and wire? How long can the wire be? Do they make 90-degree connectors so it doesn't put so much strain on it?
Have you tried a different radio? Maybe borrow one from a friend? It'd be a shame to mess with wiring if the radio is the problem. From what you're saying though, replacing the antenna/wiring is probably a good idea either way. Make sure it's a 50 ohm cable. TV antenna cables can look identical, but they aren't 50 ohm, which is what you should use for the aircraft band. Make sure you have good ground connections at both ends if you fabricate your own cable. Normally you'd make the cable run as short as practical, but don't go overboard trying to save every inch. Prepackaged antenna/cable combinations might be longer than you need, but they're probably acceptable for your application, especially since you have no electrical system.
Might I also suggest you get a copy of Bob Nuckolls' AeroElectric Connection if you can? Much of it won't apply to you since you don't have an electrical system, but it's still a great reference book to have. Much of it applies to vehicles other than airplanes too.
EDIT: This isn't bad to have either:
#### Rhino
##### Well-Known Member
Oh, and the specific cable type depends on the application. RG 400 is fantastic antenna cable, but it would be incredible overkill for what you're doing. RG 142 is good, but RG 58 will probably do you just fine.
#### akwrencher
##### Well-Known Member
HBA Supporter
There is currently a good series of articles in Kitplanes magazine on antennas.
#### Rhino
##### Well-Known Member
There is currently a good series of articles in Kitplanes magazine on antennas.
Don't spoil it for me. I'm not caught up on my reading!
#### Map
##### Well-Known Member
I still have a few pieces of RG-400 antenna cable for sale: 17 & 16 ft pieces with no connectors, and 1 x 17 ft with TNC connectors. $1.10 per ft, & $8 for the cable with connectors.
#### Bill-Higdon
##### Well-Known Member
What make & model of radio?
#### tallank
##### Well-Known Member
Hello,
I have a new-to-me Fly Baby that was fitted years ago with a whip antenna mounted behind the headrest, which connects to a handheld radio. This airplane is about 30 years old, and one of its issues is radio noise. I have to keep the squelch all the way up, and sometimes it causes me to miss transmissions. It's not engine noise. It's more like aliens screaming in my head, and the noise is far louder than the transmission level. The radio works just fine by itself, but there isn't really much room in the cockpit to use the handheld radio antenna and I like my radio mounted as it is.
Seeing as there is corrosion at the base of the antenna and the connector is visibly discolored, I'd like to just replace it all and see if it helps. In looking at antennas, there appear to be nearly identical models ranging from $75 to $1500+. Is there some compatibility issue I am not aware of? Does it need to be sized to work with the lower-power handheld unit? What should I buy?
## Files in this item
• 833824.pptx (4 MB): Presentation (Microsoft PowerPoint 2007)
• 2272.pdf (14 kB): Abstract (PDF)
## Description
Title: A MOLECULAR FOUNTAIN
Author(s): Cheng, Cunfeng
Contributor(s): Bethlem, Hendrick; Ubachs, Wim; van der Poel, Aernout P.P.
Subject(s): Small molecules
Abstract: The resolution of any spectroscopic experiment is limited by the coherent interaction time between the probe radiation and the particle that is being studied. The introduction of cooling techniques for atoms and ions has resulted in a dramatic increase in interaction times and accuracy; it is hoped that molecular cooling techniques will lead to a similar increase. Here we demonstrate the first molecular fountain, a development which permits hitherto unattainably long interrogation times with molecules. In our experiment, beams of ammonia molecules are decelerated, trapped and cooled using inhomogeneous electric fields and subsequently launched. Using a combination of quadrupole lenses and buncher elements, the beam is shaped such that it has a large position spread and a small velocity spread (corresponding to a transverse temperature of less than 10 $\mu$K and a longitudinal temperature of less than 1 $\mu$K) while the molecules are in free fall, but is strongly focused at the detection region. The molecules are in free fall for up to 266 milliseconds, making it possible, in principle, to perform sub-Hz measurements in molecular systems and paving the way for stringent tests of fundamental physics theories.
Issue Date: 6/19/2017
Publisher: International Symposium on Molecular Spectroscopy
Citation Info: APS
Genre: CONFERENCE PAPER/PRESENTATION
Type: Text
Language: English
URI: http://hdl.handle.net/2142/96923
DOI: 10.15278/isms.2017.MH04
Date Available in IDEALS: 2017-07-27; 2018-01-29
# Motivation
Robust water resource planning and management decisions rely upon the evaluation of alternative policies under a wide variety of plausible future scenarios. Often, despite the availability of historic records, non-stationarity and system uncertainty require the generation of synthetic datasets to be used in these analyses.
When creating synthetic timeseries data from historic records, it is important to replicate the statistical properties of the system while preserving its inherent stochasticity. Along with replicating autocorrelation, means, and variances, it is important to replicate the correlation between variables present in the historic record.
Previous studies by Zeff et al. (2016) and Gold et al. (2022) have relied upon synthetic streamflow and water demand timeseries to inform infrastructure planning and management decisions in the “Research Triangle” region of North Carolina. The methods used for generating the synthetic streamflow data emphasized the preservation of autocorrelation, seasonal correlation, and cross-site correlation of the inflows. However, a comprehensive investigation into the preservation of correlation in the generated synthetic data has not been performed.
Given the critical influence of both reservoir inflow and water demand in the success of water resource decisions, it is important that potential interactions between these timeseries are not ignored.
In this post, I present methods for producing synthetic demand timeseries conditional upon synthetic streamflow data. I also present an analysis of the correlation in both the historic and synthetic timeseries.
A GitHub repository containing all of the necessary code and data can be accessed here.
# Case Study: Reservoir Inflow and Water Demand
This post studies the correlation between reservoir inflow and water demand at one site in the Research Triangle region of North Carolina, and assesses the preservation of this correlation in synthetic timeseries generated using two different methods: an empirical joint probability distribution sampling scheme, and a conditional expectation sampling scheme.
# Methods
Synthetic data was generated using historic reservoir inflow and water demand data from a shared 18-year period, at weekly timesteps. Demand data is reported as the unit water demand, in order to remove the influence of growing population demands. Unit water demand corresponds to the fraction of the average annual water demand observed in that week; i.e., a unit water demand of 1.2 suggests that water demand was 120% of the annual average during that week. Working with unit demand allows for the synthetic data to be scaled according to projected changes in water demand for a site.
Notably, all of the synthetic generation techniques presented below are performed using weekly-standardized inflow and demand data. This is necessary to remove the seasonality in both variables; if the data were not standardized, measurement of the correlation would be dominated by this seasonal correlation. Measuring the correlation between the standardized data instead captures shared deviations from the seasonal mean in both variables. In each case, historic seasonality, as described by the weekly means and variances, is re-applied to the standardized synthetic data after it is generated.
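As a concrete reference, a minimal sketch of this weekly standardization and de-standardization (the array shape and function names are my own, not taken from the linked repository) might look like:

import numpy as np

def weekly_standardize(data):
    # data: (n_years, 52) array; standardize by weekly means and standard deviations
    mu = data.mean(axis=0)
    sigma = data.std(axis=0)
    return (data - mu) / sigma, mu, sigma

def weekly_destandardize(z, mu, sigma):
    # re-apply historic weekly seasonality to standardized synthetic data
    return z * sigma + mu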
## Synthetic Streamflow Generation
Synthetic inflow was generated using the modified Fractional Gaussian Noise (mFGN) method described by Kirsch et al. (2013). The mFGN method is specifically intended to preserve seasonal correlation, intra-annual autocorrelation, and inter-annual autocorrelation. The primary modification of the mFGN compared to the traditional Fractional Gaussian Noise method is a matrix manipulation technique which allows for the generation of longer timeseries, whereas the traditional technique was limited to timeseries of roughly 100 time steps (McLeod and Hipel, 1978; Kirsch et al., 2013).
Professor Julie Quinn wrote a wonderful blog post describing the mFGN synthetic streamflow generator in her 2017 post, Open Source Streamflow Generator Part 1: Synthetic Generation. For the sake of limiting redundancy on this blog, I will omit the details of the streamflow generation in this post, and refer you to the linked post above. My own version of the mFGN synthetic generator is included in the repository for this post, and can be found here.
## Synthetic Demand Generation
Synthetic demand data is generated after the synthetic streamflow and is conditional upon the corresponding weekly synthetic streamflow. Here, two alternative synthetic demand generation methods are considered:
1. An empirical joint probability distribution sampling method
2. A conditional expectation sampling method
### Joint Probability Distribution Sampling Method
The first method relies upon the construction of an empirical joint inflow-demand probability density function (PDF) using historic data. The synthetic streamflow is then used to perform a conditional sampling of demand from the PDF.
The joint PDF is constructed using the weekly standardized demand and weekly standardized log-inflow. Historic values are then assigned to one of sixteen bins within each inflow or demand PDF, ranging from -4.0 to 4.0 at 0.5 increments. The result is a 16 by 16 matrix joint PDF. A joint cumulative density function (CDF) is then generated from the PDF.
For a given synthetic inflow timeseries, the synthetic log-inflow is standardized using the historic inflow mean and standard deviation, and the corresponding inflow bin in the marginal inflow PDF is identified. A random number is then drawn from a uniform distribution ranging from zero to the number of observations in that inflow bin, and the demand-CDF bin corresponding to the value of that random sample is identified. The standardized demand value is then taken to be the value corresponding to that bin along the discretized range from -4.0 to 4.0. Additionally, some statistical noise is added to the sampled standard demand by taking a random sample from a normal distribution, $N(0, 0.5)$.
Admittedly, this process is difficult to translate into words. With that in mind, I recommend the curious reader take a look at the procedure in the code included in the repository.
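In the meantime, here is a compact sketch of the sampling step; this is a paraphrase of the procedure described above, not the exact code from the repository (the bin edges, variable names, and noise term are written from the description):

import numpy as np

bins = np.arange(-4.0, 4.5, 0.5)  # 17 edges -> 16 bins from -4.0 to 4.0

def sample_standard_demand(z_inflow, hist_z_inflow, hist_z_demand, rng):
    # empirical joint PDF (counts) of historic standardized log-inflow and demand
    joint_pdf, _, _ = np.histogram2d(hist_z_inflow, hist_z_demand, bins=[bins, bins])
    # identify the inflow bin of the synthetic value
    i = np.clip(np.digitize(z_inflow, bins) - 1, 0, 15)
    # conditional CDF of demand within that inflow bin
    counts = joint_pdf[i, :]
    cdf = np.cumsum(counts)
    u = rng.uniform(0, counts.sum())
    j = np.searchsorted(cdf, u)
    # demand value at that bin centre, plus statistical noise
    return bins[j] + 0.25 + rng.normal(0, 0.5)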
Lastly, for each synthetic standard demand, $d_{s_{i,j}}$, the historic weekly demand mean, $\mu_{D_j}$, and standard deviation, $\sigma_{D_j}$, are applied to convert to a synthetic unit demand, $D_{s_{i,j}}$.
$D_{s_{i,j}} = d_{s_{i,j}} \sigma_{D_j} + \mu_{D_j}$
Additionally, the above process is season-specific: PDFs and CDFs are independently constructed for the irrigation and non-irrigation seasons. When sampling the synthetic demand, samples are drawn from the corresponding distribution according to the week in the synthetic timeseries.
### Conditional Expectation Sampling Method
The second method does not rely upon an empirical joint PDF, but rather uses the correlation between standardized inflow and demand data to calculate demand expectation and variance conditional upon the corresponding synthetic streamflow and the correlation between historic observations. The conditional expectation of demand, $E[D|Q_{s_i}]$, given a specific synthetic streamflow, $Q_{s_i}$, is:
$E[D|Q_{s_i}] = E[D] + \rho \frac{\sigma_D}{\sigma_Q} (Q_{s_i} - \mu_Q)$
Where $\rho$ is the Pearson correlation coefficient of the weekly standardized historic inflow and demand data. Since the data are standardized ($E[d] = 0$ and $\sigma_Z = \sigma_d = 1$), the above equation simplifies to:
$E[d|Z_{s_i}] = \rho (Z_{s_i})$
Where $d$ is standard synthetic demand and $Z_{s_i}$ is the standard synthetic streamflow for the $i^{th}$ week. The variance of the standard demand conditional upon the standard streamflow is then:
$Var(d|Z_{s_i}) = \sigma_d^2(1 - \rho^2) = (1 - \rho^2)$
The weekly standard demand, $d_{s_i}$, is then randomly sampled from a normal distribution centered around the conditional expectation with standard deviation equal to the square root of the conditional variance.
$d_{s_i} \sim N(E[d|Z_{s_i}], Var(d|Z_{s_i})^{1/2})$
As in the previous method, this method is performed according to whether the week is within the irrigation season or not. The correlation values used in the calculation of expected value and variance are calculated for both irrigated and non-irrigated seasons and applied respective of the week.
As in the first method, the standard synthetic demand is converted to a unit demand, and seasonality is reintroduced, using the weekly means and standard deviations of the historic demand:
$D_{s_{i,j}} = d_{s_{i,j}} \sigma_{D_j} + \mu_{D_j}$
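A minimal sketch of this second method, written directly from the equations above (the variable names and example correlation value are my own), is:

import numpy as np

def conditional_standard_demand(z_inflow, rho, rng):
    # sample standardized demand conditional on standardized inflow
    mean = rho * z_inflow              # E[d | z]
    std = np.sqrt(1.0 - rho ** 2)      # sqrt(Var(d | z))
    return rng.normal(mean, std)

rng = np.random.default_rng(42)
rho_irrigation = -0.5                  # hypothetical seasonal correlation
d_synthetic = conditional_standard_demand(z_inflow=1.2, rho=rho_irrigation, rng=rng)

The sampled standard demand is then de-standardized with the historic weekly mean and standard deviation, exactly as in the equation above.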
# Results
### Historic Correlation Patterns
It is worthwhile to first consider the correlation pattern between stream inflow and demand in the historic record.
The correlation patterns between inflow and demand found in this analysis support the initial hypothesis that inflow and demand are correlated with one another. More specifically, there is a strong negative correlation between inflow and demand week to week (along the diagonal in the above figure). Contextually, this makes sense; low reservoir inflows correspond to generally drier climatic conditions. When considering that agriculture accounts for a substantial portion of demand in the region, it is understandable that demand will be high during dry periods, when farmers require more reservoir supply to irrigate their crops. During wet periods, they depend less upon the reservoir supply.
Interestingly, there appears to be some type of lag-correlation between variables across different weeks (dark coloring on the off-diagonals in the matrix). For example, there exists strong negative correlation between the inflow during week 15 and the demands in weeks 15, 16, 17, and 18. This may be indicative of persistence in climatic conditions which influence demand for several subsequent weeks.
### Synthetic Streamflow Results
Consideration of the above flow duration curves reveals that the exceedance probabilities of the synthetic streamflow generated through the mFGN method are in close alignment with the historic record. While it should not be assumed that future hydrologic conditions will follow historic trends (Milly et al., 2008), the focus of this analysis is the replication of historic patterns. This result confirms previous findings by Mandelbrot and Wallis (1968) that the FGN method is capable of capturing flood and drought patterns from the historic record.
### Synthetic Demand Results
The above figure shows a comparison of the ranges in unit demand data between historic and synthetic data sets. Like the synthetic streamflow data, these figures reveal that both demand generation techniques are producing timeseries that align closely with historic patterns. The joint probability sampling method does appear to produce consistently higher unit demands than the historic record, but this discrepancy is not significant enough to disregard the method, and may be corrected with some tweaking of the PDF-sampling scheme.
### Synthetic Correlation Patterns
Now that we know both synthetic inflow and demand data resemble historic ranges, it is important to consider how correlation is replicated in those variables.
Take a second to compare the historic correlation patterns in Figure 1 with the correlation in the synthetic data shown in Figure 4. The methods are working!
As in the historic data, the synthetic data contain strong negative correlations between inflow and demand week-to-week (along the diagonal).
Visualizing the joint distributions of the standardized data provides more insight into the correlation of the data. The Pearson correlation coefficients for each aggregated data set are shown in the upper right of each scatter plot, and in the table below.
One concern with this result is that the correlation is actually too strong in the synthetic data. For both methods, the magnitude of the Pearson correlation coefficient is greater in the synthetic data than in the historic data.
This may be due to the fact that correlation is highly variable throughout the year in the historic record, but the methods used here only separate the year into two seasons – non-irrigation and irrigation seasons. Aggregated across these seasons, the historic correlations are negative. However, there exist weeks (e.g., during the winter months) when weekly correlations are 0 or even positive. Imposing the aggregated negative-correlation to every week during the generation process may be the cause of the overly-negative correlation in the synthetic timeseries.
It may be possible to produce synthetic data with better preservation of historic correlations by performing the same demand generation methods but with more than two seasons.
## Conclusions
When generating synthetic timeseries, it is important to replicate the historic means and variances of the data, but also to capture the correlation that exist between variables. Interactions between exogenous variables can have critical implications for policy outcomes.
For example, when evaluating water resource policies, strong negative correlation between demand and inflow can constitute a compounding risk (Simpson et al., 2021), where the risk associated with low streamflow during a drought is then compounded by high demand at the same time.
Here, I’ve shared two different methods of producing correlated synthetic timeseries which do well in preserving historic correlation patterns. Additionally, I’ve tried to demonstrate different analyses and visualizations that can be used to verify this preservation. While demonstrated using inflow and demand data, the methods described in this post can be applied to a variety of different timeseries variables.
Lastly, I want to thank David Gold and David Gorelick for sharing their data and insight on this project. I also want to give a shout out to Professor Scott Steinschneider whose Multivariate Environmental Statistics class at Cornell motivated this work, and who fielded questions along the way.
Happy programming!
# References
Gold, D. F., Reed, P. M., Gorelick, D. E., & Characklis, G. W. (2022). Power and Pathways: Exploring Robustness, Cooperative Stability, and Power Relationships in Regional Infrastructure Investment and Water Supply Management Portfolio Pathways. Earth's Future, 10(2), e2021EF002472.
Kirsch, B. R., Characklis, G. W., & Zeff, H. B. (2013). Evaluating the impact of alternative hydro-climate scenarios on transfer agreements: Practical improvement for generating synthetic streamflows. Journal of Water Resources Planning and Management, 139(4), 396-406.
Lettenmaier, D. P., Leytham, K. M., Palmer, R. N., Lund, J. R., & Burges, S. J. (1987). Strategies for coping with drought: Part 2, Planning techniques and reliability assessment (No. EPRI-P-5201). Washington Univ., Seattle (USA). Dept. of Civil Engineering; Electric Power Research Inst., Palo Alto, CA (USA).
Mandelbrot, B. B., & Wallis, J. R. (1968). Noah, Joseph, and operational hydrology. Water resources research, 4(5), 909-918.
McLeod, A. I., & Hipel, K. W. (1978). Preservation of the rescaled adjusted range: 1. A reassessment of the Hurst Phenomenon. Water Resources Research, 14(3), 491-508.
Simpson, N. P., Mach, K. J., Constable, A., Hess, J., Hogarth, R., Howden, M., … & Trisos, C. H. (2021). A framework for complex climate change risk assessment. One Earth, 4(4), 489-501.
Zeff, H. B., Herman, J. D., Reed, P. M., & Characklis, G. W. (2016). Cooperative drought adaptation: Integrating infrastructure development, conservation, and water transfers into adaptive policy pathways. Water Resources Research, 52(9), 7327-7346.
# CNNs for Time Series Applications
This post is meant to be an introduction to convolutional neural networks (CNNs) and how they can be applied to continuous prediction problems, such as time series predictions. CNNs have historically been utilized in image classification applications. At a high level, CNNs use small kernels (filters) that can slide over localized regions of an image and detect features from edges to faces, much in the same way as the visual cortex of a brain (Hubel and Wiesel, 1968). The basic concepts of a CNN were first introduced by Kunihiko Fukushima in 1980 and the first use of CNNs for image recognition was carried out by Yann LeCun in 1988. The major breakthrough for the algorithm didn't happen until 2000 with the advent of GPUs, and by 2015, CNNs were favored to win image recognition contests over other deep networks.
It is believed that recurrent style networks such as LSTMs are the most appropriate algorithms for time series prediction, but studies have been conducted that suggest that CNNs can perform equivalently (or better) and that appropriate filters can extract features that are coupled across variables and time while being computationally efficient to train (Bai et al., 2018, Rodrigues et al., 2021). Below, I’ll demonstrate some of the key characteristics of CNNs and how CNNs can be used for time series prediction problems.
## Architecture
Figure 1: CNN schematic for image classification (Sharma, 2018)
Figure 1 shows a schematic of a CNN's architecture. The architecture is primarily comprised of a series of convolution and pooling layers followed by a fully connected network. Each convolution layer contains kernel matrices that are convolved with the input to that layer. It is up to the user to define the number and size of the kernels, but the weights in the kernels are learned using backpropagation. A bias is added to the output of the convolution layer and then passed through an activation function, such as the ReLU function, to yield feature maps. The feature maps are stacked in a cuboid whose depth equals the number of filters. If the convolution layer is followed by a pooling layer, the feature maps are down-sampled to produce a lower dimensional representation of the feature maps. The output from the final pooling or convolutional layer is flattened and fed to the fully connected layers.
We will now look at the components of the architecture in more detail. To demonstrate how the convolutional layer works, we will use a toy example shown in Figure 2.
Figure 2: Convolution of a 3×3 kernel with the original image
Let’s say that our input is an image is represented as a 5×5 array and the filter is a 3×3 kernel that will be convolved with the image. The result is the array termed Conv1 which is just another array where each cell is the dot product between the filter and the 3×3 subsections of the image. The numbers in color represent the values that the filter is centered on. Note that the convolution operation will result in an output that is smaller than the input and can result in a loss of information around the boundaries of the image. Zero padding, which constitutes adding border of zeros around the input array, can be used to preserve the input size. The kernel matrices are the mechanisms by which the CNN is able to identify underlying patterns. Figure 3 shows examples of what successive output from convolution layers, or feature maps, can look like.
Figure 3: Convolutional layer output for a CNN trained to distinguish between cats and dogs (Dertat, 2017)
The filters in the first convolutional layer of a CNN retain most of the information of the image, particularly edges. The brightest colors represent the most active pixels. The feature maps tend to become more abstract or focused on specific features as you move deeper into the network (Dertat, 2017). For example, Block 3 seems to be tailored to distinguish eyes.
The other key type of layer is a pooling layer. A pooling layer is added after convolution to reduce dimensionality, which both reduces the computational time to train (by reducing the number of parameters) and reduces the chances of overfitting. The most common type of pooling is max pooling, which returns the max value within an N×N pooling filter. This type of pooling retains the most active pixels in the feature map. As demonstrated in Figure 4, max pooling using a 2×2 filter with a stride (or shift) of 2 pixels reduces our Conv1 layer into a 2×2 lower dimensional matrix. One can also do average pooling instead of max pooling, which takes the average of the values in each 2×2 subsection of the Conv1 layer.
Figure 4: Max pooling example
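The max-pooling step is equally easy to reproduce; the sketch below applies a 2×2 window with a stride of 2 to a 4×4 feature map with placeholder values:

import numpy as np

feature_map = np.array([[1, 3, 2, 0],
                        [4, 6, 1, 2],
                        [5, 2, 8, 7],
                        [0, 1, 3, 4]])

# 2x2 max pooling with stride 2: reshape into 2x2 blocks and take the max of each
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[6 2]
               #  [5 8]]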
## Application to Regression
CNNs are easiest to understand and visualize for image applications, which provide a basis for thinking about how we can use CNNs in a regression or prediction application for time series. Let's use a very simple example of a rainfall-runoff problem that uses daily precipitation and temperature to predict outflow in an ephemeral sub-basin within the Tuolumne Basin. Because the creek in the sub-basin is ephemeral, it can dry up during the simulation period, leaving extended periods of zero flow, which can make predictions in the basin very difficult. Here, we also implement a lag, which allows us to account for the residence time of the basin and the fact that precipitation and temperature from preceding days likely contribute to predicting the outflow today. We use a lag of 18, meaning that we use the previous 18 values of precipitation and temperature to predict outflow. The CNN model is implemented within Keras in the code below.
# Import modules
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.models import Model
from keras.layers import Input, Dense, Flatten, Conv1D
from sklearn.model_selection import train_test_split
from tqdm import tqdm_notebook

os.chdir("C:/Users/Rohini/Documents/")

# Load the daily precipitation, temperature and outflow record
# (the file name here is a placeholder; point this at your own input data)
df_ge = pd.read_csv("tuolumne_subbasin_daily.csv")

# Check for nulls
print("checking if any null values are present\n", df_ge.isna().sum())

# Specify the training and label columns by their names
train_cols = ["Precipitation", "Temperature"]
label_cols = ["Outflow"]

# This function standardizes the input data (zero mean, unit variance)
def Normalization_Transform(x):
    x_mean = np.mean(x, axis=0)
    x_std = np.std(x, axis=0)
    xn = (x - x_mean) / x_std
    return xn, x_mean, x_std

# This function reverses the standardization
def inverse_Normalization_Transform(xn, x_mean, x_std):
    xd = (xn * x_std) + x_mean
    return xd

# Build timeseries data with the given number of timesteps (lags)
def timeseries(X, Y, Y_actual, time_steps, out_steps):
    input_size_0 = X.shape[0] - time_steps
    input_size_1 = X.shape[1]
    X_values = np.zeros((input_size_0, time_steps, input_size_1))
    Y_values = np.zeros((input_size_0,))
    Y_values_actual = np.zeros((input_size_0,))
    for i in tqdm_notebook(range(input_size_0)):
        X_values[i] = X[i:time_steps + i]                     # lagged predictors
        Y_values[i] = Y[time_steps + i - 1, 0]                # normalized target
        Y_values_actual[i] = Y_actual[time_steps + i - 1, 0]  # raw target
    print("length of time-series i/o", X_values.shape, Y_values.shape)
    return X_values, Y_values, Y_values_actual

# Chronological 80/20 train/test split (no shuffling for timeseries data)
df_train, df_test = train_test_split(df_ge, train_size=0.8, test_size=0.2, shuffle=False)
x_train = df_train.loc[:, train_cols].values
y_train = df_train.loc[:, label_cols].values
x_test = df_test.loc[:, train_cols].values
y_test = df_test.loc[:, label_cols].values

# Normalizing training data
x_train_nor, x_mean_train, x_std_train = Normalization_Transform(x_train)
y_train_nor, y_mean_train, y_std_train = Normalization_Transform(y_train)

# Normalizing test data
x_test_nor, x_mean_test, x_std_test = Normalization_Transform(x_test)
y_test_nor, y_mean_test, y_std_test = Normalization_Transform(y_test)

# Saving actual train and test y labels to calculate errors later, after training
y_train_actual = y_train
y_test_actual = y_test

# Building timeseries with an 18-day lag
X_Train, Y_Train, Y_train_actual = timeseries(x_train_nor, y_train_nor, y_train_actual, time_steps=18, out_steps=1)
X_Test, Y_Test, Y_test_actual = timeseries(x_test_nor, y_test_nor, y_test_actual, time_steps=18, out_steps=1)

# Define the CNN model: three 1D convolutional layers followed by a dense network
def make_model(X_Train):
    input_layer = Input(shape=(X_Train.shape[1], X_Train.shape[2]))
    conv1 = Conv1D(filters=16, kernel_size=2, strides=1, padding='same', activation='relu')(input_layer)
    conv2 = Conv1D(filters=32, kernel_size=3, strides=1, padding='same', activation='relu')(conv1)
    conv3 = Conv1D(filters=64, kernel_size=3, strides=1, padding='same', activation='relu')(conv2)
    flatten = Flatten()(conv3)
    dense1 = Dense(1152, activation='relu')(flatten)
    dense2 = Dense(576, activation='relu')(dense1)
    output_layer = Dense(1, activation='linear')(dense2)
    return Model(inputs=input_layer, outputs=output_layer)

model = make_model(X_Train)
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(X_Train, Y_Train, epochs=10)

# Prediction and back-transformation of the results
ypred = model.predict(X_Test)
predict = inverse_Normalization_Transform(ypred, y_mean_train, y_std_train)

# Plot results
plt.figure(figsize=(11, 7))
plt.plot(y_test)
plt.plot(predict)
plt.title('Outflow Prediction (Precipitation+Temperature, Epochs=10, Lag=18 days)')
plt.ylabel('Outflow (cfs)')
plt.xlabel('Day')
plt.legend(['Actual Values', 'Predicted Values'], loc='upper right')
plt.show()
Just as with any algorithm, we normalize the input data and split it into testing and training sets. The CNN model is implemented in Keras and consists of three convolutional layers with kernel sizes that are explicitly defined to extract patterns that are coupled across variables and time. A schematic of the setup is shown in Figure 5.
Figure 5: Convolution layer setup for the Tuolumne case
Layer 1 uses a 1D convolutional layer with 16 filters of size 1×2 in order to extract features and interactions across the precipitation and temperature time series as demonstrated in the top left of Figure 5. The result of this is an output layer of 1x18x16. The second convolution layer uses 32, 3×1 filters which now will further capture temporal interactions down the output column vector. The third layer uses 64, 3×1 filters to capture more complex temporal trends which is convolved with the output from the Conv2 layer. Note that zero padding is added (padding =”same” in the code) to maintain the dimensions of the layers. The three convolutional layers are followed by a flattening layer and a three-layer dense network. The CNN was run 20 times and the results from the last iteration are shown in Figure 6. We also compare to an LSTM that has an equivalent 3-layer setup and that is also run 20 times. The actual outflow is shown in blue while predictions are shown in red.
Figure 6: CNN vs LSTM prediction
For all intents and purposes, the visual comparison suggests that CNNs and LSTMs work equivalently, though the CNN was considerably faster to train. Notably, the CNN does a better job of capturing the large extremes recorded on day 100 and day 900, while still capturing the dynamics of the lower flow regime. While these results are preliminary and largely un-optimized, the CNN shows the ability to outperform an LSTM for a style of problem that it is not technically designed for. Using the specialized kernels, the CNN learns the interactions (both across variables and temporally) without needing a mechanism specifically designed for memory, such as a cell state in an LSTM. Furthermore, CNNs can take great advantage of additional speedups from GPUs, which do not always produce large gains in efficiency for LSTM training. For now, we can at least conclude that CNNs are fast and promising alternatives to LSTMs that you may not have considered before. Future blog posts will dive more into the capabilities of CNNs in problems with more input variables and complex interactions, particularly if there seems to be a benefit from CNNs in resolving complex relationships that help to predict extremes.
## References
Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195(1), 215-243.
Bai, S., Kolter, J. Z., & Koltun, V. (2018). An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271.
Rodrigues, N. M., Batista, J. E., Trujillo, L., Duarte, B., Giacobini, M., Vanneschi, L., & Silva, S. (2021). Plotting time: On the usage of CNNs for time series classification. arXiv preprint arXiv:2102.04179.
# MORDM Basics I: Synthetic Streamflow Generation
In this post, we will break down the key concepts underlying synthetic streamflow generation, and how it fits within the Many Objective Robust Decision Making (MORDM) framework (Kasprzyk et al., 2012). This post is the first in a series on MORDM which will begin here: with generating and validating the data used in the framework. To provide some context as to what we are about to attempt, please refer to this post by Jon Herman.
What is synthetic streamflow generation?
Synthetic streamflow generation is a non-parametric, direct statistical approach used to generate synthetic streamflow timeseries from a reasonably long historical record. It is used when there is a need to diversify extreme event scenarios, such as floods and droughts, or when we want to generate flows that reflect a shift in the hydrologic regime due to climate change. It is favored as it relies on a re-sampling of the historical record, preserves temporal correlation up to a certain degree, and results in a more realistic synthetic dataset. However, its dependence on a historical record also implies that this approach requires a relatively long historical inflow record. Jon Lamontagne's post goes into further detail regarding this approach.
Why synthetic streamflow generation?
An important step in the MORDM framework is scenario discovery, which requires multiple realistic scenarios to predict future states of the world (Kasprzyk et al., 2012). Depending solely on the historical dataset is insufficient; we need to generate multiple realizations of realistic synthetic scenarios to facilitate a comprehensive scenario discovery process. As an approach that uses a long historical record to generate synthetic data that has been found to preserve seasonal and annual correlation (Kirsch et al., 2013; Herman et al., 2016), this method provides us with a way to:
1. Fully utilize a large historical dataset
2. Stochastically generate multiple synthetic datasets while preserving temporal correlation
3. Explore many alternative climate scenarios by changing the mean and the spread of the synthetic datasets
The basics of synthetic streamflow generation in action
To better illustrate the inner workings of synthetic streamflow generation, it is helpful to use a test case. In this post, the historical dataset is obtained from the Research Triangle Region in North Carolina. The Research Triangle region consists of four main utilities: Raleigh, Durham, Cary and the Orange County Water and Sewer Authority (OWASA). These utilities receive their water supplies from four water sources: the Little River Reservoir, Lake Wheeler, Lake Benson, and Jordan Lake (Figure 1), and historical streamflow data is obtained from ten different stream gauges located at these water sources. For the purpose of this example, we will be using 81 years' worth of weekly streamflow data available here.
The statistical approach that drives synthetic streamflow generation is called the Kirsch Method (Kirsch et al., 2013). In plain language, this method does the following:
1. Convert the historical streamflows from real space to log space, and then standardize the log-space data.
2. Bootstrap the log-space historical matrix to obtain an uncorrelated matrix of historical data.
3. Compute the correlation matrix of the historical dataset and perform a Cholesky decomposition on it.
4. Impose the historical correlation upon the uncorrelated matrix obtained in (2) to generate a standardized synthetic dataset. This preserves seasonal correlation.
5. De-standardize the synthetic data and transform it back into real space.
6. Repeat steps (1) to (5) with a historical dataset that is shifted forward by 6 months (26 weeks). This preserves year-to-year correlation.
This post by Julie Quinn delves deeper into the Kirsch Method’s theoretical steps. The function that executes these steps can be found in the stress_dynamic.m Matlab file, which in turn is executed by the wsc_main_rate.m file by setting the input variable p = 0 as shown on Line 27. Both these files are available on GitHub here.
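For readers who prefer Python to MATLAB, a stripped-down, single-site sketch of steps (1) to (5) is shown below; this is my own paraphrase of the method, not the code in stress_dynamic.m, and it omits the shifted repetition in step (6):

import numpy as np

def kirsch_synthetic(Q_hist, n_syn_years, seed=0):
    # Q_hist: (n_years, 52) array of weekly historic streamflows for one site
    rng = np.random.default_rng(seed)
    # (1) log-transform and standardize by weekly mean and standard deviation
    Z = np.log(Q_hist)
    mu, sigma = Z.mean(axis=0), Z.std(axis=0)
    Z = (Z - mu) / sigma
    # (2) bootstrap historic weeks to build an uncorrelated matrix
    n_years, n_weeks = Z.shape
    C = np.empty((n_syn_years, n_weeks))
    for w in range(n_weeks):
        C[:, w] = Z[rng.integers(0, n_years, n_syn_years), w]
    # (3) Cholesky decomposition of the historic weekly correlation matrix
    #     (assumes the correlation matrix is positive definite)
    U = np.linalg.cholesky(np.corrcoef(Z, rowvar=False)).T
    # (4) impose the historic correlation on the bootstrapped matrix
    S = C @ U
    # (5) de-standardize and transform back to real space
    return np.exp(S * sigma + mu)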
However, this is where things get interesting. So far, steps (1) to (6) would simply have generated a synthetic dataset based only on historical statistical characteristics, as validated in Julie's second blog post on a similar topic. Of the three motivations for using synthetic streamflow generation, the third (exploration of multiple scenarios) has yet to be satisfied. This is a nice segue into our next topic:
Generating multiple scenarios using synthetic streamflow generation
The true power of synthetic streamflow generation lies in its ability to generate multiple climate (or in this case, streamflow) scenarios. This is done in stress_dynamic.m using three variables: p, n, and m.
These three variables bootstrap (increase the length of) the historical record while allowing us to perturb the historical streamflows to reflect an increase in the frequency or severity of extreme events such as floods and droughts, using the following equation:
new_hist_years = old_hist_years + [(p * old_hist_years) * n_i] + (old_hist_years - [(p * old_hist_years) * m_i])
The stress_dynamic.m file contains more explanation regarding this step.
This begs the question: how do we choose the value of p? This brings us to the topic of the standardized streamflow indicator (SSI6).
The SSI6 is the 6-month moving average of the standardized streamflows, used to determine the occurrence and severity of drought on the basis of duration and frequency (Herman et al., 2016). Put simply, this method flags a drought if the value of the SSI6 is below 0 continuously for at least 3 months, and the SSI6 drops below -1 at least once during the 6-month interval. The periods and severity (or lack thereof) of drought can then be observed, enabling the decision on the length of both the n and m vectors (which correspond to the number of perturbation periods, or climate event periods). We will not go into further detail regarding this method, but there are two important points to be made:
1. The SSI6 enables the determination of the frequency (likelihood) and severity of drought events in synthetic streamflow generation through the values contained in p, n and m.
2. This approach can be used to generate flood events by exchanging the values between the n and m vectors.
A good example of point (2) is this test case, in which more frequent and more severe floods were simulated by ensuring that most of the values in m were larger than those in n. Please refer to Jon Herman's 2016 paper titled 'Synthetic drought scenario generation to support bottom-up water supply vulnerability assessments' for further detail.
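To make the drought criterion above concrete, here is a minimal sketch of how the stated rule could be implemented for a weekly SSI6 series (my own simplified illustration, which checks for SSI6 < -1 within each sub-zero run rather than over a strict 6-month window):

import numpy as np

def drought_periods(ssi6, weeks_per_month=4):
    # ssi6: 1-D NumPy array of weekly SSI6 values
    # flag runs where SSI6 < 0 for at least 3 months and dips below -1 at least once
    droughts, start = [], None
    for t, value in enumerate(np.append(ssi6, 0.0)):   # sentinel closes any open run
        if value < 0 and start is None:
            start = t
        elif value >= 0 and start is not None:
            run = ssi6[start:t]
            if len(run) >= 3 * weeks_per_month and run.min() < -1:
                droughts.append((start, t))
            start = None
    return droughts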
A brief conceptual letup
Now we have shown how synthetic streamflow generation satisfies all three factors motivating its use. At this point we should have two output folders:
• synthetic-data-stat: contains the synthetic streamflows based on the unperturbed historical dataset
• synthetic-data-dyn: contains the synthetic streamflows based on the perturbed historical dataset
By comparing these two datasets, we can see how increasing the likelihood and severity of floods has affected the resulting synthetic data.
Validation
To exhaustively compare the statistical characteristics of the synthetic streamflow data, we will perform two forms of validation: visual and statistical. This method of validation is based on Julie’s post here.
Visual validation
This is done by generating flow duration curves (FDCs). Figure 2 below compares the unperturbed (left) and perturbed (right) synthetic datasets.
The bottom plots in Figure 2 show an increase in the volume of weekly flows, as well as a smaller return period, when the historical streamflows were perturbed to reflect an increasing frequency and magnitude of flood events. Together with the upper plots in Figure 2, this visually demonstrates that the synthetic streamflow generation approach (1) faithfully reconstructs historical streamflow patterns, (2) increases the range of possible streamflow scenarios and (3) can model multiple extreme climate event scenarios by perturbing the historical dataset. The file to generate this figure can be found in the plotFDCrange.py file.
Statistical validation
The mean and standard deviation of the perturbed and unperturbed historical datasets are compared to show if the perturbation resulted in significant changes in the synthetic datasets. Ideally, the perturbed synthetic data would have higher means and similar standard deviations compared to the unperturbed synthetic data.
The mean and tails of the synthetic streamflow values in the bottom plots of Figure 3 show that the mean and maximum values of the perturbed synthetic flows are significantly higher than the unperturbed values. In addition, the spread of the standard deviations of the perturbed synthetic streamflows is similar to that of the unperturbed counterpart. This demonstrates that synthetic streamflow generation can be used to synthetically change the occurrence and magnitude of extreme events while maintaining the periodicity and spread of the data. The file to generate Figure 3 can be found in weekly-moments.py.
Synthetic streamflow generation and internal variability
The generation of multiple unperturbed realizations of synthetic streamflow is vital for characterizing the internal variability of a system, otherwise known as the variability that arises from natural variations in the system (Lehner et al., 2020). As internal variability is intrinsic to the system, its effects cannot be eliminated, but they can be moderated. By evaluating multiple realizations, we can determine the number of realizations at which the internal variability (quantified here by the standard deviation as a function of the number of realizations) stabilizes. Using the synthetic streamflow data for Jordan Lake, it is shown that more than 100 realizations are required for the standard deviation of the 25% highest streamflows across all years to stabilize (Figure 4). Knowing this, we can generate sufficient synthetic realizations to render the effects of internal variability insignificant.
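A quick sketch of this kind of convergence check (my own illustration, not the code in internal-variability.py) is to compute the statistic of interest over the first k realizations and watch it stabilize as k grows:

import numpy as np

def convergence_curve(realizations, quantile=0.75):
    # realizations: (n_real, n_weeks) array of synthetic flows
    # returns, for k = 1..n_real, the standard deviation of the flows above
    # the given quantile, computed over the first k realizations
    stats = []
    for k in range(1, realizations.shape[0] + 1):
        subset = realizations[:k].ravel()
        high_flows = subset[subset >= np.quantile(subset, quantile)]
        stats.append(high_flows.std())
    return np.array(stats)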
The file internal-variability.py contains the code to generate the above figure.
How does this all fit within the context of MORDM?
So far, we have generated synthetic streamflow datasets and validated them. But how are these datasets used in the context of MORDM?
Synthetic streamflow generation lies within the domain of the second part of the MORDM framework, as shown in Figure 5 above. Specifically, synthetic streamflow generation plays an important role in the design of experiments by preserving the effects of deeply uncertain factors that cause natural events. As MORDM requires multiple scenarios to reliably evaluate all possible futures, this approach enables the simulation of multiple scenarios while concurrently increasing the severity or frequency of extreme events in increments set by the user. This will allow us to evaluate how coupled human-natural systems change over time given different scenarios, and the consequences for the robustness of the system being evaluated (in this case, the Research Triangle).
Typically, this evaluation is performed in two main steps:
1. Generation and evaluation of multiple realizations of unperturbed annual synthetic streamflow. The resulting synthetic data is used to generate the Pareto optimal set of policies. This step can help us understand how the system’s internal variability affects future decision-making by comparing it with the results in step (2).
2. Generation and evaluation of multiple realizations of perturbed annual synthetic streamflow. These are the more extreme scenarios in which the previously-found Pareto-optimal policies will be evaluated against. This step assesses the robustness of the base state under deeply uncertain deviations caused by the perturbations in the synthetic data and other deeply uncertain factors.
Conclusion
Overall, synthetic streamflow generation is an approach that is highly applicable in the bottom-up analysis of a system. It preserves historical characteristics of a streamflow timeseries while providing the flexibility to modify the severity and frequency of extreme events in the face of climate change. It also allows the generation of multiple realizations, aiding in the characterization and understanding of a system’s internal variability, and a more exhaustive scenario discovery process.
This summarizes the basics of data generation for MORDM. In my next blog post, I will introduce risk-of-failure (ROF) triggers, their background, key concepts, and how they are applied within the MORDM framework.
## References
Herman, J. D., Reed, P. M., Zeff, H. B., & Characklis, G. W. (2015). How should robustness be defined for water systems planning under change? Journal of Water Resources Planning and Management, 141(10), 04015012. doi:10.1061/(asce)wr.1943-5452.0000509
Herman, J. D., Zeff, H. B., Lamontagne, J. R., Reed, P. M., & Characklis, G. W. (2016). Synthetic drought scenario generation to support bottom-up water supply vulnerability assessments. Journal of Water Resources Planning and Management, 142(11), 04016050. doi:10.1061/(asce)wr.1943-5452.0000701
Kasprzyk, J. R., Nataraj, S., Reed, P. M., & Lempert, R. J. (2013). Many objective robust decision making for complex environmental systems undergoing change. Environmental Modelling & Software, 42, 55-71. doi:10.1016/j.envsoft.2012.12.007
Kirsch, B. R., Characklis, G. W., & Zeff, H. B. (2013). Evaluating the impact of alternative hydro-climate scenarios on transfer agreements: Practical improvement for generating synthetic streamflows. Journal of Water Resources Planning and Management, 139(4), 396-406. doi:10.1061/(asce)wr.1943-5452.0000287
Mankin, J. S., Lehner, F., Coats, S., & McKinnon, K. A. (2020). The value of initial condition large ensembles to Robust Adaptation Decision‐Making. Earth’s Future, 8(10). doi:10.1029/2020ef001610
Trindade, B., Reed, P., Herman, J., Zeff, H., & Characklis, G. (2017). Reducing regional drought vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty. Advances in Water Resources, 104, 195-209. doi:10.1016/j.advwatres.2017.03.023
# How to make horizon plots in Python
Horizon plots were invented about a decade ago to facilitate visual comparison between two time series. They are not intuitive to read right away, but they are great for comparing and presenting many sets of timeseries together. They can take advantage of a minimal design by avoiding titles and ticks on every axis and packing them close together to convey a bigger picture. The example below shows percent changes in the price of various food items in 25 years.
The way they are produced and read is by dividing the values along the y axis into bands based on ranges. The color of each band is given by a divergent colormap. By collapsing the bands to the zero axis and layering the higher bands on top, one can create a time-varying heatmap of sorts.
I wasn’t able to find a script that could produce this in Python, besides some code in this github repository, that is about a decade old and cannot really run in Python 3. I cleaned it up and updated the scripts with some additional features. I also added example data comparing USGS streamflow data with model simulation data for the same locations for 38 years. The code can be found here and can be used with any two datasets that one would like to compare with as many points of comparison as needed (I used eight below, but the script can accept larger csv files with more or less comparison points, which will be detected automatically). The script handles the transformation of the data to uniform bands and produces the following figure, with every subplot comparing model output with observations at eight gauges, i.e. model prediction error. When the model is over predicting the area is colored blue, when the area is underpredicting, the area is colored red. Darker shades indicate further divergence from the zero axis. The script automatically uses three bands for both positive or negative divergence, but more can be added, as long as the user defines additional colors to be used.
Using this type of visualization for these data allows for time-varying comparisons of multiple locations in the same basin. The benefit of it is most exploited with many subplots that make up a bigger picture.
Future extensions in this repository will include code to accept more file types than csv, more flexibility in how the data is presented and options to select different colormaps when executing.
# From MATLAB to Julia: Insights from Translating an Opensource Kirsch-Nowak Streamflow Generator to Julia
## A quick look into translating code: speed comparisons, practicality, and comments
As I am becoming more and more familiar with Julia, an open-source programming language, I've been interested in translating code to it, both to run it in a free, open-source environment and to test its performance. Since Julia was designed to handle matrix operations efficiently compared to other high-level open-source languages, finding a problem that exercises these performance advantages only makes sense.
As with any new language, understanding how well it performs relative to the other potential tools in your toolbox is vital. As such, I decided to use a problem that is easily scalable and allows a direct comparison of the performance of MATLAB and Julia: the Kirsch-Nowak synthetic stationary streamflow generator.
So, in an effort to sharpen my understanding of the Kirsch-Nowak synthetic stationary streamflow generator created by Matteo Giuliani, Jon Herman and Julianne Quinn, I decided to take on the project of converting this generator from MATLAB. This specific generator takes in historical streamflow data from multiple sites (while assuming stationarity) and returns a synthetically generated daily time series of streamflow. For a great background on synthetic streamflow generation, please refer to this post by Jon Lamontagne.
### Model Description
The example, borrowed from Julie's code, utilizes data on Susquehanna River flows (cfs) at both Marietta (USGS station 01576000) and Muddy Run, along with lateral inflows (cfs) between Marietta and Conowingo Dam (1932-2001). Additionally, evaporation rates (in/day) over the Conowingo and Muddy Run dams (from an OASIS model simulation) are utilized. The generator developed by Kirsch et al. (2013) uses a Cholesky decomposition to create a monthly synthetic record which preserves the autocorrelation structure of the historical data. The method proposed by Nowak et al. (2010) is then used to disaggregate to daily flows (using a historical month +/- 7 days). A full description of the methods can be found at this link.
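For readers unfamiliar with the Cholesky step, here is a toy Python sketch (with an invented correlation matrix, not the Kirsch et al. generator itself) of how a Cholesky factor can impose a target correlation structure on independent standard-normal draws; the actual generator does considerably more than this.

import numpy as np

corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.6],
                 [0.3, 0.6, 1.0]])        # assumed correlation between three "months"
U = np.linalg.cholesky(corr)              # lower-triangular factor, corr = U @ U.T
Z = np.random.standard_normal((1000, 3))  # independent standard-normal draws
C = Z @ U.T                               # rows now have (approximately) the target correlation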
## Comparing Julia and MATLAB
### Comparison of Performance between Julia and MATLAB
To compare the speeds of each language, I adapted the MATLAB code into Julia (shown here) on as nearly equal a basis as possible. I attempted to keep the loops, data structures, and function formulations as similar as possible, even calling similar libraries for any given function.
When examining the performance of Julia (solid lines) and MATLAB (dashed lines), there is only one instance where MATLAB (x) outperformed Julia (+): the 10-realization, 1000-year simulation shown in the yellow dots in the upper left. Needless to say, Julia easily outperformed MATLAB in all other situations and required only 53% of the time on average (all simulations considered equal). Julia was also proportionally much faster than MATLAB at lower numbers of simulated years (17-35% of the time required). This is likely because I did not handle arrays optimally; the code could likely be sped up even more.
### Considerations for Speeding Up Code
#### Row- Versus Column-Major Array Architecture
It is worth knowing how a specific language stores and traverses its arrays/matrices. MATLAB and Julia are both column-major languages, meaning the elements of a column are contiguous in memory, so sequential access should move down each column before moving on to the next one. On the other hand, NumPy in Python uses row-major ordering by default. The Wikipedia article on this is brief but well worthwhile for understanding these quirks.
This is especially notable because ensuring that proper indexing and looping order are followed can substantially speed up code. In fact, it is likely that Julia slowed down significantly on the 10-realization, 1000-year simulation, relative to both its other runs and MATLAB, because of how the arrays were looped through. As a direct example, exponentiating a [20000, 20000] array row by row took approximately 47.7 seconds, while doing the same operation column by column took only 12.7 seconds.
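The same effect can be reproduced in Python, keeping in mind that NumPy is row-major by default, so the fast direction is the opposite of Julia and MATLAB. A rough timing sketch (with a smaller array than the one above to keep memory use modest) might look like this:

import time
import numpy as np

A = np.random.rand(5000, 5000)

start = time.time()
for i in range(A.shape[0]):
    A[i, :] = A[i, :] ** 2   # rows are contiguous in NumPy's default (row-major) layout
print('row-wise:    %.2f s' % (time.time() - start))

start = time.time()
for j in range(A.shape[1]):
    A[:, j] = A[:, j] ** 2   # columns are strided, so this is typically slower
print('column-wise: %.2f s' % (time.time() - start))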
#### Dealing with Arrays
Simply put, arrays and matrices in Julia are a pain compared to MATLAB. As an example of the bad and the ugly: unlike in MATLAB, where you can directly declare an array of any size you wish to work with, in Julia you must first create an outer array and then fill it with individual arrays. This is shown below, where an array of arrays is initialized. However, once an array is established, Julia is extremely fast in loops, so filling a previously established array makes for a much faster experience.
# initialize output
qq = Array{Array}(undef, num_sites) #(4, 100, 1200)
for i = 1:num_sites
qq[i] = Array{Float64}(undef, nR, nY * 12)
end
On the plus side, when creating arrays, Julia is extremely powerful in its ability to assign variable types to the components of a given array, which can drastically speed up your code. Shown below, it is easy to see the range of declarations and assignments being made to populate arrays. There's an example of declaring an array with zeros, and another where we're populating an array using slices of another. Note the indexing structure for Qd_cg in the second loop: it is not technically a 3-D array but rather a 2-D array nested within a 1-D array, which illustrates the issues mentioned above.
delta = zeros(n_totals)
for i = 1:n_totals
for j = 1:n_sites
delta[i] += (Qtotals[month][j][i] - Z[j]) ^ 2
end
end
q_ = Array{Float64, 2}(undef, num_realizations[k], 365 * num_years[k])
for i = 1:Nsites
# put into array of [realizations, 365*num_yrs]
for j = 1:num_realizations[k]
q_[j, :] = Qd_cg[j][:, i]'
end
end
#### Code Profiling: Order of Experiments
An interesting observation is that Julia's first run of a given block of code is substantially slower than every subsequent run, because the code is compiled on first use. Thus, it is likely worthwhile to run a smaller-scale problem through first to compile the code if there are plans to move on to substantially more expensive operations (i.e., scaling up).
In the example below, we can see that the second iteration of the exact same code was over 10% faster than the first. However, when running the code without the function wrapper (in the original timed runs), the code was 10% faster (177 seconds) than the second sequential run shown below. This points to the importance of profiling and experimenting with sections of your code.
Basic profiling tools are built directly into Julia, as shown in the Julia profiling documentation. The results can be visualized easily using the ProfileView library. The Juno IDE (standard with Julia Pro) allegedly has a good built-in profiler as well, but it should be expected that most any IDE will do the trick (links to IDEs can be found here).
#### Syntax and Library Deprecation
While Julia is very similar in structure and language to MATLAB, much of the similar syntax has been deprecated as Julia has been rapidly upgraded. Notably, Julia released v1.0 in late 2018 and recently released v1.1, moving further away from similarities in function names. Thus, this stands as a lesson for individuals wishing to translate all of their code between these languages. I found a useful website that assists in translating general syntax, but many of the functions it lists have been deprecated. However, as someone who didn't have any experience with MATLAB but was vaguely familiar with Julia, this was a godsend for learning the differences in coding styles.
For example, creating an identity matrix in MATLAB uses the function eye(size(R)) to create an n-by-n matrix the size of R. While this syntax initially existed in Julia as well, it was deprecated in v0.7. To get around this, either 'I' can be used to create a scalable identity matrix, or Matrix{Float64}(I, size(R), size(R)) can be used to declare an identity matrix of size(R) by size(R) for a more foolproof and faster operation.
When declaring functions, I have found Julia to be relatively straightforward and Pythonic. While I still find myself inserting colons at the ends of declarations and forgetting to add 'end' at the end of functions, loops, and more, the ease of creating, calling, and interacting with functions makes Julia very accessible. Furthermore, its ability to work with matrices without special libraries (e.g., NumPy in Python) allows for more efficient coding without having to know specific library notation.
#### Debugging Drawbacks
One of the most significant drawbacks I run into when using Julia is the lack of clarity in generated error codes for common mistakes, such as adding extra brackets. For example, the following error code is generated in Python when adding an extra parenthesis at the end of an expression.
However, Julia produces the following error for an identical mistake:
One simple solution to this is to upgrade my development environment from Jupyter Notebooks to a general IDE, which makes it easier to root out issues by running code line by line. Still, I see the lack of clarity in showing where specific errors arise as a significant drawback to development in Julia. That said, as shown in the example below, where an array has gone awry, an IDE (such as Atom, shown below) can make troubleshooting and debugging a relative breeze.
Furthermore, when editing auxiliary functions in another file or module that was loaded as a library, Julia is not kind enough to simply reload and recompile the module; to get it to work properly in Atom, I had to shut down the Julia kernel and then rerun the entirety of the code. Since Julia takes a while to initially load and compile libraries and code, this slows down the debugging process substantially. There is a specific package (Revise) that exists to take care of this issue, but it is not standard and requires loading this specific library into your code.
## GitHub Repositories: Streamflow Generators
PyMFGM: A parallelized Python version of the code, written by Bernardo Trindade
Kirsch-Nowak Stationary Generator in Julia: Please note that the results are not validated. However, you can easily access the Jupyter Notebook version to play around with the code in addition to running the code from your terminal using the main.jl script.
Full Kirsch-Nowak Streamflow Generator: Also developed by Matteo Giuliani, Jon Herman and Julianne Quinn; it can also handle rescaling flows for changes due to monsoons. I would highly suggest diving into this code alongside the relevant blog posts: Part 1 (explanation), Part 2 (validation).
# Magnitude-varying sensitivity analysis and visualization (Part 2)
In my last post, I talked about producing these flow-duration-curve-type figures for an output time-series one might be interested in, and talked about their potential use in an exploratory approach for the purpose of robust decision making. Again, the codes to perform the analysis and visualization are in this Github repository.
Fig. 1: Historical data vs. range of experiment outputs
As already discussed, there are multiple benefits to visualizing the output in this manner: we are often concerned with the levels and frequencies of extremes when making decisions about systems (e.g. "how bad is the worst case?", "how rare is the worst case?"), or we might like to know how often we exceed a certain threshold (e.g. "how many years exceed an annual shortage of 1000 af?"). The various percentiles each tell a different part of the story of how a system operates: the 5th percentile tells us that its level is exceeded 95% of the time, while the 99th tells us that its level is only reached once in every 100 years in our records. These might seem obvious to the readers of this blog, but oftentimes we perform our analyses for only some of these percentiles ("the worst event", "the average", etc.), which is certainly very informative, but can potentially miss part of the bigger picture.
In this post I'm going to walk the reader through performing a sensitivity analysis using the output of an experiment with multiple Latin Hypercube Samples. The analysis will be magnitude-varying, i.e., it will be performed at different magnitudes of our output of interest. For this particular example, we aim to see what the most significant drivers of shortage are at the different levels this user experiences it. In other words, if some factors appear to be driving the frequent small shortages experienced, are those factors the same for the rare large shortages?
To perform the sensitivity analysis, I am going to use SALib (featured in this blog multiple times already) to perform a Delta Moment-Independent Analysis [1] (which also produces a first-order Sobol sensitivity index [2]). You'll probably need to install SALib if it's not a package you've used already. I'm also going to use statsmodels to perform a simple linear regression on the outputs and look at their R2 values. But why, you might ask, perform not one, not two, but three sensitivity analyses? There are nuanced, yet potentially important, differences between what the three methods capture:
• Delta method: looks for parameters most significantly affecting the density function of observed shortages. This method is moment-independent, i.e., it looks at differences in the entire distribution of the output we're interested in.
• First-order Sobol (S1): looks for parameters that most significantly affect the variance of observed outputs, including non-linear effects.
• R2: looks for parameters best able to describe the variance of observed outputs, limited to linear effects.
Another important thing to note is that with the first-order Sobol index, the total normalized variance attributed to the parameters should equal 1. This means that if we sum up the S1's we get from our analysis, the sum represents the fraction of the output variance described by the first-order effects of our parameters, leaving whatever is left to interactions between our variables (which S1 cannot capture). The same holds for R2, as we are repeatedly fitting each parameter and scoring it on how much of the output variance it describes as a sole linear predictor (with no interactions or other relationships).
The following Python script will produce all three, as well as confidence intervals for the Delta index and S1. The script essentially loops through all percentiles in the time series and performs the analyses for each one. In other words, we are looking at how sensitive each magnitude percentile is to each of the sampled parameters.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from SALib.analyze import delta

# Load parameter samples
LHsamples = np.loadtxt('./LHsamples.txt')
params_no = len(LHsamples[0,:])
param_bounds=np.loadtxt('./uncertain_params.txt', usecols=(1,2))

# Parameter names
param_names=['IWRmultiplier','RESloss','TBDmultiplier','M_Imultiplier',
             'Shoshone','ENVflows','EVAdelta','XBM_mu0','XBM_sigma0',
             'XBM_mu1','XBM_sigma1','XBM_p00','XBM_p11']

# Define problem class
problem = {
    'num_vars': params_no,
    'names': param_names,
    'bounds': param_bounds.tolist()
}

# Percentiles for analysis to loop over
percentiles = np.arange(0,100)

# Function to fit regression with Ordinary Least Squares using statsmodels
def fitOLS(dta, predictors):
    # concatenate intercept column of 1s
    dta['Intercept'] = np.ones(np.shape(dta)[0])
    # get columns of predictors
    cols = dta.columns.tolist()[-1:] + predictors
    # fit OLS regression
    ols = sm.OLS(dta['Shortage'], dta[cols])
    result = ols.fit()
    return result

# Create empty dataframes to store results
DELTA = pd.DataFrame(np.zeros((params_no, len(percentiles))), columns = percentiles)
DELTA_conf = pd.DataFrame(np.zeros((params_no, len(percentiles))), columns = percentiles)
S1 = pd.DataFrame(np.zeros((params_no, len(percentiles))), columns = percentiles)
S1_conf = pd.DataFrame(np.zeros((params_no, len(percentiles))), columns = percentiles)
R2_scores = pd.DataFrame(np.zeros((params_no, len(percentiles))), columns = percentiles)
DELTA.index=DELTA_conf.index=S1.index=S1_conf.index = R2_scores.index = param_names

# Read in experiment data
expData = np.loadtxt('./experiment_data.txt')

# Identify magnitude at each percentile
syn_magnitude = np.zeros([len(percentiles),len(LHsamples[:,0])])
for j in range(len(LHsamples[:,0])):
    syn_magnitude[:,j]=[np.percentile(expData[:,j], i) for i in percentiles]

# Delta Method analysis
for i in range(len(percentiles)):
    if syn_magnitude[i,:].any():
        try:
            result= delta.analyze(problem, LHsamples, syn_magnitude[i,:], print_to_console=False, num_resamples=2)
            DELTA[percentiles[i]]= result['delta']
            DELTA_conf[percentiles[i]] = result['delta_conf']
            S1[percentiles[i]]=result['S1']
            S1_conf[percentiles[i]]=result['S1_conf']
        except:
            pass

S1.to_csv('./S1_scores.csv')
S1_conf.to_csv('./S1_conf_scores.csv')
DELTA.to_csv('./DELTA_scores.csv')
DELTA_conf.to_csv('./DELTA_conf_scores.csv')

# OLS regression analysis
dta = pd.DataFrame(data = LHsamples, columns=param_names)
for i in range(len(percentiles)):
    shortage = np.zeros(len(LHsamples[:,0]))
    for k in range(len(LHsamples[:,0])):
        shortage[k]=syn_magnitude[i,k]
    dta['Shortage']=shortage
    for m in range(params_no):
        predictors = dta.columns.tolist()[m:(m+1)]
        result = fitOLS(dta, predictors)
        R2_scores.at[param_names[m],percentiles[i]]=result.rsquared
R2_scores.to_csv('./R2_scores.csv')
The script produces the sensitivity analysis indices for each magnitude percentile and stores them as .csv files.
I will now present a way of visualizing these outputs, using the curves from Fig. 1 as context. The code below reads in the values for each sensitivity index, normalizes them to the range of magnitude at each percentile, and then plots them using matplotlib's stackplot function, which stacks the contribution of each parameter up to their sum (in this case the maximum of the resulting range).
I’ll go through what the code does in more detail:
First, we take the range boundaries (globalmax and globalmin), which give us the max and min values for each percentile. We then read in the values for each sensitivity index and normalize them to that range (i.e., globalmax - globalmin for each percentile). The script also adds two more arrays (rows in the pandas dataframe), one representing interaction and one representing the globalmin, upon which we're going to stack the rest of the values. [Note: this is a bit of a roundabout way of getting the figures how we like them, but it essentially creates a pseudo-stack for the globalmin, which we plot in white.]
The interaction array is only used when normalizing the S1 and R2 values, where we attribute to it the difference between 1 and the sum of the calculated indices (i.e. we’re attributing the rest to interaction between the parameters). We don’t need to do this for the delta method indices (if you run the code the array remains empty), but the reason I had to put it there was to make it simpler to create labels and a single legend later.
The plotting simply creates three subplots and for each one uses stackplot to plot the normalized values and then the edges in black. It is important to note that the colorblocks in each figure do not represent the volume of shortage attributed to each parameter at each percentile, but rather the contribution of each parameter to the change in the metric, namely, the density distribution (Delta Method), and the variance (S1 and R2). The code for this visualization is provided at the bottom of the post.
Fig. 2: Magnitude sensitivity curves using three sensitivity indices
The first thing that pops out from this figure is the large blob of peach, which represents the irrigation demand multiplier in our experiment. The user of interest here was an irrigation user, which would suggest that their shortages are primarily driven by increases in their own demands and those of other irrigation users. This is important, because irrigation demand is an uncertainty over which we could potentially have direct or indirect control, e.g. through conservation efforts.
Looking at the other factors, performing the analysis in a magnitude-varying manner allowed us to explore the vulnerabilities of this metric across its different levels. For example, dark blue and dark green represent the mean flow of dry and wet years, respectively. Across the three figures we can see that the contribution of mean wet-year flow is larger in the low-magnitude percentiles (left hand side) and diminishes as we move towards the larger-magnitude percentiles.
Another thing that I thought was interesting to note was the difference between the S1 and the R2 plots. They are both variance-based metrics, with R2 limited to linear effects in this case. In this particular case, the plots are fairly similar which would suggest that a lot of the parameter effects on the output variance are linear. Larger differences between the two would point to non-linearities between changes in parameter values and the output.
The code to produce Fig. 2:
# Percentiles for analysis to loop over
percentiles = np.arange(0,100)

# Estimate upper and lower bounds
globalmax = [np.percentile(np.max(expData_sort[:,:],1),p) for p in percentiles]
globalmin = [np.percentile(np.min(expData_sort[:,:],1),p) for p in percentiles]

delta_values = pd.read_csv('./DELTA_scores.csv')
delta_values.set_index(list(delta_values)[0],inplace=True)
delta_values = delta_values.clip(lower=0)
bottom_row = pd.DataFrame(data=np.array([np.zeros(100)]), index= ['Interaction'], columns=list(delta_values.columns.values))
top_row = pd.DataFrame(data=np.array([globalmin]), index= ['Min'], columns=list(delta_values.columns.values))
delta_values = pd.concat([top_row,delta_values.loc[:],bottom_row])
for p in range(len(percentiles)):
    total = np.sum(delta_values[str(percentiles[p])])-delta_values.at['Min',str(percentiles[p])]
    if total!=0:
        for param in param_names:
            value = (globalmax[p]-globalmin[p])*delta_values.at[param,str(percentiles[p])]/total
            delta_values.set_value(param,str(percentiles[p]),value)
delta_values = delta_values.round(decimals = 2)
delta_values_to_plot = delta_values.values.tolist()

S1_values = pd.read_csv('./S1_scores.csv')
S1_values.set_index(list(S1_values)[0],inplace=True)
S1_values = S1_values.clip(lower=0)
bottom_row = pd.DataFrame(data=np.array([np.zeros(100)]), index= ['Interaction'], columns=list(S1_values.columns.values))
top_row = pd.DataFrame(data=np.array([globalmin]), index= ['Min'], columns=list(S1_values.columns.values))
S1_values = pd.concat([top_row,S1_values.loc[:],bottom_row])
for p in range(len(percentiles)):
    total = np.sum(S1_values[str(percentiles[p])])-S1_values.at['Min',str(percentiles[p])]
    if total!=0:
        diff = 1-total
        S1_values.set_value('Interaction',str(percentiles[p]),diff)
        for param in param_names+['Interaction']:
            value = (globalmax[p]-globalmin[p])*S1_values.at[param,str(percentiles[p])]
            S1_values.set_value(param,str(percentiles[p]),value)
S1_values = S1_values.round(decimals = 2)
S1_values_to_plot = S1_values.values.tolist()

R2_values = pd.read_csv('./R2_scores.csv')
R2_values.set_index(list(R2_values)[0],inplace=True)
R2_values = R2_values.clip(lower=0)
bottom_row = pd.DataFrame(data=np.array([np.zeros(100)]), index= ['Interaction'], columns=list(R2_values.columns.values))
top_row = pd.DataFrame(data=np.array([globalmin]), index= ['Min'], columns=list(R2_values.columns.values))
R2_values = pd.concat([top_row,R2_values.loc[:],bottom_row])
for p in range(len(percentiles)):
    total = np.sum(R2_values[str(percentiles[p])])-R2_values.at['Min',str(percentiles[p])]
    if total!=0:
        diff = 1-total
        R2_values.set_value('Interaction',str(percentiles[p]),diff)
        for param in param_names+['Interaction']:
            value = (globalmax[p]-globalmin[p])*R2_values.at[param,str(percentiles[p])]
            R2_values.set_value(param,str(percentiles[p]),value)
R2_values = R2_values.round(decimals = 2)
R2_values_to_plot = R2_values.values.tolist()

color_list = ["white", "#F18670", "#E24D3F", "#CF233E", "#681E33", "#676572", "#F3BE22", "#59DEBA", "#14015C", "#DAF8A3", "#0B7A0A", "#F8FFA2", "#578DC0", "#4E4AD8", "#F77632"]

fig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(14.5,8))
ax1.stackplot(percentiles, delta_values_to_plot, colors = color_list, labels=parameter_names_long)
l1 = ax1.plot(percentiles, globalmax, color='black', linewidth=2)
l2 = ax1.plot(percentiles, globalmin, color='black', linewidth=2)
ax1.set_title("Delta index")
ax1.set_xlim(0,100)
ax2.stackplot(np.arange(0,100), S1_values_to_plot, colors = color_list, labels=parameter_names_long)
ax2.plot(percentiles, globalmax, color='black', linewidth=2)
ax2.plot(percentiles, globalmin, color='black', linewidth=2)
ax2.set_title("S1")
ax2.set_xlim(0,100)
ax3.stackplot(np.arange(0,100), R2_values_to_plot, colors = color_list, labels=parameter_names_long)
ax3.plot(percentiles, globalmax, color='black', linewidth=2)
ax3.plot(percentiles, globalmin, color='black', linewidth=2)
ax3.set_title("R^2")
ax3.set_xlim(0,100)
handles, labels = ax3.get_legend_handles_labels()
ax1.set_ylabel('Annual shortage (af)', fontsize=12)
ax2.set_xlabel('Shortage magnitude percentile', fontsize=12)
ax1.legend((l1), ('Global ensemble',), fontsize=10, loc='upper left')
fig.legend(handles[1:], labels[1:], fontsize=10, loc='lower center',ncol = 5)
plt.subplots_adjust(bottom=0.2)
fig.savefig('./experiment_sensitivity_curves.png')
References:
[1]: Borgonovo, E. “A New Uncertainty Importance Measure.” Reliability Engineering & System Safety 92, no. 6 (June 1, 2007): 771–84. https://doi.org/10.1016/j.ress.2006.04.015.
[2]: Sobol, I. M. (2001). “Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates.” Mathematics and Computers in Simulation, 55(1-3):271-280, doi:10.1016/S0378-4754(00)00270-6.
# Magnitude-varying sensitivity analysis and visualization (Part 1)
Various posts have discussed sensitivity analysis and techniques in this blog before. The purpose of this post is to show an application of the methods and demonstrate how they can be used in an exploratory manner, for the purposes of robust decision making (RDM). RDM aims to evaluate the performance of a policy/strategy/management plan over an ensemble of deeply uncertain parameter combinations – commonly referred to as “states of the world” (SOWs) – and then identify the policies that are most robust to those uncertainties. Most importantly, this process allows the decision maker to examine the implications of their assumptions about the world (or how it will unfold) on their candidate strategies [1].
This is Part 1 of a two part post. In this first post, I’ll introduce the types of figures I’ll be talking about, and some visualization code. In the second post (up in a couple days), I’ll discuss sensitivity analysis for the system as well as some visuals. All the code and data to produce the figures below can be found in this repository.
Now assume the performance of a system is described by a time-series, produced by our model as an output. This might be a streamflow we care about, reservoir releases, nutrient loading, or any type of time-series produced by a run of our model. For the purposes of this example, I’ll use a time-series from the system I’ve been working on, which represents historical shortages for an agricultural user.
Fig. 1: Historical data in series
We can sort and rank these data, in the style of a flow duration curve, which allows us to easily see levels for the median shortage (50th percentile), the worst (99th), etc. The reasons one might care about these things (instead of, say, just looking at the mean, or at the time series as presented in Fig. 1) are multiple: we are often concerned with the levels and frequencies of our extremes when making decisions about systems (e.g. "how bad is the worst case?", "how rare is the worst case?"), we might like to know how often we exceed a certain threshold (e.g. "how many years exceed an annual shortage of 1000 af?"), or, simply, we want to maintain the distributional information of the series we care about in an easily interpretable format.
Fig. 2: Historical data sorted by percentile
For the purposes of an exploratory experiment, we would like to see how this time-series of model output might change under different conditions (or SOWs). There are multiple ways one might go about this [2], and in this study we sampled a broad range of parameters that we thought would potentially affect the system using Latin Hypercube Sampling [3], producing 1000 parameter combinations. We then re-simulated the system and saved all equivalent outputs for this time-series. We would like to see how this output changes under all the sampled runs.
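For reference, a minimal sketch of generating such a sample with SALib's Latin Hypercube sampler might look like the following; it assumes the same problem dictionary defined in the script of Part 2.

import numpy as np
from SALib.sample import latin

# 'problem' is the dictionary of parameter names and bounds defined in the Part 2 script
LHsamples = latin.sample(problem, 1000)   # 1000 parameter combinations
np.savetxt('./LHsamples.txt', LHsamples)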
Fig. 3: Historical data vs. experiment outputs (under 1000 SOWs)
Another way of visualizing this information, if we’re not interested in seeing all the individual lines, is to look at the range of outputs. To produce Fig. 4, I used the fill_between function in matplotlib, filling between the max and min values at each percentile level.
Fig. 4: Historical data vs. range of experiment outputs
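A minimal sketch of that fill_between call, assuming the sorted experiment outputs expData_sort, the sorted historical record hist_sort, and the percentile levels P used in the plotting code further down (with numpy and matplotlib already imported), could be:

fig, ax = plt.subplots()
# shade between the minimum and maximum experiment output at each percentile
ax.fill_between(P, np.min(expData_sort, 1), np.max(expData_sort, 1), color='#4286f4', alpha=0.3, label='Range of experiment outputs')
ax.plot(P, hist_sort, c='black', linewidth=2, label='Historical record')
ax.set_xlabel('Shortage magnitude percentile', fontsize=12)
ax.legend()
plt.savefig('experiment_data_range.png')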
By looking at the individual lines or the range, there’s one piece of potentially valuable information we just missed. We have little to no idea of what the density of outputs is within our experiment. We can see the max and min range, the lines thinning out at the edges, but it’s very difficult to infer any density of output within our samples. To address this, I’ve written a little function that loops through 10 frequency levels (you can also think of them as percentiles) and uses the fill_between function again. The only tricky thing to figure out was how to appropriately represent each layer of increasing opacity in the legend – they are all the same color and transparency, but become darker as they’re overlaid. I pulled two tricks for this. First, I needed a function that calculates the custom alpha, or the transparency, as it is not cumulative in matplotlib (e.g., two objects with transparency 0.2 together will appear as a single object with transparency 0.36).
def alpha(i, base=0.2):
    l = lambda x: x+base-x*base
    ar = [l(0)]
    for j in range(i):
        ar.append(l(ar[-1]))
    return ar[-1]
Second, I needed proxy artists representing the color at each layer. These are the handles in the code below, produced with every loop iteration.
handles = []
labels=[]
fig = plt.figure()
ax=fig.add_subplot(1,1,1)
for i in range(len(p)):
    ax.fill_between(P, np.min(expData_sort[:,:],1), np.percentile(expData_sort[:,:], p[i], axis=1), color='#4286f4', alpha = 0.1)
    ax.plot(P, np.percentile(expData_sort[:,:], p[i], axis=1), linewidth=0.5, color='#4286f4', alpha = 0.3)
    handle = matplotlib.patches.Rectangle((0,0),1,1, color='#4286f4', alpha=alpha(i, base=0.1))
    handles.append(handle)
    label = "{:.0f} %".format(100-p[i])
    labels.append(label)
ax.plot(P,hist_sort, c='black', linewidth=2, label='Historical record')
ax.set_xlim(0,100)
ax.legend(handles=handles, labels=labels, framealpha=1, fontsize=8, loc='upper left', title='Frequency in experiment',ncol=2)
ax.set_xlabel('Shortage magnitude percentile', fontsize=12)
plt.savefig('experiment_data_density.png')
Fig. 5: Historical data vs. frequency of experiment outputs
This allows us to draw some conclusions about how events of different magnitudes/frequencies shift under the SOWs we evaluated. For this particular case, it seems that high frequency, small shortages (left hand side) are becoming smaller and/or less frequent, whereas low frequency, large shortages (right hand side) are becoming larger and/or more frequent. Of course, the probabilistic inference here depends on the samples we chose, but it serves the exploratory purposes of this analysis.
References:
[1]: Bryant, Benjamin P., and Robert J. Lempert. “Thinking inside the Box: A Participatory, Computer-Assisted Approach to Scenario Discovery.” Technological Forecasting and Social Change 77, no. 1 (January 1, 2010): 34–49. https://doi.org/10.1016/j.techfore.2009.08.002.
[2]: Herman, Jonathan D., Patrick M. Reed, Harrison B. Zeff, and Gregory W. Characklis. “How Should Robustness Be Defined for Water Systems Planning under Change?” Journal of Water Resources Planning and Management 141, no. 10 (2015): 4015012. https://doi.org/10.1061/(ASCE)WR.1943-5452.0000509.
[3]: McKay, M. D., R. J. Beckman, and W. J. Conover. “A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code.” Technometrics 21, no. 2 (1979): 239–45. https://doi.org/10.2307/1268522.
# Time series forecasting in Python for beginners
This semester I am teaching Engineering Management Methods here at Cornell University. The course is aimed at introducing engineering students to systems thinking and a variety of tools and analyses they can use to analyze data. The first chapter has been on time series forecasting, where we discussed some of the simpler models one can use and apply for forecasting purposes, including Simple and Weighted Moving Average, Single and Double Exponential Smoothing, Additive and Multiplicative Seasonal Models, and Holt Winter’s Method.
The class applications as well as the homework are primarily performed in Excel, but I have been trying, with limited success, to encourage the use of programming languages for the assignments. One comment I've received from a student has been that it takes significantly more time to perform the calculations by coding; they feel that it's a waste of time. I initially attributed the comment to the fact that the student was new to coding and it takes time in the beginning, but on later reflection I realized that, in fact, the student was probably simply manually repeating the same Excel operations using code: take a set of 30 observations, create an array to store forecasts, loop through every value and calculate the forecast using the model formula, calculate error metrics, print results, repeat steps for the next set of data. It occurred to me that of course they think it's a waste of time, because doing it that way completely negates what programming is all about: designing and building an executable program or function to accomplish a specific computing task. In this instance, the task is to forecast using each of the models we learn in class, and the advantage of coding comes with the development of some sort of program or function that performs these operations for us, given a set of data as input. Simply going through the steps of performing a set of calculations for a problem using code is not much different than doing so manually or in Excel. What is different (and beneficial) is designing the code so that it can then be effortlessly applied to all similar problems without having to re-perform all the calculations. I realize this is obvious to the coding virtuosos frequenting this blog, but it's not immediately obvious to the uninitiated, who are rather confused about why Dr. Hadjimichael is asking them to waste so much time for a meager bonus on the homework.
So this blog post is aimed at demonstrating to coding beginners how one can transition from one way of thinking to the other, and at providing a small time-series-forecasting toolkit for users who simply want to apply the models to their data.
The code and data for this example can be found on my GitHub page and I will discuss it below. I will be using a wine sales dataset that lists Australian wine sales (in kiloliters) from January 1980 to October 1991. The data looks like this:
Date      Sales
1/1/80    464
2/1/80    675
3/1/80    703
4/1/80    887
5/1/80    1139
6/1/80    1077
7/1/80    1318
8/1/80    1260
9/1/80    1120
10/1/80   963
11/1/80   996
12/1/80   960
And this is what the time series looks like:
We first need to import the packages we’ll be using and load the data. I will be using Pandas in this example (but there’s other ways). I’m also defining the number of seasonal periods in a cycle, in this case 12.
import numpy as np #Package we'll use for numerical calculations
import matplotlib.pyplot as plt #From matplotlib package we import pyplot for plots
import pandas #Package for data manipulation
import scipy.optimize #Package we'll use to optimize
plt.style.use('seaborn-colorblind') #This is a pyplot style (optional)
'''Load the data into a pandas series with the name wine_sales'''
P=12 #number of seasonal periods in a cycle
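The loading step is only indicated by the comment above; one possible version, assuming the data sit in a csv file with the Date and Sales columns shown earlier (the file name here is illustrative), is the following, which stores the sales column under the name time_series used by the functions below.

wine_data = pandas.read_csv('wine_sales.csv')   # hypothetical file name; adjust to your data file
time_series = wine_data['Sales']                # the functions below all operate on this series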
In class, I've always mentioned that one should use a training and a validation set for model development, primarily to avoid overfitting the model to the specific training set. In this example, the functions are written as they apply to the training set. Should you choose to apply the functions listed here, you should apply them to the training set, extract forecasts, and then use those to initialize your validation period. To divide the observations, you would do something like this:
training = time_series[0:108] # Up to December '88
validation = time_series[108:] # From January '89 until end
Now, if, say, we wanted to apply the Naive model, where the next step's forecast is equal to the current observation, i.e., $\hat{y}_{t+1}=y_t$, we'd do something like:
y_hat=pandas.Series().reindex_like(time_series) # Create an array to store forecasts
y_hat[0]= time_series[0] # Initialize forecasting array with first observation
''' Loop through every month using the model to forecast y_hat'''
for t in range(len(y_hat)-1): # Set a range for the index to loop through
y_hat[t+1]= time_series[t] # Apply model to forecast time i+1
Now if we’d like to use this for any time series, so we don’t have to perform our calculations every time, we need to reformat this a bit so it’s a function:
def naive(time_series):
y_hat=pandas.Series().reindex_like(time_series)
y_hat[0]= time_series[0] # Initialize forecasting array with first observation
''' Loop through every month using the model to forecast y'''
#This sets a range for the index to loop through
for t in range(len(y_hat)-1):
y_hat[t+1]= time_series[t] # Apply model to forecast time i+1
return y_hat
Now we can just define this function at the top of our code and call it with any time series as input (a short usage example follows the notes below). The function as I've defined it returns a pandas.Series with all our forecasts. We can then do the same for all the other modeling methods (below). Some things to note:
• The data we read in at the top, outside the functions, as well as any parameters defined (P in this case), are global variables and do not need to be passed as inputs to the functions. The functions below only need a list of parameter values as inputs.
• For the models with seasonality and/or trend we need to create separate series to store those estimates for E, S, and T.
• Each model has its own initialization formulas and if we wanted to apply them to the validation set that follows our training set, we’d need to initialize with the last values of our training.
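For example, a one-line usage of the naive model on the training set defined earlier:

y_hat_naive = naive(training)   # pandas.Series of forecasts for the training period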
'''SIMPLE MOVING AVERAGE
Using this model, y_hat(t+1)=(y(t)+y(t-1)...+y(t-k+1))/k (i.e., the predicted
next value is equal to the average of the last k observed values).'''
def SMA(params):
k=int(np.array(params))
y_hat=pandas.Series().reindex_like(time_series)
y_hat[0:k]=time_series[0:k]
''' Loop through every month using the model to forecast y.
Be careful with Python indexing!'''
for t in range(k-1,len(y_hat)-1): #This sets a range for the index to loop through
y_hat[t+1]= np.sum(time_series[t-k+1:t+1])/k # Apply model to forecast time i+1
return y_hat
'''WEIGHTED MOVING AVERAGE
Using this model, y_hat(t+1)=w(1)*y(t)+w(2)*y(t-1)...+w(k)*y(t-k+1) (i.e., the
predicted next value is equal to the weighted average of the last k observed
values).'''
def WMA(params):
weights = np.array(params)
k=len(weights)
y_hat=pandas.Series().reindex_like(time_series)
y_hat[0:k]=time_series[0:k] # Initialize values
''' Loop through every month using the model to forecast y.
Be careful with Python indexing!'''
for t in range(k-1,len(y_hat)-1): #This sets a range for the index to loop through
y_hat[t+1]= np.sum(time_series[t-k+1:t+1].multiply(weights)) # Apply model to forecast time i+1
return y_hat
'''This model includes the constraint that all our weights should sum to one.
To include this in our optimization later, we need to define it as a function of our
weights.'''
def WMAcon(params):
weights = np.array(params)
return np.sum(weights)-1
'''SINGLE EXPONENTIAL SMOOTHING
Using this model, y_hat(t+1)=y_hat(t)+a*(y(t)-y_hat(t))(i.e., the
predicted next value is equal to the weighted average of the last forecasted value and its
difference from the observed).'''
def SES(params):
a = np.array(params)
y_hat=pandas.Series().reindex_like(time_series)
y_hat[0]=time_series[0] # Initialize values
''' Loop through every month using the model to forecast y.
Be careful with Python indexing!'''
for t in range(len(y_hat)-1): #This sets a range for the index to loop through
y_hat[t+1]= y_hat[t]+a*(time_series[t]-y_hat[t])# Apply model to forecast time i+1
return y_hat
'''DOUBLE EXPONENTIAL SMOOTHING (Holts Method)
Using this model, y_hat(t+1)=E(t)+T(t) (i.e., the
predicted next value is equal to the expected level of the time series plus the
trend).'''
def DES(params):
a,b = np.array(params)
y_hat=pandas.Series().reindex_like(time_series)
'''We need to create series to store our E and T values.'''
E = pandas.Series().reindex_like(time_series)
T = pandas.Series().reindex_like(time_series)
y_hat[0]=E[0]=time_series[0] # Initialize values
T[0]=0
''' Loop through every month using the model to forecast y.
Be careful with Python indexing!'''
for t in range(len(y_hat)-1): #This sets a range for the index to loop through
E[t+1] = a*time_series[t]+(1-a)*(E[t]+T[t])
T[t+1] = b*(E[t+1]-E[t])+(1-b)*T[t]
y_hat[t+1] = E[t] + T[t] # Apply model to forecast time i+1
return y_hat
'''ADDITIVE SEASONAL
Using this model, y_hat(t+1)=E(t)+S(t-p) (i.e., the
predicted next value is equal to the expected level of the time series plus the
appropriate seasonal factor). We first need to create an array to store our
forecast values.'''
def ASM(params):
a,b = np.array(params)
p = P
y_hat=pandas.Series().reindex_like(time_series)
'''We need to create series to store our E and S values.'''
E = pandas.Series().reindex_like(time_series)
S = pandas.Series().reindex_like(time_series)
y_hat[:p]=time_series[0] # Initialize values
'''We need to initialize the first p number of E and S values'''
E[:p] = np.sum(time_series[:p])/p
S[:p] = time_series[:p]-E[:p]
''' Loop through every month using the model to forecast y.
Be careful with Python indexing!'''
for t in range(p-1, len(y_hat)-1): #This sets a range for the index to loop through
E[t+1] = a*(time_series[t]-S[t+1-p])+(1-a)*E[t]
S[t+1] = b*(time_series[t]-E[t])+(1-b)*S[t+1-p]
y_hat[t+1] = E[t] + S[t+1-p] # Apply model to forecast time i+1
return y_hat
'''MULTIPLICATIVE SEASONAL
Using this model, y_hat(t+1)=E(t)*S(t-p) (i.e., the
predicted next value is equal to the expected level of the time series times
the appropriate seasonal factor). We first need to create an array to store our
forecast values.'''
def MSM(params):
a,b = np.array(params)
p = P
y_hat=pandas.Series().reindex_like(time_series)
'''We need to create series to store our E and S values.'''
E = pandas.Series().reindex_like(time_series)
S = pandas.Series().reindex_like(time_series)
y_hat[:p]=time_series[0] # Initialize values
'''We need to initialize the first p number of E and S values'''
E[:p] = np.sum(time_series[:p])/p
S[:p] = time_series[:p]/E[:p]
''' Loop through every month using the model to forecast y.
Be careful with Python indexing!'''
for t in range(p-1, len(y_hat)-1): #This sets a range for the index to loop through
E[t+1] = a*(time_series[t]/S[t+1-p])+(1-a)*E[t]
S[t+1] = b*(time_series[t]/E[t])+(1-b)*S[t+1-p]
y_hat[t+1] = E[t]*S[t+1-p] # Apply model to forecast time i+1
return y_hat
'''ADDITIVE HOLT-WINTERS METHOD
Using this model, y_hat(t+1)=E(t)+T(t)+S(t-p) (i.e., the
predicted next value is equal to the expected level of the time series plus the
trend, plus the appropriate seasonal factor). We first need to create an array
to store our forecast values.'''
def AHW(params):
a, b, g = np.array(params)
p = P
y_hat=pandas.Series().reindex_like(time_series)
'''We need to create series to store our E and S values.'''
E = pandas.Series().reindex_like(time_series)
S = pandas.Series().reindex_like(time_series)
T = pandas.Series().reindex_like(time_series)
y_hat[:p]=time_series[0] # Initialize values
'''We need to initialize the first p number of E and S values'''
E[:p] = np.sum(time_series[:p])/p
S[:p] = time_series[:p]-E[:p]
T[:p] = 0
''' Loop through every month using the model to forecast y.
Be careful with Python indexing!'''
for t in range(p-1, len(y_hat)-1): #This sets a range for the index to loop through
E[t+1] = a*(time_series[t]-S[t+1-p])+(1-a)*(E[t]+T[t])
T[t+1] = b*(E[t+1]-E[t])+(1-b)*T[t]
S[t+1] = g*(time_series[t]-E[t])+(1-g)*S[t+1-p]
y_hat[t+1] = E[t]+T[t]+S[t+1-p] # Apply model to forecast time i+1
return y_hat
'''MULTIPLICATIVE HOLT-WINTERS METHOD
Using this model, y_hat(t+1)=(E(t)+T(t))*S(t-p) (i.e., the
predicted next value is equal to the expected level of the time series plus the
trend, times the appropriate seasonal factor). We first need to create an array
to store our forecast values.'''
def MHW(params):
a, b, g = np.array(params)
p = P
y_hat=pandas.Series().reindex_like(time_series)
'''We need to create series to store our E and S values.'''
E = pandas.Series().reindex_like(time_series)
S = pandas.Series().reindex_like(time_series)
T = pandas.Series().reindex_like(time_series)
y_hat[:p]=time_series[0] # Initialize values
'''We need to initialize the first p number of E and S values'''
S[:p] = time_series[:p]/(np.sum(time_series[:p])/p)
E[:p] = time_series[:p]/S[:p]
T[:p] = 0
''' Loop through every month using the model to forecast y.
Be careful with Python indexing!'''
for t in range(p-1, len(y_hat)-1): #This sets a range for the index to loop through
E[t+1] = a*(time_series[t]/S[t+1-p])+(1-a)*(E[t]+T[t])
T[t+1] = b*(E[t+1]-E[t])+(1-b)*T[t]
S[t+1] = g*(time_series[t]/E[t])+(1-g)*S[t+1-p]
y_hat[t+1] = (E[t]+T[t])*S[t+1-p] # Apply model to forecast time i+1
return y_hat
Having defined this, I can then, for example, call the Multiplicative Holt Winters method by simply typing:
MHW([0.5,0.5,0.5])
This will produce a forecast using the Multiplicative Holt Winters method with those default parameters, but we would like to calibrate them to get the “best” forecasts from our model. To do so, we need to define what we mean by “best”, and in this example I’m choosing to use Mean Square Error as my performance metric. I define it below as a function that receives the parameters and some additional arguments as inputs. I only need to set it up this way because my optimization function is trying to minimize the MSE function by use of those parameters. I’m using the “args” array to simply tell the function which model it’s using to forecast.
def MSE(params, args):
model, = args
t_error = np.zeros(len(time_series))
forecast = model(params)
for t in range(len(time_series)):
t_error[t] = time_series[t]-forecast[t]
MSE = np.mean(np.square(t_error))
return MSE
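Since the optimizer will pass the model wrapped in a list, the same call works by hand; for instance, to check the MSE of the Multiplicative Holt Winters forecasts with the default parameters:

MSE([0.5, 0.5, 0.5], [MHW])   # mean square error of the default-parameter MHW forecasts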
To perform the optimization in Excel, we’d use Solver, but in Python we have other options. SciPy is a Python package that allows us, among many other things, to optimize such single-objective problems. What I’m doing here is that I define a list of all the models I want to optimize, their default parameters, and the parameters’ bounds. I then use a loop to go through my list of models and run the optimization. To store the minimized MSE values as well as the parameter values that produce them, we can create an array to store the MSEs and a list to store the parameter values for each model. The optimization function produces a “dictionary” item that contains the minimized MSE value (under ‘fun’), the parameters that produce it (under ‘x’) and other information.
''' List of all the models we will be optimizing'''
models = [SES, DES, ASM, MSM, AHW, MHW]
''' This is a list of all the default parameters for the models we will be
optimizing. '''
#SES, DES, ASM
default_parameters = [[0.5],[0.5,0.5],[0.5,0.5],
#MSM, AHW, MHW
[0.5,0.5],[0.5,0.5,0.5],[0.5,0.5,0.5]]
''' This is a list of all the bounds for the default parameters we will be
optimizing. All the a,b,g's are weights between 0 and 1. '''
bounds = [[(0,1)],[(0,1)]*2, [(0,1)]*2,
[(0,1)]*2,[(0,1)]*3,[(0,1)]*3]
min_MSEs = np.zeros(len(models)) # Array to store minimized MSEs
opt_params = [None]*len(models) # Empty list to store optim. parameters
for i in range(len(models)):
res = scipy.optimize.minimize(MSE, # Function we're minimizing (MSE in this case)
default_parameters[i], # Default parameters to use
# Additional arguments that the optimizer
# won't be changing (model in this case)
args=[models[i]],
method='L-BFGS-B', # Optimization method to use
bounds=bounds[i]) # Parameter bounds
min_MSEs[i] = res['fun'] #Store minimized MSE value
opt_params[i] = res['x'] #Store parameter values identified by optimizer
Note: For the WMA model, the weights should sum to 1, and this should be input to our optimization as a constraint. To do so, we pass the constraint function (WMAcon, defined earlier) to the minimization call as constraints=[{'type':'eq','fun': WMAcon}], using an optimization method that supports equality constraints (e.g., SLSQP rather than L-BFGS-B). The number of periods to consider cannot be optimized by this type of optimizer.
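For illustration, such a constrained call could look like this (the number of weights is fixed at three purely as an example):

res_wma = scipy.optimize.minimize(MSE, [1/3, 1/3, 1/3],    # equal starting weights for a hypothetical k=3
                                  args=[WMA],
                                  method='SLSQP',          # a method that handles equality constraints
                                  bounds=[(0, 1)]*3,
                                  constraints=[{'type': 'eq', 'fun': WMAcon}])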
Finally, we’d like to present our results. I’ll do so by plotting the observations and all my models as well as their minimized MSE values:
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1) # Create figure
ax.set_title("Australian wine sales (kilolitres)") # Set figure title
l1 = ax.plot(time_series, color='black', linewidth=3.0, label='Observations') # Plot observations
for i in range(len(models)):
ax.plot(time_series.index,models[i](opt_params[i]), label = models[i].__name__)
ax.legend() # Activate figure legend
plt.show()
print('The estimated MSEs for all the models are:')
for i in range(len(models)):
print(models[i].__name__ +': '+str(min_MSEs[i]))
This snippet of code should produce this figure of all our forecasts, as well as a report of all MSEs:
The estimated MSEs for all the models are:
SES: 133348.78
DES: 245436.67
ASM: 80684.00
MSM: 64084.48
AHW: 72422.34
MHW: 64031.19
The Multiplicative Holt Winters method appears to give the smallest MSE when applied to these data.
# Fitting Hidden Markov Models Part II: Sample Python Script
This is the second part of a two-part blog series on fitting hidden Markov models (HMMs). In Part I, I explained what HMMs are, why we might want to use them to model hydro-climatological data, and the methods traditionally used to fit them. Here I will show how to apply these methods using the Python package hmmlearn using annual streamflows in the Colorado River basin at the Colorado/Utah state line (USGS gage 09163500). First, note that to use hmmlearn on a Windows machine, I had to install it on Cygwin as a Python 2.7 library.
For this example, we will assume the state each year is either wet or dry, and the distribution of annual streamflows under each state is modeled by a Gaussian distribution. More states can be considered, as well as other distributions, but we will use a two-state, Gaussian HMM here for simplicity. Since streamflow is strictly positive, it might make sense to first log-transform the annual flows at the state line so that the Gaussian models won’t generate negative streamflows, so that’s what we do here.
After installing hmmlearn, the first step is to load the Gaussian hidden Markov model class with from hmmlearn.hmm import GaussianHMM. The fit function of this class requires as inputs the number of states (n_components, here 2 for wet and dry), the number of iterations to run of the Baum-Welch algorithm described in Part I (n_iter; I chose 1000), and the time series to which the model is fit (here a column vector, Q, of the annual or log-transformed annual flows). You can also set initial parameter estimates before fitting the model and only state those which need to be initialized with the init_params argument. This is a string of characters where ‘s’ stands for startprob (the probability of being in each state at the start), ‘t’ for transmat (the probability transition matrix), ‘m’ for means (mean vector) and ‘c’ for covars (covariance matrix). As discussed in Part I it is good to test several different initial parameter estimates to prevent convergence to a local optimum. For simplicity, here I simply use default estimates, but this tutorial shows how to pass your own. I call the model I fit on line 5 model.
Among other attributes and methods, model will have associated with it the means (means_) and covariances (covars_) of the Gaussian distributions fit to each state, the state probability transition matrix (transmat_), the log-likelihood function of the model (score) and methods for simulating from the HMM (sample) and predicting the states of observed values with the Viterbi algorithm described in Part I (predict). The score attribute could be used to compare the performance of models fit with different initial parameter estimates.
It is important to note that which state (wet or dry) is assigned a 0 and which state is assigned a 1 is arbitrary and different assignments may be made with different runs of the algorithm. To avoid confusion, I choose to reorganize the vectors of means and variances and the transition probability matrix so that state 0 is always the dry state, and state 1 is always the wet state. This is done on lines 22-26 if the mean of state 0 is greater than the mean of state 1.
import numpy as np
from hmmlearn.hmm import GaussianHMM
def fitHMM(Q, nSamples):
# fit Gaussian HMM to Q
model = GaussianHMM(n_components=2, n_iter=1000).fit(np.reshape(Q,[len(Q),1]))
# classify each observation as state 0 or 1
hidden_states = model.predict(np.reshape(Q,[len(Q),1]))
# find parameters of Gaussian HMM
mus = np.array(model.means_)
sigmas = np.array(np.sqrt(np.array([np.diag(model.covars_[0]),np.diag(model.covars_[1])])))
P = np.array(model.transmat_)
# find log-likelihood of Gaussian HMM
logProb = model.score(np.reshape(Q,[len(Q),1]))
# generate nSamples from Gaussian HMM
samples = model.sample(nSamples)
# re-organize mus, sigmas and P so that first row is lower mean (if not already)
if mus[0] > mus[1]:
mus = np.flipud(mus)
sigmas = np.flipud(sigmas)
P = np.fliplr(np.flipud(P))
hidden_states = 1 - hidden_states
return hidden_states, mus, sigmas, P, logProb, samples
# log transform the data and fit the HMM
logQ = np.log(AnnualQ)
hidden_states, mus, sigmas, P, logProb, samples = fitHMM(logQ, 100)
Okay great, we’ve fit an HMM! What does the model look like? Let’s plot the time series of hidden states. Since we made the lower mean always represented by state 0, we know that hidden_states == 0 corresponds to the dry state and hidden_states == 1 to the wet state.
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
def plotTimeSeries(Q, hidden_states, ylabel, filename):
sns.set()
fig = plt.figure()
ax = fig.add_subplot(111)
xs = np.arange(len(Q))+1909
ax.plot(xs, Q, c='k')
# color the observations by their most likely hidden state so the legend has entries
ax.scatter(xs[hidden_states==0], Q[hidden_states==0], c='r', label='Dry State')
ax.scatter(xs[hidden_states==1], Q[hidden_states==1], c='b', label='Wet State')
ax.set_xlabel('Year')
ax.set_ylabel(ylabel)
handles, labels = plt.gca().get_legend_handles_labels()
fig.legend(handles, labels, loc='lower center', ncol=2, frameon=True)
fig.savefig(filename)
fig.clf()
return None
plt.switch_backend('agg') # turn off display when running with Cygwin
plotTimeSeries(logQ, hidden_states, 'log(Flow at State Line)', 'StateTseries_Log.png')
Wow, looks like there’s some persistence! What are the transition probabilities?
print(model.transmat_)
Running that we get the following:
[[ 0.6794469 0.3205531 ]
[ 0.34904974 0.65095026]]
When in a dry state, there is a 68% chance of transitioning to a dry state again in the next year, while in a wet state there is a 65% chance of transitioning to a wet state again in the next year.
What does the distribution of flows look like in the wet and dry states, and how do these compare with the overall distribution? Since the probability distributions of the wet and dry states are Gaussian in log-space, and each state has some probability of being observed, the overall probability distribution is a mixed, or weighted, Gaussian distribution in which the weight of each of the two Gaussian models is the unconditional probability of being in its respective state. These probabilities make up the stationary distribution, π, which is the vector solving the equation π = πP, where P is the probability transition matrix. As briefly mentioned in Part I, this can be found using the method described here: π = (1/Σᵢeᵢ)e, in which e is the eigenvector of Pᵀ corresponding to an eigenvalue of 1, and eᵢ is the ith element of e. The overall distribution for our observations is then Y ~ π₀N(μ₀, σ₀²) + π₁N(μ₁, σ₁²). We plot this distribution and the component distributions on top of a histogram of the log-space annual flows below.
from scipy import stats as ss
def plotDistribution(Q, mus, sigmas, P, filename):
    # calculate stationary distribution
    eigenvals, eigenvecs = np.linalg.eig(np.transpose(P))
    one_eigval = np.argmin(np.abs(eigenvals-1))
    pi = eigenvecs[:,one_eigval] / np.sum(eigenvecs[:,one_eigval])
    x_0 = np.linspace(mus[0]-4*sigmas[0], mus[0]+4*sigmas[0], 10000)
    fx_0 = pi[0]*ss.norm.pdf(x_0,mus[0],sigmas[0])
    x_1 = np.linspace(mus[1]-4*sigmas[1], mus[1]+4*sigmas[1], 10000)
    fx_1 = pi[1]*ss.norm.pdf(x_1,mus[1],sigmas[1])
    x = np.linspace(mus[0]-4*sigmas[0], mus[1]+4*sigmas[1], 10000)
    fx = pi[0]*ss.norm.pdf(x,mus[0],sigmas[0]) + \
        pi[1]*ss.norm.pdf(x,mus[1],sigmas[1])
    sns.set()
    fig = plt.figure()
    ax = fig.add_subplot(111)  # axes object was missing in the original listing
    ax.hist(Q, color='k', alpha=0.5, density=True)
    l1, = ax.plot(x_0, fx_0, c='r', linewidth=2, label='Dry State Distn')
    l2, = ax.plot(x_1, fx_1, c='b', linewidth=2, label='Wet State Distn')
    l3, = ax.plot(x, fx, c='k', linewidth=2, label='Combined State Distn')
    handles, labels = plt.gca().get_legend_handles_labels()
    fig.legend(handles, labels, loc='lower center', ncol=3, frameon=True)
    fig.savefig(filename)
    fig.clf()
    return None
plotDistribution(logQ, mus, sigmas, P, 'MixedGaussianFit_Log.png')
Looks like a pretty good fit – seems like a Gaussian HMM is a decent model of log-transformed annual flows in the Colorado River at the Colorado/Utah state line. Hopefully you can find relevant applications for your work too. If so, I’d recommend reading through this hmmlearn tutorial, from which I learned how to do everything I’ve shown here.
# Fitting Hidden Markov Models Part I: Background and Methods
Hydro-climatological variables often exhibit long-term persistence caused by regime-shifting behavior in the climate, such as the El Niño-Southern Oscillations (ENSO). One popular way of modeling this long-term persistence is with hidden Markov models (HMMs) [Thyer and Kuczera, 2000; Akintug and Rasmussen, 2005; Bracken et al., 2014]. What is an HMM? Recall from my five blog posts on weather generators, that the occurrence of precipitation is often modeled by a (first order) Markov model in which the probability of rain on a given day depends only on whether or not it rained on the previous day. A (first order) hidden Markov model is similar in that the climate “state” (e.g., wet or dry) at a particular time step depends only on the state from the previous time step, but the state in this case is “hidden,” i.e. not observable. Instead, we only observe a random variable (discrete or continuous) that was generated under a particular state, but we don’t know what that state was.
For example, imagine you are a doctor trying to diagnose when an individual has the flu. On any given day, this person is in one of two states: sick or healthy. These states are likely to exhibit great persistence; when the person gets the flu, he/she will likely have it for several days or weeks, and when he/she is heathy, he/she will likely stay healthy for months. However, suppose you don’t have the ability to test the individual for the flu virus and can only observe his/her temperature. Different (overlapping) distributions of body temperatures may be observed depending on whether this person is sick or healthy, but the state itself is not observed. In this case, the person’s temperature can be modeled by an HMM.
So why are HMMs useful for describing hydro-climatological variables? Let’s go back to the example of ENSO. Maybe El Niño years in a particular basin tend to be wetter than La Niña years. Normally we can observe whether or not it is an El Niño year based on SST anomalies in the tropical Pacific, but suppose we only have paleodata of tree ring widths. We can infer from the tree ring data (with some error) what the total precipitation might have been in each year of the tree’s life, but we may not know what the SST anomalies were those years. Or even if we do know the SST anomalies, maybe there is another more predictive regime-shifting teleconnection we haven’t yet discovered. In either case, we can model the total annual precipitation with an HMM.
What is the benefit of modeling precipitation in these cases with an HMM as opposed to say, an autoregressive model? Well often the year to year correlation of annual precipitation may not actually be that high, but several consecutive wet or consecutive dry years are observed [Bracken et al., 2014]. Furthermore, paleodata suggests that greater persistence (e.g. megadroughts) in precipitation is often observed than would be predicted by autoregressive models [Ault et al., 2013; Ault et al., 2014]. This is where HMMs may come in handy.
Here I will explain how to fit HMMs generally, and in Part II I will show how to apply these methods using the Python package hmmlearn. To understand how to fit HMMs, we first need to define some notation. Let Yt be the observed variable at time t (e.g., annual streamflow). The distribution of Yt depends on the state at time t, Xt (e.g., wet or dry). Let's assume for simplicity that our observations can be modeled by Gaussian distributions. Then f(Yt | Xt = i) ~ N(μi, σi²) and f(Yt | Xt = j) ~ N(μj, σj²) for a two-state HMM. The state at time t, Xt, depends on the state at the previous time step, Xt-1. Let P be the state transition matrix, where each element pi,j represents the probability of transitioning from state i at time t to state j at time t+1, i.e. pij = P(Xt+1 = j | Xt = i). P is an n x n matrix where n is the number of states (e.g. 2 for wet and dry). In all Markov models (hidden or not), the unconditional probability of being in each state, π, satisfies the equation π = πP, where π is a 1 x n vector in which each element πi represents the unconditional probability of being in state i, i.e. πi = P(Xt = i). π is also called the stationary distribution and can be calculated from P as described here. Since we have no prior information on which to condition the first set of observations, we assume the initial probability of being in each state is the stationary distribution.
In fitting a two-state Gaussian HMM, we therefore need to estimate the following vector of parameters: θ = [μ0, σ0, μ1, σ1, p00, p11]. Note p01 = 1 – p00 and p10 = 1 – p11. The most common approach to estimating these parameters is through the Baum-Welch algorithm, an application of Expectation-Maximization built off of the forward-backward algorithm. The first step of this process is to set initial estimates for each of the parameters. These estimates can be random or based on an informed prior. We then begin with the forward step, which computes the joint probability of observing the first t observations and ending up in state i at time t, given the initial parameter estimates: P(Xt = i, Y1 = y1, Y2 = y2, …, Yt = yt | θ). This is computed for all t ϵ {1, …, T}. Then in the backward step, the conditional probability of observing the remaining observations after time t given the state observed at time t is computed: P(Yt+1 = yt+1, …, YT = yT | Xt=i, θ). Using Bayes’ theorem, it can shown that the product of the forward and backward probabilities is proportional to the probability of ending up in state i at time t given all of the observations, i.e. P(Xt = i | Y1 = y1,…, YT = yT, θ). This is derived below:
1) $P(X_t=i \vert Y_1=y_1,..., Y_T=y_T, \theta) = \frac{P(Y_1=y_1, ..., Y_T=y_T \vert X_t=i, \theta) P(X_t=i \vert \theta)}{P(Y_1=y_1, ..., Y_T=y_t \vert \theta)}$
2) $P(X_t=i \vert Y_1=y_1,..., Y_T=y_T, \theta) = \frac{P(Y_1=y_1, ..., Y_t=y_t \vert X_t=i, \theta) P(Y_{t+1} = y_{t+1}, ..., Y_T=y_T \vert X_t=i, \theta) P(X_t=i \vert \theta)}{P(Y_1=y_1, ..., Y_T=y_t \vert \theta)}$
3) $P(X_t=i \vert Y_1=y_1,..., Y_T=y_T, \theta) = \frac{P(X_t=i, Y_1=y_1, ..., Y_t=y_t \vert \theta) P(Y_{t+1} = y_{t+1}, ..., Y_T=y_T \vert X_t=i, \theta)}{P(Y_1=y_1, ..., Y_T=y_t \vert \theta)}$
4) $P(X_t=i \vert Y_1=y_1,..., Y_T=y_T, \theta) \propto P(X_t=i, Y_1=y_1, ..., Y_t=y_t \vert \theta) P(Y_{t+1}=y_{t+1}, ..., Y_T=y_T \vert X_t=i, \theta)$
The first equation is Bayes’ Theorem. The second equation is derived by the conditional independence of the observations up to time t (Y1, Y2, …, Yt) and the observations after time t (Yt+1, Yt+2, …, YT), given the state at time t (Xt). The third equation is derived from the definition of conditional probability, and the fourth recognizes the denominator as a normalizing constant.
Why do we care about the probability of ending up in state i at time t given all of the observations (the left hand side of the above equations)? In fitting a HMM, our goal is to find a set of parameters, θ, that maximize this probability, i.e. the likelihood function of the state trajectories given our observations. This is therefore equivalent to maximizing the product of the forward and backward probabilities. We can maximize this product using Expectation-Maximization. Expectation-Maximization is a two-step process for maximum likelihood estimation when the likelihood function cannot be computed directly, for example, because its observations are hidden as in an HMM. The first step is to calculate the expected value of the log likelihood function with respect to the conditional distribution of X given Y and θ (the left hand side of the above equations, or proportionally, the right hand side of equation 4). The second step is to find the parameters that maximize this function. These parameter estimates are then used to re-implement the forward-backward algorithm and the process repeats iteratively until convergence or some specified number of iterations. It is important to note that the maximization step is a local optimization around the current best estimate of θ. Hence, the Baum-Welch algorithm should be run multiple times with different initial parameter estimates to increase the chances of finding the global optimum.
Another interesting question beyond fitting HMMs to observations is diagnosing which states the observations were likely to have come from given the estimated parameters. This is often performed using the Viterbi algorithm, which employs dynamic programming (DP) to find the most likely state trajectory. In this case, the "decision variables" of the DP problem are the states at each time step, Xt, and the "future value function" being optimized is the probability of observing the true trajectory, (Y1, …, YT), given those alternative possible state trajectories. For example, let the probability that the first state was k be V1,k. Then V1,k = P(Y1 = y1 | X1 = k)πk. For later time steps, Vt,k = maxi[P(Yt = yt | Xt = k) pi,k Vt-1,i], where the maximum is taken over the possible states i at the previous time step. Thus, the Viterbi algorithm finds the state trajectory (X1, …, XT) maximizing VT,k.
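For concreteness, here is a compact sketch of that recursion in Python. This is illustrative only (not the hmmlearn implementation); it assumes 1-D arrays mus and sigmas (the fitHMM function in Part II returns them with an extra dimension, so you may need to flatten them), a 2-D transition matrix P, and the stationary distribution pi as the initial state probabilities, and it works in log-probabilities for numerical stability.

import numpy as np
from scipy import stats as ss

def viterbi(Q, mus, sigmas, P, pi):
    T, n = len(Q), len(mus)
    V = np.zeros((T, n))                 # V[t, k]: best log-probability of a path ending in state k at time t
    back = np.zeros((T, n), dtype=int)   # back[t, k]: best previous state
    V[0] = np.log(pi) + ss.norm.logpdf(Q[0], mus, sigmas)
    for t in range(1, T):
        for k in range(n):
            cand = V[t-1] + np.log(P[:, k])
            back[t, k] = np.argmax(cand)
            V[t, k] = cand[back[t, k]] + ss.norm.logpdf(Q[t], mus[k], sigmas[k])
    states = np.zeros(T, dtype=int)
    states[-1] = np.argmax(V[-1])        # most likely final state
    for t in range(T - 2, -1, -1):       # backtrack the most likely state trajectory
        states[t] = back[t + 1, states[t + 1]]
    return states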
Now that you know how HMMs are fit using the Baum-Welch algorithm and decoded using the Viterbi algorithm, read Part II to see how to perform these steps in practice in Python! |
12 class Maths Notes Chapter 4 Determinants free PDF| Quick revision Determinants Notes class 12 maths
CBSE Revision Notes for CBSE Class 12 Mathematics Determinants Determinant of a square matrix (up to 3 x 3 matrices), properties of determinants, minors, co-factors and applications of determinants in finding the area of a triangle. Adjoint and inverse of a square matrix. Consistency, inconsistency and number of solutions of system of linear equations by examples, solving system of linear equations in two or three variables (having unique solution) using inverse of a matrix.
Class 12 Maths Chapter-4 Determinants Quick Revision Notes Free Pdf
🔷 Chapter - 4 🔷
👉 Determinants 👈
🔹 System of algebraic equations can be expressed in the form of matrices.
• Linear Equations Format
a1x+b1y=c1
a2x+b2y=c2
• Matrix Format:
🔹 The values of the variables satisfying all the linear equations in the system, is called solution of system of linear equations.
🔹 Whether the system of linear equations has a unique solution or not is determined by a single number computed from the coefficient matrix A; this number is called the determinant of A, or det A
✳️ Applications of Determinants
👉 Science
👉 Economics
👉 Social Science, etc.
✳️ Determinant
🔹 A determinant is defined as a function (mapping) from the set of square matrices to the set of real numbers
🔹 Every square matrix A is associated with a number, called its determinant
🔹 Denoted by det (A) or |A| or ∆
🔹 Only square matrices have determinants.
🔹 The matrices which are not square do not have determinants
🔹 For matrix A, |A| is read as determinant of A and not modulus of A.
✳️ Types of Determinant
🔷 1. First Order Determinant
🔹 Let A = [a] be a matrix of order 1; then the determinant of A is defined to be equal to a
🔹 If A = [a], then det (A) = |A| = a
🔷 2. Second Order Determinant
🔷 3. Third Order Determinant
🔹 Can be determined by expressing it in terms of second order determinants
The below method is explained for expansion around Row 1
The value of the determinant is then obtained by multiplying each element of Row 1 by the second-order determinant that remains after deleting that element's row and column, attaching alternating + and − signs, and adding the results.
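The expansion displayed in the original notes is not reproduced here; the standard expansion along Row 1 is:

$$\begin{vmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{vmatrix} = a_{11}\begin{vmatrix} a_{22} & a_{23}\\ a_{32} & a_{33}\end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23}\\ a_{31} & a_{33}\end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22}\\ a_{31} & a_{32}\end{vmatrix}$$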
The same procedure can be repeated for Row 2, Row 3, Column 1, Column 2, and Column 3
🔷 Note
🔹 Expanding a determinant along any row or column gives same value.
🔹 This method doesn't work for determinants of order greater than 3.
🔹 For easier calculations, we shall expand the determinant along that row or column which contains maximum number of zeros
🔹 In general, if A = kB, where A and B are square matrices of order n, then |A| = kⁿ|B|, for n = 1, 2, 3, …
✳️ Properties of Determinants
🔹 Helps in simplifying its evaluation by obtaining maximum number of zeros in a row or a column.
🔹 These properties are true for determinants of any order.
✳️ Property 1
🔹 The value of the determinant remains unchanged if its rows and columns are interchanged
🔹 Verification:
Expanding ∆₁ along first column, we get
∆₁ =a₁ (b₂ c₃ - c₂ b₃) - a₂(b₁ c₃ - b₃ c₁) + a₃ (b₁ c₂ - b₂ c₁)
Hence ∆ = ∆₁
🔷 Note:
🔹 It follows from above property that if A is a square matrix,
Then det (A) = det (A'), where A' = transpose of A
🔹 If Ri denotes the ith row and Ci the ith column, then for an interchange of rows and columns we write symbolically Ci ⇔ Ri
✳️ Property 2
🔹 If any two rows (or columns) of a determinant are interchanged, then sign of determinant changes.
🔹 Verification :
✳️ Property 3
🔹 If any two rows (or columns) of a determinant are identical (all corresponding elements are same), then value of determinant is zero.
🔹 Verification:
🔹 If we interchange the identical rows (or columns) of the determinant ∆, then ∆ does not change.
🔹 However, by Property 2, it follows that ∆ has changed its sign
🔹 Therefore ∆ = -∆ or ∆ = 0
✳️ Property 4
🔹 If each element of a row (or a column) of a determinant is multiplied by a constant k, then its value gets multiplied by k
🔹 Verification
✳️ Property 5
🔹 If some or all elements of a row or column of a determinant are expressed as sum of two (or more) terms, then the determinant can be expressed as sum of two (or more) determinants.
🔹 Verification:
✳️ Property 6
🔹 If, to each element of any row or column of a determinant, the equimultiples of corresponding elements of another row (or column) are added, then the value of the determinant remains the same, i.e., the value of the determinant is unchanged if we apply the operation Ri → Ri + kRj (or Ci → Ci + kCj)
✳️ Property 7
🔹 If each element of a row (or column) of a determinant is zero, then its value is zero
✳️ Property 8
🔹 In a determinant, if all the elements on one side of the principal diagonal are zeros, then the value of the determinant is equal to the product of the elements on the principal diagonal
✳️ Area of a Triangle
🔹 Let (x₁, y₁), (x₂, y₂), and (x₃, y₃) be the vertices of a triangle, then
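The determinant form of the area formula (the figure from the original notes is not reproduced here) is:

$$\Delta = \frac{1}{2}\begin{vmatrix} x_1 & y_1 & 1\\ x_2 & y_2 & 1\\ x_3 & y_3 & 1\end{vmatrix}$$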
✳️ Note
🔹 Since area is a positive quantity, we always take the absolute value of the determinant.
🔹 If the area is given, use both positive and negative values of the determinant for calculation.
🔹 The area of the triangle formed by three collinear points is zero.
✳️ Minors and Cofactors
✳️ Minor
🔹 If the row and column containing the element a₁₁ (i.e., 1st row and 1st column) are removed, we get the second order determinant which is called the Minor of element a₁₁
🔹 Minor of an element aij of a determinant is the determinant obtained by deleting the ith row and jth column in which element aij lies.
🔹 Minor of an element aij is denoted by Mij
🔹Minor of an element of a determinant of order n(n ≥ 2) is a determinant of order n-1
🔹 E.g.: Find the minor of the element 6 in the determinant A given
✳️ Cofactor
🔹 If the minors are multiplied by the proper signs we get cofactors
🔹The cofactor of the element aij is Cij = (−1)^(i+j) Mij
🔹The signs to be multiplied are given by the rule
🔹 Cofactor of 4 is A₁₂ = (−1)^(1+2) M₁₂ = (−1)³(4) = −4
✳️ Adjoint and Inverse of a Matrix
🔹 Adjoint of matrix is the transpose of the matrix of cofactors of the given matrix
✳️ Theorem 1
🔹 If A is any given square matrix of order n, then A(adj A) = (adj A)A = |A| I,
where I is the identity matrix of order n
🔹 Verification:
Similarly, we can show (adj A) A = |A| I
✳️ Singular & Non-Singular Matrices:
🔹 A square matrix A is said to be singular if |A| = 0
🔹 A square matrix A is said to be non-singular if |A| ≠ 0
✳️ Theorem 2
🔹 If A and B are non-singular matrices of the same order, then AB and BA are also non- singular matrices of the same order.
✳️ Theorem 3
🔹 The determinant of the product of matrices is equal to the product of their respective determinants, that is, |AB| = |A| |B|, where A and B are square matrices of the same order
✳️ Theorem 4
🔹 A square matrix A is invertible if and only if A is non-singular matrix.
🔹 Verification
Let A be invertible matrix of order n and I be the identity matrix of order n. Then, there exists a square matrix B of order n such that AB = BA = I
Now AB = I. So |AB| = |I| or |A| |B| = 1 (since |I| = 1 and |AB| = |A||B|). This gives |A| ≠ 0. Hence A is non-singular.
Conversely, let A be non-singular. Then |A| ≠ 0
Now A (adj A) = (adj A) A = |A| I (Theorem 1)
✳️Applications of Determinants and Matrices
🔹 Used for solving the system of linear equations in two or three variables and for checking the consistency of the system of linear equations.
🔷 Consistent system
🔹 A system of equations is said to be consistent if its solution (one or more) exists.
🔷 Inconsistent system
🔹 A system of equations is said to be inconsistent if its solution does not exist
✳️ Solution of system of linear equations using inverse of a matrix
🔹 Let the system of Equations be as below:
a₁x+b₁y +c₁z=d₁
a₂x +b₂y +c₂z=d₂
a₃x+b₃y+c₃z=d₃
✳️ Case I
If A is a non-singular matrix, then its inverse exists.
AX = B
A⁻¹(AX) = A⁻¹B (premultiplying by A⁻¹)
(A⁻¹A)X = A⁻¹B (by associative property)
IX = A⁻¹B
X = A⁻¹B
This matrix equation provides unique solution for the given system of equations as inverse of a matrix is unique. This method of solving system of equations is known as Matrix Method
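As a quick numerical illustration of the matrix method (the system below is made up for this sketch and is not from the notes), NumPy can carry out X = A⁻¹B directly:

import numpy as np

# coefficient matrix and right-hand side for the system
#   x + y + z = 6,  2x - y + z = 3,  x + 2y - z = 2
A = np.array([[1.0, 1.0, 1.0],
              [2.0, -1.0, 1.0],
              [1.0, 2.0, -1.0]])
B = np.array([6.0, 3.0, 2.0])

if np.linalg.det(A) != 0:          # Case I: A is non-singular, so a unique solution exists
    X = np.linalg.inv(A) @ B
    print(X)                       # prints [1. 2. 3.], i.e. x = 1, y = 2, z = 3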
✳️ Case II
If A is a singular matrix, then |A| = 0.
In this case, we calculate (adj A) B.
If (adj A) B ≠ O (O being the zero matrix), then a solution does not exist and the system of equations is called inconsistent.
If (adj A) B = O, then the system may be either consistent or inconsistent according as the system has either infinitely many solutions or no solution
✳️ Summary
For a square matrix A in matrix equation AX = B
🔹 If |A| ≠ 0, there exists a unique solution
🔹 If |A| = 0 and (adj A) B ≠ O, then there exists no solution
🔹 If |A| = 0 and (adj A) B = O, then the system may or may not be consistent.
# Light of wavelength 121.6 $\mathrm{nm}$ is emitted by a hydrogen atom. What are the (a) higher quantum number and (b) lowerquantum number of the transition producing this emission? (c)What is the name of the series that includes the transition?
## (a) $n_{1}=2$(b) $n_{2}=1$(c) Referring to Fig. $39-18$, we see that this must be one of the Lyman series transitions.
Wave Optics
### Video Transcript
Here we have a hydrogen atom that emits light of wavelength 121.6 nanometers. Let's evaluate the energy of this photon. Since the wavelength is given in nanometers, we can use the fact that hc is approximately 1240 electron-volt nanometers, so the photon energy is 1240 eV·nm divided by 121.6 nm, which comes out to about 10.2 electron volts. Now, this energy must come from some transition from a higher level n₂ down to a lower level n₁, and the emitted photon energy equals E₂ − E₁ = −13.6 eV (1/n₂² − 1/n₁²), since the hydrogen energy levels are Eₙ = −13.6/n² eV with the ground state at −13.6 eV. Notice that such a large energy gap can only occur between the lowest levels: the gap between n = 1 and n = 2 is the largest, the gap between n = 2 and n = 3 is smaller, between n = 3 and n = 4 smaller still, and it keeps shrinking. So let's try n₂ = 2 and n₁ = 1: the calculation gives exactly 13.6(1 − 1/4) = 10.2 eV, which matches the photon energy. So the higher quantum number is 2 and the lower quantum number is 1, and because the transition ends at n = 1, it belongs to the Lyman series — all the transitions that ultimately land at n = 1 are called the Lyman series.
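A quick numerical check of the transcript's arithmetic (this snippet is just an illustration, using hc ≈ 1240 eV·nm and Eₙ = −13.6/n² eV):

E_photon = 1240.0 / 121.6                 # photon energy in eV, ~10.2 eV
E_transition = 13.6 * (1/1**2 - 1/2**2)   # n = 2 -> n = 1 transition, ~10.2 eV
print(E_photon, E_transition)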
# Mirrored Functions
Calculus Level 2
Find the shortest distance between the two curves $f(x)=\ln(x)$ and $g(x)=e^x$.
## Estimating where two functions intersect using data
| categories: data analysis | tags: | View Comments
Suppose we have two functions described by this data:
| T (K) | E1   | E2   |
|-------|------|------|
| 300   | -208 | -218 |
| 400   | -212 | -221 |
| 500   | -215 | -220 |
| 600   | -218 | -222 |
| 700   | -220 | -222 |
| 800   | -223 | -224 |
| 900   | -227 | -225 |
| 1000  | -229 | -227 |
| 1100  | -233 | -228 |
| 1200  | -235 | -227 |
| 1300  | -240 | -229 |
We want to determine the temperature at which they intersect, and more importantly what the uncertainty on the intersection is. There is noise in the data, which means there is uncertainty in any function that could be fit to it, and that uncertainty would propagate to the intersection. Let us examine the data.
import matplotlib.pyplot as plt

# the data table above as a list of (T, E1, E2) tuples
data = [(300, -208, -218), (400, -212, -221), (500, -215, -220), (600, -218, -222),
        (700, -220, -222), (800, -223, -224), (900, -227, -225), (1000, -229, -227),
        (1100, -233, -228), (1200, -235, -227), (1300, -240, -229)]

T = [x[0] for x in data]
E1 = [x[1] for x in data]
E2 = [x[2] for x in data]
plt.plot(T, E1, T, E2)
plt.legend(['E1', 'E2'])
plt.savefig('images/intersection-0.png')
Our strategy is going to be to fit functions to each data set, and get the confidence intervals on the parameters of the fit. Then, we will solve the equations to find where they are equal to each other and propagate the uncertainties in the parameters to the answer.
These functions look approximately linear, so we will fit lines to each function. We use the regress function in pycse to get the uncertainties on the fits. Then, we use the uncertainties package to propagate the uncertainties in the analytical solution to the intersection of two lines.
import numpy as np
from pycse import regress
import matplotlib.pyplot as plt
import uncertainties as u
T = np.array([x[0] for x in data])
E1 = np.array([x[1] for x in data])
E2 = np.array([x[2] for x in data])
# columns of the x-values for a line: constant, T
A = np.column_stack([T**0, T])
p1, pint1, se1 = regress(A, E1, alpha=0.05)
p2, pint2, se2 = regress(A, E2, alpha=0.05)
# Now we have two lines: y1 = m1*T + b1 and y2 = m2*T + b2
# they intersect at m1*T + b1 = m2*T + b2
# or at T = (b2 - b1) / (m1 - m2)
b1 = u.ufloat((p1[0], se1[0]))
m1 = u.ufloat((p1[1], se1[1]))
b2 = u.ufloat((p2[0], se2[0]))
m2 = u.ufloat((p2[1], se2[1]))
T_intersection = (b2 - b1) / (m1 - m2)
print T_intersection
# plot the data, the fits and the intersection and \pm 2 \sigma.
plt.plot(T, E1, 'bo ', label='E1')
plt.plot(T, np.dot(A,p1), 'b-')
plt.plot(T, E2, 'ro ', label='E2')
plt.plot(T, np.dot(A,p2), 'r-')
plt.plot(T_intersection.nominal_value,
(b1 + m1*T_intersection).nominal_value, 'go',
ms=13, alpha=0.2, label='Intersection')
plt.plot([T_intersection.nominal_value - 2*T_intersection.std_dev(),
T_intersection.nominal_value + 2*T_intersection.std_dev()],
[(b1 + m1*T_intersection).nominal_value,
(b1 + m1*T_intersection).nominal_value],
'g-', lw=3, label='$\pm 2 \sigma$')
plt.legend(loc='best')
plt.savefig('images/intersection-1.png')
813.698630137+/-62.407180552
You can see there is a substantial uncertainty in the temperature at approximately the 95% confidence level (± 2 σ).
Update 7-7-2013
After a suggestion from Prateek, here we subtract the two data sets, fit a line to that data, and then use fsolve to find the zero. We wrap fsolve in the uncertainties package to directly get the uncertainty on the root.
import numpy as np
from pycse import regress
import matplotlib.pyplot as plt
import uncertainties as u
from scipy.optimize import fsolve
T = np.array([x[0] for x in data])
E1 = np.array([x[1] for x in data])
E2 = np.array([x[2] for x in data])
E = E1 - E2
# columns of the x-values for a line: constant, T
A = np.column_stack([T**0, T])
p, pint, se = regress(A, E, alpha=0.05)
b = u.ufloat((p[0], se[0]))
m = u.ufloat((p[1], se[1]))
@u.wrap
def f(b, m):
X, = fsolve(lambda x: b + m * x, 800)
return X
print f(b, m)
813.698630137+/-54.0386903923
Interesting that this uncertainty is a little smaller than the previously computed uncertainty. Here you can see we have to wrap the function in a peculiar way. The function must return a single float number, and take arguments with uncertainty. We define the polynomial fit (a line in this case) in a lambda function inside the function. It works ok.
org-mode source |
## Exercise 07: VTBI And Infusion Rate Calculator
Date completed: 02/09/2019
Hospitals use programmable pumps to deliver medications and fluids to intra-venous lines at a set number of milliliters per hour. Write a program to output information for the labels the hospital pharmacy places on bags of I.V. medications indicating the volume of medication to be infused and the rate at which the pump should be set. The program should prompt the user to enter the quantity of fluid in the bag and the number of minutes over which it should be infused. Output the VTBI (volume to be infused) in $ml$ and the infusion rate in $ml/hr$.
### Sample run:
Volume to be infused (ml) => 100
Minutes over which to infuse => 20
VTBI: 100 ml
Rate: 300 ml/hr
### The Created Code
The created source code can be found . It has been compressed into a 7-Zip file.
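The submitted source file itself is not reproduced on this page; a minimal Python sketch of the same calculation (my own illustration, not the submitted code) would be:

volume_ml = float(input("Volume to be infused (ml) => "))
minutes = float(input("Minutes over which to infuse => "))

print("VTBI: %g ml" % volume_ml)
print("Rate: %g ml/hr" % (volume_ml / minutes * 60))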
### Some screen prints
Screen print: the program's output as seen in the command prompt (image not reproduced).
## College Algebra (10th Edition)
Using the ZERO function of a graphing utility we get: $ZERO(3x^2+5x+1)=-1.4343\approx-1.43$ |
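For reference, the reported zero can be checked against the quadratic formula (the exercise presumably asks for the smaller root; the other root is approximately $-0.23$):
$$x=\frac{-5\pm\sqrt{5^{2}-4(3)(1)}}{2(3)}=\frac{-5\pm\sqrt{13}}{6}\approx-0.2324\ \text{or}\ -1.4343$$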
# Relate Rates Problem
• December 3rd 2012, 07:50 PM
mjo
Relate Rates Problem
Here is the question I am having trouble with:
The sun is passing over a 100 m tall building. The angle θ made by the sun with the ground is increasing at a rate of pi/20 rads/min. At what rate is the length of the shadow of the building changing when the shadow is 60 m long? Give your answer in exact values.
So far I have got:
100cosfata=x
dx/dt=-100(cosfata)(dfata/dx)
Don't know where to get cosfata from.
• December 4th 2012, 10:05 AM
SujiCorp12345
Re: Relate Rates Problem
"Increasing at a rate of rads/min?"
• December 4th 2012, 01:28 PM
mjo
Re: Relate Rates Problem
Oh my gosh I have read this problem so many times I can just picture the numbers when they are not even there. Ugh.
• December 4th 2012, 03:25 PM
skeeter
Re: Relate Rates Problem
Quote:
The sun is passing over a 100 m tall building. The angle θ made by the sun with the ground is increasing at a rate of pi/20 rads/min. At what rate is the length of the shadow of the building changing when the shadow is 60 m long?
$\cot{\theta} = \frac{x}{100}$
$-\csc^2{\theta} \cdot \frac{d\theta}{dt} = \frac{1}{100} \cdot \frac{dx}{dt}$
• December 5th 2012, 01:23 PM
mjo
Re: Relate Rates Problem
But how do I find a value for -csc^2fata
• December 5th 2012, 02:19 PM
skeeter
Re: Relate Rates Problem
Quote:
Originally Posted by mjo
But how do I find a value for -csc^2fata
review your basic right triangle trig ...
$\csc{\theta} = \frac{hypotenuse}{opposite} = \frac{\sqrt{60^2+100^2}}{100}$
square the result and change its sign to get the value of
$-\csc^2{\theta}$
btw ... $\theta$ is prounounced "theta" , not "fata"
• December 6th 2012, 11:00 AM
mjo
Re: Relate Rates Problem
I found my original problem. Thanks guys |
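For completeness (a step not written out in the thread), substituting the values skeeter gives into his equation:
$$\csc^2{\theta} = \frac{60^2+100^2}{100^2} = 1.36, \qquad \frac{dx}{dt} = -100\csc^2{\theta}\,\frac{d\theta}{dt} = -100(1.36)\left(\frac{\pi}{20}\right) = -6.8\pi \approx -21.4\ \text{m/min},$$
so the shadow is shortening at a rate of $6.8\pi$ metres per minute.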
# Integrals
There are many different uses for integrals. These include finding volumes of solids of revolution, centres of mass, and the distance that Gus the snail has travelled during his attempts on the land-speed record. Definite integrals also have many applications in Physics. One common use of integrals is to find the area under the graph of a function.
An integral is a giant sum. To find the area under the graph of a function, we add up the areas of little rectangles whose base length approaches zero, as shown in the diagram. Fortunately, we don't often have to calculate these sums. There are rules of integration that can help us out.
## Anatomy of an Integral
The symbol for an integral looks a bit like an elongated "S". Think of this as "S" for sum. The other parts of the integral are shown in the diagram below:
Note that the integrand (the function we're integrating) is placed immediately after the integral sign, and the whole thing is finished off with a $d\text{(variable of integration)}$ to indicate that we're integrating with respect to this variable. Of course, this variable needn't be an $x$: it could be anything you like. For example, Gus the snail might think of writing:
$\displaystyle{\int_a^b f(\text{cabbage})\;d\text{cabbage}}$
to work out the total number of cabbage leaves that were eaten over the period from $\text{day } a$ to $\text{day } b$ after his feral caterpillar infestation.
## Definite Integrals
As opposed to indefinite integrals, definite integrals have beginning and end values that decorate the integral sign. The bottom value indicates the beginning of the interval, and the value up the top indicates the end value.
If we can find a function $F(x)$ (sometimes called a Primitive Function) such that the indefinite integral $\displaystyle{\int f(x)\;dx = F(x)}$, then the definite integral over the interval $(a,b)$ is calculated by evaluating:
$\displaystyle{ \int_a^b f(x)\; dx = F(b) - F(a)}.$
We sometimes indicate this using the notation
$\displaystyle{ \int_a^b f(x)\; dx = [F(x)]_a^b}.$
Some books might use the notation
$\displaystyle{ \int_a^b f(x)\; dx = \left. F(x)\right\vert_a^b}$
to mean the same thing.
Let's see this in action on some examples.
### Example
Find the definite integral $\displaystyle{\int_1^2 (6 - 2x) \; dx}$
The indefinite integral is given by $\displaystyle{\int (6 - 2x)\; dx = 6x - x^2 + C}$. Find its values at the end points:
• At $x = 1$: $\displaystyle{\int (6 - 2x)\; dx = 6(1) - (1)^2 + C = 5 + C}$.
• At $x = 2$: $\displaystyle{\int (6 - 2x)\; dx = 6(2) - (2)^2 + C = 8 + C}$.
Subtracting these gives the value of the definite integral:
$\displaystyle{\int_1^2 (6 - 2x)\; dx = (8 + C) - (5 + C) = 3 + (C - C) = 3}.$
Did you notice how the constants cancelled each other out? This always happens with definite integrals, so you can ignore constants in definite integrals. We can actually write the answer like this:
$\displaystyle{\int_1^2 (6 - 2x)\; dx = \left[6x - x^2\right]^2_1 = (6(2) - 2^2) - (6(1) - 1^2) = 3}.$
Remember that the integral should give us the area under the curve? We can use the geometry of this simple example to check that our calculations are correct. The region is a trapezium, so its area is equal to
$\dfrac{h}{2}(b_1 + b_2),$
where $b_1$ and $b_2$ are the parallel sides of the trapezium. The formula gives us an area of $3$ for the trapezium, so it looks like we got the integral right!
Time for another example!
### Example
Find the definite integral $\displaystyle{\int_{0.5}^1 \sin (x) \; dx}$
The indefinite integral is given by $\displaystyle{\int \sin (x) \; dx = - \cos (x) + C}$. As we saw above, we can ignore the $C$ when we evaluate the definite integral, so:
$\displaystyle{\int_{0.5}^1 \sin(x)\; dx = [-\cos(x)]^{1}_{0.5} }= ( - \cos (1)) - (- \cos (0.5)) \approx 0.337.$
The next example shows something to watch out for. Don't assume that the indefinite integral is always zero at $x = 0$.
### Example
Find the definite integral $\displaystyle{\int_{0}^1 (x^2 + \sin(x)) \; dx}$
The indefinite integral is given by $\displaystyle{\int (x^2 + \sin(x) )\; dx = \dfrac{x^3}{3} - \cos(x) + C}$. As we saw above, we can ignore the $C$ when we evaluate the definite integral, so:
$\displaystyle{\int_{0}^1 (x^2 + \sin(x))\; dx = \left[\dfrac{x^3}{3} - \cos(x) \right]^{1}_{0} = \left( \dfrac{1}{3} - \cos(1)\right) - (-\cos(0)) \approx 0.793.}$
If we'd assumed that the indefinite integral was zero at $x = 0$, we would have obtained the incorrect negative answer $-0.207$.
Sometimes we do get negative areas. We need to take care when parts of the curve are below the $x$-axis. If you are asked for an integral, just proceed as before. If you are asked for an area, you need to split the integral up and take the absolute values of the integrals of the bits below the $x$-axis.
### Example
Find the definite integral $\displaystyle{\int_{0}^2 (x^2 - 2) \; dx}$
The indefinite integral is given by $\displaystyle{\int (x^2 -2 )\; dx = \dfrac{x^3}{3} - 2x + C}$. As we saw above, we can ignore the $C$ when we evaluate the definite integral, so:
$\displaystyle{\int_{0}^2 (x^2 -2 )\; dx = \left[\dfrac{x^3}{3} - 2x\right]^{2}_{0} = \left( \dfrac{8}{3} - 2(2)\right) - (0) = - \dfrac{4}{3}}.$
The answer is negative because more of the graph lies below the $x$-axis than above it.
In the next example, we're asked for an area. We need to work out where the graph crosses the $x$-axis so that we can split the integral up and take the absolute values of the integrals of the bits below the $x$-axis.
### Example
Find the area between the graph of $f(x) = x^2 - 2$ and the $x$-axis between $x = 0$ and $x = 2$.
This example involves the same function as the preceding one, but this time we're asked for an area, so we need to split the integral up into two sections, and take the absolute value of the integral corresponding to the section under the $x$-axis. First, we need to work out where the graph crosses the $x$-axis:
\begin{align*} x^2 - 2 &= 0\\ x^2 &= 2\\ x&= \pm \sqrt{2}. \end{align*}
Only $\sqrt{2}$ lies between $x = 0$ and $x = 2$, so this is where we need to make the split. The section of the graph between $x = 0$ and $x = \sqrt{2}$ lies under the $x$-axis, so the required area is
$\text{Area} = \left\vert \displaystyle{\int_0^{\sqrt{2}} (x^2 - 2) \; dx} \right\vert + \displaystyle{\int_{\sqrt{2}}^2 (x^2 - 2)\; dx.}$
The indefinite integral is given by $\displaystyle{\int (x^2 -2) \; dx = \dfrac{x^3}{3} - 2x + C}$. As we saw above, we can ignore the $C$ when we evaluate the definite integrals. Let's evaluate the two parts separately, take the absolute value of the negative one, and then add them together.
From $x = 0$ to $x = \sqrt{2}$:
$\displaystyle{\int_{0}^{\sqrt{2}} (x^2 -2 )\; dx = \left[\dfrac{x^3}{3} - 2x\right]^{\sqrt{2}}_{0} = \left( \dfrac{2\sqrt{2}}{3} - 2\sqrt{2}\right) - (0) = - \dfrac{4\sqrt{2}}{3}}.$
From $x = \sqrt{2}$ to $x = 2$:
$\displaystyle{\int_{\sqrt{2}}^{2} (x^2 -2 )\; dx = \left[\dfrac{x^3}{3} - 2x\right]^{2}_{\sqrt{2}} = \left( \dfrac{8}{3} - 4\right) - \left(-\dfrac{4\sqrt{2}}{3}\right) = \dfrac{8}{3} - 4 + \dfrac{4\sqrt{2}}{3} = 0.5523 \dots}.$
So the overall area is
$\text{Area} = \left\vert -\dfrac{4\sqrt{2}}{3} \right \vert + \dfrac{8}{3} - 4 + \dfrac{4\sqrt{2}}{3} = 2.438 \dots$
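If you'd like to double-check this kind of computation numerically, here is a quick SciPy sketch (not part of the original lesson) that integrates $|x^2 - 2|$ directly:

import numpy as np
from scipy.integrate import quad

# integrate |x^2 - 2| on [0, 2]; pass the kink at sqrt(2) as a breakpoint
area, err = quad(lambda x: abs(x**2 - 2), 0, 2, points=[np.sqrt(2)])
print(area)   # ~2.4379, matching the split-integral calculation above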
### Continuity
In order to evaluate a definite integral, we need the integrand (the function we are integrating) to be continuous on the interval $(a,b)$. That means it can't contain any jumps, holes or vertical asymptotes. In case you've forgotten, vertical asymptotes occur at $x$-values where the function is undefined. As the function approaches these $x$-values, the function values approach plus or minus infinity. There are ways around the jumps and holes: we can simply split the function up into parts where it is continuous and add up the definite integrals of the parts. However, vertical asymptotes cause all sorts of problems.
## Properties of Definite Integrals
The following properties can help you to split up integrals into chunks that are easy to calculate.
### Reversing the Limits
If we reverse the order of the limits, the resulting integral is minus the value of the integral with the limits in the original order:
$\displaystyle{\int^a_b f(x) \; dx = - \int_a^b f(x)\; dx}$
### Both Limits Equal
If both limits are equal, the integral is zero:
$\displaystyle{\int^a_a f(x) \; dx = 0}$
### Combining Two Intervals
If two intervals are adjacent, the integral over the combined interval is equal to the sum of the integrals over the two intervals:
$\displaystyle{\int_a^c f(x) \; dx + \int^b_c f(x) \; dx = \int_a^b f(x)\; dx}$
### Conclusion
To find a definite integral of $f(x)$ over the interval $(a,b)$, first find the indefinite integral $F(x) = \displaystyle{\int f(x)\;dx}$ and evaluate $F(x)$ at $a$ and at $b$. We then have
$\displaystyle{\int^b_a f(x)\; dx = F(b) - F(a).}$
This result is called the "Fundamental Theorem of Calculus".
How to shift forward actions using python
I have two sequential actions, and I want to shift them both forward. I can list the actions with bpy.data.actions, but I don't know how to modify them.
While you can list the actions with bpy.data.actions, once they are in the NLA Editor you need to access them slightly differently.
Once in the NLA Editor they are classed as strips on nla_tracks which are stored in the animation_data for that object.
For example, in the case of the image you have shown you will use this to access an individual strip:
bpy.data.objects['sash_KIN_TYPE_S'].animation_data.nla_tracks['KIN_TYPE_SL'].strips['KIN_TYPE_SL_close']
From there you can access the frame_start and frame_end properties of the strip. Increasing both of these by the same amount will move the strip. For example:
strip.frame_start += 10
strip.frame_end += 10
To shift multiple strips you can loop over the strips collection in the same way you would loop over actions:
strips = bpy.data.objects['sash_KIN_TYPE_S'].animation_data.nla_tracks['KIN_TYPE_SL'].strips
for strip in strips:
strip.frame_start += 10
strip.frame_end += 10
I found all of this out by hovering my mouse over the properties in the NLA Editor and looking in the tooltip (because I enabled Python Tooltips in the User Preferences). |
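If you want to move everything at once, the same idea extends to every NLA track on the object. This is an untested sketch along the lines of the answer above (the object name comes from the question; the offset is arbitrary):

import bpy

offset = 10
obj = bpy.data.objects['sash_KIN_TYPE_S']
for track in obj.animation_data.nla_tracks:
    for strip in track.strips:
        strip.frame_start += offset
        strip.frame_end += offset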
# While solving an LPP (defined by n variables and m equations, m < n) through simplex method, basic solutions are determined by setting n – m variables equal to zero and solving m equations to obtain solution for remaining m variables, provided the resulting solutions are unique. This means that the maximum number of basic solutions is:
This question was previously asked in
TNTRB 2017 ME Official Question Paper
1. $$\frac{{n!}}{{m!\left( {n - m} \right)!}}$$
2. $$\frac{{m!}}{{n!\left( {n - m} \right)!}}$$
3. $$\frac{{n!}}{{m!\left( {n + m} \right)!}}$$
4. $$\frac{{m!}}{{n!\left( {n + m} \right)!}}$$
Option 1 : $$\frac{{n!}}{{m!\left( {n - m} \right)!}}$$
## Detailed Solution
Explanation:
• The standard LPP form includes "m" simultaneous linear equations in "n" variables. i.e. m < n.
• The "n" variables are divided into two sets:
• n-m variables, to which we assign zero values, and
• The remaining "m" variables, whose values are determined by solving the resulting "m" equations.
• If the m equations yield a unique solution, then the associated m variables are called basic variables, and the remaining n - m zero variables are referred to as non-basic variables.
• In this case, the resulting unique solution comprises a basic solution. If all the variables assume non-negative values, then the basic solution is feasible. otherwise, it is infeasible.
• By definition, the maximum number of possible basic solutions for 'm' equations in n unknowns is
• $$\binom{n}{m} = \frac{n!}{m!\left( n - m \right)!}$$
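As a quick illustrative check (the numbers here are chosen for this note and are not from the question): with n = 4 variables and m = 2 equations, the number of ways to pick which n − m = 2 variables are set to zero is
$$\binom{4}{2} = \frac{4!}{2!\,2!} = 6,$$
so there are at most 6 basic solutions.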
Confidence Measures in Multiple pronunciations Modeling For Speaker Verification
This paper investigates the use of multiple pronunciations modeling for User-Customized Password Speaker Verification (UCP-SV). The main characteristic of the UCP-SV is that the system does not have any a priori knowledge about the password used by the speaker. Our aim is to exploit the information about how the speaker pronounces a password in the decision process. This information is extracted automatically by using a speaker-independent speech recognizer. In this paper, we investigate and compare several techniques. Some of them are based on the combination of confidence scores estimated by different models. In this context, we propose a new confidence measure that uses acoustic information extracted during the speaker enrollment and based on a log likelihood ratio measure. These techniques show significant improvement (15.7% relative improvement in terms of equal error rate) compared to a UCP-SV baseline system where the speaker is modeled by only one model (corresponding to one utterance).
Year: 2003
Publisher: IDIAP
Note: in Proceedings of the 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-04), 2004
# Learning with Square Loss: Localization through Offset Rademacher Complexity
Tengyuan Liang (Department of Statistics, The Wharton School, University of Pennsylvania), Alexander Rakhlin (Department of Statistics, The Wharton School, University of Pennsylvania), Karthik Sridharan (Department of Computer Science, Cornell University)
###### Abstract
We consider regression with square loss and general classes of functions without the boundedness assumption. We introduce a notion of offset Rademacher complexity that provides a transparent way to study localization both in expectation and in high probability. For any (possibly non-convex) class, the excess loss of a two-step estimator is shown to be upper bounded by this offset complexity through a novel geometric inequality. In the convex case, the estimator reduces to an empirical risk minimizer. The method recovers the results of [18] for the bounded case while also providing guarantees without the boundedness assumption.
## 1 Introduction
Determining the finite-sample behavior of risk in the problem of regression is arguably one of the most basic problems of Learning Theory and Statistics. This behavior can be studied in substantial generality with the tools of empirical process theory. When functions in a given convex class are uniformly bounded, one may verify the so-called “Bernstein condition.” The condition—which relates the variance of the increments of the empirical process to their expectation—implies a certain localization phenomenon around the optimum and forms the basis of the analysis via local Rademacher complexities. The technique has been developed in [9, 8, 5, 2, 4], among others, based on Talagrand’s celebrated concentration inequality for the supremum of an empirical process.
In a recent pathbreaking paper, [14] showed that a large part of this heavy machinery is not necessary for obtaining tight upper bounds on excess loss, even—and especially—if functions are unbounded. Mendelson observed that only one-sided control of the tail is required in the deviation inequality, and, thankfully, it is the tail that can be controlled under very mild assumptions.
In a parallel line of work, the search within the online learning setting for an analogue of “localization” has led to a notion of an “offset” Rademacher process [17], yielding—in a rather clean manner—optimal rates for minimax regret in online supervised learning. It was also shown that the supremum of the offset process is a lower bound on the minimax value, thus establishing its intrinsic nature. The present paper blends the ideas of [14] and [17]. We introduce the notion of an offset Rademacher process for i.i.d. data and show that the supremum of this process upper bounds (both in expectation and in high probability) the excess risk of an empirical risk minimizer (for convex classes) and a two-step Star estimator of [1] (for arbitrary classes). The statement holds under a weak assumption even if functions are not uniformly bounded.
The offset Rademacher complexity provides an intuitive alternative to the machinery of local Rademacher averages. Let us recall that the Rademacher process indexed by a function class $\mathcal{G}$ is defined as a stochastic process $g \mapsto \frac{1}{n}\sum_{t=1}^{n}\epsilon_t g(x_t)$, where $x_1,\ldots,x_n$ are held fixed and $\epsilon_1,\ldots,\epsilon_n$ are i.i.d. Rademacher random variables. We define the offset Rademacher process as a stochastic process

$$g \mapsto \frac{1}{n}\sum_{t=1}^{n} \epsilon_t g(x_t) - c\, g(x_t)^2$$

for some $c>0$. The process itself captures the notion of localization: when $g$ is large in magnitude, the negative quadratic term acts as a compensator and "extinguishes" the fluctuations of the term involving Rademacher variables. The supremum of the process will be termed offset Rademacher complexity, and one may expect that this complexity is of a smaller order than the classical Rademacher averages (which, without localization, cannot be better than the rate of $1/\sqrt{n}$).
The self-modulating property of the offset complexity can be illustrated on the canonical example of a linear class $\mathcal{F}=\{x\mapsto \langle w, x\rangle : w\in\mathbb{R}^p\}$, in which case the offset Rademacher complexity becomes

$$\frac{1}{n}\sup_{w\in\mathbb{R}^p}\left\{ w^{\mathsf T}\left(\sum_{t=1}^{n}\epsilon_t x_t\right) - c\,\|w\|_{\Sigma}^{2} \right\} = \frac{1}{4cn}\left\|\sum_{t=1}^{n}\epsilon_t x_t\right\|_{\Sigma^{-1}}^{2}$$

where $\Sigma = \sum_{t=1}^{n} x_t x_t^{\mathsf T}$. Under mild conditions, the above expression is of the order $p/n$ in expectation and in high probability — a familiar rate achieved by the ordinary least squares, at least in the case of a well-specified model. We refer to Section 6 for the precise statement for both well-specified and misspecified case.
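As a sanity check on this closed form, here is a small simulation sketch (our own illustration, with a Gaussian design and c = 1/2 chosen arbitrarily) showing that the quantity is of the p/n scale:

import numpy as np

rng = np.random.default_rng(0)
n, p, c = 2000, 10, 0.5
X = rng.standard_normal((n, p))            # design points x_1, ..., x_n
Sigma = X.T @ X                            # unnormalized second-moment matrix
eps = rng.choice([-1.0, 1.0], size=n)      # Rademacher signs
v = X.T @ eps
offset_complexity = v @ np.linalg.solve(Sigma, v) / (4 * c * n)
print(offset_complexity, p / n)            # both are of order p/n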
Our contributions can be summarized as follows. First, we show that offset Rademacher complexity is an upper bound on excess loss of the proposed estimator, both in expectation and in deviation. We then extend the chaining technique to quantify the behavior of the supremum of the offset process in terms of covering numbers. By doing so, we recover the rates of aggregation established in [18] and, unlike the latter paper, the present method does not require boundedness (of the noise and functions). We provide a lower bound on minimax excess loss in terms of offset Rademacher complexity, indicating its intrinsic nature for the problems of regression. While our in-expectation results for bounded functions do not require any assumptions, the high probability statements rest on a lower isometry assumption that holds, for instance, for subgaussian classes. We show that offset Rademacher complexity can be further upper bounded by the fixed-point complexities defined by Mendelson [14]. We conclude with the analysis of ordinary least squares.
## 2 Problem Description and the Estimator
Let $\mathcal{F}$ be a class of functions on a probability space $(\mathcal{X}, P_X)$. The response $Y$ is given by an unknown random variable, distributed jointly with $X$ according to $P$. We observe a sample $(X_1,Y_1),\ldots,(X_n,Y_n)$ distributed i.i.d. according to $P$ and aim to construct an estimator $\hat{f}$ with small excess loss $\mathcal{E}(\hat{f})$, where

$$\mathcal{E}(g) \triangleq \mathbb{E}(g-Y)^2 - \inf_{f\in\mathcal{F}} \mathbb{E}(f-Y)^2 \qquad (1)$$

and $\mathbb{E}$ is the expectation with respect to $P$. Let $\hat{\mathbb{E}}$ denote the empirical expectation operator and define the following two-step procedure:
$$\hat{g} = \operatorname*{argmin}_{f\in\mathcal{F}}\ \hat{\mathbb{E}}(f(X)-Y)^2, \qquad \hat{f} = \operatorname*{argmin}_{f\in\operatorname{star}(\mathcal{F},\hat{g})}\ \hat{\mathbb{E}}(f(X)-Y)^2 \qquad (2)$$

where $\operatorname{star}(\mathcal{F},\hat{g}) = \{\lambda\hat{g} + (1-\lambda)f : f\in\mathcal{F},\ \lambda\in[0,1]\}$ is the star hull of $\mathcal{F}$ around $\hat{g}$. This two-step estimator was introduced (to the best of our knowledge) by [1] for a finite class $\mathcal{F}$. We will refer to the procedure as the Star estimator. Audibert showed that this method is deviation-optimal for finite aggregation — the first such result, followed by other estimators with similar properties [10, 6] for the finite case. We present analysis that quantifies the behavior of this method for arbitrary classes of functions. The method has several nice features. First, it provides an alternative to the 3-stage discretization method of [18], does not require the prior knowledge of the entropy of the class, and goes beyond the bounded case. Second, it enjoys an upper bound of offset Rademacher complexity via relatively routine arguments under rather weak assumptions. Third, it naturally reduces to empirical risk minimization for convex classes (indeed, this happens whenever $\operatorname{star}(\mathcal{F},\hat{g}) = \mathcal{F}$).
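To make the two-step procedure concrete, here is a minimal numerical sketch (our own illustration, not code from the paper) for a small finite class of linear functions, with the star hull searched over a grid of λ values:

import numpy as np

def star_estimator(F, X, Y, n_grid=101):
    """F: list of callables f(x); returns the selected predictor as a callable."""
    emp_loss = lambda f: np.mean((f(X) - Y) ** 2)     # empirical square loss
    g_hat = min(F, key=emp_loss)                      # step 1: ERM over F
    best, best_loss = g_hat, emp_loss(g_hat)
    for f in F:                                       # step 2: ERM over star(F, g_hat)
        for lam in np.linspace(0.0, 1.0, n_grid):
            h = lambda x, f=f, lam=lam: lam * g_hat(x) + (1 - lam) * f(x)
            if emp_loss(h) < best_loss:
                best, best_loss = h, emp_loss(h)
    return best

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
Y = 0.7 * X + 0.1 * rng.standard_normal(200)
F = [lambda x, a=a: a * x for a in (-1.0, 0.0, 0.5, 1.0)]
f_hat = star_estimator(F, X, Y)
print(np.mean((f_hat(X) - Y) ** 2))                   # empirical loss of the Star estimator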
Let $f^*$ denote the minimizer

$$f^* = \operatorname*{argmin}_{f\in\mathcal{F}}\ \mathbb{E}(f(X)-Y)^2,$$

and let $\xi$ denote the "noise"

$$\xi = Y - f^*.$$

We say that the model is misspecified if the regression function $\mathbb{E}[Y|X] \notin \mathcal{F}$, which means $\xi$ is not zero-mean. Otherwise, we say that the model is well-specified.
## 3 A Geometric Inequality
We start by proving a geometric inequality for the Star estimator. This deterministic inequality holds conditionally on $X_1,\ldots,X_n$, and therefore reduces to a problem in $\mathbb{R}^n$.
###### Lemma 1 (Geometric Inequality).
The two-step estimator in (2) satisfies
$$\hat{\mathbb{E}}(h-Y)^2 - \hat{\mathbb{E}}(\hat{f}-Y)^2 \ \ge\ c\cdot\hat{\mathbb{E}}(\hat{f}-h)^2 \qquad (3)$$

for any $h\in\mathcal{F}$ and $c = 1/18$. If $\mathcal{F}$ is convex, (3) holds with $c = 1$. Moreover, if $\mathcal{F}$ is a linear subspace, (3) holds with equality and $c = 1$ by the Pythagorean theorem.
###### Remark 1.
In the absence of convexity of $\mathcal{F}$, the two-step estimator mimics the key Pythagorean identity, though with a smaller constant $c$. We have not focused on optimizing this constant but rather on presenting a clean geometric argument.
###### Proof of Lemma 1.
Define the empirical distance between any two functions $f,g$ to be $\|f-g\|_n = \big(\hat{\mathbb{E}}(f-g)^2\big)^{1/2}$, and the empirical product to be $\langle f, g\rangle_n = \hat{\mathbb{E}}[fg]$. We will slightly abuse the notation by identifying every function with its finite-dimensional projection $(f(X_1),\ldots,f(X_n))$ on the sample.
Denote the ball (and sphere) centered at and with radius to be (and , correspondingly). In a similar manner, define and . By the definition of the Star algorithm, we have . The statement holds with if , and so we may assume . Denote by the conic hull of with origin at . Define the spherical cap outside the cone to be (drawn in red in Figure 3).
First, by the optimality of , for any , we have , i.e. any is not in the interior of . Furthermore, is not in the interior of the cone , as otherwise there would be a point inside strictly better than . Thus .
Second, and it is a contact point of and . Indeed, is necessarily on a line segment between and a point outside that does not pass through the interior of by optimality of . Let be the set of all contact points – potential locations of .
Now we fix and consider the two dimensional plane that passes through three points , depicted in Figure 3. Observe that the left-hand-side of the desired inequality (3) is constant as ranges over . To prove the inequality it therefore suffices to choose a value that maximizes the right-hand-side. The maximization of over is achieved by . This can be argued simply by symmetry: the two-dimensional plane intersects in a line and the distance between and is maximized at the extreme point of this intersection. Hence, to prove the desired inequality, we can restrict our attention to the plane and instead of .
For any $h$, define the projection of $h$ onto the sphere $\{g : \|g-Y\|_n = \|\hat{g}-Y\|_n\}$ to be $h_\perp$. We first prove (3) for $h_\perp$ and then extend the statement to $h$. By the geometry of the cone,
$$\|f'-\hat{g}\|_n \ \ge\ \tfrac{1}{2}\|\hat{g}-h_\perp\|_n.$$
By triangle inequality,
$$\|f'-\hat{g}\|_n \ \ge\ \tfrac{1}{2}\|\hat{g}-h_\perp\|_n \ \ge\ \tfrac{1}{2}\left(\|f'-h_\perp\|_n - \|f'-\hat{g}\|_n\right).$$
Rearranging,
$$\|f'-\hat{g}\|_n^2 \ \ge\ \tfrac{1}{9}\|f'-h_\perp\|_n^2.$$
By the Pythagorean theorem,
$$\|h_\perp-Y\|_n^2 - \|f'-Y\|_n^2 = \|\hat{g}-Y\|_n^2 - \|f'-Y\|_n^2 = \|f'-\hat{g}\|_n^2 \ \ge\ \tfrac{1}{9}\|f'-h_\perp\|_n^2,$$

thus proving the claim for $h_\perp$ with constant $1/9$.
We can now extend the claim to $h$. Indeed, by the geometry of the projection $h_\perp$, we have $\langle h_\perp - Y,\ h_\perp - h\rangle_n \le 0$. Thus

$$\begin{aligned}
\|h-Y\|_n^2 - \|f'-Y\|_n^2 &= \|h_\perp-h\|_n^2 + \|h_\perp-Y\|_n^2 - 2\langle h_\perp-Y,\ h_\perp-h\rangle_n - \|f'-Y\|_n^2 \\
&\ge \|h_\perp-h\|_n^2 + \left(\|h_\perp-Y\|_n^2 - \|f'-Y\|_n^2\right) \\
&\ge \|h_\perp-h\|_n^2 + \tfrac{1}{9}\|f'-h_\perp\|_n^2 \ \ge\ \tfrac{1}{18}\left(\|h_\perp-h\|_n + \|f'-h_\perp\|_n\right)^2 \\
&\ge \tfrac{1}{18}\|f'-h\|_n^2.
\end{aligned}$$

This proves the claim for $h$ with constant $c = 1/18$.
An upper bound on excess loss follows immediately from Lemma 1.
###### Corollary 2.
Conditioned on the data $\{(X_i,Y_i)\}_{i=1}^{n}$, we have a deterministic upper bound for the Star algorithm:

$$\mathcal{E}(\hat{f}) \ \le\ (\hat{\mathbb{E}}-\mathbb{E})\!\left[2(f^*-Y)(f^*-\hat{f})\right] + \mathbb{E}(f^*-\hat{f})^2 - (1+c)\cdot\hat{\mathbb{E}}(f^*-\hat{f})^2, \qquad (4)$$

with the value of the constant $c$ given in Lemma 1.
###### Proof.
$$\begin{aligned}
\mathcal{E}(\hat{f}) &= \mathbb{E}(\hat{f}(X)-Y)^2 - \inf_{f\in\mathcal{F}}\mathbb{E}(f(X)-Y)^2 \\
&\le \mathbb{E}(\hat{f}-Y)^2 - \mathbb{E}(f^*-Y)^2 + \left[\hat{\mathbb{E}}(f^*-Y)^2 - \hat{\mathbb{E}}(\hat{f}-Y)^2 - c\cdot\hat{\mathbb{E}}(\hat{f}-f^*)^2\right] \\
&= (\hat{\mathbb{E}}-\mathbb{E})\!\left[2(f^*-Y)(f^*-\hat{f})\right] + \mathbb{E}(f^*-\hat{f})^2 - (1+c)\cdot\hat{\mathbb{E}}(f^*-\hat{f})^2.
\end{aligned}$$
An attentive reader will notice that the multiplier on the negative empirical quadratic term in (4) is slightly larger than the one on the expected quadratic term. This is the starting point of the analysis that follows.
## 4 Symmetrization
We will now show that the discrepancy in the multiplier constant in (4) leads to offset Rademacher complexity through rather elementary symmetrization inequalities. We perform this analysis both in expectation (for the case of bounded functions) and in high probability (for the general unbounded case). While the former result follows from the latter, the in-expectation statement for bounded functions requires no assumptions, in contrast to control of the tails.
###### Theorem 3.
Define the set . The following expectation bound on excess loss of the Star estimator holds:
$$\mathbb{E}\,\mathcal{E}(\hat{f}) \ \le\ \left(2M + K(2+c)/2\right)\cdot \mathbb{E}\sup_{h\in\mathcal{H}}\left\{\frac{1}{n}\sum_{i=1}^{n} 2\epsilon_i h(X_i) - c'\, h(X_i)^2\right\}$$
where $\epsilon_1,\ldots,\epsilon_n$ are independent Rademacher random variables, $c'$ is a positive constant determined by $c$, $K$, and $M$, and the functions in $\mathcal{F}$ are bounded by $K$ and $|Y| \le M$ almost surely.
The proof of the theorem involves an introduction of independent Rademacher random variables and two contraction-style arguments to remove the multipliers . These algebraic manipulations are postponed to the appendix.
The term in the curly brackets will be called an offset Rademacher process, and the expected supremum — an offset Rademacher complexity. While Theorem 3 only applies to bounded functions and bounded noise, the upper bound already captures the localization phenomenon, even for non-convex function classes (and thus goes well beyond the classical local Rademacher analysis).
As argued in [14], it is the contraction step that requires boundedness of the functions when analyzing square loss. Mendelson uses a small ball assumption (a weak condition on the distribution, stated below) to split the analysis into the study of the multiplier and quadratic terms. This assumption allows one to compare the expected square of any function to its empirical version, to within a multiplicative constant that depends on the small ball property. In contrast, we need a somewhat stronger assumption that will allow us to take this constant to be at least . We phrase this condition—the lower isometry bound—as follows. (We thank Shahar Mendelson for pointing out that the small ball condition in the initial version of this paper was too weak for our purposes.)
###### Definition 1 (Lower Isometry Bound).
We say that a function class satisfies the lower isometry bound with some parameters and if
$$\mathbb{P}\left(\inf_{f\in\mathcal F\setminus\{0\}}\frac{\frac{1}{n}\sum_{i=1}^n f^2(X_i)}{\mathbb{E}f^2}\ \ge\ 1-\eta\right)\ \ge\ 1-\delta \tag{5}$$
for all , where depends on the complexity of the class.
In general this is a mild assumption that requires good tail behavior of functions in , yet it is stronger than the small ball property. Mendelson [16] shows that this condition holds for heavy-tailed classes assuming the small ball condition plus a norm-comparison property . We also remark that Assumption 1 holds for sub-gaussian classes using concentration tools, as already shown in [11]. For completeness, let us also state the small ball property:
###### Definition 2 (Small Ball Property [14, 15]).
The class of functions satisfies the small-ball condition if there exist constants and for every ,
$$\mathbb{P}\left(|f(X)|\ \ge\ \kappa\left(\mathbb{E}f^2\right)^{1/2}\right)\ \ge\ \epsilon.$$
Armed with the lower isometry bound, we now prove that the tail behavior of the deterministic upper bound in (4) can be controlled via the tail behavior of offset Rademacher complexity.
###### Theorem 4.
Define the set . Assume the lower isometry bound in Definition 1 holds with and some , where is the constant in (3). Let . Define
$$A:=\sup_{h\in\mathcal H}\frac{\mathbb{E}h^4}{\left(\mathbb{E}h^2\right)^2}\qquad\text{and}\qquad B:=\sup_{X,Y}\mathbb{E}\xi^4.$$
Then there exist two absolute constants (only depends on ), such that
$$\mathbb{P}\left(\mathcal{E}(\hat f)>4u\right)\ \le\ 4\delta+4\,\mathbb{P}\left(\sup_{h\in\mathcal H}\frac{1}{n}\sum_{i=1}^n\big(\epsilon_i\xi_i h(X_i)-\tilde c\cdot h(X_i)^2\big)\ >\ u\right)$$
for any
$$u\ >\ \frac{32\sqrt{AB}}{c'}\cdot\frac{1}{n},$$
as long as .
Theorem 4 states that excess loss is stochastically dominated by offset Rademacher complexity. We remark that the requirement in holds under the mild moment conditions.
###### Remark 2.
In certain cases, Definition 1 can be shown to hold for (rather than all ), for some critical radius , as soon as (see [16]). In this case, the bound on the offset complexity is only affected additively by .
We postpone the proof of the Theorem to the appendix. In a nutshell, it extends the classical probabilistic symmetrization technique [7, 13] to the non-zero-mean offset process under investigation.
Let us summarize the development so far. We have shown that excess loss of the Star estimator is upper bounded by the (data-dependent) offset Rademacher complexity, both in expectation and in high probability, under the appropriate assumptions. We claim that the necessary properties of the estimator are now captured by the offset complexity, and we are now squarely in the realm of empirical process theory. In particular, we may want to quantify rates of convergence under complexity assumptions on , such as covering numbers. In contrast to local Rademacher analyses where one would need to estimate the data-dependent fixed point of the critical radius in some way, the task is much easier for the offset complexity. To this end, we study the offset process with the tools of empirical process theory.
### 5.1 Chaining Bounds
The first lemma describes the behavior of offset Rademacher process for a finite class.
###### Lemma 5.
Let be a finite set of vectors of cardinality . Then for any ,
$$\mathbb{E}_\epsilon\max_{v\in V}\left[\frac{1}{n}\sum_{i=1}^n\epsilon_i v_i-Cv_i^2\right]\ \le\ \frac{1}{2C}\cdot\frac{\log N}{n}.$$
Furthermore, for any ,
$$\mathbb{P}\left(\max_{v\in V}\left[\frac{1}{n}\sum_{i=1}^n\epsilon_i v_i-Cv_i^2\right]\ \ge\ \frac{1}{2C}\cdot\frac{\log N+\log 1/\delta}{n}\right)\ \le\ \delta.$$
When the noise is unbounded,
$$\mathbb{E}_\epsilon\max_{v\in V}\left[\frac{1}{n}\sum_{i=1}^n\epsilon_i\xi_i v_i-Cv_i^2\right]\ \le\ M\cdot\frac{\log N}{n},\qquad \mathbb{P}_\epsilon\left(\max_{v\in V}\left[\frac{1}{n}\sum_{i=1}^n\epsilon_i\xi_i v_i-Cv_i^2\right]\ \ge\ M\cdot\frac{\log N+\log 1/\delta}{n}\right)\ \le\ \delta,$$
where
$$M:=\sup_{v\in V\setminus\{0\}}\frac{\sum_{i=1}^n v_i^2\xi_i^2}{2C\sum_{i=1}^n v_i^2}. \tag{6}$$
Armed with the lemma for a finite collection, we upper bound the offset Rademacher complexity of a general class through the chaining technique. We perform the analysis in expectation and in probability. Recall that a -cover of a subset in a metric space is a collection of elements such that the union of the -balls with centers at the elements contains . A covering number at scale is the size of the minimal -cover.
One of the main objectives of symmetrization is to arrive at a stochastic process that can be studied conditionally on data, so that all the relevant complexities can be made sample-based (or, empirical). Since the functions only enter offset Rademacher complexity through their values on the sample , we are left with a finite-dimensional object. Throughout the paper, we work with the empirical distance
$$d_n(f,g)=\left(\frac{1}{n}\sum_{i=1}^n\left(f(X_i)-g(X_i)\right)^2\right)^{1/2}.$$
The covering number of at scale with respect to will be denoted by .
###### Lemma 6.
Let be a class of functions from to . Then for any
$$\mathbb{E}_\epsilon\sup_{g\in\mathcal G}\left[\frac{1}{n}\sum_{i=1}^n\epsilon_i g(z_i)-Cg(z_i)^2\right]\ \le\ \inf_{\gamma\ge 0,\ \alpha\in[0,\gamma]}\left\{\frac{2}{C}\cdot\frac{\log N_2(\mathcal G,\gamma)}{n}+4\alpha+\frac{12}{\sqrt n}\int_\alpha^\gamma\sqrt{\log N_2(\mathcal G,\delta)}\,d\delta\right\}$$
where is an -cover of on at scale (assumed to contain ).
Instead of assuming that is contained in the cover, we may simply increase the size of the cover by , which can be absorbed by a small change of a constant.
Let us discuss the upper bound of Lemma 6. First, we may take , unless the integral diverges (which happens for very large classes with entropy growth of , ). Next, observe that the first term is precisely the rate of aggregation with a finite collection of size . Hence, the upper bound is an optimal balance of the following procedure: cover the set at scale and pay the rate of aggregation for this finite collection, plus pay the rate of convergence of ERM within a -ball. The optimal balance is given by some (and can be easily computed under assumptions on covering number behavior — see [17]). The optimal quantifies the localization radius that arises from the curvature of the loss function. One may also view the optimal balance as the well-known equation
$$\frac{\log N(\mathcal G,\gamma)}{n}\ \asymp\ \gamma^2,$$
studied in statistics [19] for well-specified models. The present paper, as well as [18], extend the analysis of this balance to the misspecified case and non-convex classes of functions.
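To make the balance concrete, here is a quick worked computation (an illustration, not taken from the original text) under the polynomial entropy growth $\log N_2(\mathcal G,\gamma)\le\gamma^{-p}$ that appears later in Corollary 12:

$$\frac{\log N_2(\mathcal G,\gamma)}{n}\asymp\gamma^2 \quad\Longrightarrow\quad \frac{\gamma^{-p}}{n}\asymp\gamma^{2} \quad\Longrightarrow\quad \gamma\asymp n^{-\frac{1}{p+2}},\qquad \gamma^{2}\asymp n^{-\frac{2}{p+2}},$$

so in the regime $p\in(0,2)$ the localization radius is $\gamma\asymp n^{-1/(p+2)}$ and the resulting rate is of order $n^{-2/(p+2)}$.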
Now we provide a high probability analogue of Lemma 6.
###### Lemma 7.
Let be a class of functions from to . Then for any and any ,
$$\mathbb{P}_\epsilon\left(\sup_{g\in\mathcal G}\left[\frac{1}{n}\sum_{i=1}^n\epsilon_i g(z_i)-Cg(z_i)^2\right]\ >\ u\cdot\inf_{\alpha\in[0,\gamma]}\left\{4\alpha+\frac{12}{\sqrt n}\int_\alpha^\gamma\sqrt{\log N_2(\mathcal G,\delta)}\,d\delta\right\}+\frac{2}{C}\cdot\frac{\log N_2(\mathcal G,\gamma)+u}{n}\right)\ \le\ \frac{2}{1-e^{-2}}\exp(-cu^2)+\exp(-u)$$
where is an -cover of on at scale (assumed to contain ) and are universal constants.
The above lemmas study the behavior of offset Rademacher complexity for abstract classes . Observe that the upper bounds in previous sections are in terms of the class . This class, however, is not more complex than the original class (with the exception of a finite class ). More precisely, the covering numbers of and are bounded as
$$\log N_2(\mathcal F+\mathcal F',2\epsilon),\ \log N_2(\mathcal F-\mathcal F',2\epsilon)\ \le\ \log N_2(\mathcal F,\epsilon)+\log N_2(\mathcal F',\epsilon)$$
for any . The following lemma shows that the complexity of the star hull is also not significantly larger than that of .
###### Lemma 8 ([12], Lemma 4.5).
For any scale , the covering number of and that of are bounded in the sense
$$\log N_2(\mathcal F,2\epsilon)\ \le\ \log N_2(\operatorname{star}(\mathcal F),2\epsilon)\ \le\ \log\frac{2}{\epsilon}+\log N_2(\mathcal F,\epsilon).$$
Now let us study the critical radius of offset Rademacher processes. Let and define
$$\alpha_n(\mathcal H,\kappa,\delta)\ \triangleq\ \inf\left\{r>0:\ \mathbb{P}\left(\sup_{h\in\mathcal H\cap rB}\left\{\frac{1}{n}\sum_{i=1}^n 2\epsilon_i\xi_i h(X_i)-c'\,\frac{1}{n}\sum_{i=1}^n h^2(X_i)\right\}\le\kappa r^2\right)\ge 1-\delta\right\}. \tag{7}$$
###### Theorem 9.
Assume is star-shaped around 0 and the lower isometry bound holds for . Define the critical radius
$$r=\alpha_n\left(\mathcal H,\,c'(1-\epsilon),\,\delta\right).$$
Then we have with probability at least ,
$$\sup_{h\in\mathcal H}\left\{\frac{2}{n}\sum_{i=1}^n\epsilon_i\xi_i h(X_i)-c'\,\frac{1}{n}\sum_{i=1}^n h^2(X_i)\right\}\ =\ \sup_{h\in\mathcal H\cap rB}\left\{\frac{2}{n}\sum_{i=1}^n\epsilon_i\xi_i h(X_i)-c'\,\frac{1}{n}\sum_{i=1}^n h^2(X_i)\right\},$$
which further implies
$$\sup_{h\in\mathcal H}\left\{\frac{2}{n}\sum_{i=1}^n\epsilon_i\xi_i h(X_i)-c'\,\frac{1}{n}\sum_{i=1}^n h^2(X_i)\right\}\ \le\ r^2.$$
The first statement of Theorem 9 shows the self-modulating behavior of the offset process: there is a critical radius, beyond which the fluctuations of the offset process are controlled by those within the radius. To understand the second statement, we observe that the complexity is upper bounded by the corresponding complexity in [14], which is defined without the quadratic term subtracted off. Hence, offset Rademacher complexity is no larger (under our Assumption 1) than the upper bounds obtained by [14] in terms of the critical radius.
## 6 Examples
In this section, we briefly describe several applications. The first is concerned with parametric regression.
###### Lemma 10.
Consider the parametric regression , where need not be centered. The offset Rademacher complexity is bounded as
$$\mathbb{E}_\epsilon\sup_{\beta\in\mathbb{R}^p}\left\{\frac{1}{n}\sum_{i=1}^n\big(2\epsilon_i\xi_i X_i^T\beta-C\beta^T X_iX_i^T\beta\big)\right\}\ =\ \frac{\operatorname{tr}(G^{-1}H)}{Cn}$$
and
$$\mathbb{P}_\epsilon\left(\sup_{\beta\in\mathbb{R}^p}\left\{\frac{1}{n}\sum_{i=1}^n\big(2\epsilon_i\xi_i X_i^T\beta-C\beta^T X_iX_i^T\beta\big)\right\}\ \ge\ \frac{\operatorname{tr}(G^{-1}H)}{Cn}+\frac{\sqrt{\operatorname{tr}\left([G^{-1}H]^2\right)}}{n}\left(4\sqrt{2\log\tfrac{1}{\delta}}+64\log\tfrac{1}{\delta}\right)\right)\ \le\ \delta$$
where is the Gram matrix and . In the well-specified case (that is, are zero-mean), assuming that conditional variance is , then conditionally on the design matrix, and excess loss is upper bounded by order .
###### Proof.
The offset Rademacher can be interpreted as the Fenchel-Legendre transform, where
$$\sup_{\beta\in\mathbb{R}^p}\left\{\sum_{i=1}^n\big(2\epsilon_i\xi_i X_i^T\beta-C\beta^T X_iX_i^T\beta\big)\right\}\ =\ \frac{\sum_{i,j=1}^n\epsilon_i\epsilon_j\xi_i\xi_j X_i^TG^{-1}X_j}{Cn}. \tag{8}$$
Thus we have in expectation
$$\mathbb{E}_\epsilon\,\frac{1}{n}\sup_{\beta\in\mathbb{R}^p}\left\{\sum_{i=1}^n\big(2\epsilon_i\xi_i X_i^T\beta-C\beta^T X_iX_i^T\beta\big)\right\}\ =\ \frac{\sum_{i=1}^n\xi_i^2X_i^TG^{-1}X_i}{Cn}\ =\ \frac{\operatorname{tr}\left[G^{-1}\left(\sum_{i=1}^n\xi_i^2X_iX_i^T\right)\right]}{Cn}. \tag{9}$$
For the high-probability bound, note that the expression in Equation (8) is a Rademacher chaos of order two. Define the symmetric matrix $M$ with entries
$$M_{ij}=\xi_i\xi_j X_i^TG^{-1}X_j$$
and define
$$Z=\sum_{i,j=1}^n\epsilon_i\epsilon_j\xi_i\xi_j X_i^TG^{-1}X_j=\sum_{i,j=1}^n\epsilon_i\epsilon_j M_{ij}.$$
Then
$$\mathbb{E}Z=\operatorname{tr}\left[G^{-1}\left(\sum_{i=1}^n\xi_i^2X_iX_i^T\right)\right],$$
and
$$\mathbb{E}\sum_{i=1}^n\left(\sum_{j=1}^n\epsilon_j M_{ij}\right)^2=\|M\|_F^2=\operatorname{tr}\left[G^{-1}\left(\sum_{i=1}^n\xi_i^2X_iX_i^T\right)G^{-1}\left(\sum_{i=1}^n\xi_i^2X_iX_i^T\right)\right].$$
Furthermore,
$$\|M\|\ \le\ \|M\|_F\ =\ \sqrt{\operatorname{tr}\left[G^{-1}\left(\sum_{i=1}^n\xi_i^2X_iX_i^T\right)G^{-1}\left(\sum_{i=1}^n\xi_i^2X_iX_i^T\right)\right]}$$
We apply the concentration result in [3, Exercise 6.9],
$$\mathbb{P}\left(Z-\mathbb{E}Z\ \ge\ 4\sqrt{2}\,\|M\|_F\sqrt{t}+64\,\|M\|\,t\right)\ \le\ e^{-t}. \tag{10}$$
For the finite dictionary aggregation problem, the following lemma shows control of offset Rademacher complexity.
###### Lemma 11.
Assume is a finite class of cardinality . Define which contains the Star estimator defined in Equation (2). The offset Rademacher complexity for is bounded as
$$\mathbb{E}_\epsilon\sup_{h\in\mathcal H}\left\{\frac{1}{n}\sum_{i=1}^n\big(2\epsilon_i\xi_i h(X_i)-C h(X_i)^2\big)\right\}\ \le\ \tilde C\cdot\frac{\log(N\vee n)}{n}$$
and
$$\mathbb{P}_\epsilon\left(\sup_{h\in\mathcal H}\left\{\frac{1}{n}\sum_{i=1}^n\big(2\epsilon_i\xi_i h(X_i)-C h(X_i)^2\big)\right\}\ \ge\ \tilde C\cdot\frac{\log(N\vee n)+\log\frac{1}{\delta}}{n}\right)\ \le\ \delta,$$
where $\tilde C$ is a constant that depends on $C$ and on the quantity
$$M:=\sup_{h\in\mathcal H\setminus\{0\}}\frac{\sum_{i=1}^n h(X_i)^2\xi_i^2}{2C\sum_{i=1}^n h(X_i)^2}.$$
We observe that the bound of Lemma 11 is worse than the optimal bound of [1] by an additive term. This is due to the fact that the analysis for the finite case passes through the offset Rademacher complexity of the star hull, and for this case the star hull is richer than the finite class. For this case, a direct analysis of the Star estimator is provided in [1].
While the offset complexity of the star hull is crude for the finite case, the offset Rademacher complexity does capture the correct rates for regression with larger classes, initially derived in [18]. We briefly mention the result. The proof is identical to the one in [17], with the only difference being that the offset Rademacher complexity is defined in that paper as a sequential complexity in the context of online learning.
###### Corollary 12.
Consider the problem of nonparametric regression, as quantified by the growth
$$\log N_2(\mathcal F,\epsilon)\ \le\ \epsilon^{-p}.$$
In the regime $p\in(0,2)$, the upper bound of Lemma 7 scales as $n^{-\frac{2}{2+p}}$. In the regime $p\ge 2$, the bound scales as $n^{-\frac{1}{p}}$, with an extra logarithmic factor at $p=2$.
For the parametric case of , one may also readily estimate the offset complexity. Results for VC classes, sparse combinations of dictionary elements, and other parametric cases follow easily by plugging in the estimate for the covering number or directly upper bounding the offset complexity (see [18, 17]).
## 7 Lower bound on Minimax Regret via Offset Rademacher Complexity
We conclude this paper with a lower bound on minimax regret in terms of offset Rademacher complexity.
###### Theorem 13 (Minimax Lower Bound on Regret).
Define the offset Rademacher complexity over as
$$\mathcal R_o(n,\mathcal F)=\sup_{\{x_i\}_{i=1}^n\in\mathcal X^{\otimes n}}\mathbb{E}_\epsilon\sup_{f\in\mathcal F}\left\{\frac{1}{n}\sum_{i=1}^n\big(2\epsilon_i f(x_i)-f(x_i)^2\big)\right\}$$
then the following minimax lower bound on regret holds:
$$\inf_{\hat g\in\mathcal G}\ \sup_{P}\left\{\mathbb{E}(\hat g-Y)^2-\inf_{f\in\mathcal F}\mathbb{E}(f-Y)^2\right\}\ \ge\ \mathcal R_o\big((1+c)n,\mathcal F\big)-\frac{c}{1+c}\,\mathcal R_o\big(cn,\mathcal G\big),$$
for any .
For the purposes of matching the performance of the Star procedure, we can take .
## Appendix A Proofs
###### Proof of Theorem 3.
Since is in the star hull around , must lie in the set . Hence, in view of (4), excess loss is upper bounded by
$$\begin{aligned} &\sup_{f\in\mathcal H}\left\{(\hat{\mathbb{E}}-\mathbb{E})\left[2(f^*-Y)(f^*-f)\right]+\mathbb{E}(f^*-f)^2-(1+c)\cdot\hat{\mathbb{E}}(f^*-f)^2\right\} &&(11)\\ &\le \sup_{f\in\mathcal H}\Big\{(\hat{\mathbb{E}}-\mathbb{E})\left[2(f^*-Y)(f^*-f)\right]+(1+c/4)\,\mathbb{E}(f^*-f)^2-(1+3c/4)\cdot\hat{\mathbb{E}}(f^*-f)^2 \\ &\qquad\qquad -(c/4)\left(\hat{\mathbb{E}}(f^*-f)^2+\mathbb{E}(f^*-f)^2\right)\Big\} \\ &\le \sup_{f\in\mathcal H}\left\{(\hat{\mathbb{E}}-\mathbb{E})\left[2(f^*-Y)(f^*-f)\right]-(c/4)\left(\hat{\mathbb{E}}(f^*-f)^2+\mathbb{E}(f^*-f)^2\right)\right\} &&(12)\\ &\quad+\sup_{f\in\mathcal H}\left\{(1+c/4)\,\mathbb{E}(f^*-f)^2-(1+3c/4)\cdot\hat{\mathbb{E}}(f^*-f)^2\right\} &&(13) \end{aligned}$$
We invoke the supporting Lemma 14 (stated and proved below) for the term (13):
$$(14)\qquad \le\ \frac{K(2+c)}{2}\cdot\mathbb{E}\sup_{f\in\mathcal H}\frac{1}{n}\Big\{\sum_{i=1}^n 2\epsilon_i\big(f(X_i)-f^*(X_i)\big)-\frac{c}{4}
# Changing Reaction Rates with Temperature
The vast majority of reactions depend on thermal activation, so the major factor to consider is the fraction of the molecules that possess enough kinetic energy to react at a given temperature. According to kinetic molecular theory, a population of molecules at a given temperature is distributed over a variety of kinetic energies, as described by the Maxwell-Boltzmann distribution law.
The two distribution plots shown here are for a lower temperature $T_1$ and a higher temperature $T_2$. The area under each curve represents the total number of molecules whose energies fall within a particular range. The shaded regions indicate the number of molecules which are sufficiently energetic to meet the requirements dictated by the two values of $E_a$ that are shown.
It is clear from these plots that the fraction of molecules whose kinetic energy exceeds the activation energy increases quite rapidly as the temperature is raised. This is the reason that virtually all chemical reactions (and all elementary reactions) proceed more rapidly at higher temperatures.
Temperature is a major factor that affects the rate of a chemical reaction, because it supplies the energy needed for the reaction to occur. Svante Arrhenius, a Swedish chemist, proposed that the reactants in a chemical reaction must gain a certain minimum amount of energy before they can become products; he called this threshold the activation energy. Only collisions that deliver at least this much energy lead to reaction. Arrhenius derived an equation showing how the rate constants of different chemical reactions vary with temperature: as the temperature of a reaction increases, its rate constant generally also increases. The result is given below:
$\ln \frac{k_2}{k_1} = \frac{E_a}{R}\left(\frac{1}{T_1} - \frac{1}{T_2}\right)$
This equation is known as Arrhenius' equation. $T_1$ and $T_2$ are temperatures expressed in kelvin: $T_1$ is the initial or lower temperature of the reaction, while $T_2$ is the final or higher temperature. The rate constants $k_1$ and $k_2$ are the values at $T_1$ and $T_2$, respectively. $E_a$ is the activation energy, expressed in joules per mole (J/mol). $R$ is the gas constant, 8.3145 J/(mol·K).
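As a quick illustration (with hypothetical values, not taken from the problems below): if a rate constant doubles when the temperature is raised from 300 K to 310 K, solving the equation above for $E_a$ gives

$$E_a = \frac{R\,\ln\frac{k_2}{k_1}}{\frac{1}{T_1}-\frac{1}{T_2}} = \frac{(8.3145\ \mathrm{J\,mol^{-1}\,K^{-1}})\,(\ln 2)}{\frac{1}{300\ \mathrm{K}}-\frac{1}{310\ \mathrm{K}}} \approx 5.4\times 10^{4}\ \mathrm{J/mol} \approx 54\ \mathrm{kJ/mol}.$$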
Some may ask how temperature actually affects the chemical reaction rate. The answer lies in collision theory: molecules only react if they collide with sufficient energy for a reaction to take place. When the temperature of a solution increases, the molecules move faster and collide more energetically, causing the reaction to proceed faster.
The graph of $\ln k$ vs. $1/T$ is linear, allowing the calculation of the activation energy needed for the reaction. An alternate form of the Arrhenius equation is given below:
$k = A e^{-\frac{E_a}{RT}}$
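For reference, taking the natural logarithm of both sides of this form gives

$$\ln k = \ln A - \frac{E_a}{R}\cdot\frac{1}{T},$$

which is why the plot of $\ln k$ against $1/T$ mentioned above is a straight line with slope $-E_a/R$ and intercept $\ln A$.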
Some interesting examples:
1. Salt or food coloring is added to cold water, room-temperature water, and hot water. When the substance mixes with the hot water, the high temperature allows it to become a homogeneous mixture more quickly. This is because water molecules move faster when the temperature is higher, speeding up the dissolution.
2. Another form of energy is light. One example of the effect of temperature on chemical reaction rates is the use of lightsticks or glowsticks. The lightstick undergoes a chemical reaction that is called chemiluminescence; but this reaction does not require or produce heat. Its rate, however, is influenced by temperature. If the lightstick is put in a cold environment, the chemical reaction slows down, allowing it to give off light longer. If the lightstick is in a hot environment, the reaction speeds up causing the light to wear out faster. (This example is from "How Things Work—Lightsticks" from chemistry.about.com)
### References
1. Petrucci, Harwood, Madura, Herring. General Chemistry: Principles & Modern Applications, 9th edition., Sec.14-9: The Effect of Temperature on Reaction Rates, p. 594.
2. Frost, Arthur A., Kinetics and Mechanism: A Study of Homogeneous Chemical Reactions., Ch.2: Effect of Temperature on Reaction Rate, pgs. 23-24.
### Problems
1. True or false- When the temperature increases in a chemical reaction, the rate also increases.
2. True or false- Temperatures can be both positive and negative.
3. A chemical reaction has a rate constant of $5.10\times 10^{-9}\ \mathrm{h^{-1}}$ at 225 K and $6.36\times 10^{-3}\ \mathrm{h^{-1}}$ at 400 K. What is the activation energy, in kJ/mol, for this reaction?
### Contributors
• Andrea B.
Stephen Lower, Professor Emeritus (Simon Fraser U.) Chem1 Virtual Textbook |
# Why is kg the standard unit for mass and not g in SI?
Why is $\mathrm{kg}$ the standard unit for mass and not $\mathrm{g}$?
I know that there is the kilogramme des Archives which is a kilogram and not a gram. But originally on April 7, 1795 the gram was defined as
The absolute weight of a volume of pure water equal to the cube of the hundredth part of the metre, and at the temperature of melting ice.
What is the reason that they switched to the $\mathrm{kg}$ when using the kilogramme des Archives? Perhaps it was easier to make and less sensitive to mistakes? Are there other reasons?
To clarify why I think this is weird: The other six standards, namely metre, second, ampere, Kelvin, mole and candela, don't have an SI prefix when used as standard unit.
• It depends on what you mean by 'standard.' Different kinds of units are used depending on the context, especially in physics. In the end a kilo is just a prefix to the unit, so the elementary unit is the gram. On the other hand, the widespread use of $\text{kg}$ as a unit in our daily life is probably due to the fact that the order of magnitude of most of the things we deal with (including our own weight) is in the kilogram order. – hjhjhj57 Jul 20 '15 at 20:15
• Because using grams makes most commonly occurring masses into large (and long) numbers, which is inconvenient. – Conifold Jul 20 '15 at 21:35
• Some people use cgs (centimeter-gram-second) units and others use SI (meter-kilogram-second), so it's not really true that the kg is the basic unit for mass. The physical artifact used as a standard is presumably a kilogram rather than a gram for reasons of convenience and precision, e.g., corrosion or dust would be more significant on a smaller object. – Ben Crowell Jul 21 '15 at 2:52
• Depending on the field you're in, you can use cgs, mks, or any other system of units you choose. In Astrophysics, for example, cgs is more common. In particle physics, one uses neither this nor that, but rather a "natural" set of units. – Omry Jul 21 '15 at 8:25
• @Conifold That wouldn't explain though why it is the SI standard. In everyday life people also talk more about hours (on the order of 1 ks) and weeks (on the order of 1 Ms) than about seconds. – wythagoras Jul 21 '15 at 19:27
Why is kg the standard unit for mass and not g?
Tongue in cheek answer: Because a foolish consistency is the hobgoblin of little minds?
More seriously, none of the immediate predecessors of the SI bothered to have all of their base units be consistent with the prefix-free units. Gauss proposed a millimeter-gram-second system in the 1830s. Maxwell and Thomson modified this to a centimeter-gram-second system in the 1860s. There was a lot of infighting over the electromagnetic units in those CGS systems. Giorgi proposed yet another system in 1901, the meter-kilogram-second-ampere system. This system is the immediate predecessor to the current International System.
What is the reason that they switched to the kg when using the kilogramme des Archives? Perhaps it was easier to make and less sensitive to mistakes? Are there other reasons?
You have it backwards. The original concept of mass by the French revolutionaries working on the metric system was the mass of a liter of water. This unit of mass was to be called the grave. French scientists worked on making this realizable (the mass of a volume of water turned out not to form a good basis). The Republican government that followed the French Revolution thought this grave was too big for practical uses, so they invented the gramme as the mass of a milliliter of water. The work on the grave prototype continued, only now this would be called the kilogram prototype.
• Why grave ? ? ? – Pacerier Jul 30 '17 at 14:09
• @Pacerier -- Because the inventors were French, not English. The French grave comes from the Latin gravis, which means "heavy". (Note: Thanks to 1066, this is one of the two very distinct meanings of the English grave. The other meaning comes from the old English grafan, "to dig".) – David Hammen Jul 30 '17 at 14:25
• I still wonder why didn't they simply rename kg to something else that doesn't have a prefix so all fundamental units would have been unprefixed. – Calmarius Sep 25 '20 at 16:29
The kilogram is the base unit of mass because electrical engineers in the late 19th century chose a particular set of practical electrical units. Their practical units were a success, and we are still using them today: ohm, volt, and ampere. In 1881 the first International Electrical Congress defined two sets of units: a set of theoretical units, and a set of practical units. The theoretical electrical units, abampere, abvolt, abohm, were coherent with the mechanical units cm, g, s. Coherence in this case primarily means that electrical energy and mechanical energy have identical units: $V\cdot I\cdot t = F \cdot L$. Unfortunately, the abvolt and abohm were inconveniently small. On the other hand, the practical electrical units, ampere, volt, and ohm, were not coherent with cm, g, s, nor with m, g, s. However, by coincidence they were coherent with m, kg, s. That is why the kilogram was chosen as the base unit of mass in the SI system, in 1960.
• So the engineers won again. – Pacerier Jul 30 '17 at 14:09
• @Pacerier As it should be :-). – Russell McMahon Jan 3 '20 at 3:39
• This is correct, although it leaves out a subtle point. I added an answer as a sort of addendum to this answer. The subtle point is that the non-mechanical units like the volt and the ampere are not coherent in the three-dimensional MKS system—only in the QES system are all practical units coherent. Only the purely mechanical practical units such as the watt and the joule are coherent in the three dimensional MKS, and that truly is a lucky accident. It is what makes it possible to include the practical units in a coherent four-dimensional MKSA system (whereas e.g. 'CGSA' would not work). – linguisticturn Jun 11 '20 at 12:30
The overall presentation is largely borrowed from here, but the actual facts come mostly from these sources: here, here and here.
Introduction
One has to be careful when saying that the practical units (the volt, the ampere, etc.) were 'coherent with' the meter-kilogram-second (MKS) system.
If by 'MKS' we mean a three-dimensional mass-length-time system, then the volt, the ampere, and the ohm were most definitely not coherent units in it. (However—and this will be the key—the product volt × ampere, which was named the watt, is a purely mechanical unit—of power—which was coherent in MKS. It is that fact—that the watt is coherent in MKS—that truly was a lucky accident.)
On the other hand, if by 'MKS' we really mean a four-dimensional, MKSX system, where 'X' is the unit of some non-mechanical electric quantity,1 then it is incorrect to say that it is an accident that the volt, the ampere, etc. were coherent units in such a system—of course they were, since one of them was chosen to be a base unit!
1Serious consideration was given to proposals where X was either the coulomb, or the ampere, or the ohm, or the volt. Eventually, metrological considerations turned out to favor the ampere.
The key here are the units that straddle both the electric and the mechanical domains—in particular, the watt. It is those units that would have made it impossible to extend the CGS system by e.g. adding the ampere to it as a fourth independent base unit: the watt is not equal to the erg per second, and so would not be a coherent derived unit in such a system. However, the watt does turn out to be $\mathrm{kg\cdot m^2/s^3}$, and so the practical electric units can be integrated with the MKS system by adding a fourth independent base unit.
Discussion
At the time the 'practical' electric units were adopted (1873–1893), everyone as a matter of course assumed that a scientific system of units should be absolute, meaning that the base dimensions should be just the three mechanical ones: length, mass, and time. The abvolt, for example, is $\mathrm{g^{1/2}\,cm^{3/2}/s^2}$ when expressed in the base cgs units (see here). Now, there is indeed an absolute (i.e., a three-dimensional, length-mass-time) system in which the practical units are coherent, but it is not the meter-kilogram-second system. It is, rather, a system in which the base unit of length is $10^7$ meters (called a quadrant, as it is very nearly one half of a meridian of the Earth), and the base unit of mass is $10^{-11}$ grams (an eleventh-gram): the quadrant-eleventh-gram-second (QES) system.
This can be derived from the following facts. The practical units were defined in 1873 as decimal multiples and submultiples of the 'electromagnetic' absolute cgs units, cgs-emu. We will somewhat anachronistically use the following names for the emu units: the 'abvolt' for the potential, the 'abampere' for the current, etc.1 When expressed in the base cgs units, the abvolt is $\mathrm{g^{1/2}\,cm^{3/2}/s^2}$, the abampere is $\mathrm{g^{1/2}\,cm^{1/2}/s}$, and the abcoulomb is $\mathrm{g^{1/2}\,cm^{1/2}}$ (see here, here, and here). On the other hand, the volt was defined as $10^8$ abvolts, the ampere as 0.1 abamperes, and the coulomb as 0.1 abcoulombs (see the same three links). Now imagine we change the base units of length, mass, and time by factors of $M$, $L$, and $T$, respectively. Then the base unit of potential will become $(M\,\mathrm{g})^{1/2}(L\,\mathrm{cm})^{3/2}/(T\,\mathrm{s})^{2} = M^{1/2}L^{3/2}/T^{2}\times \mathrm{g^{1/2}\,cm^{3/2}/s^{2}} = M^{1/2}L^{3/2}/T^{2}$ abvolts. We want this new unit to be the volt, so we must have $M^{1/2}L^{3/2}/T^{2} = 10^{8}$. Similarly, if we want the new unit for current to be the ampere, we obtain that $M^{1/2}L^{1/2}/T = 0.1$, and if we want the new unit of charge to be the coulomb, we obtain that $M^{1/2}L^{1/2} = 0.1$. We thus have a system of three equations with three unknowns. The solution is $L = 10^{9}$ (so the base unit of length should be $10^{9}$ cm $= 10^{7}$ m), $M = 10^{-11}$ (so the base unit of mass should be $10^{-11}$ g), and $T = 1$ (so the second remains the base unit of time).
1This naming convention, where the name of the emu unit is formed by adding a prefix 'ab-' (short for 'absolute') to the name of the corresponding practical unit, came only in 1903, three decades after the practical units were originally defined in terms of the absolute emu units. At that earlier time, the absolute cgs electric units themselves didn't have any special names. One just used 'e.m.u.' or 'C.G.S', as in 'a current of 5 e.m.u.' or '5 C.G.S. units of current' (or perhaps one could also use the base units, e.g. a current of 5 $\mathrm{g^{1/2}\,cm^{1/2}/s}$). However, for convenience, we will use 'abvolt', 'abampere', etc. in what follows.
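As a quick arithmetic check of the solution quoted above (with $M = 10^{-11}$, $L = 10^{9}$, $T = 1$):

$$M^{1/2}L^{3/2}/T^{2}=10^{-11/2}\cdot 10^{27/2}=10^{8},\qquad M^{1/2}L^{1/2}/T=10^{-11/2}\cdot 10^{9/2}=10^{-1},\qquad M^{1/2}L^{1/2}=10^{-1},$$

which reproduces the defining ratios of the volt, the ampere, and the coulomb, respectively.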
The way the meter-kilogram-second system enters the story is this. In addition to the purely electric and magnetic units such as the ohm, the volt, the ampere, etc., the practical system of units also had to include some purely mechanical units. This is because of relations such as voltage × current = power. In particular, the volt times the ampere gives a unit of power, which was in 1882 given a special name: the watt. Then the watt times the second gives a unit of energy, which was named the joule. Of course, these purely mechanical practical units were coherent in the QES system. However, they are in fact coherent in a whole family of systems. To see why that is so, recall that the dimensions of power are $ML^{2}/T^{3}$. It follows that if the watt is coherent in a system, it will also be coherent in any system obtained from the original system by simultaneously changing the base unit of length by a factor of $L$ and the base unit of mass by a factor of $M$ in such a way that $ML^{2}=1$, i.e. in such a way that $M=L^{-2}$. We are told that the watt is coherent in the QES system; thus, it will also be coherent in any system in which the base unit of length is $L\times 10^{7}$ meters while the base unit of mass is $L^{-2}\times 10^{-11}$ grams. Picking $L = 10^{-7}$ gives the meter and the kilogram. Moreover, it is easy to check that, if we insist that the new base units should be decimal multiples or submultiples of the meter and the gram, then the choice $L = 10^{-7}$ is the only choice that produces base units of practical sizes. For example, if we pick $L = 10^{-8}$, so the base unit of length is the decimeter, then the base unit of mass becomes $10^{16}\times 10^{-11}$ grams $= 10^{5}$ grams $= 100$ kg, which is impractically large.
Probably many people noticed that the watt is coherent in the meter-kilogram-second system, but it was Giovanni Giorgi who really took note of it. He had the further insight—which was sort of iconoclastic at the time—that while the purely electric and magnetic units cannot be made coherent in the three-dimensional meter-kilogram-second system, they could be made coherent in a four-dimensional extension of that system. Thus he proposed, in 1901, to introduce a fourth base dimension, which would be purely electric or magnetic. In principle, this fourth independent dimension could be any electromagnetic quantity, but only four received serious consideration: electric charge, electric current, electric resistance, and electric potential. Eventually, electric current was chosen because it was most advantageous metrologically. Another selling point of Giorgi's system was that it made it possible to rationalize (i.e. remove the awkward factors of $4\pi$ from) Maxwell's equations without a corresponding redefinition of units by factors of $(4\pi)^{1/2}$ (which is what happens when the Gaussian system is rationalized, giving the Lorentz-Heaviside system).
The Giorgi proposal (with the ampere as the fourth base unit) was adopted by the International Electrotechnical Commission in 1935 and by the CGPM in 1946; the CGPM later incorporated it into the SI system.
Summary
The fact that the kilogram rather than the gram is the base unit of mass in the SI is all the more remarkable given that, for about a century, the scientific community had been almost universally using the centimeter-gram-second system. Let me summarize the main reason why the CGS was abandoned and the meter-kilogram-second (MKS) was adopted. The main background facts to be aware of are that (a) by the end of the 19th century, the so-called 'practical system' of electric units had become nearly universally accepted in practical applications of electricity such as telegraphy, and (b) this system of units included the volt and the ampere, and therefore also their product; but this product is a purely mechanical unit (of power), and if one multiplies that by the second, one gets another purely mechanical unit (of energy). In 1882, these two units were named, respectively, the watt and the joule. Now: the MKS is the unique system which has all three of the following characteristics (and which keeps the second as a unit of time): 1. the watt and the joule are coherent, 2. the base units of length and mass are decimal multiples of the meter and the gram (so that the system is 'properly metric'), and 3. the sizes of the base units of length and mass are convenient (more or less) for practical work. All this assumes that the second remains the base unit of time; but it is definitely true that any proposal to replace the second would have been dismissed out of hand. The non-mechanical units such as the volt, the ampere, etc. are not coherent in a three-dimensional MKS system, which is why a fourth independent dimension was added: the ampere became a new base unit, dimensionally independent from the meter, the kilogram, and the second.
• What I find odd is why they didn't go for the metric tonne. It makes way more sense together with the meter (a cubic meter of water is a metric tonne). So both units are used in the same fields (construction, bulk production, ...). Just as we use "kilo" now as a shorthand for kilogram, we could use "milli" as a shorthand for millitonne (the same amount). And it sounds nicer to weigh 90 millis than 90 kilos :D – sanderd17 Jul 17 '20 at 6:43
• @sanderd17 What you are referring to is called an MTS system. It was technically the only legal system in France between 1919 and 1961 (although it wasn't actually used much), and it was also official in the Soviet Union from 1933 to 1955. It had various named derived units, such as the sthene for force, the pieze for pressure, and thermie for heat energy. – linguisticturn Jul 17 '20 at 16:51
• @sanderd17 A key principle that the SI was supposed to follow was that the practical electric units (the volt, the ampere, the watt) should be coherent in it. And the watt simply isn't coherent in the MTS system. As I explained above, in order for the watt to be coherent, the base units must be L meters and 1/L^2 kg. If you want the base unit of length to be a decimal multiple or submultiple of the meter, your choices are (1 m, 1 kg), (1 dm, 100 kg), (1 cm, 10 000 kg), (1 mm, 10^6 kg), ... If you really want the metric ton, your base unit of length must be 1/(1000)^(1/2) = 3.1623… cm. – linguisticturn Jul 17 '20 at 16:52
• @sanderd17 Why didn't the MTS see more use in e.g. France? Well, note that the base unit of mass/weight is often on the order of a kilogram (both 0.5 kg and 4 kg are 'on the order' of 1 kg): the pound (~450 g); the Chinese catty (500 g-600 g); the Japanese kan (~3.75 kg); the Indian ser (~640 g). We may conclude that units in the 1 kg range are the most convenient for most kinds of everyday use. – linguisticturn Jul 17 '20 at 16:52
• @sanderd17 Much smaller and much bigger units do have their uses, but these are usually more specialized, so systems that are based on such units don't see wide adoption. The CGS system, for example, was widely used by scientists (and some fields still use it almost exclusively, e.g. astronomy), but even engineers didn't use it that much, not to speak of the public at large. And even scientists shied away from adopting Gauss's milligram-based system; using milligrams made sense for Gauss (who was mostly interested in masses of magnetic compass needles), but not for most other people. – linguisticturn Jul 17 '20 at 16:52 |
# How to set exact radius for a node?
I have a bunch of nodes and would like to size them so that their areas represent some numbers I have (so that if I have two nodes with corresponding values 1 and 2, then the second node's area should be double the first's). I'm trying to achieve this using some combination of minimum size and inner sep, but I've noticed that if I set minimum size to 0pt, then a node with inner sep = 2pt will not be double the area of a node with inner sep = 1pt.
Do they have to be nodes? Whilst nodes are often convenient, it is possible to draw shapes in TikZ without them and it is easier to have direct control if done this way. – Loop Space Mar 19 '12 at 8:31
For nodes you can set inner sep=0pt and then use minimum size (if the node text is empty or shorter than the declared size) to control the area:
\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[every node/.style={draw=blue,thick,circle,inner sep=0pt}]
\draw[help lines] (-3,-3) grid (3,3);
\node[minimum size=2cm] at (0,0) {};
\node[minimum size=2.828cm] at (0,0) {};
\node[minimum size=4cm] at (0,0) {};
\end{tikzpicture}
\end{document}
If you want to keep the size fixed independently of the node text, you can set text width (and perhaps also text height):
\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[every node/.style={draw=blue,thick,circle,inner sep=0pt}]
\draw[help lines] (-3,-3) grid (3,3);
\node[text width=2cm] at (0,0) {};
\node[text width=2.828cm] at (0,0) {};
\node[text width=4cm] at (0,0) {};
\end{tikzpicture}
\end{document}
As noted by Andrew Stacey, you could use shapes instead of nodes and this gives you the possibility to easily control the shape attributes; here are the same three circles using the circle operation:
\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[every node/.style={draw=blue,thick,circle,inner sep=0pt}]
\draw[help lines] (-3,-3) grid (3,3);
\draw[blue,thick] (0,0) circle (1cm);
\draw[blue,thick] (0,0) circle (1.414cm);
\draw[blue,thick] (0,0) circle (2cm);
\end{tikzpicture}
\end{document}
I think the following should work:
\documentclass[parskip]{scrartcl}
\usepackage[margin=15mm]{geometry}
\usepackage{tikz}
\begin{document}
\pgfmathsetmacro{\nodebasesize}{1} % A node with a value of one will have this diameter
\pgfmathsetmacro{\nodeinnersep}{0.1}
\newcommand{\propnode}[5]{% position, name, options, value, label
\pgfmathsetmacro{\minimalwidth}{sqrt(#4*\nodebasesize)}
\node[#3,minimum width=\minimalwidth*1cm,inner sep=\nodeinnersep*1cm,circle,draw] (#2) at (#1) {#5};
}
\begin{tikzpicture}
\draw[<->] (2,-0.5) -- node[right] {$r=\sqrt{1} \Rightarrow A=\pi(\sqrt{1})^2=\pi$} (2,0.5);
\draw[gray] (2,-0.5) -- (0,-0.5);
\draw[gray] (2,0.5) -- (0,0.5);
\propnode{0,0}{n1}{fill=red,text=blue}{1}{1}
\draw[<->] (2,2-0.707) -- node[right] {$r=\sqrt{2} \Rightarrow A=\pi(\sqrt{2})^2=2\pi$} (2,2+0.707);
\draw[gray] (2,2-0.707) -- (0,2-0.707);
\draw[gray] (2,2+0.707) -- (0,2+0.707);
\propnode{0,2}{n2}{fill=green,text=black}{2}{2}
\draw[<->] (2,4-0.866) -- node[right] {$r=\sqrt{3} \Rightarrow A=\pi(\sqrt{3})^2=3\pi$} (2,4+0.866);
\draw[gray] (2,4-0.866) -- (0,4-0.866);
\draw[gray] (2,4+0.866) -- (0,4+0.866);
\propnode{0,4}{n3}{fill=yellow,text=violet}{3}{3}
\draw[<->] (11,1) -- node[right] {$r=\sqrt{9} \Rightarrow A=\pi(\sqrt{9})^2=9\pi$} (11,4);
\draw[gray] (11,1) -- (9,1);
\draw[gray] (11,4) -- (9,4);
\propnode{9,2.5}{n9}{fill=black,text=white}{9}{9}
\draw[<->] (11,-0.354) -- node[right] {$r=\sqrt{0.5} \Rightarrow A=\pi(\sqrt{0.5})^2=0.5\pi$} (11,0.354);
\draw[gray] (11,-0.354) -- (9,-0.354);
\draw[gray] (11,0.354) -- (9,0.354);
\propnode{9,0}{n05}{fill=pink,text=black}{0.5}{0.5}
\draw[ultra thick,red] (8.5,-0.5) -- (11.5,-0.5);
\end{tikzpicture}\\[2cm]
\pgfmathsetmacro{\nodebasesize}{1} % A node with a value of one will have this diameter
\pgfmathsetmacro{\nodeinnersep}{0.0}
\begin{tikzpicture}
\draw[<->] (2,-0.5) -- node[right] {$r=\sqrt{1} \Rightarrow A=\pi(\sqrt{1})^2=\pi$} (2,0.5);
\draw[gray] (2,-0.5) -- (0,-0.5);
\draw[gray] (2,0.5) -- (0,0.5);
\propnode{0,0}{n1}{fill=red,text=blue}{1}{1}
\draw[<->] (2,2-0.707) -- node[right] {$r=\sqrt{2} \Rightarrow A=\pi(\sqrt{2})^2=2\pi$} (2,2+0.707);
\draw[gray] (2,2-0.707) -- (0,2-0.707);
\draw[gray] (2,2+0.707) -- (0,2+0.707);
\propnode{0,2}{n2}{fill=green,text=black}{2}{2}
\draw[<->] (2,4-0.866) -- node[right] {$r=\sqrt{3} \Rightarrow A=\pi(\sqrt{3})^2=3\pi$} (2,4+0.866);
\draw[gray] (2,4-0.866) -- (0,4-0.866);
\draw[gray] (2,4+0.866) -- (0,4+0.866);
\propnode{0,4}{n3}{fill=yellow,text=violet}{3}{3}
\draw[<->] (11,1) -- node[right] {$r=\sqrt{9} \Rightarrow A=\pi(\sqrt{9})^2=9\pi$} (11,4);
\draw[gray] (11,1) -- (9,1);
\draw[gray] (11,4) -- (9,4);
\propnode{9,2.5}{n9}{fill=black,text=white}{9}{9}
\draw[<->] (11,-0.354) -- node[right] {$r=\sqrt{0.5} \Rightarrow A=\pi(\sqrt{0.5})^2=0.5\pi$} (11,0.354);
\draw[gray] (11,-0.354) -- (9,-0.354);
\draw[gray] (11,0.354) -- (9,0.354);
\propnode{9,0}{n05}{fill=pink,text=black}{0.5}{0.5}
\end{tikzpicture}\\[2cm]
\pgfmathsetmacro{\nodebasesize}{1.5} % A node with a value of one will have this diameter
\pgfmathsetmacro{\nodeinnersep}{0.0}
\begin{tikzpicture}
\propnode{0,0}{n1}{fill=red,text=blue}{1}{1}
\propnode{0,2}{n2}{fill=green,text=black}{2}{2}
\propnode{0,4}{n3}{fill=yellow,text=violet}{3}{3}
\propnode{9,2.5}{n9}{fill=black,text=white}{9}{9}
\propnode{9,0}{n05}{fill=pink,text=black}{0.5}{0.5}
\end{tikzpicture}
\end{document}
Example 1: If your node diameter becomes too small, your nodes will get too big (see red underline)
Example 2: If that occurs, you might decrease the inner sep:
Example 3: Here this helps, but in case it is still not enough, you may increase the base size of the nodes:
Edit 1: I added the option to draw control lines automatically as well as influence the node's font size, so you should be able to choose fitting settings easily:
\documentclass[parskip]{scrartcl}
\usepackage[margin=15mm]{geometry}
\usepackage{tikz}
\usetikzlibrary{calc}
\usepackage{xifthen}
\begin{document}
\pgfmathsetmacro{\nodebasesize}{1} % A node with a value of one will have this diameter
\pgfmathsetmacro{\nodeinnersep}{0.1}
\newcommand{\propnode}[7]{% position, name, options, value, label, show control lines (s for show), font size
\pgfmathsetmacro{\minimalwidth}{sqrt(#4*\nodebasesize)}
\node[#3,minimum width=\minimalwidth*1cm,inner sep=\nodeinnersep*1cm,circle,draw] (#2) at (#1) {#7 #5};
\ifthenelse{\equal{#6}{s}}
{ \draw[gray] ($(#1)+(0,\minimalwidth/2)$) -- ($(#1)+(\minimalwidth/2+1,\minimalwidth/2)$);
\draw[gray] ($(#1)+(0,-\minimalwidth/2)$) -- ($(#1)+(\minimalwidth/2+1,-\minimalwidth/2)$);
\draw[very thick,<->] ($(#1)+(\minimalwidth/2+1,\minimalwidth/2)$) -- ($(#1)+(\minimalwidth/2+1,-\minimalwidth/2)$);
}
{}
}
\begin{tikzpicture}
\propnode{0,0}{n1}{fill=red,text=blue}{1}{1}{s}{}
\propnode{0,2}{n2}{fill=green,text=black}{2}{2}{s}{}
\propnode{0,4}{n3}{fill=yellow,text=violet}{3}{3}{s}{}
\propnode{9,2.5}{n9}{fill=black,text=white}{9}{9}{s}{}
\propnode{9,0}{n05}{fill=pink,text=black}{0.5}{0.5}{s}{}
\end{tikzpicture}\\[2cm]
\pgfmathsetmacro{\nodebasesize}{0.5}
\pgfmathsetmacro{\nodeinnersep}{0.2}
\begin{tikzpicture}
\propnode{0,0}{n1}{fill=red,text=blue}{1}{1}{s}{\tiny}
\propnode{0,2}{n2}{fill=green,text=black}{2}{2}{s}{\tiny}
\propnode{0,4}{n3}{fill=yellow,text=violet}{3}{3}{s}{\tiny}
\propnode{9,2.5}{n9}{fill=black,text=white}{9}{9}{s}{\tiny}
\propnode{9,0}{n05}{fill=pink,text=black}{0.5}{0.5}{s}{\tiny}
\end{tikzpicture}\\[2cm]
\pgfmathsetmacro{\nodebasesize}{0.2}
\pgfmathsetmacro{\nodeinnersep}{0}
\begin{tikzpicture}
\propnode{0,0}{n1}{fill=red,text=blue}{1}{1}{s}{\scriptsize}
\propnode{0,2}{n2}{fill=green,text=black}{2}{2}{s}{\scriptsize}
\propnode{0,4}{n3}{fill=yellow,text=violet}{3}{3}{s}{\scriptsize}
\propnode{9,2.5}{n9}{fill=black,text=white}{9}{9}{s}{\scriptsize}
\propnode{9,0}{n05}{fill=pink,text=black}{0.5}{0.5}{s}{\scriptsize}
\end{tikzpicture}
\end{document}
You need to consider the width of the line ! \propnode{0,0}{n1}{fill=red,text=blue,draw=red,line width=8mm}{1}{1} – Alain Matthes Apr 27 '12 at 19:42
In principle yes, but as a decision: closed, won't fix. Feel free to improve it though ;) – Tom Bombadil Apr 27 '12 at 20:33
Update
If the areas of the circle nodes are to represent some numbers proportionally, then you need to know the radius exactly. The radius depends on minimum width and on \pgflinewidth.
We have: radius = (minimum width + line width) / 2 if inner sep = 0pt.
In the next example, I first choose minimum width=2cm, then minimum width=2cm,line width=5mm, and finally line width=5mm,minimum width=2cm-\pgflinewidth, in all cases with inner sep=0pt.
\documentclass{scrartcl}
\usepackage{tikz}
\usetikzlibrary{calc}
\begin{document}
\begin{tikzpicture}
\draw[help lines,step=0.1,,draw=orange] (0,0) grid (8,1);
\draw[help lines] (0,0) grid (8,1);
\node[minimum width=2cm,circle,inner sep=0pt,fill=blue!20,fill opacity=.5]{};
\node[minimum width=2cm,circle,inner sep=0pt,fill=blue!20,fill opacity=.5,
line width=5mm,draw=gray,opacity=.5] at (3,0){};
\node[circle,inner sep=0pt,fill=blue!20,,fill opacity=.5,
line width=5mm,draw=gray,opacity=.5,minimum width=2cm-\pgflinewidth] at (6,0) {};
\end{tikzpicture}
\end{document}
Now, to get three circles with areas equal to pi, 2pi and 3pi, I create a macro \def\lw{2mm} to quickly change the line width in all nodes:
\documentclass{scrartcl}
\usepackage{tikz}
\usetikzlibrary{calc}
\begin{document}
\tikzset{myrad/.style 2 args={circle,inner sep=0pt,minimum width=(2*(sqrt(#1)*1 cm) - \pgflinewidth),fill=#2,draw=#2,fill opacity=.5,opacity=.8}}
\begin{tikzpicture}
\def\lw{2mm}
\draw[help lines,step=0.1,,draw=orange] (0,0) grid (8,1);
\draw[help lines] (0,0) grid (8,1);
\node[line width=\lw, myrad={3}{green!20}] at (7,0) {3};
\end{tikzpicture}
\end{document}
Finally If you want nodes with areas equal to 1 cm^2, 2 cm^2 and 3 cm^2 : I change the line width for the second group of nodes
\documentclass{scrartcl}
\usepackage{tikz}
\usetikzlibrary{calc}
\begin{document}
\tikzset{myrad/.style 2 args={circle,inner sep=0pt,minimum width=(2*(sqrt(#1/3.1415)*1 cm) - \pgflinewidth),fill=#2,draw=#2,fill opacity=.5,opacity=.8}}
\begin{tikzpicture}
\def\lw{2mm}
\draw[help lines,step=0.1,,draw=orange] (0,0) grid (8,1);
\draw[help lines] (0,0) grid (8,1);
\node[line width=\lw, myrad={3}{green!20}] at (7,0) {3};
\end{tikzpicture}
\begin{tikzpicture}
\def\lw{5mm}
\draw[help lines,step=0.1,,draw=orange] (0,0) grid (8,1);
\draw[help lines] (0,0) grid (8,1);
\node[line width=\lw, myrad={3}{green!20}] at (7,0) {3};
\end{tikzpicture}
\end{document}
To avoid this kind of problem, we can use circles instead of circle nodes. But we need to adjust the radius with the \pgflinewidth. In the next example, I want a radius of 2cm, so I need to use radius=2cm-0.5\pgflinewidth. Then I need to create a node with the same dimensions.
Like the question about node and rectangle here, we can associate a node with the shape. The main problem: we can't use scale, but it's easier to place a label.
\documentclass{scrartcl}
\usepackage{tikz}
\usetikzlibrary{calc}
\begin{document}
\tikzset{set node/.style={insert path={%
\pgfextra{%
\node[inner sep=0pt,outer sep = 0pt,draw=black, % draw= none only to show what I do
circle, |
# zbMATH — the first resource for mathematics
The collected works of Arne Beurling. Volume 1: Complex analysis. Volume 2: Harmonic analysis. Ed. by Lennart Carleson, Paul Malliavin, John Neuberger, John Wermer. (English) Zbl 0732.01042
Contemporary Mathematicians. Boston etc.: Birkhäuser Verlag. xx, 475 p./v.1; xx, 389 p./v.2 sFr. 168.00/set; DM 198.00/set (1989).
Arne Karl-August Beurling (1905-1986) was Professor at Uppsala from 1937 till 1954; since 1954, he became a Permanent Member and Professor at the Institute for Advanced Study in Princeton; he was a member of several Academies, and was awarded several Scientific Prizes. From “Arne Beurling in memoriam” by L. Ahlfors and L. Carleson, Acta Math. 161, 1-9 (1988), reprinted in the volumes under review, we quote “[he] was a highly creative mathematician whose legacy will influence future mathematics for many years to come, maybe even for generations.... He published very selectively..., and a sizeable part of his work has never appeared in print.”
“The work of Arne Beurling falls into three main categories: complex analysis, harmonic analysis, and potential theory. In a characteristic way he transformed all of these areas of mathematics and made them interact with each other. This unity and confluence of original ideas and methods make him unique among the analysts of our time”.
Arne Beurling published 46 papers in French and English (including joint papers with L. V. Ahlfors, H. Helson, A. E. Livingstone, J. Deny and P. Malliavin; about a quarter of his papers are published in Acta Mathematica). Among his doctoral students are C.-G. Esseen, L. Carleson, G. Borg, B. Nyman, and Sonja Lyttkens.
The two volumes (“In accordance with Beurling’s wishes, the editors have divided the papers into two parts: complex analysis and harmonic analysis.”) “The collected works of Arne Beurling” contain all of his papers (including the thesis “Études sur un problème de majoration”, Upsal 1933), and, in addition, the Mittag-Leffler Lectures on Complex and Harmonic Analysis (1977-1978), written up by L. Carleson and J. Wermer (hitherto unpublished), Selected Seminars on Complex Analysis, University of Uppsala, 1938-1952, and Selected Seminars on Harmonic Analysis, University of Uppsala, 1938-1952. In the Mittag-Leffler lectures “[Beurling] described the development of his ideas in various fields of analysis”. Beurling himself was not able “to review the unpublished papers as they appear here.”
Moreover the two volumes contain the above-mentioned memorial article by L. Ahlfors and L. Carleson, and a Séminaire Bourbaki lecture “Quotients des fonctions définies-négatives” by J.-P. Kahane, describing unpublished joint work of A. Beurling and J. Deny.
In a short review it does not seem to be possible to give an adequate description of the papers collected in these two volumes. To get an impression, what is dealt with, we again quote from the paper “In memoriam Arne Beurling”.
(The thesis is) “... a whole program for research in function theory in the broadest sense. As such it has been one of the most influential mathematical publications... Beurling’s leading idea was to find new estimates for the harmonic measure by introducing concepts... which are inherently invariant under conformal mapping.” An important concept was the notion of “extremal distance”, “a forerunner of the notion of “extremal length”, which is at the basis of quasiconformal mappings and... Teichmüller theory”. His paper “Ensembles exceptionnels” (1940) “became the origin of numerous studies of exceptional sets and boundary behaviour of holomorphic features... Beurling’s treatment of quasi-analyticity was combined with harmonic analysis and potential theory.” Beurling’s most famous theorem as well as the definition of “inner” and “outer” functions may be found in his paper in Acta Math. 81 (1949); the theorem is given, for example, in W. Rudin “Real and Complex Analysis” in 17.21 as “Beurling’s theorem”.
“Beurling’s first paper in harmonic analysis is” his extension of Wiener’s proof of the prime number theorem to “generalized integers”. This paper [mentioned also in Rudin’s “Functional Analysis”] is the first one in a long series of papers on “generalized integers”, see for example J. Knopfmacher’s “Abstract Analytic Number Theory”. Furthermore he proved and emphasized the spectral radius formula, and in a highly original manner, he dealt with the problem of approximating bounded functions $$\phi$$ by linear combinations of exponentials from the spectrum of $$\phi$$. His papers [3] and [6] are referred to, explicitly for example in L. Loomis’ “Introduction to Abstract Harmonic Analysis”.
Beurling’s investigations concerning duality between capacity measures and the Dirichlet integral, and concerning the importance of contractions for spectral synthesis “led him to... a new foundation of potential theory”. The notion of Dirichlet space, which is summarized in the Encyclopedic Dictionary of Mathematics in 338Q, was introduced by Beurling.
In conclusion, every mathematician working in harmonic analysis, the theory of complex functions, or potential theory ought to be grateful to the publishers for the fact that he now has easy access to Beurling’s papers.
##### MSC:
01A75 Collected or selected works; reprintings or translations of classics
01A70 Biographies, obituaries, personalia, bibliographies
##### Keywords:
complex analysis; harmonic analysis; potential theory |
Math Assignment Help | Riemannian Geometry | MATH6205
statistics-lab™ supports your studies abroad. We have built a solid reputation for Riemannian geometry assignment help and guarantee reliable, high-quality, and original Statistics writing services. Our experts have extensive experience with Riemannian geometry assignments, so any related coursework poses no difficulty.
• Statistical Inference
• Statistical Computing
• Advanced Probability Theory
• Advanced Mathematical Statistics
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science
Math Assignment Help | Riemannian Geometry | The First Dirichlet Eigenvalue Comparison Theorem
Following standard notations and setting (see, e.g., [Cha1] or in this context the seminal survey by Grigoryan in [Gri1]), for any precompact open set $\Omega$ in a Riemannian manifold $M$ we denote by $\lambda(\Omega)$ the smallest number $\lambda$ for which the following Dirichlet eigenvalue problem has a non-zero solution
$$\left\{\begin{aligned} \Delta u+\lambda u &=0 \text{ at all points } x \text{ in } \Omega \\ u(x) &=0 \text{ at all points } x \text{ in } \partial\Omega. \end{aligned}\right.$$
We shall need the following beautiful observation due to Barta:
Theorem $7.1$ ([B], [Cha1]). Consider any smooth function $f$ on a domain $\Omega$ which satisfies $f|_{\Omega}>0$ and $f|_{\partial\Omega}=0$, and let $\lambda(\Omega)$ denote the first eigenvalue of the Dirichlet problem for $\Omega$. Then
$$\inf_{\Omega}\left(\frac{\Delta f}{f}\right) \leq -\lambda(\Omega) \leq \sup_{\Omega}\left(\frac{\Delta f}{f}\right)$$
If equality occurs in one of the inequalities, then they are both equalities, and $f$ is an eigenfunction for $\Omega$ corresponding to the eigenvalue $\lambda(\Omega)$.
Proof. Let $\phi$ be an eigenfunction for $\Omega$ corresponding to $\lambda(\Omega)$.
Then $\phi|_{\Omega}>0$ and $\phi|_{\partial\Omega}=0$. If we let $h$ denote the difference $h=\phi-f$, then
$$\begin{aligned} -\lambda(\Omega)=\frac{\Delta \phi}{\phi} &=\frac{\Delta f}{f}+\frac{f \Delta h-h \Delta f}{f(f+h)} \\ &=\inf_{\Omega}\left(\frac{\Delta f}{f}\right)+\sup_{\Omega}\left(\frac{f \Delta h-h \Delta f}{f(f+h)}\right) \\ &=\sup_{\Omega}\left(\frac{\Delta f}{f}\right)+\inf_{\Omega}\left(\frac{f \Delta h-h \Delta f}{f(f+h)}\right) \end{aligned}$$
Here the supremum, $\sup_{\Omega}\left(\frac{f \Delta h-h \Delta f}{f(f+h)}\right)$, is necessarily positive since
$$\left.f(f+h)\right|_{\Omega}>0$$ and since by Green's second formula $(6.8)$ in Theorem $6.4$ we have $$\int_{\Omega}(f \Delta h-h \Delta f)\, d V=0\text{.}$$
For the same reason, the infimum, $\inf _{\Omega}\left(\frac{f \Delta h-h \Delta f}{f(f+h)}\right)$ is necessarily negative. This gives the first part of the theorem. If equality occurs, then $(f \Delta h-h \Delta f)$ must vanish identically on $\Omega$, so that $-\lambda(\Omega)=\frac{\Delta f}{f}$, which gives the last part of the statement.
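As a simple sanity check (an illustration, not from the text): take $\Omega=(0,\pi)\subset\mathbb{R}$ and the test function $f(x)=\sin x$, which is positive on $\Omega$ and vanishes on $\partial\Omega$. Then
$$\frac{\Delta f}{f}=\frac{-\sin x}{\sin x}=-1 \quad\text{on } \Omega,$$
so both bounds in Theorem $7.1$ coincide and give $\lambda(\Omega)=1$, which is indeed the first Dirichlet eigenvalue of $(0,\pi)$, with eigenfunction $\sin x$.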
As already alluded to in the introduction, the key heuristic message of this report is that the Laplacian is a particularly ‘swift actor’ on minimal submanifolds (i.e., minimal extrinsic regular $R$-balls $D_{R}$ ) in ambient spaces with an upper bound $b$ on its sectional curvatures. This is to be understood in comparison with the ‘action’ of the Laplacian on totally geodesic $R$-balls $B_{R}^{b, m}$ in spaces of constant curvature b. In this section we will use Barta’s theorem to show that this phenomenon can indeed be ‘heard’ by ‘listening’ to the bass note of the Dirichlet spectrum of any given $D_{R}$.
Math Assignment Help | Riemannian Geometry | Isoperimetric Relations
In this and the following two sections we survey some comparison results concerning inequalities of isoperimetric type, mean exit times and capacities, respectively, for extrinsic minimal balls in ambient spaces with an upper bound on sectional curvature. This has been developed in a series of papers; see [Pa] and [MaP1]–[MaP4].
We will still assume a standard situation as in the previous section, i.e., $D_{R}$ denotes an extrinsic minimal ball of a minimal submanifold $P$ in an ambient space $N$ with the upper bound $b$ on the sectional curvatures.
Proposition 8.1. We define the following function of $t \in \mathbb{R}_{+} \cup\{0\}$ for every $b \in \mathbb{R}$, for every $q \in \mathbb{R}$, and for every dimension $m \geq 2$:
$$L_{q}^{b, m}(t)=q\left(\frac{\operatorname{Vol}\left(S_{t}^{b, m-1}\right)}{m\, h_{b}(t)}-\operatorname{Vol}\left(B_{t}^{b, m}\right)\right)$$
Then
$$L_{q}^{b, m}(0)=0 \text { for all } b, q, \text { and } m$$
and
$$\operatorname{sign}\left(\frac{d}{d t} L_{q}^{b, m}(t)\right)=\operatorname{sign}(b q) \text { for all } b, q, m, \text { and } t>0 \text {. }$$
Proof. This follows from a direct computation using the definition of $h_{b}(t)$ from equation (3.5) together with the volume formulae (cf. [Gr])
$$\begin{aligned} \operatorname{Vol}\left(B_{t}^{b, m}\right) &=\operatorname{Vol}\left(S_{1}^{0, m-1}\right) \cdot \int_{0}^{t}\left(Q_{b}(u)\right)^{m-1} d u \\ \operatorname{Vol}\left(S_{t}^{b, m-1}\right) &=\operatorname{Vol}\left(S_{1}^{0, m-1}\right) \cdot\left(Q_{b}(t)\right)^{m-1} \end{aligned}$$
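For orientation, a quick check in the flat case (under the standard convention that $Q_{0}(u)=u$ when $b=0$): the first formula gives
$$\operatorname{Vol}\left(B_{t}^{0, m}\right)=\operatorname{Vol}\left(S_{1}^{0, m-1}\right)\cdot\int_{0}^{t}u^{m-1}\,du=\operatorname{Vol}\left(S_{1}^{0, m-1}\right)\cdot\frac{t^{m}}{m},$$
which is exactly the volume of the Euclidean $m$-ball of radius $t$.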
Math Assignment Help | Riemannian Geometry | A Consequence of the Co-area Formula
The co-area equation (6.4) applied to our setting gives the following
Proposition 9.1. Let $D_{R}(p)$ denote a regular extrinsic minimal ball of $P$ with center $p$ in $N$. Then
$$\frac{d}{d u} \operatorname{Vol}\left(D_{u}\right) \geq \operatorname{Vol}\left(\partial D_{u}\right) \text { for all } u \leq R$$
Proof. We let $f: \bar{D}_{R} \rightarrow \mathbb{R}$ denote the function $f(x)=R-r(x)$, which clearly vanishes on the boundary of $D_{R}$ and is smooth except at $p$. Following the notation of the co-area formula we further let
$$\begin{aligned} \Omega(t) &=D_{(R-t)} \\ V(t) &=\operatorname{Vol}\left(D_{(R-t)}\right), \text{ and } \\ \Sigma(t) &=\partial D_{(R-t)} \end{aligned}$$
Then
$$\begin{aligned} \operatorname{Vol}\left(D_{u}\right) &=V(R-u), \text { so that } \\ \frac{d}{d u} \operatorname{Vol}\left(D_{u}\right) &=-\left.V^{\prime}(t)\right|_{t=R-u} . \end{aligned}$$
The co-area equation (6.4) now gives
$$\begin{aligned} -V^{\prime}(t) &=\int_{\partial D_{(R-t)}}\left|\nabla^{P} r\right|^{-1} d A \\ & \geq \operatorname{Vol}\left(\partial D_{(R-t)}\right) \\ &=\operatorname{Vol}\left(\partial D_{u}\right) \end{aligned}$$
and this proves the statement.
Exercise 9.2. Explain why the non-smoothness of the function $f$ at $p$ does not create problems for the application of equation (6.4) in this proof although smoothness is one of the assumptions in Theorem 6.1.
## Analysis of Lorentzian Distance Functions
For comparison, and before going further into the Riemannian setting, we briefly present the corresponding Hessian analysis of the distance function from a point in a Lorentzian manifold and its restriction to a spacelike hypersurface. The results can be found in [AHP], where the corresponding Hessian analysis was also carried out, i.e., the analysis of the Lorentzian distance from an achronal spacelike hypersurface in the style of Proposition 3.9. Recall that in Section 3 we also considered
the analysis of the distance from a totally geodesic hypersurface $P$ in the ambient Riemannian manifold $N$.
Let $\left(N^{n+1}, g\right)$ denote an $(n+1)$-dimensional spacetime, that is, a time-oriented Lorentzian manifold of dimension $n+1 \geq 2$. The metric tensor $g$ has index 1 in this case, and, as we did in the Riemannian context, we shall denote it alternatively as $g=\langle\cdot, \cdot\rangle$ (see, e.g., [O’N] as a standard reference for this section).
Given $p, q$ two points in $N$, one says that $q$ is in the chronological future of $p$, written $p \ll q$, if there exists a future-directed timelike curve from $p$ to $q$. Similarly, $q$ is in the causal future of $p$, written $p<q$, if there exists a future-directed causal (i.e., nonspacelike) curve from $p$ to $q$.
Then the chronological future $I^{+}(p)$ of a point $p \in N$ is defined as
$$I^{+}(p)=\{q \in N: p \ll q\} .$$
The Lorentzian distance function $d: N \times N \rightarrow[0,+\infty]$ for an arbitrary spacetime may fail to be continuous in general, and may also fail to be finite-valued. But there are geometric restrictions that guarantee a good behavior of $d$. For example, globally hyperbolic spacetimes turn out to be the natural class of spacetimes for which the Lorentzian distance function is finite-valued and continuous.
Given a point $p \in N$, one can define the Lorentzian distance function $d_{p}: N \rightarrow[0,+\infty]$ with respect to $p$ by
$$d_{p}(q)=d(p, q) .$$
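For orientation (a standard special case added here, not taken from the text): in Minkowski spacetime $N=\mathbb{R}_{1}^{n+1}$ with $\langle v, v\rangle=-v_{0}^{2}+v_{1}^{2}+\cdots+v_{n}^{2}$, one has $d_{p}(q)=\sqrt{-\langle q-p, q-p\rangle}$ whenever $p \ll q$, and $d_{p}(q)=0$ otherwise, since among the future-directed causal curves from $p$ to $q$ the straight timelike segment maximizes the Lorentzian length.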
In order to guarantee the smoothness of $d_{p}$, we need to restrict this function to certain special subsets of $N$. Let $\left.T_{-1} N\right|_{p}$ be the following set:
$$\left.T_{-1} N\right|_{p}=\left\{v \in T_{p} N: v \text { is a future-directed timelike unit vector }\right\} .$$
Define the function $s_{p}:\left.T_{-1} N\right|_{p} \rightarrow[0,+\infty]$ by
$$s_{p}(v)=\sup \left\{t \geq 0: d_{p}\left(\gamma_{v}(t)\right)=t\right\},$$
where $\gamma_{v}:[0, a) \rightarrow N$ is the future inextendible geodesic starting at $p$ with initial velocity $v$. Then we define
$$\tilde{\mathcal{I}}^{+}(p)=\left\{t v:\left.v \in T_{-1} N\right|_{p} \text { and } 0<t<s_{p}(v)\right\}$$
and consider the subset $\mathcal{I}^{+}(p) \subset N$ given by
$$\mathcal{I}^{+}(p)=\exp_{p}\left(\operatorname{int}\left(\tilde{\mathcal{I}}^{+}(p)\right)\right) \subset I^{+}(p) .$$
Observe that the exponential map
$$\exp_{p}: \operatorname{int}\left(\tilde{\mathcal{I}}^{+}(p)\right) \rightarrow \mathcal{I}^{+}(p)$$
is a diffeomorphism and $\mathcal{I}^{+}(p)$ is an open subset (possibly empty).
Remark 4.1. When $b \geq 0$, the Lorentzian space form of constant sectional curvature $b$, which we denote as $N_{b}^{n+1}$, is globally hyperbolic and geodesically complete, and every future-directed timelike unit geodesic $\gamma_{b}$ in $N_{b}^{n+1}$ realizes the Lorentzian distance between its points. In particular, if $b \geq 0$ then $\mathcal{I}^{+}(p)=I^{+}(p)$ for every point $p \in N_{b}^{n+1}$ (see [EGK, Remark 3.2]).
## Concerning the Riemannian Setting and Notation
Returning now to the Riemannian case: Although we indeed do have the possibility of considering 4 basically different settings determined by the choice of $p$ or $V$ as the ‘base’ of our normal domain and the choice of $K_{N} \leq b$ or $K_{N} \geq b$ as the curvature assumption for the ambient space $N$, we will, however, mainly consider the ‘first’ of these. Specifically we will (unless otherwise explicitly stated) apply the following assumptions and denotations:
Definition 5.1. A standard situation encompasses the following:
(1) $P^{m}$ denotes an $m$-dimensional complete minimally immersed submanifold of the Riemannian manifold $N^{n}$. We always assume that $P$ has dimension $m \geq 2 .$
(2) The sectional curvatures of $N$ are assumed to satisfy $K_{N} \leq b, b \in \mathbb{R}$, cf. Proposition $3.10$, equation (3.13).
(3) The intersection of $P$ with a regular ball $B_{R}(p)$ centered at $p \in P$ (cf. Definition 3.4) is denoted by
$$D_{R}=D_{R}(p)=P^{m} \cap B_{R}(p)$$
and this is called a minimal extrinsic $R$-ball of $P$ in $N$, see the Figures 3-7 of extrinsic balls, which are cut out from some of the well-known minimal surfaces in $\mathbb{R}^{3}$.
(4) The totally geodesic $m$-dimensional regular $R$-ball centered at $\tilde{p}$ in $\mathbb{K}^{n}(b)$ is denoted by
$$B_{R}^{b, m}=B_{R}^{b, m}(\tilde{p})$$
whose boundary is the $(m-1)$-dimensional sphere
$$\partial B_{R}^{b, m}=S_{R}^{b, m-1}$$
(5) For any given smooth function $F$ of one real variable we denote
$$W_{F}(r)=F^{\prime \prime}(r)-F^{\prime}(r) h_{b}(r) \text { for } 0 \leq r \leq R$$
We may now collect the basic inequalities from our previous analysis as follows.
## Green’s Formulae and the Co-area Formula
Now we recall the coarea formula. We follow the lines of [Sa] Chapter II, Section 5. Let $(M, g)$ denote a Riemannian manifold and $\Omega$ a precompact domain in $M$. Let $\psi: \Omega \rightarrow \mathbb{R}$ be a smooth function such that $\psi(\Omega)=[a, b]$ with $a<b$. Denote by $\Omega_{0}$ the set of critical points of $\psi$. By Sard’s theorem, the set of critical values $S_{\psi}=\psi\left(\Omega_{0}\right)$ has null measure, and the set of regular values $R_{\psi}=[a, b]-S_{\psi}$ is open. In particular, for any $t \in R_{\psi}=[a, b]-S_{\psi}$, the set $\Gamma(t):=\psi^{-1}(t)$ is a smooth embedded hypersurface in $\Omega$ with $\partial \Gamma(t)=\emptyset$. Since $\Gamma(t) \subseteq \Omega-\Omega_{0}$ then $\nabla \psi$ does not vanish along $\Gamma(t)$; indeed, a unit normal along $\Gamma(t)$ is given by $\nabla \psi /|\nabla \psi|$.
Now we let
$$\begin{aligned} &A(t)=\operatorname{Vol}(\Gamma(t)) \\ &\Omega(t)=\{x \in \bar{\Omega} \mid \psi(x)<t\} \\ &V(t)=\operatorname{Vol}(\Omega(t)) \end{aligned}$$
Theorem 6.1.
i) For every integrable function $u$ on $\bar{\Omega}$ :
$$\int_{\Omega} u \cdot|\nabla \psi| d V=\int_{a}^{b}\left(\int_{\Gamma(t)} u d A_{t}\right) d t,$$
where $d A_{t}$ is the Riemannian volume element defined from the induced metric $g_{t}$ on $\Gamma(t)$ from $g$.
ii) The function $V(t)$ is a smooth function on the regular values of $\psi$ given by:
$$V(t)=\operatorname{Vol}\left(\Omega_{0} \cap \Omega(t)\right)+\int_{a}^{t}\left(\int_{\Gamma(s)}|\nabla \psi|^{-1} d A_{s}\right) d s$$
and its derivative is
$$\frac{d}{d t} V(t)=\int_{\Gamma(t)}|\nabla \psi|^{-1} d A_{t}$$
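As a concrete sanity check of part i) (an added verification sketch, not from the original text), take $\Omega$ to be the unit disk in $\mathbb{R}^{2}$, $\psi(x, y)=x^{2}+y^{2}$ and $u \equiv 1$, so that $\psi(\Omega)=[0,1]$, $\Gamma(t)$ is the circle of radius $\sqrt{t}$, and $|\nabla \psi|=2 \sqrt{t}$ along $\Gamma(t)$:

```python
import sympy as sp

r, theta, t = sp.symbols('r theta t', nonnegative=True)

# Left-hand side: integral of |grad psi| = 2 sqrt(x^2 + y^2) over the unit disk
# (polar coordinates, dV = r dr dtheta)
lhs = sp.integrate(2 * r * r, (r, 0, 1), (theta, 0, 2 * sp.pi))

# Right-hand side: integral over t in [0, 1] of Vol(Gamma(t)) = 2*pi*sqrt(t)
rhs = sp.integrate(2 * sp.pi * sp.sqrt(t), (t, 0, 1))

print(lhs, rhs, sp.simplify(lhs - rhs) == 0)   # both sides equal 4*pi/3
```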
## Appetizer and Introduction
It is a natural and indeed a classical question to ask: “What is the effective resistance of, say, a hyperboloid or a helicoid if the surface is made of a homogeneous conducting material?”.
In these notes we will study the precise meaning of this and several other related questions and analyze how the answers depend on the curvature and topology of the given surfaces and manifolds. We will focus mainly on minimal submanifolds in ambient spaces which are assumed to have a well-defined upper (or lower) bound on their sectional curvatures.
One key ingredient is the comparison theory for distance functions in such spaces. In particular we establish and use a comparison result for the Laplacian of geometrically restricted distance functions. It is in this setting that we obtain information about such diverse phenomena as diffusion processes, isoperimetric inequalities, Dirichlet eigenvalues, transience, recurrence, and effective resistance of the spaces in question. In this second edition of the present notes we extend those previous findings in four ways: Firstly, we include comparison results for the exit time moment spectrum for compact domains in Riemannian manifolds; Secondly, and most substantially, we report on very recent results obtained by the first and third author together with C. Rosales concerning comparison results for the capacities and the type problem (transient versus recurrent) in weighted Riemannian manifolds; Thirdly, we survey how some of the purely Riemannian results on transience and recurrence can be lifted to the setting of spacelike submanifolds in Lorentzian manifolds; Fourthly, the comparison spaces that we employ for some of the new results are typically so-called model spaces, i.e., warped products (generalized surfaces of revolution) where ‘all the geometry’ in each case is determined by a given radial warping function and a given weight function. In a sense, all the different phenomena that we consider are ‘driven’ by the Laplace operator, which in turn depends on the background curvatures and the weight function. One key message of this report is that the Laplacian is a particularly ‘swift’ operator – for example on minimal submanifolds in ambient spaces with small sectional curvatures – but depending on the weight functions. Specifically, we observe and report new findings about this behaviour in the contexts of Riemannian, Lorentzian, and weighted geometries, see Sections 12 and $20-27$. Similar results generally hold true within the intrinsic geometry of the manifolds themselves – often even with Ricci curvature lower bounds (see, e.g., the survey [Zhu]) as a substitute for the specific assumption of a lower bound on sectional curvatures.
## The Comparison Setting and Preliminaries
We consider a complete immersed submanifold $P^{m}$ in a Riemannian manifold $N^{n}$, and denote by $\mathrm{D}^{P}$ and $\mathrm{D}^{N}$ the Riemannian connections of $P$ and $N$, respectively. We refer to the excellent general monographs on Riemannian geometry – e.g., [Sa], [CheeE], and [Cha2] – for the basic notions that will be applied in these notes. In particular we shall be concerned with the second-order behavior of certain functions on $P$ which are obtained by restriction from the ambient space $N$ as displayed in Proposition $3.1$ below. The second-order derivatives are defined in terms of the Hessian operators $\operatorname{Hess}^{N}$, $\operatorname{Hess}^{P}$ and their traces $\Delta^{N}$ and $\Delta^{P}$, respectively (see, e.g., [Sa] p. 31). The difference between these operators quite naturally involves geometric second-order information about how $P^{m}$ actually sits inside $N^{n}$. This information is provided by the second fundamental form $\alpha$ (resp. the mean curvature $H$) of $P$ in $N$ (see [Sa] p. 47). If the functions under consideration are essentially distance functions in $N$ – or suitably modified distance functions – then their second-order behavior is strongly influenced by the curvatures of $N$, as is directly expressed by the second variation formula for geodesics ([Sa] p. 90).
As is well known, the ensuing and by now classical comparison theorems for Jacobi fields give rise to the celebrated Toponogov theorems for geodesic triangles and to powerful results concerning the global structure of Riemannian spaces ([Sa], Chapters IV-V). In these notes, however, we shall mainly apply the Jacobi field comparison theory only off the cut loci of the ambient space $N$, or more precisely, within the regular balls of $N$ as defined in Definition $3.4$ below. On the other hand, from the point of view of a given (minimal) submanifold $P$ in $N$, our results for $P$ are semi-global in the sense that they apply to domains which are not necessarily distance-regular within $P$.
## Analysis of Riemannian Distance Functions
Let $\mu: N \mapsto \mathbb{R}$ denote a smooth function on $N$. Then the restriction $\tilde{\mu}=\mu_{|P}$ is a smooth function on $P$ and the respective Hessians $\operatorname{Hess}^{N}(\mu)$ and $\operatorname{Hess}^{P}(\tilde{\mu})$ are related as follows:

Proposition $3.1$ ([JK] p. 713).
$$\begin{aligned} \operatorname{Hess}^{P}(\tilde{\mu})(X, Y)=& \operatorname{Hess}^{N}(\mu)(X, Y) \\ &+\left\langle\nabla^{N}(\mu), \alpha(X, Y)\right\rangle \end{aligned}$$
for all tangent vectors $X, Y \in T P \subseteq T N$, where $\alpha$ is the second fundamental form of $P$ in $N$.

Proof.
$$\begin{aligned} \operatorname{Hess}^{P}(\tilde{\mu})(X, Y) &=\left\langle\mathrm{D}_{X}^{P} \nabla^{P} \tilde{\mu}, Y\right\rangle \\ &=\left\langle\mathrm{D}_{X}^{N} \nabla^{P} \tilde{\mu}-\alpha\left(X, \nabla^{P} \tilde{\mu}\right), Y\right\rangle \\ &=\left\langle\mathrm{D}_{X}^{N} \nabla^{P} \tilde{\mu}, Y\right\rangle \\ &=X\left(\left\langle\nabla^{P} \tilde{\mu}, Y\right\rangle\right)-\left\langle\nabla^{P} \tilde{\mu}, \mathrm{D}_{X}^{N} Y\right\rangle \\ &=\left\langle\mathrm{D}_{X}^{N} \nabla^{N} \mu, Y\right\rangle+\left\langle\left(\nabla^{N} \mu\right)^{\perp}, \mathrm{D}_{X}^{N} Y\right\rangle \\ &=\operatorname{Hess}^{N}(\mu)(X, Y)+\left\langle\left(\nabla^{N} \mu\right)^{\perp}, \alpha(X, Y)\right\rangle \\ &=\operatorname{Hess}^{N}(\mu)(X, Y)+\left\langle\nabla^{N} \mu, \alpha(X, Y)\right\rangle \end{aligned}$$
If we modify $\mu$ to $F \circ \mu$ by a smooth function $F: \mathbb{R} \mapsto \mathbb{R}$, then we get
Lemma 3.2.
$$\begin{aligned} \operatorname{Hess}^{N}(F \circ \mu)(X, X)=& F^{\prime \prime}(\mu) \cdot\left\langle\nabla^{N}(\mu), X\right\rangle^{2} \\ &+F^{\prime}(\mu) \cdot \operatorname{Hess}^{N}(\mu)(X, X) \end{aligned}$$
for all $X \in T N^{n}$
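In the special case $N=\mathbb{R}^{2}$, where $\operatorname{Hess}^{N}$ is the ordinary Hessian matrix, Lemma 3.2 is precisely the chain rule for second derivatives. The following sketch (an added illustration with an arbitrarily chosen $\mu$ and $F=\sin$, not from the original text) verifies the identity symbolically:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
mu = x**2 + y * sp.exp(x)    # an arbitrary smooth function on N = R^2 (chosen for illustration)
F, dF, d2F = sp.sin, sp.cos, lambda s: -sp.sin(s)   # F = sin, so F' = cos and F'' = -sin

grad_mu = sp.Matrix([mu.diff(x), mu.diff(y)])
lhs = sp.hessian(F(mu), (x, y))                                        # Hess(F o mu)
rhs = d2F(mu) * grad_mu * grad_mu.T + dF(mu) * sp.hessian(mu, (x, y))  # F''(mu) dmu (x) dmu + F'(mu) Hess(mu)
print(sp.simplify(lhs - rhs))                                          # zero 2x2 matrix
```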
In the following we write $\mu=\tilde{\mu}$. Combining (3.1) and (3.3) then gives
Corollary 3.3.
$$\begin{aligned} \operatorname{Hess}^{P}(F \circ \mu)(X, X)=& F^{\prime \prime}(\mu) \cdot\left\langle\nabla^{N}(\mu), X\right\rangle^{2} \\ &+F^{\prime}(\mu) \cdot \operatorname{Hess}^{N}(\mu)(X, X) \\ &+\left\langle\nabla^{N}(\mu), \alpha(X, X)\right\rangle \end{aligned}$$
for all $X \in T P^{m}$.
In what follows the function $\mu$ will always be a distance function in $N$ – either from a point $p$, in which case we set $\mu(x)=\operatorname{dist}_{N}(p, x)=r(x)$, or from a totally geodesic hypersurface $V^{n-1}$ in $N$, in which case we let $\mu(x)=\operatorname{dist}_{N}(V, x)=\eta(x)$. The function $F$ will always be chosen so that $F \circ \mu$ is smooth inside the respective regular balls around $p$ and inside the regular tubes around $V$, which we now define. The sectional curvatures of the two-planes $\Omega$ in the tangent bundle of the ambient space $N$ are denoted by $K_{N}(\Omega)$, see, e.g., [Sa], Section II.3. Concerning the notation: in the following both $\operatorname{Hess}^{N}$ and $\operatorname{Hess}$ will be used invariantly for the Hessian in the ambient manifold $N$, as well as in a purely intrinsic context where only $N$ and not any of its submanifolds is under consideration.
## The Minimal Control and the Length of an Admissible Curve
We start by defining the sub-Riemannian norm for vectors that belong to the distribution of a sub-Riemannian manifold.
Definition 3.8 Let $v \in \mathcal{D}_{q}$. We define the sub-Riemannian norm of $v$ as follows:
$$|v|:=\min \left\{|u|, u \in U_{q} \text { s.t. } v=f(q, u)\right\} .$$
Notice that since $f$ is linear with respect to $u$, the minimum in $(3.9)$ is always attained at a unique point. Indeed, the condition $f(q, \cdot)=v$ defines an affine subspace of $U_{q}$ (which is nonempty since $v \in \mathcal{D}_{q}$ ) and the minimum in (3.9) is uniquely attained at the orthogonal projection of the origin onto this subspace (see Figure 3.2).
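Numerically, this orthogonal projection is exactly what the Moore-Penrose pseudoinverse delivers: if $f(q, u)=\sum_{i} u_{i} f_{i}(q)$ and the vectors $f_{i}(q)$ are stored as the columns of a matrix, the minimizer in (3.9) is the least-norm solution of the corresponding linear system. A small sketch with made-up generators (only to illustrate the mechanism, not data from the text):

```python
import numpy as np

# Columns are three hypothetical generators f_1(q), f_2(q), f_3(q) of a rank-2 distribution in R^3.
F = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
v = np.array([2.0, 3.0, 0.0])          # a vector in D_q = im F

u_star = np.linalg.pinv(F) @ v         # least-norm control with F u = v
                                       # (orthogonal projection of 0 onto the affine subspace {u : F u = v})
print(np.allclose(F @ u_star, v))      # True: u_star is admissible
print(np.linalg.norm(u_star))          # the sub-Riemannian norm |v| of Definition 3.8
```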
Exercise 3.9 Show that $|\cdot|$ is a norm in $\mathcal{D}_{q}$. Moreover prove that it satisfies the parallelogram law, i.e., it is induced by a scalar product $\langle\cdot \mid \cdot\rangle_{q}$ on $\mathcal{D}_{q}$ that can be recovered by the polarization identity
$$\langle v \mid w\rangle_{q}=\frac{1}{4}|v+w|^{2}-\frac{1}{4}|v-w|^{2}, \quad v, w \in \mathcal{D}_{q} .$$

Exercise $3.10$ Let $u_{1}, \ldots, u_{m} \in U_{q}$ be an orthonormal basis for $U_{q}$. Define $v_{i}=f\left(q, u_{i}\right)$. Show that if $f(q, \cdot)$ is injective then $v_{1}, \ldots, v_{m}$ is an orthonormal basis for $\mathcal{D}_{q}$.
An admissible curve $\gamma:[0, T] \rightarrow M$ is Lipschitz, hence differentiable at almost every point. Hence the unique control $t \mapsto u^{*}(t)$ associated with $\gamma$ and realizing the minimum in $(3.9)$ is well defined a.e. on $[0, T]$.
## Equivalence of Sub-Riemannian Structures
In this section we introduce the notion of the equivalence of sub-Riemannian structures on the same base manifold $M$ and the notion of isometry between sub-Riemannian manifolds.
Definition $3.18$ Let $(\mathbf{U}, f),\left(\mathbf{U}^{\prime}, f^{\prime}\right)$ be two sub-Riemannian structures on a smooth manifold $M$. They are said to be equivalent as distributions if the following conditions hold:
(i) there exist a Euclidean bundle $\mathbf{V}$ and two surjective vector bundle morphisms $p: \mathbf{V} \rightarrow \mathbf{U}$ and $p^{\prime}: \mathbf{V} \rightarrow \mathbf{U}^{\prime}$ such that the following diagram is commutative:
The structures $(\mathbf{U}, f)$ and $\left(\mathbf{U}^{\prime}, f^{\prime}\right)$ are said to be equivalent as sub-Riemannian structures (or simply equivalent) if (i) is satisfied and moreover
(ii) the projections $p, p^{\prime}$ are compatible with the scalar product, i.e., it holds that
$$\begin{aligned} |u| &=\min \left\{|v|, p(v)=u\right\}, & \forall u \in \mathbf{U}, \\ \left|u^{\prime}\right| &=\min \left\{|v|, p^{\prime}(v)=u^{\prime}\right\}, & \forall u^{\prime} \in \mathbf{U}^{\prime} . \end{aligned}$$
Remark $3.19$ If $(\mathbf{U}, f)$ and $\left(\mathbf{U}^{\prime}, f^{\prime}\right)$ are equivalent as sub-Riemannian structures on $M$ then:
(a) the distributions $\mathcal{D}_{q}$ and $\mathcal{D}_{q}^{\prime}$ defined by $f$ and $f^{\prime}$ coincide, since $f\left(U_{q}\right)=f^{\prime}\left(U_{q}^{\prime}\right)$ for all $q \in M$;
(b) for each $w \in \mathcal{D}_{q}$ we have $|w|=|w|^{\prime}$, where $|\cdot|$ and $|\cdot|^{\prime}$ are the norms induced by $(\mathbf{U}, f)$ and $\left(\mathbf{U}^{\prime}, f^{\prime}\right)$ respectively.
In particular the lengths of admissible curves for two equivalent sub-Riemannian structures are the same.
Exercise 3.20 Prove that $(M, \mathbf{U}, f)$ and $\left(M, \mathbf{U}^{\prime}, f^{\prime}\right)$ are equivalent as distributions if and only if the moduli of the horizontal vector fields $\mathcal{D}$ and $\mathcal{D}^{\prime}$ coincide.
## Sub-Riemannian Distance
In this section we introduce the sub-Riemannian distance and prove the Rashevskii-Chow theorem.
Recall that, thanks to the results of Section 3.1.4, in what follows we can assume that the sub-Riemannian structure on $M$ is free, with generating family $\mathcal{F}=\left\{f_{1}, \ldots, f_{m}\right\}$. Notice that, by the definition of a sub-Riemannian manifold, $M$ is assumed to be connected and $\mathcal{F}$ is assumed to be bracket-generating.
Definition 3.30 Let $M$ be a sub-Riemannian manifold and $q_{0}, q_{1} \in M$. The sub-Riemannian distance (or Carnot-Carathéodory distance) between $q_{0}$ and $q_{1}$ is
$$d\left(q_{0}, q_{1}\right)=\inf \left\{\ell(\gamma) \mid \gamma:[0, T] \rightarrow M \text { admissible, } \gamma(0)=q_{0}, \gamma(T)=q_{1}\right\} .$$
We now state the main result of this section.
Theorem $3.31$ (Rashevskii-Chow) Let $M$ be a sub-Riemannian manifold. Then
(i) $(M, d)$ is a metric space,
(ii) the topology induced by $(M, d)$ is equivalent to the manifold topology.
In particular, $d: M \times M \rightarrow \mathbb{R}$ is continuous.
One of the main consequences of this result is that, thanks to the bracket-generating condition, for every $q_{0}, q_{1} \in M$ there exists an admissible curve that joins them. Hence $d\left(q_{0}, q_{1}\right)<+\infty$.
In what follows $B(q, r)$ (sometimes denoted also $B_{r}(q)$ ) is the (open) sub-Riemannian ball of radius $r$ and center $q$ :
$$B(q, r):=\left\{q^{\prime} \in M \mid d\left(q, q^{\prime}\right)<r\right\} .$$
## Vector Bundles
Heuristically, a smooth vector bundle on a smooth manifold $M$ is a smooth family of vector spaces parametrized by points in $M$.
Definition 2.47 Let $M$ be an $n$-dimensional manifold. A smooth vector bundle of rank $k$ over $M$ is a smooth manifold $E$ with a surjective smooth map $\pi: E \rightarrow M$ such that:
(i) the set $E_{q}:=\pi^{-1}(q)$, the fiber of $E$ at $q$, is a $k$-dimensional vector space;
(ii) for every $q \in M$ there exist a neighborhood $O_{q}$ of $q$ and a linear-on-fibers diffeomorphism (called a local trivialization) $\psi: \pi^{-1}\left(O_{q}\right) \rightarrow O_{q} \times \mathbb{R}^{k}$ such that the following diagram commutes:
The space $E$ is called total space and $M$ is the base of the vector bundle. We will refer to $\pi$ as the canonical projection, and rank $E$ will denote the rank of the bundle.
Remark $2.48$ A vector bundle $E$, as a smooth manifold, has dimension
$$\operatorname{dim} E=\operatorname{dim} M+\operatorname{rank} E=n+k .$$
In the case when there exists a global trivialization map (i.e., when one can choose a local trivialization with $O_{q}=M$ for all $q \in M$ ), then $E$ is diffeomorphic to $M \times \mathbb{R}^{k}$ and we say that $E$ is trivializable.
Example 2.49 For any smooth $n$-dimensional manifold $M$, the tangent bundle $T M$, defined as the disjoint union of the tangent spaces at all points of $M$,
$$T M=\bigcup_{q \in M} T_{q} M,$$
has the natural structure of a $2 n$-dimensional smooth manifold, equipped with the vector bundle structure (of rank $n$ ) induced by the canonical projection map
$$\pi: T M \rightarrow M, \quad \pi(v)=q \quad \text { if } \quad v \in T_{q} M .$$
In the same way one can consider the cotangent bundle $T^{*} M$, defined as
$$T^{*} M=\bigcup_{q \in M} T_{q}^{*} M .$$
## Submersions and Level Sets of Smooth Maps
If $\varphi: M \rightarrow N$ is a smooth map, we define the rank of $\varphi$ at $q \in M$ to be the rank of the linear map $\varphi_{*, q}: T_{q} M \rightarrow T_{\varphi(q)} N$. It is, of course, just the rank of the matrix of partial derivatives of $\varphi$ in any coordinate chart, or the dimension
of $\operatorname{im}\left(\varphi_{*, q}\right) \subset T_{\varphi(q)} N$. If $\varphi$ has the same rank $k$ at every point, we say $\varphi$ has constant rank and write rank $\varphi=k$.
An immersion is a smooth map $\varphi: M \rightarrow N$ with the property that $\varphi_{*}$ is injective at each point (or equivalently $\operatorname{rank} \varphi=\operatorname{dim} M$ ). Similarly, a submersion is a smooth map $\varphi: M \rightarrow N$ such that $\varphi_{*}$ is surjective at each point (equivalently, $\operatorname{rank} \varphi=\operatorname{dim} N$ ).
Theorem $2.56$ (Rank theorem) Suppose that $M$ and $N$ are smooth manifolds of dimensions $m$ and $n$ respectively and that $\varphi: M \rightarrow N$ is a smooth map with constant rank $k$ in a neighborhood of $q \in M$. Then there exist coordinates $\left(x_{1}, \ldots, x_{m}\right)$ centered at $q$ and $\left(y_{1}, \ldots, y_{n}\right)$ centered at $\varphi(q)$ in which $\varphi$ has the following coordinate representation:
$$\varphi\left(x_{1}, \ldots, x_{m}\right)=\left(x_{1}, \ldots, x_{k}, 0, \ldots, 0\right) .$$
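For instance (an added illustrative example): the map $\varphi: \mathbb{R}^{2} \rightarrow \mathbb{R}^{2}$, $\varphi\left(x_{1}, x_{2}\right)=\left(x_{1}, x_{1}^{3}\right)$ has constant rank $1$; keeping the coordinates $\left(x_{1}, x_{2}\right)$ on the source and passing to the target coordinates $\left(y_{1}, y_{2}\right)=\left(z_{1}, z_{2}-z_{1}^{3}\right)$, the map becomes $\varphi\left(x_{1}, x_{2}\right)=\left(x_{1}, 0\right)$, which is exactly the normal form above with $k=1$.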
Remark $2.57$ The previous theorem can be rephrased in the following way.
Let $\varphi: M \rightarrow N$ be a smooth map between two smooth manifolds. Then the following are equivalent:
(i) $\varphi$ has constant rank in a neighborhood of $q \in M$;
(ii) there exist coordinates near $q \in M$ and $\varphi(q) \in N$ in which the coordinate representation of $\varphi$ is linear.
In the case of a submersion, from Theorem $2.56$ one can deduce the following result.
## Basic Definitions
We start by introducing a bracket-generating family of vector fields.
Definition $3.1$ Let $M$ be a smooth manifold and let $\mathcal{F} \subset \operatorname{Vec}(M)$ be a family of smooth vector fields. The Lie algebra generated by $\mathcal{F}$ is the smallest subalgebra of $\operatorname{Vec}(M)$ containing $\mathcal{F}$, namely
$$\operatorname{Lie} \mathcal{F}:=\operatorname{span}\left\{\left[X_{1}, \ldots,\left[X_{j-1}, X_{j}\right]\right], X_{i} \in \mathcal{F}, j \in \mathbb{N}\right\}$$
We will say that $\mathcal{F}$ is bracket-generating (or that it satisfies the Hörmander condition) if
$$\operatorname{Lie}_{q} \mathcal{F}:=\left\{X(q), X \in \operatorname{Lie} \mathcal{F}\right\}=T_{q} M, \quad \forall q \in M$$
Moreover, for $s \in \mathbb{N}$, we define
$$\operatorname{Lie}^{s} \mathcal{F}:=\operatorname{span}\left\{\left[X_{1}, \ldots,\left[X_{j-1}, X_{j}\right]\right], X_{i} \in \mathcal{F}, j \leq s\right\}$$
We say that the family $\mathcal{F}$ has step $s$ at $q$ if $s \in \mathbb{N}$ is the minimal integer satisfying
$$\operatorname{Lie}_{q}^{s} \mathcal{F}:=\left\{X(q), X \in \operatorname{Lie}^{s} \mathcal{F}\right\}=T_{q} M$$
Notice that, in general, the step $s$ may depend on the point on $M$ and that $s=s(q)$ can be unbounded on $M$ even for bracket-generating families.
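A standard illustration (added here, not taken from the text) is the Heisenberg-type family $\mathcal{F}=\left\{X_{1}, X_{2}\right\}$ on $M=\mathbb{R}^{3}$, with $X_{1}=\partial_{x}-\frac{y}{2} \partial_{z}$ and $X_{2}=\partial_{y}+\frac{x}{2} \partial_{z}$. The sketch below computes brackets through the coordinate expression of the commutator, $[X, Y]^{k}=\sum_{i}\left(X^{i} \partial_{i} Y^{k}-Y^{i} \partial_{i} X^{k}\right)$, and confirms that $\mathcal{F}$ is bracket-generating with step $2$ at every point:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)

def bracket(X, Y):
    # commutator in coordinates: [X, Y]^k = sum_i (X^i d_i Y^k - Y^i d_i X^k)
    return [sum(X[i] * sp.diff(Y[k], coords[i]) - Y[i] * sp.diff(X[k], coords[i])
                for i in range(3)) for k in range(3)]

X1 = [sp.Integer(1), sp.Integer(0), -y / 2]   # X1 = d/dx - (y/2) d/dz
X2 = [sp.Integer(0), sp.Integer(1), x / 2]    # X2 = d/dy + (x/2) d/dz

print(bracket(X1, X2))                        # [0, 0, 1], i.e. [X1, X2] = d/dz
# span{X1(q), X2(q)} is only 2-dimensional, but adding [X1, X2] spans all of T_q R^3,
# so the family is bracket-generating with step s(q) = 2 at every point q:
print(sp.Matrix([X1, X2, bracket(X1, X2)]).rank())   # 3
```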
## Frobenius’ Theorem
In this section we prove Frobenius’ theorem about vector distributions.
Definition 2.33 Let $M$ be a smooth manifold. A vector distribution $D$ of rank $m$ on $M$ is a family of vector subspaces $D_{q} \subset T_{q} M$, where $\operatorname{dim} D_{q}=m$ for every $q$.
A vector distribution $D$ is said to be smooth if, for every point $q_{0} \in M$, there exists a neighborhood $O_{q_{0}}$ of $q_{0}$ and a family of smooth vector fields $X_{1}, \ldots, X_{m}$ such that
$$D_{q}=\operatorname{span}\left\{X_{1}(q), \ldots, X_{m}(q)\right\}, \quad \forall q \in O_{q_{0}}$$
Definition 2.34 A smooth vector distribution $D$ (of rank $m$ ) on $M$ is said to be involutive if there exists a local basis of vector fields $X_{1}, \ldots, X_{m}$ satisfying (2.38), and smooth functions $a_{i j}^{k}$ on $M$, such that
$$\left[X_{i}, X_{k}\right]=\sum_{j=1}^{m} a_{i j}^{k} X_{j}, \quad \forall i, k=1, \ldots, m$$
Exercise 2.35 Prove that a smooth vector distribution $D$ is involutive if and only if for every local basis of vector fields $X_{1}, \ldots, X_{m}$ satisfying (2.38) there exist smooth functions $a_{i j}^{k}$ such that (2.39) holds.
Definition 2.36 A smooth vector distribution $D$ on $M$ is said to be flat if for every point $q_{0} \in M$ there exists a local diffeomorphism $\phi: O_{q_{0}} \rightarrow \mathbb{R}^{n}$ such that $\phi_{*, q}\left(D_{q}\right)=\mathbb{R}^{m} \times\{0\}$ for all $q \in O_{q_{0}}$.
Theorem 2.37 (Frobenius Theorem) A smooth distribution is involutive if and only if it is flat.
Proof The statement is local, hence it is sufficient to prove the statement on a neighborhood of every point $q_{0} \in M$.
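Two standard examples, added here for illustration (they are not part of the proof): on $\mathbb{R}^{3}$ the distribution $D_{q}=\operatorname{span}\left\{\partial_{x}, \partial_{y}\right\}$ is involutive (its basis fields commute) and clearly flat, the identity chart doing the job; by contrast, $D_{q}=\operatorname{span}\left\{\partial_{x}, \partial_{y}+x \partial_{z}\right\}$ satisfies $\left[\partial_{x}, \partial_{y}+x \partial_{z}\right]=\partial_{z} \notin D_{q}$, so it is not involutive and, by the theorem, cannot be flat.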
## An Application of Frobenius’ Theorem
Let $M$ and $N$ be two smooth manifolds. Given vector fields $X \in \operatorname{Vec}(M)$ and $Y \in \operatorname{Vec}(N)$ we define the vector field $X \times Y \in \operatorname{Vec}(M \times N)$ as the derivation
$$(X \times Y) a=X a_{y}^{1}+Y a_{x}^{2},$$
where, given $a \in C^{\infty}(M \times N)$, we define $a_{y}^{1} \in C^{\infty}(M)$ and $a_{x}^{2} \in C^{\infty}(N)$ as follows:
$$a_{y}^{1}(x):=a(x, y), \quad a_{x}^{2}(y):=a(x, y), \quad x \in M, y \in N .$$
Notice that, if we denote by $p_{1}: M \times N \rightarrow M$ and $p_{2}: M \times N \rightarrow N$ the two projections, we have
$$\left(p_{1}\right)_{*}(X \times Y)=X, \quad\left(p_{2}\right)_{*}(X \times Y)=Y .$$
Exercise 2.40 Let $X_{1}, X_{2} \in \operatorname{Vec}(M)$ and $Y_{1}, Y_{2} \in \operatorname{Vec}(N)$. Prove that
$$\left[X_{1} \times Y_{1}, X_{2} \times Y_{2}\right]=\left[X_{1}, X_{2}\right] \times\left[Y_{1}, Y_{2}\right]$$
We can now prove the following result, which is important when dealing with Lie groups (see Chapter 7 and Section 17.5).
## Cotangent Space
In this section we introduce covectors, which are linear functionals on the tangent space. The space of all covectors at a point $q \in M$, called cotangent space, is in algebraic terms simply the dual space to the tangent space.
Definition 2.42 Let $M$ be an $n$-dimensional smooth manifold. The cotangent space at a point $q \in M$ is the set
$$T_{q}^{*} M:=\left(T_{q} M\right)^{*}=\left\{\lambda: T_{q} M \rightarrow \mathbb{R}, \lambda \text { linear }\right\}$$
For $\lambda \in T_{q}^{*} M$ and $v \in T_{q} M$, we will denote by $\langle\lambda, v\rangle:=\lambda(v)$ the evaluation of the covector $\lambda$ on the vector $v$.
As we have seen, the differential of a smooth map yields a linear map between tangent spaces. The dual of the differential gives a linear map between cotangent spaces.
Definition 2.43 Let $\varphi: M \rightarrow N$ be a smooth map and $q \in M$. The pullback of $\varphi$ at point $\varphi(q)$, where $q \in M$, is the map
$$\varphi^{*}: T_{\varphi(q)}^{*} N \rightarrow T_{q}^{*} M, \quad \lambda \mapsto \varphi^{*} \lambda,$$
defined by duality in the following way:
$$\left\langle\varphi^{*} \lambda, v\right\rangle:=\left\langle\lambda, \varphi_{*} v\right\rangle, \quad \forall v \in T_{q} M, \forall \lambda \in T_{\varphi(q)}^{*} N .$$

Example 2.44 Let $a: M \rightarrow \mathbb{R}$ be a smooth function and $q \in M$. The differential $d_{q} a$ of the function $a$ at the point $q \in M$, defined through the formula
$$\left\langle d_{q} a, v\right\rangle:=\left.\frac{d}{d t}\right|_{t=0} a(\gamma(t)), \quad v \in T_{q} M,$$
where $\gamma$ is any smooth curve such that $\gamma(0)=q$ and $\dot{\gamma}(0)=v$, is an element of $T_{q}^{*} M$. Indeed, the right-hand side of $(2.43)$ is linear with respect to $v$.
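For a concrete instance (added for illustration): if $M=\mathbb{R}^{2}$ and $a(x, y)=x^{2}+y$, then for $v=\left(v_{1}, v_{2}\right) \in T_{q} \mathbb{R}^{2}$ the curve $\gamma(t)=q+t v$ gives $\left\langle d_{q} a, v\right\rangle=2 x v_{1}+v_{2}$, i.e., $d_{q} a=2 x\, d x+d y$ in the basis $(d x, d y)$ dual to $\left(\partial_{x}, \partial_{y}\right)$.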
## Nonautonomous Vector Fields
Definition $2.13$ A nonautonomous vector field is a family of vector fields $\left\{X_{t}\right\}_{t \in \mathbb{R}}$ such that the map $X(t, q)=X_{t}(q)$ satisfies the following properties:
(C1) the map $t \mapsto X(t, q)$ is measurable for every fixed $q \in M$;
(C2) the map $q \mapsto X(t, q)$ is smooth for every fixed $t \in \mathbb{R}$;
(C3) for every system of coordinates defined in an open set $\Omega \subset M$ and every compact $K \subset \Omega$ and compact interval $I \subset \mathbb{R}$ there exist two functions $c(t), k(t)$ in $L^{\infty}(I)$ such that, for all $(t, x),(t, y) \in I \times K$,
$$|X(t, x)| \leq c(t), \quad|X(t, x)-X(t, y)| \leq k(t)|x-y|$$
Conditions (C1) and (C2) are equivalent to requiring that for every smooth function $a \in C^{\infty}(M)$ the scalar function $\left.(t, q) \mapsto X_{t} a\right|_{q}$ defined on $\mathbb{R} \times M$ is measurable in $t$ and smooth in $q$.
Remark $2.14$ In what follows we are mainly interested in nonautonomous vector fields of the following form:
$$X_{t}(q)=\sum_{i=1}^{m} u_{i}(t) f_{i}(q)$$
where the $u_{i}$ are $L^{\infty}$ functions and the $f_{i}$ are smooth vector fields on $M$. For this class of nonautonomous vector fields, assumptions (C1)-(C2) are trivially satisfied. Regarding $(\mathrm{C} 3)$, thanks to the smoothness of $f_{i}$, for every compact set $K \subset \Omega$ we can find two positive constants $C_{K}, L_{K}$ such that, for all $i=1, \ldots, m$, and $j=1, \ldots, n$, we have
$$\left|f_{i}(x)\right| \leq C_{K}, \quad\left|\frac{\partial f_{i}}{\partial x_{j}}(x)\right| \leq L_{K}, \quad \forall x \in K,$$
and we obtain, for all $(t, x),(t, y) \in I \times K$,
$$|X(t, x)| \leq C_{K} \sum_{i=1}^{m}\left|u_{i}(t)\right|, \quad|X(t, x)-X(t, y)| \leq L_{K} \sum_{i=1}^{m}\left|u_{i}(t)\right| \cdot|x-y| .$$
The existence and uniqueness of integral curves of a nonautonomous vector field are guaranteed by the following theorem (see [BP07]).
## Differential of a Smooth Map
A smooth map between manifolds induces a map between the corresponding tangent spaces.
Definition $2.17$ Let $\varphi: M \rightarrow N$ be a smooth map between smooth manifolds and let $q \in M$. The differential of $\varphi$ at the point $q$ is the linear map
$$\varphi_{*, q}: T_{q} M \rightarrow T_{\varphi(q)} N$$
defined as follows:
$$\varphi_{*, q}(v)=\left.\frac{d}{d t}\right|_{t=0} \varphi(\gamma(t)) \quad \text { if } \quad v=\left.\frac{d}{d t}\right|_{t=0} \gamma(t), \quad q=\gamma(0) .$$
It is easily checked that this definition depends only on the equivalence class of $\gamma$.
The differential $\varphi_{*, q}$ of a smooth map $\varphi: M \rightarrow N$ (see Figure 2.1), also called its pushforward, is sometimes denoted by the symbols $D_{q} \varphi$ or $d_{q} \varphi$.

Exercise 2.18 Let $\varphi: M \rightarrow N, \psi: N \rightarrow Q$ be smooth maps between manifolds. Prove that the differential of the composition $\psi \circ \varphi: M \rightarrow Q$ satisfies $(\psi \circ \varphi)_{*}=\psi_{*} \circ \varphi_{*}$.
As we said, a smooth map induces a transformation of tangent vectors. If we deal with diffeomorphisms, we can also obtain a pushforward for a vector field.
## Lie Brackets
In this section we introduce a fundamental notion for sub-Riemannian geometry, the Lie bracket of two vector fields $X$ and $Y$. Geometrically it is defined as an infinitesimal version of the pushforward of the second vector field along the flow of the first. As explained below, it measures how much $Y$ is modified by the flow of $X$.
Definition 2.22 Let $X, Y \in \operatorname{Vec}(M)$. We define their Lie bracket as the vector field
$$[X, Y]:=\left.\frac{\partial}{\partial t}\right|_{t=0} e_{*}^{-t X} Y .$$

Remark $2.23$ The geometric meaning of the Lie bracket can be understood by writing explicitly
$$\begin{aligned} \left.[X, Y]\right|_{q} &=\left.\frac{\partial}{\partial t}\right|_{t=0}\left.\left(e_{*}^{-t X} Y\right)\right|_{q}=\left.\frac{\partial}{\partial t}\right|_{t=0} e_{*}^{-t X}\left(\left.Y\right|_{e^{t X}(q)}\right) \\ &=\left.\frac{\partial^{2}}{\partial s\, \partial t}\right|_{t=s=0} e^{-t X} \circ e^{s Y} \circ e^{t X}(q) \end{aligned}$$
Proposition 2.24 As derivations on functions, one has the identity
$$[X, Y]=X Y-Y X$$
Proof By definition of the Lie bracket we have $[X, Y] a=\left.(\partial / \partial t)\right|_{t=0}\left(e_{*}^{-t X} Y\right) a$. Hence we need to compute the first-order term in the expansion, with respect to $t$, of the map
$$t \mapsto\left(e_{*}^{-t X} Y\right) a .$$
Using formula (2.28), we have
$$\left(e_{*}^{-t X} Y\right) a=Y\left(a \circ e^{-t X}\right) \circ e^{t X} .$$
By Remark 2.9, we have $a \circ e^{-t X}=a-t X a+O\left(t^{2}\right)$, hence
$$\begin{aligned} \left(e_{*}^{-t X} Y\right) a &=Y\left(a-t X a+O\left(t^{2}\right)\right) \circ e^{t X} \\ &=\left(Y a-t Y X a+O\left(t^{2}\right)\right) \circ e^{t X} . \end{aligned}$$
Denoting $b=Y a-t Y X a+O\left(t^{2}\right)$ and $b_{t}=b \circ e^{t X}$, and using again the above expansion, we get
$$\begin{aligned} \left(e_{*}^{-t X} Y\right) a &=\left(Y a-t Y X a+O\left(t^{2}\right)\right)+t X\left(Y a-t Y X a+O\left(t^{2}\right)\right)+O\left(t^{2}\right) \\ &=Y a+t(X Y-Y X) a+O\left(t^{2}\right), \end{aligned}$$
which proves that the first-order term with respect to $t$ in the expansion is $(X Y-Y X) a$.
Proposition $2.24$ shows that $(\operatorname{Vec}(M),[\cdot, \cdot])$ is a Lie algebra.
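A minimal worked instance of Proposition $2.24$ (added here for illustration): on $\mathbb{R}^{2}$ take $X=\partial_{x}$ and $Y=x\, \partial_{y}$. For any smooth function $a$,
$$X Y a=\partial_{x}\left(x\, \partial_{y} a\right)=\partial_{y} a+x\, \partial_{x} \partial_{y} a, \qquad Y X a=x\, \partial_{y} \partial_{x} a,$$
so $(X Y-Y X) a=\partial_{y} a$, i.e., $[X, Y]=\partial_{y}$.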
## Negative Curvature: The Hyperbolic Plane
The negative constant curvature model is the hyperbolic plane $H_{r}^{2}$ obtained as the surface of $\mathbb{R}^{3}$, endowed with the hyperbolic metric, defined as the zero level set of the function
$$a(x, y, z)=x^{2}+y^{2}-z^{2}+r^{2} .$$
Indeed, this surface is a two-fold hyperboloid, so we can restrict our attention to the set of points $H_{r}^{2}=a^{-1}(0) \cap\{z>0\}$.
In analogy with the positive constant curvature model (which is the set of points in $\mathbb{R}^{3}$ whose Euclidean norm is constant) the negative constant curvature model can be seen as the set of points whose hyperbolic norm is constant in $\mathbb{R}^{3}$. In other words,
$$H_{r}^{2}=\left\{q=(x, y, z) \in \mathbb{R}^{3} \mid\, |q|_{h}^{2}=-r^{2}\right\} \cap\{z>0\}$$
The hyperbolic Gauss map associated with this surface can be easily computed, since it is explicitly given by
$$\mathcal{N}: H_{r}^{2} \rightarrow H^{2}, \quad \mathcal{N}(q)=\frac{1}{r} \nabla_{q} a$$
Exercise 1.63 Prove that the Gaussian curvature of $H_{r}^{2}$ is $\kappa=-1 / r^{2}$ at every point $q \in H_{r}^{2}$.
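A symbolic verification of Exercise 1.63 (an added sketch; the parametrization and curvature formula below are standard assumptions, not quoted from the text): parametrize $H_{r}^{2}$ by $(\rho, \theta) \mapsto(r \sinh \rho \cos \theta, r \sinh \rho \sin \theta, r \cosh \rho)$, so that the metric induced by $d x^{2}+d y^{2}-d z^{2}$ reads $r^{2} d \rho^{2}+r^{2} \sinh ^{2} \rho\, d \theta^{2}$, and apply the Gauss curvature formula for orthogonal coordinates:

```python
import sympy as sp

rho, theta, r = sp.symbols('rho theta r', positive=True)

# Induced metric of H_r^2 in the coordinates (rho, theta): E d rho^2 + G d theta^2
E = r**2
G = r**2 * sp.sinh(rho)**2

# Gauss curvature for an orthogonal metric E du^2 + G dv^2:
# K = -1/(2 sqrt(EG)) * [ d/du( G_u / sqrt(EG) ) + d/dv( E_v / sqrt(EG) ) ]
sqrtEG = sp.sqrt(E * G)
K = -(sp.diff(sp.diff(G, rho) / sqrtEG, rho)
      + sp.diff(sp.diff(E, theta) / sqrtEG, theta)) / (2 * sqrtEG)

print(sp.simplify(K))   # expected: -1/r**2, independent of the point
```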
We can now discuss the structure of geodesics and curves with constant geodesic curvature on the hyperbolic space. We start with a result that can be proved in an analogous way to Proposition $1.60$. The proof is left to the reader.
Proposition 1.64 Let $\gamma:[0, T] \rightarrow H_{r}^{2}$ be a curve with unit speed and constant geodesic curvature equal to $c \in \mathbb{R}$. For every vector $w \in \mathbb{R}^{3}$, the function $\alpha(t)=\langle\dot{\gamma}(t) \mid w\rangle_{h}$ is a solution of the differential equation
$$\ddot{\alpha}(t)+\left(c^{2}-\frac{1}{r^{2}}\right) \alpha(t)=0 .$$
## Tangent Vectors and Vector Fields
Let $M$ be a smooth $n$-dimensional manifold and let $\gamma_{1}, \gamma_{2}: I \rightarrow M$ be two smooth curves based at $q=\gamma_{1}(0)=\gamma_{2}(0) \in M$. We say that $\gamma_{1}$ and $\gamma_{2}$ are equivalent if they have the same first-order Taylor polynomial in some (or, equivalently, in every) coordinate chart. This defines an equivalence relation on the space of smooth curves based at $q$.
Definition 2.1 Let $M$ be a smooth $n$-dimensional manifold and let $\gamma: I \rightarrow$ $M$ be a smooth curve such that $\gamma(0)=q \in M$. Its tangent vector at $q=\gamma(0)$, denoted by
$$\left.\frac{d}{d t}\right|_{t=0} \gamma(t) \quad \text { or } \quad \dot{\gamma}(0),$$
is the equivalence class in the space of all smooth curves in $M$ such that $\gamma(0)=$ $q$ (with respect to the equivalence relation defined above).
It is easy to check, using the chain rule, that this definition is well posed (i.e., it does not depend on the representative curve).
Definition $2.2$ Let $M$ be a smooth $n$-dimensional manifold. The tangent space to $M$ at a point $q \in M$ is the set
$$T_{q} M:=\left\{\left.\frac{d}{d t}\right|_{t=0} \gamma(t) \mid \gamma: I \rightarrow M \text { smooth, } \gamma(0)=q\right\} .$$
It is a standard fact that $T_{q} M$ has a natural structure of an $n$-dimensional vector space, where $n=\operatorname{dim} M$.
Definition 2.3 A smooth vector field on a smooth manifold $M$ is a smooth map
$$X: q \mapsto X(q) \in T_{q} M$$
that associates with every point $q$ in $M$ a tangent vector at $q$. We denote by $\operatorname{Vec}(M)$ the set of smooth vector fields on $M$.
In coordinates we can write $X=\sum_{i=1}^{n} X^{i}(x) \partial / \partial x_{i}$, and the vector field is smooth if its components $X^{i}(x)$ are smooth functions. The value of a vector field $X$ at a point $q$ is denoted, in what follows, by both $X(q)$ and $\left.X\right|_{q}$.
## Flow of a Vector Field
Given a complete vector field $X \in \operatorname{Vec}(M)$ we can consider the family of maps
$$\phi_{t}: M \rightarrow M, \quad \phi_{t}(q)=\gamma(t ; q), \quad t \in \mathbb{R},$$
where $\gamma(t ; q)$ is the integral curve of $X$ starting at $q$ when $t=0$. By Theorem $2.5$ it follows that the map
$$\phi: \mathbb{R} \times M \rightarrow M, \quad \phi(t, q)=\phi_{t}(q)$$
is smooth in both variables and the family $\left\{\phi_{t}, t \in \mathbb{R}\right\}$ is a one-parametric subgroup of $\operatorname{Diff}(M)$; namely, it satisfies the following identities:
$$\begin{aligned} \phi_{0} &=\mathrm{Id}, \\ \phi_{t} \circ \phi_{s} &=\phi_{s} \circ \phi_{t}=\phi_{t+s}, \quad \forall t, s \in \mathbb{R}, \\ \left(\phi_{t}\right)^{-1} &=\phi_{-t}, \quad \forall t \in \mathbb{R} . \end{aligned}$$
Moreover, by construction, we have
$$\frac{\partial \phi_{t}(q)}{\partial t}=X\left(\phi_{t}(q)\right), \quad \phi_{0}(q)=q, \quad \forall q \in M$$
The family of maps $\phi_{t}$ defined by $(2.5)$ is called the flow generated by $X$. For the flow $\phi_{t}$ of a vector field $X$ it is convenient to use the exponential notation $\phi_{t}:=e^{t X}$, for every $t \in \mathbb{R}$. Using this notation, the group properties (2.6) take the form
$$\begin{gathered} e^{0 X}=\mathrm{Id}, \quad e^{t X} \circ e^{s X}=e^{s X} \circ e^{t X}=e^{(t+s) X}, \quad\left(e^{t X}\right)^{-1}=e^{-t X}, \\ \frac{d}{d t} e^{t X}(q)=X\left(e^{t X}(q)\right), \quad \forall q \in M . \end{gathered}$$
Remark $2.8$ When $X(x)=A x$ is a linear vector field on $\mathbb{R}^{n}$, where $A$ is an $n \times n$ matrix, the corresponding flow $\phi_{t}$ is the matrix exponential $\phi_{t}(x)=e^{t A} x$.
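This remark can be checked directly (a sketch, not from the text; the matrix $A$ below is an arbitrary choice): the flow obtained by integrating $\dot{x}=Ax$ coincides with the matrix exponential $e^{tA}$ applied to the initial point.

```python
# Flow of the linear vector field X(x) = A x on R^2: phi_t(x0) = expm(t A) @ x0,
# compared with a direct numerical integration of x' = A x.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])                    # generator of rotations
x0 = np.array([1.0, 0.5])
t = 2.0

flow_expm = expm(t * A) @ x0                   # phi_t(x0) via the matrix exponential
sol = solve_ivp(lambda s, x: A @ x, (0.0, t), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(flow_expm, sol.y[:, -1], atol=1e-6))   # True
```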
## Model Spaces of Constant Curvature
In this section we briefly discuss surfaces embedded in $\mathbb{R}^{3}$ (with Euclidean or Minkowski inner product) that have constant Gaussian curvature and play the role of model spaces. For each model space we are interested in describing the geodesics and, more generally, the curves of constant geodesic curvature. These results will be useful in the study of sub-Riemannian model spaces in dimension 3 (see Chapter 7 ).
Assume that the surface $M$ has constant Gaussian curvature $\kappa \in \mathbb{R}$. We already know that $\kappa$ is a metric invariant of the surface, i.e., it does not depend on the embedding of the surface in $\mathbb{R}^{3}$. We will distinguish the following three cases:
(i) $\kappa=0$ : this is the flat model, corresponding to the Euclidean plane,
(ii) $\kappa>0$ : this corresponds to the sphere,
(iii) $\kappa<0$ : this corresponds to the hyperbolic plane.
We will briefly discuss case (i), since it is trivial, and study in more detail cases
(ii) and (iii), of spherical and hyperbolic geometry respectively.
## Zero Curvature: The Euclidean Plane
The Euclidean plane can be realized as the surface of $\mathbb{R}^{3}$ defined by the zero level set of the function
$$a: \mathbb{R}^{3} \rightarrow \mathbb{R}, \quad a(x, y, z)=z$$
It is an easy exercise, applying the results of the previous sections, to show that the Gaussian curvature of this surface is zero (the Gauss map is constant) and to characterize geodesics and curves with constant geodesic curvature.
Exercise 1.59 Prove that geodesics on the Euclidean plane are lines. Moreover, show that curves with constant geodesic curvature $c \neq 0$ are circles of radius $1 / c$.
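A small numerical illustration of the second statement (a sketch, not a proof, and not part of the text): integrating $\ddot{\gamma}=c\,J\dot{\gamma}$, where $J$ is the rotation by $\pi/2$, from a unit-speed initial condition produces points at constant distance $1/c$ from a fixed center.

```python
# Unit-speed plane curve with constant geodesic curvature c: gamma'' = c * J gamma'.
# The resulting trajectory lies on a circle of radius 1/c.
import numpy as np
from scipy.integrate import solve_ivp

c = 2.0
J = np.array([[0.0, -1.0], [1.0, 0.0]])        # rotation by +90 degrees

def rhs(t, y):
    v = y[2:]                                  # y = (position, velocity)
    return np.concatenate([v, c * (J @ v)])

y0 = np.array([0.0, 0.0, 1.0, 0.0])            # start at the origin with unit speed
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12, dense_output=True)
pts = sol.sol(np.linspace(0.0, 10.0, 200))[:2].T

center = y0[:2] + (1.0 / c) * (J @ y0[2:])     # expected center of the circle
radii = np.linalg.norm(pts - center, axis=1)
print(radii.min(), radii.max())                # both approximately 1/c = 0.5
```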
## Positive Curvature: The Sphere
Let us consider the sphere $S_{r}^{2}$ of radius $r$ as the surface of $\mathbb{R}^{3}$ defined as the zero level set of the function
$$S_{r}^{2}=a^{-1}(0), \quad a(x, y, z)=x^{2}+y^{2}+z^{2}-r^{2} .$$
If we denote, as usual, by $\langle\cdot \mid \cdot\rangle$ the Euclidean inner product in $\mathbb{R}^{3}, S_{r}^{2}$ can be viewed also as the set of points $q=(x, y, z)$ whose Euclidean norm is constant:
$$S_{r}^{2}=\left\{q \in \mathbb{R}^{3} \mid\langle q \mid q\rangle=r^{2}\right\} .$$
The Gauss map associated with this surface can be easily computed, and it is explicitly given by
$$\mathcal{N}: S_{r}^{2} \rightarrow S^{2}, \quad \mathcal{N}(q)=\frac{1}{r} q$$
It follows immediately from (1.75) that the Gaussian curvature of the sphere is $\kappa=1 / r^{2}$ at every point $q \in S_{r}^{2}$. Let us now recover the structure of geodesics and curves with constant geodesic curvature on the sphere.
Proposition $1.60$ Let $\gamma:[0, T] \rightarrow S_{r}^{2}$ be a curve with unit speed and constant geodesic curvature equal to $c \in \mathbb{R}$. Then, for every $w \in \mathbb{R}^{3}$, the function $\alpha(t)=\langle\dot{\gamma}(t) \mid w\rangle$ is a solution of the differential equation
$$\ddot{\alpha}(t)+\left(c^{2}+\frac{1}{r^{2}}\right) \alpha(t)=0 .$$
Proof Differentiating twice the equality $a(\gamma(t))=0$, where $a$ is the function defined in (1.74), we get (in matrix notation):
$$\dot{\gamma}(t)^{T}\left(\nabla_{\gamma(t)}^{2} a\right) \dot{\gamma}(t)+\ddot{\gamma}(t)^{T} \nabla_{\gamma(t)} a=0 .$$
Moreover, since $|\dot{\gamma}(t)|$ is constant and $\gamma$ has constant geodesic curvature equal to $c$, there exists a function $b(t)$ such that
$$\ddot{\gamma}(t)=b(t) \nabla_{\gamma(t)} a+c \eta(t),$$
where $c$ is the geodesic curvature of the curve and $\eta(t)=\dot{\gamma}(t)^{\perp}$ is the vector orthogonal to $\dot{\gamma}(t)$ in $T_{\gamma(t)} S_{r}^{2}$ (defined in such a way that $\dot{\gamma}(t)$ and $\eta(t)$ form a positively oriented frame). Reasoning as in the proof of Proposition $1.8$ and noticing that $\nabla_{\gamma(t)} a$ is proportional to the vector $\gamma(t)$, one can compute $b(t)$ and obtain that $\gamma$ satisfies the differential equation
$$\ddot{\gamma}(t)=-\frac{1}{r^{2}} \gamma(t)+c \eta(t) .$$
## Gauss–Bonnet Theorem: Global Version
Now we state the global version of the Gauss-Bonnet theorem. In other words we want to generalize $(1.33)$ to the case when $\Gamma$ is a region of $M$ that is not
necessarily homeomorphic to a disk; see for instance Figure 1.4. As we will find, the result depends on the Euler characteristic $\chi(\Gamma)$ of this region.
In what follows, by a triangulation of $M$ we mean a decomposition of $M$ into curvilinear polygons (see Definition $1.31$). Notice that every compact surface admits a triangulation.
Definition 1.34 Let $M \subset \mathbb{R}^{3}$ be a compact oriented surface with piecewise smooth boundary $\partial M$. Consider a triangulation of $M$. We define the Euler characteristic of $M$ as
$$\chi(M):=n_{2}-n_{1}+n_{0},$$
where $n_{i}$ is the number of $i$-dimensional faces in the triangulation.
The Euler characteristic can be defined for every region $\Gamma$ of $M$ in the same way. Here, by a region $\Gamma$ on a surface $M$ we mean a closed domain of the manifold with piecewise smooth boundary.
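As a toy illustration (not from the text): the octahedron gives a triangulation of the sphere with $n_{0}=6$, $n_{1}=12$, $n_{2}=8$, so $\chi(S^{2})=2$. The snippet below simply counts the vertices, edges and faces of an explicit face list.

```python
# Euler characteristic of the octahedron triangulation of S^2: chi = n2 - n1 + n0.
faces = [(0, 2, 4), (0, 4, 3), (0, 3, 5), (0, 5, 2),
         (1, 4, 2), (1, 3, 4), (1, 5, 3), (1, 2, 5)]          # 8 triangles

vertices = {v for f in faces for v in f}
edges = {frozenset(e) for f in faces
         for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0]))}

n0, n1, n2 = len(vertices), len(edges), len(faces)
print(n0, n1, n2, n2 - n1 + n0)                               # 6 12 8 2
```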
## Consequences of the Gauss–Bonnet Theorems
Definition $1.39$ Let $M, M^{\prime}$ be two surfaces in $\mathbb{R}^{3}$. A smooth map $\phi: \mathbb{R}^{3} \rightarrow$ $\mathbb{R}^{3}$ is called a local isometry between $M$ and $M^{\prime}$ if $\phi(M)=M^{\prime}$ and for every $q \in M$ it satisfies
$$\langle v \mid w\rangle=\left\langle D_{q} \phi(v) \mid D_{q} \phi(w)\right\rangle, \quad \forall v, w \in T_{q} M$$
If, moreover, the map $\phi$ is a bijection then $\phi$ is called a global isometry. Two surfaces $M$ and $M^{\prime}$ are said to be locally isometric (resp. globally isometric) if there exists a local isometry (resp. global isometry) between $M$ and $M^{\prime}$. Notice that the restriction $\phi$ of an isometry of $\mathbb{R}^{3}$ to a surface $M \subset \mathbb{R}^{3}$ always defines a global isometry between $M$ and $M^{\prime}=\phi(M)$.
Formula (1.52) says that a local isometry between two surfaces $M$ and $M^{\prime}$ preserves the angles between tangent vectors and, a fortiori, the lengths of curves and the distances between points.
By Corollary $1.33$, thanks to the fact that the angles and the volumes are preserved by isometries, one obtains that the Gaussian curvature is invariant under local isometries, in the following sense.
Theorem 1.40 (Gauss’ theorema egregium) Let $\phi$ be a local isometry between $M$ and $M^{\prime}$. Then for every $q \in M$ one has $\kappa(q)=\kappa^{\prime}(\phi(q))$, where $\kappa$ (resp. $\kappa^{\prime}$ ) is the Gaussian curvature of $M$ (resp. $\left.M^{\prime}\right)$.
This result says that the Gaussian curvature $\kappa$ depends only on the metric structure on $M$ and not on the specific fact that the surface is embedded in $\mathbb{R}^{3}$ with the induced inner product.
## The Gauss Map
We end this section with a geometric characterization of the Gaussian curvature of a manifold $M$, using the Gauss map. The Gauss map is a map from the surface $M$ to the unit sphere $S^{2}$ of $\mathbb{R}^{3}$.
Definition 1.44 Let $M$ be an oriented surface. We define the Gauss map associated with $M$ as
$$\mathcal{N}: M \rightarrow S^{2}, \quad q \mapsto v_{q}$$
where $v_{q} \in S^{2} \subset \mathbb{R}^{3}$ denotes the external unit normal vector to $M$ at $q$.
Let us consider the differential of the Gauss map at the point $q$,
$$D_{q} \mathcal{N}: T_{q} M \rightarrow T_{\mathcal{N}(q)} S^{2}$$
Notice that a tangent vector to the sphere $S^{2}$ at $\mathcal{N}(q)$ is by construction orthogonal to $\mathcal{N}(q)$. Hence it is possible to identify $T_{\mathcal{N}(q)} S^{2}$ with $T_{q} M$ and to think of the differential of the Gauss map $D_{q} \mathcal{N}$ as an endomorphism of $T_{q} M$.
Theorem 1.45 Let $M$ be a surface of $\mathbb{R}^{3}$ with Gauss map $\mathcal{N}$ and Gaussian curvature $\kappa$. Then
$$\kappa(q)=\operatorname{det}\left(D_{q} \mathcal{N}\right),$$
where $D_{q} \mathcal{N}$ is interpreted as an endomorphism of $T_{q} M$.
We start by proving an important property of the Gauss map.
Lemma $1.46$ For every $q \in M$, the differential $D_{q} \mathcal{N}$ of the Gauss map is a symmetric operator, i.e., it satisfies
$$\left\langle D_{q} \mathcal{N}(\xi) \mid \eta\right\rangle=\left\langle\xi \mid D_{q} \mathcal{N}(\eta)\right\rangle, \quad \forall \xi, \eta \in T_{q} M .$$
Proof The statement is local, hence it is not restrictive to assume that $M$ is parametrized by a function $\phi: \mathbb{R}^{2} \rightarrow M$. In this case $T_{q} M=\operatorname{Im} D_{u} \phi$, where $\phi(u)=q$. Let $v, w \in \mathbb{R}^{2}$ such that $\xi=D_{u} \phi(v)$ and $\eta=D_{u} \phi(w)$. Since $\mathcal{N}(q) \in T_{q} M^{\perp}$ we have
$$\langle\mathcal{N}(q) \mid \eta\rangle=\left\langle\mathcal{N}(q) \mid D_{u} \phi(w)\right\rangle=0$$
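A symbolic sanity check of Theorem 1.45 on the round sphere (a sketch, not part of the text): since $\mathcal{N}(q)=q/r$ on $S_{r}^{2}$, one expects $\det(D_{q}\mathcal{N})=1/r^{2}$, and this agrees with the Gaussian curvature computed from the first and second fundamental forms of the standard parametrization.

```python
# Gaussian curvature of the sphere of radius r via K = (eg - f^2)/(EG - F^2).
import sympy as sp

r, u, v = sp.symbols('r u v', positive=True)
phi = sp.Matrix([r * sp.cos(u) * sp.cos(v),
                 r * sp.sin(u) * sp.cos(v),
                 r * sp.sin(v)])                       # standard parametrization of S_r^2

phi_u, phi_v = phi.diff(u), phi.diff(v)
N = phi / r                                            # unit normal = Gauss map on the sphere

E, F, G = phi_u.dot(phi_u), phi_u.dot(phi_v), phi_v.dot(phi_v)   # first fundamental form
e = phi.diff(u, 2).dot(N)                                        # second fundamental form
f = phi.diff(u).diff(v).dot(N)
g = phi.diff(v, 2).dot(N)

K = sp.simplify((e * g - f**2) / (E * G - F**2))
print(K)                                               # 1/r**2
```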
# What are environment hooks in catkin? Or maybe, how do catkin workspaces work? [closed]
I've been trying to understand what the setup.sh file does and how it changes my shell environment such that catkin builds correctly.
To explain my motivation for understanding this: there are times when I get into a dirty build state (genjava and rosjava freaking out about something), and I have to blow away the build and devel directories and maven repos and build from scratch. However, if I naively call catkin_make, rosjava and genjava start breaking all over the place because I no longer have a setup.bash file to source other than ros/indigo/setup.bash. So I end up in this weird place where my first build command has to be ROS_PACKAGE_PATH=/path/to/src:$ROS_PACKAGE_PATH catkin_make. Then I can source the proper setup.bash file, call catkin_make, and everything builds fine again.
I'd like to understand more about the env hooks because my hope is I can just prefix catkin_make with all the env variables to get an arbitrary set of troubled packages (i.e. rosjava and genjava) happy and have a single command to solve my problems.
# 9.3.3E: Parabolas and Non-Linear Systems (Exercises)
Section 9.3 Exercises
In problems 1–4, match each graph with one of the equations A–D.
A. $$y^2 = 4x$$
B. $$x^2 = 4y$$
C. $$x^2 = 8y$$
D. $$y^2 + 4x = 0$$
1. – 4. (The graphs for problems 1–4 are not reproduced here.)
In problems 5–14, find the vertex, axis of symmetry, directrix, and focus of the parabola.
5. $$y^2 = 16x$$
6. $$x^2 = 12y$$
7. $$y = 2x^2$$
8. $$x = - \dfrac{y^2}{8}$$
9. $$x + 4y^2 = 0$$
10. $$8y + x^2 = 0$$
11. $$(x - 2)^2 = 8(y + 1)$$
12. $$(y + 3)^2 = 4(x - 2)$$
13. $$y = \dfrac{1}{4}(x + 1)^2 + 4$$
14. $$x = - \dfrac{1}{12}(y + 1)^2 + 1$$
In problems 15–16, write an equation for the graph.
15. – 16. (The graphs for problems 15–16 are not reproduced here.)
In problems 17-20, find the standard form of the equation for a parabola satisfying the given conditions.
17. Vertex at (2, 3), opening to the right, focal length 3
18. Vertex at (-1, 2), opening down, focal length 1
19. Vertex at (0, 3), focus at (0, 4)
20. Vertex at (1, 3), focus at (0, 3)
21. The mirror in an automobile headlight has a parabolic cross-section with the light bulb at the focus. On a schematic, the equation of the parabola is given as $$x^2 = 4y.$$ At what coordinates should you place the light bulb?
22. If we want to construct the mirror from the previous exercise so that the focus is located at (0, 0.25), what should the equation of the parabola be?
23. A satellite dish is shaped like a paraboloid of revolution. This means that it can be formed by rotating a parabola around its axis of symmetry. The receiver is to be located at the focus. If the dish is 12 feet across at its opening and 4 feet deep at its center, where should the receiver be placed?
24. Consider the satellite dish from the previous exercise. If the dish is 8 feet across at the opening and 2 feet deep, where should we place the receiver?
25. A searchlight is shaped like a paraboloid of revolution. A light source is located 1 foot from the base along the axis of symmetry. If the opening of the searchlight is 2 feet across, find the depth.
26. If the searchlight from the previous exercise has the light source located 6 inches from the base along the axis of symmetry and the opening is 4 feet wide, find the depth.
In problems 27–34, solve each system of equations for the intersections of the two curves.
27. $$\begin{array}{l} {y = 2x} \\ {y^2 - x^2} = 1 \end{array}$$
28. $$\begin{array}{l} {y = x + 1} \\ {2x^2 + y^2} = 1 \end{array}$$
29. $$\begin{array}{l} {x^2 + y^2} = 11 \\ {x^2 - 4y^2} = 1 \end{array}$$
30. $$\begin{array}{l} {2x^2 + y^2} = 4 \\ {y^2 - x^2} = 1 \end{array}$$
31. $$\begin{array}{l} {y = x^2} \\ {y^2 - 6x^2} = 16 \end{array}$$
32. $$\begin{array}{l} {x = y^2} \\ {\dfrac{x^2}{4} + \dfrac{y^2}{9} = 1} \end{array}$$
33. $$\begin{array}{l} {x^2 - y^2} = 1 \\ {4y^2 - x^2 = 1} \end{array}$$
34. $$\begin{array}{l} {x^2 = 4(y - 2)} \\ {x^2 = 8(y + 1)} \end{array}$$
35. A LORAN system has transmitter stations A, B, C, and D at (-125, 0), (125, 0), (0, 250), and (0, -250), respectively. A ship in quadrant two computes the difference of its distances from A and B as 100 miles and the difference of its distances from C and D as 180 miles. Find the x- and y-coordinates of the ship’s location. Round to two decimal places.
36. A LORAN system has transmitter stations A, B, C, and D at (-100, 0), (100, 0), (-100, -300), and (100, -300), respectively. A ship in quadrant one computes the difference of its distances from A and B as 80 miles and the difference of its distances from C and D as 120 miles. Find the $$x$$- and $$y$$-coordinates of the ship’s location. Round to two decimal places.
1. C
3. A
5. Vertex: (0, 0). Axis of symmetry: $$y = 0$$. Directrix: $$x = -4$$. Focus: (4, 0)
7. Vertex: (0, 0). Axis of symmetry: $$x = 0$$. Directrix: $$y = -1/8$$. Focus: (0, 1/8)
9. Vertex: (0, 0). Axis of symmetry: $$y = 0$$. Directrix: $$x = 1/16$$. Focus: (-1/16, 0)
11. Vertex: (2, -1). Axis of symmetry: $$x = 2$$. Directrix: $$y = -3$$. Focus: (2, 1)
13. Vertex: (-1, 4). Axis of symmetry: $$x = -1$$. Directrix: $$y = 3$$. Focus: (-1, 5)
15. $$(y - 1)^2 = -(x - 3)$$
17. $$(y - 3)^2 = 12(x - 2)$$
19. $$x^2 = 4(y - 3)$$
21. At the focus, (0,1)
23. 2.25 feet above the vertex.
25. 0.25 ft
27. $$(\dfrac{1}{\sqrt{3}}, \dfrac{2}{\sqrt{3}})$$, $$(\dfrac{-1}{\sqrt{3}}, \dfrac{-2}{\sqrt{3}})$$
29. $$(3, \sqrt{2})$$, $$(3, -\sqrt{2})$$, $$(-3, \sqrt{2})$$, $$(-3, -\sqrt{2})$$
31. $$(2\sqrt{2}, 8)$$, $$(-2\sqrt{2}, 8)$$
33. $$(\sqrt{\dfrac{5}{3}}, \sqrt{\dfrac{2}{3}})$$, $$(-\sqrt{\dfrac{5}{3}}, \sqrt{\dfrac{2}{3}})$$, $$(\sqrt{\dfrac{5}{3}}, -\sqrt{\dfrac{2}{3}})$$, $$(-\sqrt{\dfrac{5}{3}}, -\sqrt{\dfrac{2}{3}})$$
35. (-64.50476622, 93.37848007) $$\approx$$ (-64.50, 93.38) |
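As a quick check of two of the numerical answers above (a sketch, not part of the exercise set; the hyperbola parameters for problem 35 follow from the stated focal distances), one can solve the systems symbolically and numerically with SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Problem 33: x^2 - y^2 = 1 and 4y^2 - x^2 = 1.  Adding gives 3y^2 = 2, so
# y = +-sqrt(2/3) and x = +-sqrt(5/3).
print(sp.solve([x**2 - y**2 - 1, 4*y**2 - x**2 - 1], [x, y]))

# Problem 35: |d_A - d_B| = 100 with foci (+-125, 0)  ->  x^2/50^2 - y^2/(125^2 - 50^2) = 1
#             |d_C - d_D| = 180 with foci (0, +-250)  ->  y^2/90^2 - x^2/(250^2 - 90^2) = 1
h1 = x**2 / 50**2 - y**2 / (125**2 - 50**2) - 1
h2 = y**2 / 90**2 - x**2 / (250**2 - 90**2) - 1
print(sp.nsolve((h1, h2), (x, y), (-60, 90)))   # about (-64.50, 93.38), quadrant two
```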
# Evolution of the temporal and the spectral properties in 2010 and 2011 outbursts of H 1743-322
Dipak Debnath (Indian Centre For Space Physics, 43 Chalantika, Garia Station Road, Kolkata, 700084, India), Sandip K. Chakrabarti (S. N. Bose National Center for Basic Sciences, JD-Block, Salt Lake, Kolkata, 700098, India; also affiliated to Indian Centre For Space Physics), Anuj Nandi (Space Astronomy Group, SSIF/ISITE Campus, ISRO Satellite Centre, Outer Ring Road, Marathahalli, Bangalore, 560037, India)
###### Abstract
The Galactic black hole candidate H 1743-322 exhibited two X-ray outbursts in rapid succession: one in August 2010 and the other in April 2011. We analyze archival data of this object from the PCA instrument on board RXTE (2-25 keV energy band) to study the evolution of its temporal and spectral characteristics during both the outbursts, and hence to understand the behavioral change of the accretion flow dynamics associated with the evolution of the various X-ray features. We study the evolution of QPO frequencies during the rising and the declining phases of both the outbursts. We successfully fit the variation of QPO frequency using the Propagating Oscillatory Shock (POS) model in each of the outbursts and obtain the accretion flow parameters such as the instantaneous shock locations, the shock velocity and the shock strength. Based on the degree of importance of the thermal (disk black body) and the non-thermal (power-law) components of the spectral fit and properties of the QPO (if present), the entire profiles of the 2010 and 2011 outbursts are subdivided into four different spectral states: hard, hard-intermediate, soft-intermediate and soft. We attempt to explain the nature of the outburst profile (i.e., hardness-intensity diagram) with two different types of mass accretion flow.
###### keywords:
X-Rays:binaries, Black Holes, shock waves, accretion disks, Stars:individual (H 1743-322)
## 1 Introduction
Galactic transient black hole candidates (BHCs) are among the most fascinating objects to study in the X-ray domain, since these sources exhibit evolutions in their timing and spectral properties during their outbursts. Several attempts (McClintock & Remillard, 2006; Belloni et al., 2005; Remillard & McClintock, 2006; Debnath et al., 2008; Nandi et al., 2012) have been made to study thoroughly the temporal and spectral evolution of transient black hole (BH) binaries during their outbursts. Various spectral states were identified during different phases of the outburst. In general, four basic spectral states (hard, hard-intermediate, soft-intermediate, and soft) are observed during the outburst of a transient BHC (McClintock & Remillard, 2006; Belloni et al., 2005; Nandi et al., 2012). One can find detailed discussions about these spectral states and their transitions in the literature (Homan & Belloni, 2005a; Belloni, 2010c; Dunn et al., 2010; Nandi et al., 2012). It was also reported by several authors (Fender et al., 2004; Homan & Belloni, 2005a; Belloni, 2010c; Nandi et al., 2012) that the observed spectral states form a hysteresis loop during their outbursts. Also, these different spectral states of the hysteresis loop are found to be associated with different branches of a q-like plot of X-ray color vs. intensity, i.e., the hardness-intensity diagram (HID) (Maccarone & Coppi, 2003; Homan & Belloni, 2005a).
The transient low-mass Galactic X-ray binary H 1743-322 was first discovered (Kaluzienski & Holt, 1977) with the Ariel-V All-Sky Monitor and subsequently observed in X-rays with the HEAO-1 satellite (Doxsey et al., 1977) during the period of Aug-Sep 1977. During the 1977-78 outburst, the source was observed several times in the hard X-ray band with the HEAO-1 satellite (Cooke et al., 1984). These observations revealed that the soft X-ray transient also emits X-rays at higher energies (Cooke et al., 1984). White & Marshall (1984) categorized the source as a potential black hole candidate (BHC) based on the 'color-color' diagram, using the spectral data of the HEAO-1 satellite.
After almost two decades, in 2003, the INTEGRAL satellite discovered signatures of renewed activity in hard X-rays (Revnivtsev et al., 2003) and later, RXTE also verified the presence of such an activity (Markwardt & Swank, 2003). During the 2003 outburst, the source was continuously and extensively monitored in X-rays (Parmar et al., 2003; Homan et al., 2005b; Remillard et al., 2006; McClintock et al., 2009), IR (Steeghs et al., 2003), and in Radio bands (Rupen et al., 2003) to reveal the multi-wavelength properties of the source. The multi-wavelength campaign on this source during its 2003 and 2009 outbursts were also carried out by McClintock et al. (2009); Miller-Jones et al. (2012) respectively.
The low-frequency as well as high frequency quasi-periodic oscillations (QPOs) along with a strong spectral variability are observed in the 2003 and other outbursts of the source in RXTE PCA data (Capitanio et al., 2005; Homan et al., 2005b; Remillard et al., 2006; Kalemci et al., 2006; Prat et al., 2009; McClintock et al., 2009; Stiele et al., 2013). These have resemblance with several other typical Galactic black hole candidates (e.g., GRO J1655-40, XTE J1550-564, GX 339-4 etc.). Another important discovery of large-scale relativistic X-ray and radio jets associated with the 2003 outburst (Rupen et al., 2004; Corbel et al., 2005) put the source in the category of ‘micro-quasar’. This was also reconfirmed by McClintock et al. (2009), from their comparative study on the timing and the spectral properties of this source with XTE J1550-564.
Recently in 2010 and 2011, the transient black hole candidate H 1743-322 again exhibited outbursts (Yamaoka et al., 2010; Kuulkers et al., 2011) with similar characteristics of state transitions (Shaposhnikov & Tomsick, 2010a; Shaposhnikov, 2010b; Belloni et al., 2010a, b, 2011) as observed in other outburst sources (Homan & Belloni, 2005a; Nandi et al., 2012). Recently, Altamirano & Strohmayer (2012) reported a new class of accretion state dependent mHz QPO frequency during the early initial phase of both the outbursts under study.
RXTE has observed both these outbursts on a daily basis, which continued for a time period of around two months. We made a detailed study on the temporal and the spectral properties of H 1743-322 during these two outbursts using archival data of PCA instrument on board RXTE satellite. Altogether observations starting from 2010 August 9 (MJD = 55417) to 2010 September 30 (MJD = 55469) of the 2010 outburst are analyzed in this paper. After remaining in the quiescence state for around seven months, H 1743-322 again became active in X-rays on 2011 April 6 (MJD = 55657), as reported by Kuulkers et al. (2011). RXTE started monitoring the source six days later (on 2011 April 12, MJD = 55663). Here, we also analyze RXTE PCA archival data of observations spread over the entire outburst, starting from 2011 April 12 to 2011 May 19 (MJD = 55700). The preliminary results of this work were already presented in COSPAR 2012 (Debnath et al., 2012).
Apart from the 2010 and 2011 outbursts, there are six outbursts of H 1743-322 observed by RXTE in recent past. Detailed results of these outbursts have already been reported in the literature (Capitanio et al., 2009; McClintock et al., 2009; Dunn et al., 2010; Chen et al., 2010; Coriat et al., 2011; Miller-Jones et al., 2012) and the evolution of all outbursts typically follow the ‘q-diagram’ in the hardness-intensity plane (see for example, Maccarone & Coppi, 2003; Homan & Belloni, 2005a), except the 2008 outburst which does not follow the ‘standard’ outburst profile and is termed as the ‘failed-outburst’ (Capitanio et al., 2009).
Although the mass of the black hole has not yet been measured dynamically, there are several attempts to measure the mass of the black hole based on the timing and spectral properties of H 1743-322. From the model of high frequency QPOs based on the mass-angular momentum (i.e., spin of the black hole) relation, Pétri (2008) predicted that the mass can fall in the range between to .
The evolution of QPO frequency during the outburst phases of the transient BHCs has been well reported for a long time (Belloni & Hasinger, 1990; Belloni et al., 2005; Debnath et al., 2008; Nandi et al., 2012). Same type of QPO evolutions were observed during both the rising and the declining phases of these two outbursts as of other black hole candidates, such as, 2005 outburst of GRO J1655-40 (Chakrabarti et al., 2005, 2008), 1998 outburst of XTE J1550-564 (Chakrabarti et al., 2009) and 2010-11 outburst of GX 339-4 (Debnath et al., 2010; Nandi et al., 2012). The successful interpretation of these QPO evolutions with the Propagating Oscillatory Shock (POS) model (Chakrabarti et al., 2005, 2008) motivated us to fit the QPO evolutions of the recent outbursts of H 1743-322 with the same model. From the model fit, accretion flow parameters are calculated (see, Table 1 below).
This Paper is organized in the following way: In the next Section, we discuss about the observation and data analysis procedures using HEASARC’s HEASoft software package. In §3, we present temporal and spectral results of our observation. In §3.1, the evolution of light curves (2-25 keV count rates) and hardness ratios of the 2010 and 2011 outbursts of H 1743-322 are discussed. In §3.2, we compare the evolution of the hardness-intensity diagrams of these two outbursts. In §3.3, we show the time evolving (decreasing or increasing) nature of QPO frequency observed during rising and declining phases in both the outbursts (2010 and 2011) of H 1743-322 and apply the propagating oscillatory shock (POS) model to explain the variations of the centroid QPO frequency over time. In §3.4, we present the spectral analysis results and classify the entire duration of the outbursts into four spectral states: hard, hard-intermediate, soft-intermediate and soft. Finally, in §4, we present the brief discussion and concluding remarks.
## 2 Observation and Data Analysis
The campaigns carried out with RXTE cover the entire 2010 and 2011 outbursts of H 1743-322 starting from 2010 August 9 (MJD = 55417) to 2010 September 30 (MJD = 55469) and from 2011 April 12 (MJD = 55663) to 2011 May 19 (MJD = 55700). We analyzed archival data of the RXTE PCA instrument and follow the standard data analysis techniques as done by Nandi et al. (2012). The HEAsoft 6.11 version of the software package was used to analyze the PCA data. We extract data from the most stable and well calibrated proportional counter unit 2 (PCU2; all the three layers are co-added).
For the timing analysis, we use the PCA Event mode data, which have a high timing resolution. To generate the power-density spectra (PDS), we use the "powspec" task of the XRONOS package with a normalization factor of '-2', which gives the expected 'white'-noise-subtracted rms fractional variability, applied to 2-15 keV (0-35 channels of PCU2) light curves with short time bins. The power obtained has the unit of rms/Hz. Observed QPOs are generally of Lorentzian type (Nowak, 2000; van der Klis, 2005). So, to find the centroid frequency of a QPO, the PDS are fitted with Lorentzian profiles, and error limits on the fitted parameters are obtained using the "fit err" command. For the selection of QPOs in the PDS, we use the standard method (see Nowak, 2000; van der Klis, 2005) based on the coherence parameter $Q$ (the ratio of the centroid QPO frequency to its full width at half maximum) and the percentage rms amplitude, as discussed in Debnath et al. (2008). The observed $Q$ values and rms amplitudes for these two outbursts vary from observation to observation. In the entire PCA data analysis, we include dead-time corrections and also PCA breakdown corrections (arising due to the leakage of the propane layers of the PCUs).
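For illustration only (this is not the authors' pipeline, and the light-curve file name and bin size below are placeholders), the same steps — computing an rms-normalized power density spectrum from an evenly binned light curve and fitting a Lorentzian to locate the QPO centroid — can be sketched in Python as follows:

```python
# Sketch: rms-normalized PDS from an evenly binned light curve + Lorentzian QPO fit.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(nu, norm, nu0, fwhm, const):
    # Lorentzian QPO profile sitting on a flat (white-noise) continuum
    return norm * (fwhm / (2 * np.pi)) / ((nu - nu0)**2 + (fwhm / 2)**2) + const

dt = 0.01                                    # assumed light-curve bin size (s)
rate = np.loadtxt("lightcurve.txt")          # hypothetical evenly sampled count rates

n = rate.size
power = np.abs(np.fft.rfft(rate - rate.mean()))**2
power *= 2.0 * dt / (n * rate.mean()**2)     # fractional-rms normalization, (rms/mean)^2/Hz
freq = np.fft.rfftfreq(n, d=dt)

band = (freq > 0.05) & (freq < 15.0)         # frequency band searched for the QPO
guess = [power[band].max(), freq[band][np.argmax(power[band])], 0.2, np.median(power[band])]
popt, _ = curve_fit(lorentzian, freq[band], power[band], p0=guess)

nu0, fwhm = popt[1], popt[2]
print(f"QPO centroid = {nu0:.3f} Hz, Q = nu0/FWHM = {nu0 / fwhm:.1f}")
```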
For the spectral analysis, the standard data reduction procedure for extracting RXTE PCA (PCU2) spectral data is used. HEASARC's software package XSPEC (version 12.5) is used for analyzing and modeling the spectral data. A fixed 1% systematic error and a fixed hydrogen column density (taken from Capitanio et al., 2009) for the absorption model wabs are used to fit the spectra. Background-subtracted PCA spectra are fitted with a combination of the standard thermal (diskbb) and non-thermal (power-law) models, or with only the power-law component where the thermal photon contribution is much smaller (mainly in spectra from the hard and hard-intermediate spectral states). To achieve the best fit, a single Gaussian iron line is also used. The fluxes of the different model components of the spectra are calculated using the cflux method.
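As a rough illustration of this kind of fit (a sketch only, not the authors' reduction: the spectrum file name, the ignored energy ranges, the frozen column density and the iron-line energy are all placeholder values), the model combination described above can be set up in PyXspec roughly as follows:

```python
# Sketch of a wabs*(diskbb + powerlaw + gaussian) fit in PyXspec (placeholder values).
from xspec import Spectrum, Model, Fit, AllModels

spec = Spectrum("pcu2_spectrum.pha")       # hypothetical background-subtracted PCU2 spectrum
spec.ignore("**-3.0 25.0-**")              # keep roughly the PCA band used in the paper

model = Model("wabs*(diskbb + powerlaw + gaussian)")
model.wabs.nH = 1.6                        # placeholder N_H in 1e22 cm^-2, then frozen
model.wabs.nH.frozen = True
model.gaussian.LineE = 6.4                 # assumed iron-line energy (keV)

Fit.nIterations = 100
Fit.perform()

print("Tin (keV)    :", model.diskbb.Tin.values[0])
print("Photon index :", model.powerlaw.PhoIndex.values[0])
print("Fit statistic:", Fit.statistic)

AllModels.calcFlux("3.0 25.0")             # model flux in the fitted band
print("3-25 keV flux:", spec.flux[0])
```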
## 3 Results
The accretion flow properties during the outburst phases of transient BHCs can be understood better by studying the X-ray properties of these sources in both the temporal and the spectral domains. It is pointed out by Debnath et al. (2010) that, depending upon the outburst light curve profiles, there are mainly two types of outbursting BHCs: one is the 'fast-rise slow-decay' (FRSD) type and the other is the 'slow-rise slow-decay' (SRSD) type. The source H 1743-322 belongs to the first category, although the general nature of transient X-ray binaries is more complex (see, for example, Chen et al., 1997).
### 3.1 Light curve evolution
For studying the X-ray intensity variations of the 2010 and 2011 outbursts of H 1743-322, we extract light curves from the PCU2 data of the RXTE/PCA instrument in different energy bands: the full band, a softer band, and a harder band. We divided the full energy band into the softer and harder bands because the softer photons mainly come from the thermally cool Keplerian disk, whereas the photons in the higher energy band come from the Comptonized sub-Keplerian disk (Compton corona). This fact may not always be true, because the contributions of the different spectral components also depend on the accretion states. Variations of the PCA count rates in the full 2-25 keV energy band and the hardness ratios between the harder and softer band count rates for the 2010 and 2011 outbursts of H 1743-322 are shown in Fig. 1(a-b).
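A minimal sketch (not the authors' code; the input file and band definitions are placeholders) of how such band-limited light curves translate into the hardness ratio of Fig. 1 and the hardness-intensity diagram discussed in the next subsection:

```python
# Light curve, hardness ratio and HID from soft- and hard-band PCU2 count rates.
import numpy as np
import matplotlib.pyplot as plt

# hypothetical three-column table: MJD, soft-band rate, hard-band rate
mjd, soft, hard = np.loadtxt("pcu2_band_rates.txt", unpack=True)

total = soft + hard                    # proxy for the 2-25 keV intensity
hardness = hard / soft                 # X-ray color

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.plot(mjd, total, "k.-")
ax1.set_xlabel("MJD"); ax1.set_ylabel("PCU2 rate (cts/s)")

ax2.plot(hardness, total, "b.-")       # traces the q-shaped track in time
ax2.set_xscale("log")
ax2.set_xlabel("hardness (hard/soft)"); ax2.set_ylabel("PCU2 rate (cts/s)")
plt.tight_layout()
plt.show()
```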
### 3.2 Hardness-Intensity-Diagram (HID)
In Fig. 2, we plot a combined keV PCA count rates of the 2010 and 2011 outbursts against X-ray color (PCA count ratio between keV and keV energy bands), which are well known as HID (Fender et al., 2004; Homan & Belloni, 2005a; Debnath et al., 2008; Mandal & Chakrabarti, 2010; Nandi et al., 2012). The marked points , , , , , , , and are on MJD = , MJD = , MJD = , MJD = , MJD = , MJD = , MJD = , and MJD = respectively for the 2010 outburst. Here points and respectively are the indicators of the start and the end of RXTE observations for the outburst and the points , , , , , are the points on the days where the state transitions from hard hard intermediate, hard-intermediate soft-intermediate, soft-intermediate soft, soft soft-intermediate, soft-intermediate hard-intermediate, and hard-intermediate hard, respectively occurred. Similarly, the points , , , , , , , and indicate MJD = , MJD = , MJD = , MJD = , MJD = , MJD = , MJD = , and MJD = respectively for the 2011 outburst.
In the Figure, both the plots show similar nature and state transitions during the outburst, except that in the 2011 outburst, the PCA count rate is observed to be lower in rising phase and higher in declining phase of the outburst. During both the outbursts, the RXTE missed the initial rising days (supposed to be in the hard state) and the observational data was not available.
Recently, Altamirano & Strohmayer (2012) also studied HIDs of both the outbursts. However, their analysis does not include spectral modeling of HIDs. From the detailed temporal and spectral study of these outbursts of H 1743-322, we have been able to connect different branches of the HIDs with different spectral states (see, Figs. 2, 6, 7). In the subsequent subsections, the variations of the spectral properties during the outbursts along with the POS model fitted evolutions of QPO frequency during the rising and the declining phases of the outbursts are discussed.
### 3.3 Evolution of QPO frequency and its modeling by POS solution
Studying temporal variability and finding QPOs in the power density spectra (PDS) is an important aspect of the study of any black hole candidate (BHC). It is observed (mainly in the hard and hard-intermediate spectral states) that the frequency of the QPOs evolves with time. LFQPOs are reported extensively in the literature, although there is some uncertainty about the origin of these QPOs. So far, many models have been introduced to explain the origin of this important temporal feature of BHCs, such as trapped oscillations and diskoseismology (Kato & Manmoto, 2000), oscillations of warped disks (Shirakawa & Lai, 2002), accretion-ejection instability at the inner radius of the Keplerian disk (Rodriguez et al., 2002), global disk oscillations (Titarchuk & Osherovich, 2000), perturbations inside a Keplerian disk (Trudolyubov et al., 1999), propagating mass accretion rate fluctuations in the hotter inner disk flow (Ingram & Done, 2011), and oscillations of a transition layer in between the disk and the hot Comptonized flow (Stiele et al., 2013). However, none of these models attempt to explain long-duration continuous observations and the evolution of QPOs during the outburst phases of transient BHCs. One satisfactory model, namely the shock oscillation model (SOM) by Chakrabarti and his collaborators (Molteni et al., 1996), shows that the oscillation of the X-ray intensity could be due to the oscillation of the post-shock (Comptonizing) region. According to the SOM, the shock wave oscillates either because of resonance (where the cooling time scale of the flow is comparable to the infall time scale; Molteni et al., 1996) or because the Rankine-Hugoniot condition is not satisfied (Ryu et al., 1997) to form a steady shock. The QPO frequency is inversely proportional to the infall time ($t_{\rm infall}$) in the post-shock region. The Propagating Oscillatory Shock (POS) model, which can successfully explain the evolution of the QPO frequency, is nothing but a special case (time-varying form) of the SOM.
As explained in our earlier papers on the POS model (Chakrabarti et al., 2005, 2008, 2009; Debnath et al., 2010; Nandi et al., 2012), during the rising phase the shock moves towards the black hole and during the declining phase it moves away from the black hole. This movement of the shock wave depends on the non-satisfaction of the Rankine-Hugoniot condition, which is due to the temperature and energy differences between the pre- and post-shock regions. Moreover, sometimes in the soft-intermediate state QPOs are observed sporadically (e.g., during the 2010-11 outburst of GX 339-4; see Nandi et al., 2012); they vanish in the soft spectral state and reappear in the declining intermediate/hard states. This disappearance and appearance of the QPOs depends on the compression ratio ($R$) set by the velocity/density difference between the pre- and post-shock regions, or could be due to the ejection of jets (see Radhika & Nandi, 2013; Nandi et al., 2013). When $R \rightarrow 1$, i.e., when the densities of the pre- and post-shock regions become more or less the same, the shock wave vanishes, and so does the QPO.
We now present the results of the evolution of QPO frequency observed in both rising and declining phases of both the outbursts. So far in the literature, there is no consensus on the origin of QPOs despite its long term discovery (Belloni & Hasinger, 1990; Belloni et al., 2005), other than our group (Chakrabarti et al., 2005, 2008, 2009; Debnath et al., 2010; Nandi et al., 2012). In this work, we have tried to connect the nature of the observed QPOs and their evolutions during the rising and the declining phases of the current outbursts with the same POS model and find their implications on accretion disk dynamics. From the fits, physical flow parameters, such as instantaneous location, velocity, and strengths of the propagating shock wave are extracted. Detailed modeling and comparative study between QPO evolutions observed in the rising and the declining phases of the outbursts of transient BHCs will be presented in our follow-up works, where we will compare the POS model fit parameters with the spectral/temporal properties (such as count rates, hardness ratios, spectral fluxes, photon indices etc.) of the BHCs. This study can predict the mass of the BHCs, whose masses are not measured dynamically till now (for e.g., H 1743-322). Similarly, our study can predict the properties of QPOs in subsequent days, once the data for the first few days is available.
The monotonically increasing nature of QPO frequency (from Hz to Hz for the 2010 outburst and from Hz to Hz for the 2011 outburst) during the rising phases and the monotonically decreasing nature of QPO frequency (from Hz to Hz for the 2010 outburst and from Hz to Hz for the 2011 outburst) during the declining phases of the recent successive two outbursts of H 1743-322 are very similar to what is observed in the 2005 outburst of GRO J1655-40 (Chakrabarti et al., 2005, 2008), 1998 outburst of XTE J1550-564 (Chakrabarti et al., 2009), and 2010 outburst of GX 339-4 (Debnath et al., 2010; Nandi et al., 2012). This motivated us to study and compare these evolutions with the same POS model solution. We found that during the rising and the declining phases of these two outbursts of H 1743-322, QPO evolutions also fit well with the POS model. The POS model fitted parameters (for e.g., shock location, strength, velocity etc.) are consistent with the QPO evolutions of GRO J1655-40, XTE J1550-564, and GX 339-4. The POS model fitted accretion flow parameters of the 2010 and 2011 outbursts of H 1743-322 are given in Table 1. Only noticeable difference observed during the present QPO frequency evolutions of H 1743-322 with that of the 2005 outburst of GRO J1655-40 and 2010-11 outburst of GX 339-4 is that during both the rising phases of GRO J1655-40 and GX 339-4 outbursts, the shock was found to move in with a constant speed of , and respectively, whereas during the same phases of the current two outbursts of H 1743-322, the shock was found to move in with an acceleration. On the other hand, during the declining phase for all these outbursts of GRO J1655-40, GX 339-4, and H 1743-322, the shock was found to be moved away with constant acceleration. It is also noticed that during both the rising and the declining phases of the 2010 outburst, the shock moved away with an acceleration twice as compared to that of 2011 outburst. It seems to be an interesting result, which may occur due to the lack of supply of matter (mostly Keplerian) into the disk from the companion that could have created a sudden ‘void’ in the disk for the shock to move away rapidly outward.
According to the POS solution (Chakrabarti et al., 2008, 2009; Debnath et al., 2010; Nandi et al., 2012), one can obtain the QPO frequency if one knows the instantaneous shock location (or vice versa) and the compression ratio ($R = \rho_{+}/\rho_{-}$, where $\rho_{+}$ and $\rho_{-}$ are the densities in the post- and the pre-shock flows) at the shock. According to the POS model, in the presence of a shock (Chakrabarti & Manickam, 2000; Chakrabarti et al., 2008) the infall time in the post-shock region is given by
$$t_{\rm infall} \sim r_s / v \sim R\, r_s (r_s - 1)^{1/2},$$
where $r_s$ is the shock location in units of the Schwarzschild radius $r_g$ and $v$ is the velocity of the propagating shock wave.
The QPO frequency happens to be inversely proportional to the infall time scale in the post-shock region. According to the shock oscillation model (Molteni et al., 1996), oscillations of the X-ray intensity are generated due to the oscillation of the post-shock region. This is also the centrifugal pressure supported boundary layer (or CENBOL), which behaves as the Compton cloud in the Chakrabarti & Titarchuk (1995) model of the two-component accretion flow (TCAF). According to the numerical simulations of the sub-Keplerian (low-angular-momentum) accretion which include the dynamical cooling (Ryu et al., 1997) or the thermal cooling (Molteni et al., 1996; Chakrabarti et al., 2004), the frequency of the shock oscillation is similar to the observed QPO frequency for BHCs. Thus, the instantaneous QPO frequency (in Hz) is expected to be
$$\nu_{\rm QPO} = \nu_{s0} / t_{\rm infall} = \nu_{s0} / \left[ R\, r_s (r_s - 1)^{1/2} \right].$$
Here, $\nu_{s0}$ is the inverse of the light-crossing time of the black hole of mass $M_{\rm BH}$ (in units of s$^{-1}$) and $c$ is the velocity of light. In a drifting shock scenario, $r_s(t)$ is the time-dependent shock location, given by
$$r_s(t) = r_{s0} \pm v_0\, t / r_g,$$
where $r_{s0}$ is the shock location at time $t = 0$ (the day on which the first QPO is observed) and $v_0$ is the corresponding shock velocity in the laboratory frame. The '+' sign in the second term is to be used for an outgoing shock in the declining phase, and the '-' sign is to be used for the in-falling shock in the rising phase. When the velocity of the shock wave is constant (as in the rising phase of the 2005 GRO J1655-40 outburst), $v = v_0$. For the accelerating case (as in the rising and declining phases of the 2010 and 2011 outbursts of H 1743-322), $v$ is time dependent and can be written as $v(t) = v_0 + a t$, where $a$ is the acceleration of the shock front.
Since, in the presence of cooling, the shock moves closer to the black hole, in the rising phase of the outburst, where the cooling gradually increases due to the rise of the Keplerian rate, the shock wave moves towards the black hole and thus the QPO frequency rises on a daily basis. The reverse is true in the declining phase. The POS model fitted results of the QPO evolutions during the rising and declining phases of the 2010 and 2011 outbursts are presented in the following subsections.
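To make the dependence explicit, the following schematic implementation (not the authors' fitting code; the black hole mass, launch radius, velocity, acceleration and compression ratio are illustrative values only, the compression ratio is held fixed for simplicity, and standard constant-acceleration kinematics is assumed for the drift) evaluates the drifting-shock QPO frequency for an inward-moving shock:

```python
# Schematic POS relation: nu_QPO = nu_s0 / [R r_s (r_s - 1)^(1/2)], with the shock
# drifting inward as r_s(t) = r_s0 - (v0 t + a t^2 / 2) / r_g (assumed kinematics).
import numpy as np

G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33        # CGS constants

M_bh = 10.0 * M_sun                               # assumed black hole mass
r_g = 2.0 * G * M_bh / c**2                       # Schwarzschild radius (cm)
nu_s0 = c / r_g                                   # inverse light-crossing time (s^-1)

def nu_qpo(t_days, r_s0=500.0, v0=1.0e3, acc=50.0, R=4.0):
    """QPO frequency (Hz) after t_days of inward drift.
    r_s0 in Schwarzschild units, v0 in cm/s, acc in cm/s per day, R = compression ratio."""
    t = t_days * 86400.0
    a = acc / 86400.0                             # cm/s/day -> cm/s^2
    r_s = r_s0 - (v0 * t + 0.5 * a * t**2) / r_g  # drifting shock location (Schwarzschild units)
    return nu_s0 / (R * r_s * np.sqrt(r_s - 1.0))

for day in range(0, 9, 2):
    print(f"day {day}: nu_QPO = {nu_qpo(day):.3f} Hz")   # frequency rises as the shock moves in
```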
#### 3.3.1 2010 QPO Evolutions
The QPOs are observed in observations out of total observations starting from 2010 August 9 (MJD = 55417) to 2010 September 30 (MJD = 55469) during the entire outburst. Out of these QPO observations, are observed in the rising phase and the remaining are observed in the declining phase of the outburst.
Rising Phase: On the very first observation day (2010 August 9, MJD = 55417), a QPO of type 'C' (van der Klis, 2004) of Hz and its first harmonic of Hz were observed. On subsequent days, the QPO frequency is observed to increase till 2010 August 16 (MJD = 55424, where a Hz QPO is observed). From the next day, the frequency of the observed QPO (type 'B') decreases ( Hz). We have fitted this evolution of the QPO frequency with the POS model (Fig. 3a) and found that the shock wave started moving towards the black hole from Schwarzschild radii and reached (Fig. 5a) within days. Also, we found that during this period the shock velocity varied from cm s to cm s with an acceleration of cm s d, and the shock compression ratio, which is the inverse of the shock strength, changed from to . Unlike the 2005 GRO J1655-40 or 2010 GX 339-4 outbursts, we did not start with the strongest possible shock in the present case. This is because RXTE missed this object in the first few days of observation. On the first 'observed' day, the shock had already moved in and the QPO frequency was already quite high ( Hz). According to our model, if the RXTE monitoring had started a few days earlier, we would have observed mHz QPOs as in other black hole sources. The compression ratio decreased with time by a relation in which the initial compression ratio is the value on the first observation day (taken as day 0) and a constant determines how the shock (strength) becomes weaker with time and reaches its lowest possible value. In principle, these parameters, including the shock propagation velocity, can be determined from the shock formation theory when the exact amounts of viscosity and cooling effects are supplied (Chakrabarti, 1990).
Declining Phase: The source is seen to move to this phase on 2010 September 16 (MJD = 55455), when a QPO of Hz frequency is observed. On subsequent days, the observed frequency of the QPO decreases, and it reaches its lowest detectable value of mHz on 2010 September 30 (MJD = 55469) within a period of days. Before September 16, QPOs are sporadically observed at around Hz starting from 2010 September 11 (MJD = 55450). According to the POS model fit (shown in Fig. 4a), the shock is observed to recede, starting from till (Fig. 5a). The shock compression ratio appears to remain constant at . Also, during this phase, the shock velocity varies from cm s to due to an acceleration of cm s d.
#### 3.3.2 2011 QPO Evolutions
The QPOs are observed in observations out of a total of observations spread over the entire outburst. Out of these observations, are observed in the rising phase and the remaining are in the declining phase of the outburst.
Rising Phase: During this phase of the outburst, a QPO of frequency Hz is observed on the first RXTE PCA observation day (2011 April 12, MJD = 55663). Similar to the rising phase of the 2010 outburst, the QPO frequency is observed to increase with time and reaches its maximum value of Hz (as observed by RXTE) on 2011 April 21 (MJD = 55672). On 2011 April 23 (MJD = 55674) and 2011 April 25 (MJD = 55676), the frequencies of the observed QPOs are seen to be at Hz and Hz respectively. The evolutionary track of the QPO frequency is fitted with the POS model (Fig. 3b) with the same method as that used for the 2010 data, and here also it is found that the shock wave moved towards the black hole starting from its launch location at (Fig. 5b). It reached within a period of days. From the POS solution, it is observed that during the evolution period the shock velocity varied from cm s to cm s with an acceleration of cm s d, and the shock compression ratio varied from to . The compression ratio followed the same equation as in the rising phase of the 2010 outburst, with different values of the constants. This is primarily because RXTE started observing at different days after the onsets of these two outbursts. It is difficult to predict the acceleration of the shock front without knowing how the matter is supplied at the outer boundary, so these are treated as parameters in the present solution.
Declining Phase: The source is observed to reach this phase of the QPO evolution on 2011 May 9 (MJD = 55690), when a QPO of frequency Hz is observed. Subsequently, as in the 2010 outburst, the frequency of the observed QPO decreased with time and reached its lowest detectable value of Hz on 2011 May 19 (MJD = 55700). Three days prior to the start of this phase of the QPO evolution, a QPO of Hz is observed on 2011 May 6 (MJD = 55687). This behavior was also seen in the declining phase of the 2010 outburst. Here also we have fitted the evolution with the POS model solution (Fig. 4b), as for the 2010 outburst, and found that the shock moved away from the black hole with an accelerating velocity and a constant shock strength. During the outburst phase of days, the shock wave was found to move from to (Fig. 5b) with a change of velocity from cm s to cm s due to an acceleration of cm s d.
### 3.4 Evolutions of Spectral States in 2010 and 2011 Outbursts
In the previous section, we showed that the QPO frequency increased in the first few days and then decreased systematically (declining phase) in both the outbursts, which is indeed similar to the other outbursts studied by the same group. The movement of the shock location is related to the spectral evolution, and thus it is worthwhile to check whether the spectral evolution of H 1743-322 is also similar to those studied earlier. For studying the spectral properties, we fit the RXTE PCA spectra with a combination of the thermal (disk black body) and non-thermal (power-law) components, or with only the non-thermal (power-law) component. To achieve the best fit, a single Gaussian iron line was used. We found that only a non-thermal power-law component is sufficient to fit the PCA spectra of the initial rising and final declining phases. A similar kind of spectral behavior was also observed in GX 339-4, as studied by Motta et al. (2009). For all observations, we kept the hydrogen column density for the absorption model wabs fixed at the value of Capitanio et al. (2009).
Based on the degree of importance of the disk black body and power-law components (according to the fitted component values and their individual fluxes) and the nature (shape, frequency, Q value, rms%, etc.) of the QPO (if present), the entire outburst periods of 2010 and 2011 are divided into four different spectral states: hard (HS), hard-intermediate (HIMS), soft-intermediate (SIMS) and soft (SS) (see Homan & Belloni (2005a) for the definitions of these basic spectral states). Out of these four spectral states, low-frequency quasi-periodic oscillations (LFQPOs) are observed during the hard, hard-intermediate and soft-intermediate spectral states, while the QPO evolutions described by the POS model are observed only during the hard and hard-intermediate spectral states. In the soft-intermediate state, QPOs are observed sporadically. In general, the QPOs observed during the hard and hard-intermediate spectral states are of 'C' type (van der Klis, 2004), while those during the soft-intermediate spectral state are of 'B' type with lower Q and rms values. During both the outbursts, these four spectral states are observed in the same sequence and complete a hysteresis-type loop, with the hard spectral state at both the start and the end and the other three spectral states in between. It is to be noted that during the spectral evolution the soft state is observed only once, during the mid-region of the outburst (see Fig. 1(a-b), Fig. 2, Fig. 6, and Fig. 7). In Table 2, the model fitted values of the disk black body temperature (in keV) and the power-law photon index, and their flux contributions to the spectra, are listed for seven observations selected from the seven different spectral states of the 2010 and 2011 outbursts.
Daily variations of the model fitted parameters and their flux contributions for the 2010 and 2011 outbursts are plotted in Figs. 6 & 7, respectively. The variations of the black body temperature, the power-law photon index and their flux contributions are shown in these figures. These variations justify the spectral classifications. The figures also show clearly that the evolutions of the spectral parameters and model fluxes are similar during the same spectral states of the two consecutive outbursts of H 1743-322.
#### 3.4.1 2010 spectral evolution
(i) Rising Hard State: Initial days of the RXTE observations (from MJD = 55417.3 to 55419.1) belong to this spectral state, where the spectra are fitted with only power-law (PL) component. So, during this phase, the spectra are dominated by the non-thermal photons without any signature of thermal photons. The QPO frequency is observed to increase monotonically from Hz to Hz.
(ii) Rising Hard-Intermediate State:
In the following days (up to MJD = 55424.1), the source is observed to be in the hard-intermediate spectral state. The spectra of the initial three days are fitted without the diskbb component, but the spectra of the remaining two days are fitted with the combination of diskbb (DBB) and power-law components. This is because, as the days progress, the spectra become softer due to the enhanced supply of Keplerian matter. During this state the spectra are mostly dominated by the non-thermal PL photons, although the thermal DBB rate increases. The QPO frequency is found to increase monotonically from Hz to Hz.
(iii) Rising Soft-Intermediate State: On the following day (MJD = 55425.2), the observed QPO frequency decreases to Hz. After that, no QPOs are observed for the next several days. We refer to this particular observation as the soft-intermediate spectral state because of the sudden rise in the DBB photon flux from its previous day's value, whereas the PL flux does not increase very much.
(iv) Soft State: The source is observed in this spectral state for the next days (up to MJD = 55448.8), where the spectra are mostly dominated by thermal photons (i.e., low energy DBB photons). No QPOs are observed during this spectral state (see Figs. 6 & 7).
(v) Declining Soft-Intermediate State: For the following days (up to MJD = 55454.5), the source is observed in this spectral state. Here, the and values are observed to be almost constant at keV and respectively. During this phase, the disk black body flux is observed to be constant at , although there is an initial rise and then a steady fall in the PL flux. Sporadic QPOs of Hz are observed during this spectral phase.
(vi) Declining Hard-Intermediate State: The source is observed to be in this spectral state for the next days (up to MJD = 55457.1), where the first two days' spectra are fitted with a combination of the DBB and PL components and the remaining day's spectrum is fitted with only the PL component. The reason is that, as the days progress, the spectra become harder because of the lack of supply of Keplerian matter from the companion. It is also found that during this phase the observed QPO frequency decreases monotonically from Hz to Hz.
(vii) Declining Hard State: This spectral state completes the hysteresis-like loop of the spectral state evolution (see Fig. 2). The source has been observed in this spectral state till the end of the RXTE PCA observations of the 2010 outburst. In this phase of the evolution, the spectra are dominated by the non-thermal (power-law) flux. So, we fitted the keV spectra with only the PL model component. Similar to the previous spectral state, the QPO frequency is found to decrease monotonically from Hz to mHz during this phase.
#### 3.4.2 2011 spectral evolution
(i) Rising Hard State: The initial days of the PCA observations (from MJD = 55663.7 to 55668.5) belong to this spectral state, where the spectra are fitted with only the non-thermal power-law (PL) component. During this spectral state, the energy spectra ( keV) are mostly dominated by non-thermal photons without any signature of thermal photons. The observed QPO frequency is found to increase monotonically from Hz to Hz.
(ii) Rising Hard-Intermediate State: In the next observations (up to MJD = 55672.8), the source is observed to be in this spectral state, where the first 2 days' spectra are fitted without the diskbb component, but the remaining day's spectrum is fitted with a combination of the diskbb (DBB) and power-law components. As the days progress the spectrum becomes softer, because of the supply of more Keplerian matter (i.e., thermal emission) from the companion. During this state, the QPO frequency is observed to increase monotonically from Hz to Hz.
(iii) Rising Soft-Intermediate State: The source is observed to be in this spectral state for the next days (up to MJD = 55676.4), where the and values are observed to be almost constant at keV and respectively. A sharp rise in the keV DBB flux over the previous state value is observed, whereas the PL flux in the same energy range is observed to be nearly constant. As in the 2010 outburst, here too sporadic QPOs of frequency Hz are observed during this spectral state.
(iv) Soft State: For the next days (up to MJD = 55684.6), the source is observed to be in this spectral state, where the and values vary from to keV and from to respectively. During this phase, the spectra are mostly dominated by the low energy DBB flux (i.e., thermal emission), which is decreasing in nature. QPOs are not observed during this state; they are also missing during the soft state of the 2010 outburst (see Figs. 6 & 7).
(v) Declining Soft-Intermediate State: On the next day (MJD = 55687.6), the source is observed to be in this spectral state with a weak presence of thermal emission, and the energy spectra start to be dominated by the PL flux. This particular observation shows a QPO signature at Hz.
(vi) Declining Hard-Intermediate State: After that, up to MJD = 55691.5, the source is observed to be in this spectral state, where the spectra are fitted without the diskbb component. The spectra are dominated by non-thermal PL photons, because of the lack of supply of Keplerian matter. QPOs are also observed during this spectral state, and their frequency is found to decrease monotonically from Hz to Hz.
(vii) Declining Hard State: At the final phase of the outburst, the source is found to be in the hard state again, which completes the hysteresis-like loop of the spectral state evolution (see Fig. 2). Similar to the ‘canonical’ hard state in the rising phase, here we also find that the diskbb component is not essential to fit the PCA spectra in the keV range; only the PL component, along with a Gaussian line at keV, is sufficient to fit the spectra. During this spectral state, the QPO frequency is found to decrease monotonically from Hz to Hz.
## 4 Discussions and concluding remarks
We carried out the temporal and the spectral analysis of the data of the 2010 and 2011 outbursts of the black hole candidate H 1743-322. We studied the evolution of quasi-periodic oscillation frequency during the rising as well as the declining phases. We also studied the evolution of spectral states during both the outbursts. The variations of QPO frequencies can be fitted assuming that an oscillating shock wave progressively moves towards the black hole during the rising phase and moves away from the black hole in the declining phase. Fundamentally, it is possible that a sudden rise in viscosity not only causes the Keplerian rate to rise but also causes the inner edge to move towards the black hole. Initially, the higher angular momentum flow forms the shock far away, but as the viscosity transports the angular momentum, the shock moves in, especially so due to enhanced cooling effects in the post-shock region. The Keplerian disk moves in along with the shock.
This scenario accounts for all that we observe in an outbursting source: (a) The QPO frequency rises/decreases with time in the rising/declining phase, mainly observed during the hard and hard-intermediate spectral states, while during the soft-intermediate spectral state QPOs are seen sporadically (see Nandi et al., 2013). It is to be noted that shocks exist only in these states. (b) The spectrum softens as the Keplerian disk moves in with a higher rate. (c) At the intermediate state(s), the Keplerian and the sub-Keplerian rates are similar. (d) During the declining phase, when the viscosity is reduced, the shock and the Keplerian disk move back to a larger distance and the QPO frequency is also reduced. (e) The outflows can form only from the post-shock region (CENBOL), namely, the subsonic region between the shock and the inner sonic point. In softer states, the CENBOL disappears and the outflows also disappear. Our model predicts that, since the QPOs could be due to the oscillation of the shocks, whose frequency is roughly the inverse of the infall time scale, the frequency gives the location of the shock when the compression ratio is provided. In our scenario, a strong shock () starts at , but by the time it comes closer to the black hole, it becomes weaker due to the rapid cooling by the enhanced Keplerian disk rate. The QPO ceases to exist when the compression ratio reaches unity. These constraints allowed us to compute the shock strength as a function of time.
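For concreteness, the sketch below evaluates a POS-type relation in which the QPO frequency is the inverse of the infall time from the shock, assuming t_infall ≈ R · X_s · (X_s − 1)^(1/2) · r_g/c, with X_s the shock location in units of the Schwarzschild radius r_g = 2GM/c^2 and R the compression ratio. The black hole mass and the sample (X_s, R) values are illustrative assumptions, not fits to the H 1743-322 data.

```python
# Schematic estimate of the QPO frequency from the shock location and compression ratio,
# assuming nu_QPO ~ 1 / t_infall with t_infall ~ R * X_s * sqrt(X_s - 1) * (r_g / c).
# Mass and the sample (X_s, R) values are illustrative placeholders only.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m s^-1
M_sun = 1.989e30       # kg

def qpo_frequency(x_s, R, mass_msun=10.0):
    """QPO frequency (Hz) for a shock at x_s Schwarzschild radii with compression ratio R."""
    r_g = 2.0 * G * mass_msun * M_sun / c**2          # Schwarzschild radius in metres
    t_infall = R * x_s * math.sqrt(x_s - 1.0) * r_g / c
    return 1.0 / t_infall

for x_s, R in [(400.0, 4.0), (150.0, 2.5), (60.0, 1.5)]:
    print(f"X_s = {x_s:6.1f} r_g, R = {R:3.1f}  ->  nu_QPO ~ {qpo_frequency(x_s, R):7.3f} Hz")
```

As the shock moves in and weakens, the estimated frequency rises, which is the qualitative trend described above.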
As far as the evolution of the spectral states during the two outbursts of the transient BHC H 1743-322 is concerned, this can be well understood by a detailed study of the spectral properties. During both outbursts, it has been observed that the source starts from the hard state and finally returns to the hard state again after passing through the hard-intermediate, soft-intermediate and soft spectral states. It completes a hysteresis loop of . Several attempts have already been made to understand this type of hysteresis spectral state transition in black hole sources and to find their correlations with HIDs (Meyer et al., 2007; Meyer-Hofmeister et al., 2009), but one can easily explain this type of spectral state evolution with the TCAF model (Chakrabarti & Titarchuk, 1995), where the low angular momentum sub-Keplerian matter flows in on a nearly free-fall time scale, while the high angular momentum Keplerian matter flows in on the slower viscous time scale (Mandal & Chakrabarti, 2010). Initially the spectra are dominated by the sub-Keplerian flow and as a result, the spectra are hard. As the days progress, more and more sub-Keplerian matter is converted to Keplerian matter (through viscous transport of angular momentum) and the spectra become softer, progressively through the hard-intermediate (Keplerian rate slightly less than the sub-Keplerian rate), soft-intermediate (Keplerian rate comparable to the sub-Keplerian rate) and soft state (dominating Keplerian rate). When viscosity is turned off at the outer edge, the declining phase begins. At the declining phase of the outburst, the Keplerian rate starts decreasing, and the spectra start to become harder again. However, the spectrum need not retrace itself, since the information about the decrease of viscosity arrives on the viscous time scale. This causes a hysteresis effect. But the spectra still follow the declining soft-intermediate, hard-intermediate and hard states.
In this work, we successfully applied the POS model to fit the evolution of the QPO frequency during both the rising and declining phases of the two (2010 and 2011) outbursts of H 1743-322, and the shock wave parameters related to these evolutions are extracted. Earlier, the same POS model was also applied very successfully to explain the evolution of the QPO frequency of other black hole candidates (e.g., GRO J1655-40, XTE J1550-564, GX 339-4, etc.) (Chakrabarti et al., 2005, 2008, 2009; Debnath et al., 2010; Nandi et al., 2012). All these objects seem to exhibit a similar behaviour as far as the QPO and spectral evolutions are concerned. In future, we will carry out detailed modeling and a comparative study of the QPO evolutions observed in other outbursts of H 1743-322 and other transient BHCs with this POS model, and hence understand the accretion flow behaviour during the outburst phases more precisely. However, the basic questions still remain: (a) What are the sources of enhanced viscosity? (b) Does it scale with the mass of the black hole or the mass of the donor? (c) Is the duration of the high viscosity phase (i.e., the duration between the end of the rising phase and the beginning of the declining phase) predictable, or is it totally random, depending mostly on the physical conditions of the donor? (d) Which processes decide the total time interval for which an outburst may last? And finally, (e) What determines the interval between two outbursts? If the cause is the enhancement of viscosity, then clearly it may also be random. We are in the process of exploring these aspects through comparison of all the known candidates. Recently, we have been able to include the TCAF model in XSPEC as a local additive model, and from the spectral fit using this model we directly obtain the instantaneous location of the shock () and the compression ratio (), in addition to the two component (Keplerian and sub-Keplerian) accretion rates (see Debnath et al., 2013a). As we know from the POS model, one can determine the QPO frequency if the values of and are known, or vice versa (see Eqn. 2). So, from the spectral fit, we will be able to predict the observed QPO frequency. A preliminary result of this work has already been presented in a Conference Proceeding (Debnath et al., 2013b).
## References
• Altamirano & Strohmayer (2012) Altamirano, D. & Strohmayer, T., 2012. Low Frequency (11 mHz) oscillations in H1743-322: A new class of black hole QPOs ? ApJ, 754, L23.
• Belloni & Hasinger (1990) Belloni, T. & Hasinger, G., 1990. An atlas of aperiodic variability in HMXB. A&A, 230, 103.
• Belloni et al. (2005) Belloni, T. M., Homan, J., & Casella, P., et al., 2005. The evolution of the timing properties of the black-hole transient GX 339-4 during its 2002/2003 outburst. A&A, 440, 207-222.
• Belloni et al. (2010a) Belloni, T. M., Muñoz-Darias, T., & Motta, S., et al., 2010a. H 1743-322 approaching state transition. ATel, 2792, 1.
• Belloni et al. (2010b) Belloni, T. M., Muñoz-Darias, T., & Motta, S., et al., 2010b. State transition in H 1743-322. ATel, 2797, 1.
• Belloni (2010c) Belloni, T., 2010c. States and Transitions in Black Hole Binaries, The Jet Paradigm, Lecture Notes in Physics, 794.
• Belloni et al. (2011) Belloni, T., Stiele, H., & Motta, S., et al., 2011. H 1743-322 moves towards a state transition. ATel, 3301, 1.
• Capitanio et al. (2005) Capitanio, F., Ubertini, P., & Bazzano, A., et al., 2005. 3-200 keV Spectral States and Variability of the INTEGRAL Black Hole Binary IGR J17464-3213. ApJ, 622, 503-507.
• Capitanio et al. (2009) Capitanio, F., Belloni, T., & Del Santo, M., et al., 2009. A failed outburst of H1743-322. MNRAS, 398, 1194-1200.
• Chakrabarti (1990) Chakrabarti, S. K., 1990. “Theory of Transonic Astrophysical Flows”, World Scientific (Singapore).
• Chakrabarti & Titarchuk (1995) Chakrabarti, S.K. & Titarchuk, L.G., 1995. Spectral Properties of Accretion Disks around Galactic and Extragalactic Black Holes. ApJ, 455, 623-639.
• Chakrabarti & Manickam (2000) Chakrabarti, S.K. & Manickam, S.G., 2000. Correlation among Quasi-Periodic Oscillation Frequencies and Quiescent-State Duration in Black Hole Candidate GRS 1915+105. ApJ, 531, L41-L44.
• Chakrabarti et al. (2004) Chakrabarti, S.K., Acharyya, K., & Molteni, D., 2004. The effect of cooling on time dependent behaviour of accretion flows around black holes. A&A, 421, 1-8.
• Chakrabarti et al. (2005) Chakrabarti, S.K., Nandi, A., & Debnath, D., et. al., 2005. Propagating Oscillatory Shock Model for QPOS in GRO J1655-40 During the March 2005 Outburst, IJP, 79(8). 841-845 (arXiv:astro-ph/0508024).
• Chakrabarti et al. (2008) Chakrabarti, S.K., Debnath, D., & Nandi, A., et. al., 2008. Evolution of the quasi-periodic oscillation frequency in GRO J1655-40 - Implications for accretion disk dynamics. A&A, 489, L41-L44.
• Chakrabarti et al. (2009) Chakrabarti, S.K., Dutta, B.G., & Pal, P.S., 2009. Accretion flow behaviour during the evolution of the quasi-periodic oscillation frequency of XTE J in 1998 outburst. MNRAS, 394, 1463-1468.
• Chen et al. (1997) Chen, W., Shrader, C. R., & Livio, M., 1997. The Properties of X-Ray and Optical Light Curves of X-Ray Novae. ApJ, 491, 312.
• Chen et al. (2010) Chen, Y. P., Zhang, S., & Torres, D. F., et al., 2010. The 2009 outburst of H 1743-322 as observed by RXTE. A&A, 522, A99.
• Cooke et al. (1984) Cooke, B. A., Levine, A. M., & Lang, F. L., et al., 1984. HEAO 1 high-energy X-ray observations of three bright transient X-ray sources H1705-250 (Nova Ophiuchi), H1743-322, and H1833-077 (Scutum X-1). ApJ, 285, 258-263.
• Corbel et al. (2005) Corbel, S., Kaaret, P., & Fender, R. P., et al., 2005. Discovery of X-Ray Jets in the Microquasar H1743-322. ApJ, 632, 504-513.
• Coriat et al. (2011) Coriat, M., Corbel, S., Prat, L., Miller-Jones, J. C. A., & Cseh, D., et al., 2011. Radiatively efficient accreting black holes in the hard state: the case study of H1743-322. MNRAS, 414, 677.
• Debnath et al. (2008) Debnath, D., Chakrabarti, S.K., & Nandi, A., et. al., 2008. Spectral and timing evolution of GRO J1655-40 during its outburst of 2005. BASI, 36, 151-189.
• Debnath et al. (2010) Debnath, D., Chakrabarti, S.K., & Nandi, A., 2010. Properties of the propagating shock wave in the accretion flow around GX 339-4 in the 2010 outburst. A&A, 520, A98.
• Debnath et al. (2012) Debnath, D., Chakrabarti, S.K., & Nandi, A., 2012. A comparative study of the timing and the spectral properties during two recent outbursts (2010 & 2011) of H 1743-322. cosp, 39, 431.
• Debnath et al. (2013a) Debnath, D., Mondal, S., & Chakrabarti, S.K., 2013a. Characterization of GX 339-4 outburst of 2010-11: Analysis by XSPEC using Two Component Advective Flow model. ApJ (submitted) (arXiv:astro-ph/1306.3745).
• Debnath et al. (2013b) Debnath, D., Mondal, S., & Chakrabarti, S. K., 2013b. Extracting Flow parameters of H 1743-322 during the early phase of its 2010 outburst using Two Component Advective Flow model. ASI Conf. Series (in press) (arXiv:astro-ph/1309.3604).
• Doxsey et al. (1977) Doxsey, R., Bradt, H., & Fabbiano, G., et al., 1977. H 1743-32. IAU Circ., 3113, 1.
• Dunn et al. (2010) Dunn, R. J. H., Fender, R. P., & Körding, E. G., et al., 2010. A global spectral study of black hole X-ray binaries. MNRAS, 403, 61-82.
• Fender et al. (2004) Fender, R. P., Belloni, T. M., & Gallo, E., 2004. Towards a unified model for black hole X-ray binary jets. MNRAS, 355, 1105.
• Homan & Belloni (2005a) Homan, J., & Belloni, T., 2005a. The Evolution of Black Hole States. Ap&SS, 300, 107-117.
• Homan et al. (2005b) Homan, J., Miller, J. M., & Wijnands, R., et al., 2005b. High- and Low-Frequency Quasi-periodic Oscillations in the X-Ray Light Curves of the Black Hole Transient H1743-322. ApJ, 623, 383-391.
• Ingram & Done (2011) Ingram, A. & Done, C., 2011. A physical model for the continuum variability and quasi-periodic oscillation in accreting black holes. MNRAS, 415, 2323.
• Kalemci et al. (2006) Kalemci, E., Tomsick, J. A., & Rothschild, R. E., et al., 2006. The Galactic black hole transient H1743-322 during outburst decay: connections between timing noise, state transitions, and radio emission. ApJ, 639, 340.
• Kaluzienski & Holt (1977) Kaluzienski, L. J., & Holt, S. S., 1977. Variable X-Ray Sources. IAU Circ., 3099, 3.
• Kato & Manmoto (2000) Kato, S. & Manmoto, T., 2000. Trapped Low-Frequency Oscillations in the Transition Region between Advection-dominated Accretion Flows and Standard Disks. ApJ, 541, 889.
• Kuulkers et al. (2011) Kuulkers, E., Chenevez, J., & Altamirano, D., et al., 2011. IGR J17464-3213 (= H1743-322) is active again. ATel, 3263, 1.
• Maccarone & Coppi (2003) Maccarone, T. J. & Coppi, P. S., 2003. Hysteresis in the light curves of soft X-ray transients. MNRAS, 338, 189-196.
• Mandal & Chakrabarti (2010) Mandal, S. & Chakrabarti, S. K., 2010. On the Evolution of Accretion Rates in Compact Outburst Sources. ApJ, 710, 147.
• Markwardt & Swank (2003) Markwardt, C. B., & Swank, J. H., 2003. XTE J1746-322 = Igr J17464-3213 = H1743-322. ATel, 133, 1.
• McClintock & Remillard (2006) McClintock, J. E., & Remillard, R. A., 2006. Black Hole Binaries. csxs.book, 157 (arXiv:astro-ph/0306213).
• McClintock et al. (2009) McClintock, J. E., Remillard, R. A., & Rupen, M. P., et al., 2009. The 2003 outburst of the X-ray transient H1743-322: Comparisons with the black hole microquasar XTE J1550-564. ApJ, 698, 1398.
• Meyer et al. (2007) Meyer, F., Liu, B. F., & Meyer-Hofmeister, E., 2007. Re-condensation from an ADAF into an inner disk: the intermediate state of black hole accretion? A&A, 463, 1.
• Meyer-Hofmeister et al. (2009) Meyer-Hofmeister, E., Liu, B. F., & Meyer, F., 2009. The hard to soft spectral transitions in LMXBs-affected by recondensation of gas into an inner disk. A&A, 508, 329.
• Miller-Jones et al. (2012) Miller-Jones, J. C. A., Sivakoff, G. R., & Altamirano, D., et al., 2012. Disc-jet coupling in the 2009 outburst of the black hole H1743-322. MNRAS, 421, 468.
• Molteni et al. (1996) Molteni, D., Sponholz, H. & Chakrabarti, S.K., 1996. Resonance Oscillation of Radiative Shock Waves in Accretion Disks around Compact Objects. ApJ, 457, 805.
• Motta et al. (2009) Motta, S., Belloni, T., & Homan, J., 2009. The evolution of the high-energy cut-off in the X-ray spectrum of GX 339-4 across a hard-to-soft transition. MNRAS, 400, 1603-1612.
• Nandi et al. (2012) Nandi, A., Debnath, D., & Mandal, S., et. al., 2012. Accretion flow dynamics during the evolution of timing and spectral properties of GX 339-4 during its 2010-11 outburst. A&A, 542, 56.
• Nandi et al. (2013) Nandi, A., Radhika, D., & Seetha, S., 2013. Is the ‘disappearance’ of low-frequency QPOs in the power spectra a general phenomenon for Disk-Jet symbiosis? ASI Conf. Series (in press) (arXiv:astro-ph/1308.4567).
• Nowak (2000) Nowak, M. A., 2000. Are there three peaks in the power spectra of GX 339-4 and Cyg X-1? MNRAS, 318, 361-367.
• Parmar et al. (2003) Parmar, A. N., Kuulkers, E., & Oosterbroek, T., et al., 2003. INTEGRAL observations of the black hole candidate H 1743-322 in outburst. A&A, 411, L421-L425.
• Pétri (2008) Pétri, J., 2008. A new model for QPOs in accreting black holes: application to the microquasar GRS 1915+105. Ap&SS, 318, 181-186.
• Prat et al. (2009) Prat, L., Rodriguez, J., & Cardolle Bel, M., et al., 2009. The early phase of a H1743-322 outburst observed by INTEGRAL, RXTE, Swift, and XMM/Newton. A&A, 494, L21.
• Radhika & Nandi (2013) Radhika, D., & Nandi, A., 2013. XTE J1859+226: Evolution of spectro-temporal properties, disk-jet connection during 1999 outburst and implications on accretion disk dynamics (arXiv:astro-ph/1308.3138).
• Remillard & McClintock (2006) Remillard, R. A., & McClintock, J. E., 2006. X-Ray Properties of Black-Hole Binaries. ARA&A, 44, 49-92.
• Remillard et al. (2006) Remillard, R. A., McClintock, J. E., & Orosz, J. A., et al., 2006. The X-Ray Outburst of H1743-322 in 2003: High-Frequency QPOs with a 3:2 Frequency Ratio. ApJ, 637, 1002-1009.
• Revnivtsev et al. (2003) Revnivtsev, M., Chernyakova, M., & Capitanio, F., et al., 2003. Igr J17464-3213. ATel, 132, 1.
• Rodriguez et al. (2002) Rodriguez, J., Varnière, P., Tagger, M. & Durouchoux, Ph., 2002. Accretion-ejection instability and QPO in black hole binaries I. Observations. A&A, 387, 487.
• Rupen et al. (2003) Rupen, M. P., Mioduszewski, A. J., & Dhawan, V., 2003. Radio counterpart to IGR J17464-3213 = XTE J17464-3213. ATel, 137, 1.
• Rupen et al. (2004) Rupen, M. P., Mioduszewski, A. J., & Dhawan, V., 2004. A Boiling Core and a Shocking Jet: Radio Observations of H1743-322. HEAD, 8, 1706.
• Ryu et al. (1997) Ryu, D., Chakrabarti, S.K. & Molteni, D., 1997. Zero-Energy Rotating Accretion Flows near a Black Hole. ApJ, 474, 378.
• Shaposhnikov & Tomsick (2010a) Shaposhnikov, N., & Tomsick, J. A., 2010a. RXTE shows spectral transition during decay in H 1743-322. ATel, 2410, 1.
• Shaposhnikov (2010b) Shaposhnikov, N., 2010b. RXTE observes a transition to the low-hard state in H1743-322. ATel, 2857, 1.
• Shirakawa & Lai (2002) Shirakawa, A. & Lai, D., 2002. Precession of magnetically driven warped disks and low-frequency quasi-periodic oscillations in low-mass X-ray binaries. ApJ, 564, 361.
• Steeghs et al. (2003) Steeghs, D., Miller, J. M., & Kaplan, D., et al., 2003. IGR/XTE J17464-3213: New radio position and optical counterpart. ATel, 146, 1.
• Stiele et al. (2013) Stiele, H., Belloni, T. M., & Kalemci, E., et al., 2013. Relations between X-ray timing features and spectral parameters of Galactic black hole X-ray binaries. MNRAS, 429, 2655.
• Titarchuk & Osherovich (2000) Titarchuk, L. & Osherovich, V., 2000. The global normal disk oscillations and the persistent low-frequency quasi-periodic oscillations in X-ray binaries. ApJ, 542, 111.
• Trudolyubov et al. (1999) Trudolyubov, S., Churazov, E., & Gilfanov, M., 1999. The 1-12 HZ QPOs and dips in GRS 1915+105: tracers of Keplerian and viscous time scales? A&A, 351, L15.
• van der Klis (2004) van der Klis, M., 2004. A review of rapid X-ray variability in X-ray binaries (arXiv:astro-ph/10551).
• van der Klis (2005) van der Klis, M., 2005. The QPO phenomenon. AN, 326, 798-803.
• White & Marshall (1984) White, N. E., & Marshall, F. E., 1984. The unusually soft X-ray spectrum of LMC X-3. ApJ, 281, 354-359.
• Yamaoka et al. (2010) Yamaoka, K., Negoro, H., & Sugizaki, M., et al., 2010. MAXI/GSC detects a spectral state transition in H 1743-322. ATel, 2378, 1.
Models, code, and papers for "Jing Liao":
##### Style Mixer: Semantic-aware Multi-Style Transfer Network
Oct 29, 2019
Zixuan Huang, Jinghuai Zhang, Jing Liao
Recent neural style transfer frameworks have obtained astonishing visual quality and flexibility in Single-style Transfer (SST), but little attention has been paid to Multi-style Transfer (MST) which refers to simultaneously transferring multiple styles to the same image. Compared to SST, MST has the potential to create more diverse and visually pleasing stylization results. In this paper, we propose the first MST framework to automatically incorporate multiple styles into one result based on regional semantics. We first improve the existing SST backbone network by introducing a novel multi-level feature fusion module and a patch attention module to achieve better semantic correspondences and preserve richer style details. For MST, we designed a conceptually simple yet effective region-based style fusion module to insert into the backbone. It assigns corresponding styles to content regions based on semantic matching, and then seamlessly combines multiple styles together. Comprehensive evaluations demonstrate that our framework outperforms existing works of SST and MST.
* Pacific Graphics 2019
##### CariGANs: Unpaired Photo-to-Caricature Translation
Nov 02, 2018
Kaidi Cao, Jing Liao, Lu Yuan
Facial caricature is an art form of drawing faces in an exaggerated way to convey humor or sarcasm. In this paper, we propose the first Generative Adversarial Network (GAN) for unpaired photo-to-caricature translation, which we call "CariGANs". It explicitly models geometric exaggeration and appearance stylization using two components: CariGeoGAN, which only models the geometry-to-geometry transformation from face photos to caricatures, and CariStyGAN, which transfers the style appearance from caricatures to face photos without any geometry deformation. In this way, a difficult cross-domain translation problem is decoupled into two easier tasks. The perceptual study shows that caricatures generated by our CariGANs are closer to the hand-drawn ones, and at the same time better preserve the identity, compared to state-of-the-art methods. Moreover, our CariGANs allow users to control the shape exaggeration degree and change the color/texture style by tuning the parameters or giving an example caricature.
* ACM Transactions on Graphics, Vol. 37, No. 6, Article 244. Publication date: November 2018
* To appear at SIGGRAPH Asia 2018
##### Learning-based Natural Geometric Matching with Homography Prior
Jul 13, 2018
Yifang Xu, Tianli Liao, Jing Chen
Geometric matching is a key step in computer vision tasks. Previous learning-based methods for geometric matching concentrate more on improving alignment quality, while we argue for the importance of the naturalness issue as well. To deal with this, firstly, Pearson correlation is applied to handle large intra-class variations of features in the feature matching stage. Then, we parametrize the homography transformation with 9 parameters in the fully connected layer of our network, to better characterize large viewpoint variations compared with affine transformation. Furthermore, a novel loss function with Gaussian weights guarantees the model accuracy and efficiency in the training procedure. Finally, we provide two choices for different purposes in geometric matching. When compositing homography with affine transformation, the alignment accuracy improves and all lines are preserved, which results in a more natural transformed image. When compositing homography with non-rigid thin-plate-spline transformation, the alignment accuracy further improves. Experimental results on the Proposal Flow dataset show that our method outperforms state-of-the-art methods, both in terms of alignment accuracy and naturalness.
* 13 pages,4 figures
##### Coarse-to-fine Seam Estimation for Image Stitching
May 24, 2018
Tianli Liao, Jing Chen, Yifang Xu
Seam-cutting and seam-driven techniques have been proven effective for handling imperfect image series in image stitching. Generally, the seam-driven approach utilizes seam-cutting to find a best seam from one or finitely many alignment hypotheses based on a predefined seam quality metric. However, the quality metrics in most methods are defined to measure the average performance of the pixels on the seam without considering the relevance and variance among them. This may cause the seam with the minimal measure to be non-optimal (perception-inconsistent) in human perception. In this paper, we propose a novel coarse-to-fine seam estimation method which applies the evaluation in a different way. For pixels on the seam, we develop a patch-point evaluation algorithm concentrating more on the correlation and variation among them. The evaluations are then used to recalculate the difference map of the overlapping region and reestimate a stitching seam. This evaluation-reestimation procedure iterates until the current seam changes negligibly compared with the previous seams. Experiments show that our proposed method can finally find a nearly perception-consistent seam after several iterations, which outperforms the conventional seam-cutting and other seam-driven methods.
* 5 pages, 4 figures
##### Graph-based Hypothesis Generation for Parallax-tolerant Image Stitching
Apr 20, 2018
Jing Chen, Nan Li, Tianli Liao
The seam-driven approach has been proven fairly effective for parallax-tolerant image stitching, whose strategy is to search for an invisible seam from finite representative hypotheses of local alignment. In this paper, we propose a graph-based hypothesis generation and a seam-guided local alignment for improving the effectiveness and the efficiency of the seam-driven approach. The experiment demonstrates the significant reduction of number of hypotheses and the improved quality of naturalness of final stitching results, comparing to the state-of-the-art method SEAGULL.
* 3 pages, 3 figures, 2 tables
##### Ratio-Preserving Half-Cylindrical Warps for Natural Image Stitching
Mar 18, 2018
Yifang Xu, Jing Chen, Tianli Liao
A novel warp for natural image stitching is proposed that utilizes the property of cylindrical warp and a horizontal pixel selection strategy. The proposed ratio-preserving half-cylindrical warp is a combination of homography and cylindrical warps which guarantees alignment by homography and possesses less projective distortion by cylindrical warp. Unlike previous approaches applying cylindrical warp before homography, we use partition lines to divide the image into different parts and apply homography in the overlapping region while a composition of homography and cylindrical warps in the non-overlapping region. The pixel selection strategy then samples the points horizontally and reconstructs the image via interpolation to further reduce horizontal distortion by maintaining the ratio as similarity. By applying the half-cylindrical warp and horizontal pixel selection, the projective distortion in the vertical and horizontal directions is mitigated simultaneously. Experiments show that our warp is efficient and produces a more natural-looking stitched result than previous methods.
* 3 pages, 5 figures
##### Semantic Example Guided Image-to-Image Translation
Oct 04, 2019
Jialu Huang, Jing Liao, Tak Wu Sam Kwong
Many image-to-image (I2I) translation problems are by nature highly diverse, in that a single input may have various counterparts. Prior works proposed the multi-modal network that can build a many-to-many mapping between two visual domains. However, most of them are guided by sampled noises. Some others encode the reference images into a latent vector, by which the semantic information of the reference image will be washed away. In this work, we aim to provide a solution to control the output based on references semantically. Given a reference image and an input in another domain, a semantic matching is first performed between the two visual contents and generates the auxiliary image, which is explicitly encouraged to preserve semantic characteristics of the reference. A deep network is then used for I2I translation and the final outputs are expected to be semantically similar to both the input and the reference; however, no such paired data can satisfy that dual-similarity in a supervised fashion, so we build up a self-supervised framework to serve the training purpose. We improve the quality and diversity of the outputs by employing non-local blocks and a multi-task architecture. We assess the proposed method through extensive qualitative and quantitative evaluations and also present comparisons with several state-of-the-art models.
* 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works
##### Arbitrary Style Transfer with Deep Feature Reshuffle
Jun 20, 2018
Shuyang Gu, Congliang Chen, Jing Liao, Lu Yuan
This paper introduces a novel method by reshuffling deep features (i.e., permuting the spatial locations of a feature map) of the style image for arbitrary style transfer. We theoretically prove that our new style loss based on reshuffle connects both global and local style losses respectively used by most parametric and non-parametric neural style transfer methods. This simple idea can effectively address the challenging issues in existing style transfer methods. On one hand, it can avoid distortions in local style patterns, and allow semantic-level transfer, compared with neural parametric methods. On the other hand, it can preserve globally similar appearance to the style image, and avoid wash-out artifacts, compared with neural non-parametric methods. Based on the proposed loss, we also present a progressive feature-domain optimization approach. The experiments show that our method is widely applicable to various styles, and produces better quality than existing methods.
##### Neural Color Transfer between Images
Oct 02, 2017
Mingming He, Jing Liao, Lu Yuan, Pedro V. Sander
We propose a new algorithm for color transfer between images that have perceptually similar semantic structures. We aim to achieve a more accurate color transfer that leverages semantically-meaningful dense correspondence between images. To accomplish this, our algorithm uses neural representations for matching. Additionally, the color transfer should be spatially-variant and globally coherent. Therefore, our algorithm optimizes a local linear model for color transfer satisfying both local and global constraints. Our proposed approach jointly optimizes matching and color transfer, adopting a coarse-to-fine strategy. The proposed method can be successfully extended from "one-to-one" to "one-to-many" color transfers. The latter further addresses the problem of mismatching elements of the input image. We validate our proposed method by testing it on a large variety of image content.
##### Document Rectification and Illumination Correction using a Patch-based CNN
Sep 20, 2019
Xiaoyu Li, Bo Zhang, Jing Liao, Pedro V. Sander
We propose a novel learning method to rectify document images with various distortion types from a single input image. As opposed to previous learning-based methods, our approach seeks to first learn the distortion flow on input image patches rather than the entire image. We then present a robust technique to stitch the patch results into the rectified document by processing in the gradient domain. Furthermore, we propose a second network to correct the uneven illumination, further improving the readability and OCR accuracy. Due to the less complex distortion present on the smaller image patches, our patch-based approach followed by stitching and illumination correction can significantly improve the overall accuracy in both the synthetic and real datasets.
* 11 pages, 10 figures
##### Blind Geometric Distortion Correction on Images Through Deep Learning
Sep 08, 2019
Xiaoyu Li, Bo Zhang, Pedro V. Sander, Jing Liao
We propose the first general framework to automatically correct different types of geometric distortion in a single input image. Our proposed method employs convolutional neural networks (CNNs) trained by using a large synthetic distortion dataset to predict the displacement field between distorted images and corrected images. A model fitting method uses the CNN output to estimate the distortion parameters, achieving a more accurate prediction. The final corrected image is generated based on the predicted flow using an efficient, high-quality resampling method. Experimental results demonstrate that our algorithm outperforms traditional correction methods, and allows for interesting applications such as distortion transfer, distortion exaggeration, and co-occurring distortion correction.
* 10 pages, 11 figures, published in CVPR 2019
##### Structure fusion based on graph convolutional networks for semi-supervised classification
Jul 02, 2019
Guangfeng Lin, Jing Wang, Kaiyang Liao, Fan Zhao, Wanjun Chen
Suffering from the multi-view data diversity and complexity for semi-supervised classification, most existing graph convolutional networks focus on the network architecture construction or the salient graph structure preservation, and ignore the contribution of the complete graph structure to semi-supervised classification. To mine the more complete distribution structure from multi-view data with consideration of the specificity and the commonality, we propose structure fusion based on graph convolutional networks (SF-GCN) for improving the performance of semi-supervised classification. SF-GCN can not only retain the special characteristic of each view's data by spectral embedding, but also capture the common style of multi-view data by a distance metric between multi-graph structures. Supposing a linear relationship between multi-graph structures, we can construct the optimization function of the structure fusion model by balancing the specificity loss and the commonality loss. By solving this function, we can simultaneously obtain the fusion spectral embedding from the multi-view data and the fusion structure as an adjacency matrix to input to graph convolutional networks for semi-supervised classification. Experiments demonstrate that the performance of SF-GCN outperforms that of the state of the art on three challenging datasets, which are Cora, Citeseer and Pubmed in citation networks.
##### Deep Exemplar-based Colorization
Jul 21, 2018
Mingming He, Dongdong Chen, Jing Liao, Pedro V. Sander, Lu Yuan
We propose the first deep learning approach for exemplar-based local colorization. Given a reference color image, our convolutional neural network directly maps a grayscale image to an output colorized image. Rather than using hand-crafted rules as in traditional exemplar-based methods, our end-to-end colorization network learns how to select, propagate, and predict colors from the large-scale data. The approach performs robustly and generalizes well even when using reference images that are unrelated to the input grayscale image. More importantly, as opposed to other learning-based colorization methods, our network allows the user to achieve customizable results by simply feeding different references. In order to further reduce manual effort in selecting the references, the system automatically recommends references with our proposed image retrieval algorithm, which considers both semantic and luminance information. The colorization can be performed fully automatically by simply picking the top reference suggestion. Our approach is validated through a user study and favorable quantitative comparisons to state-of-the-art methods. Furthermore, our approach can be naturally extended to video colorization. Our code and models will be freely available for public use.
* To Appear in Siggraph 2018
##### Stereoscopic Neural Style Transfer
May 20, 2018
Dongdong Chen, Lu Yuan, Jing Liao, Nenghai Yu, Gang Hua
This paper presents the first attempt at stereoscopic neural style transfer, which responds to the emerging demand for 3D movies or AR/VR. We start with a careful examination of applying existing monocular style transfer methods to left and right views of stereoscopic images separately. This reveals that the original disparity consistency cannot be well preserved in the final stylization results, which causes 3D fatigue to the viewers. To address this issue, we incorporate a new disparity loss into the widely adopted style loss function by enforcing the bidirectional disparity constraint in non-occluded regions. For a practical real-time solution, we propose the first feed-forward network by jointly training a stylization sub-network and a disparity sub-network, and integrate them in a feature level middle domain. Our disparity sub-network is also the first end-to-end network for simultaneous bidirectional disparity and occlusion mask estimation. Finally, our network is effectively extended to stereoscopic videos, by considering both temporal coherence and disparity consistency. We will show that the proposed method clearly outperforms the baseline algorithms both quantitatively and qualitatively.
* Accepted by CVPR2018
##### Visual Attribute Transfer through Deep Image Analogy
Jun 06, 2017
Jing Liao, Yuan Yao, Lu Yuan, Gang Hua, Sing Bing Kang
We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of "image analogy" with features extracted from a Deep Convolutional Neural Network for matching; we call our technique Deep Image Analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style/texture transfer, color/style swap, sketch/painting to photo, and time lapse.
* Accepted by SIGGRAPH 2017
##### StyleBank: An Explicit Representation for Neural Image Style Transfer
Mar 28, 2017
Dongdong Chen, Lu Yuan, Jing Liao, Nenghai Yu, Gang Hua
We propose StyleBank, which is composed of multiple convolution filter banks and each filter bank explicitly represents one style, for neural image style transfer. To transfer an image to a specific style, the corresponding filter bank is operated on top of the intermediate feature embedding produced by a single auto-encoder. The StyleBank and the auto-encoder are jointly learnt, where the learning is conducted in such a way that the auto-encoder does not encode any style information thanks to the flexibility introduced by the explicit filter bank representation. It also enables us to conduct incremental learning to add a new image style by learning a new filter bank while holding the auto-encoder fixed. The explicit style representation along with the flexible network design enables us to fuse styles at not only the image level, but also the region level. Our method is the first style transfer network that links back to traditional texton mapping methods, and hence provides new understanding of neural style transfer. Our method is easy to train, runs in real-time, and produces results that are qualitatively better or at least comparable to existing methods.
* Accepted by CVPR 2017, corrected typos
##### Coherent Online Video Style Transfer
Mar 28, 2017
Dongdong Chen, Jing Liao, Lu Yuan, Nenghai Yu, Gang Hua
Training a feed-forward network for fast neural style transfer of images is proven to be successful. However, the naive extension to process video frame by frame is prone to producing flickering results. We propose the first end-to-end network for online video style transfer, which generates temporally coherent stylized video sequences in near real-time. Two key ideas include an efficient network by incorporating short-term coherence, and propagating short-term coherence to long-term, which ensures the consistency over larger period of time. Our network can incorporate different image stylization networks. We show that the proposed method clearly outperforms the per-frame baseline both qualitatively and quantitatively. Moreover, it can achieve visually comparable coherence to optimization-based video style transfer, but is three orders of magnitudes faster in runtime.
* Corrected typos
##### An α-Matte Boundary Defocus Model Based Cascaded Network for Multi-focus Image Fusion
Oct 30, 2019
Haoyu Ma, Qingmin Liao, Juncheng Zhang, Shaojun Liu, Jing-Hao Xue
Capturing an all-in-focus image with a single camera is difficult since the depth of field of the camera is usually limited. An alternative method to obtain the all-in-focus image is to fuse several images focusing at different depths. However, existing multi-focus image fusion methods cannot obtain clear results for areas near the focused/defocused boundary (FDB). In this paper, a novel α-matte boundary defocus model is proposed to generate realistic training data with the defocus spread effect precisely modeled, especially for areas near the FDB. Based on this α-matte defocus model and the generated data, a cascaded boundary aware convolutional network termed MMF-Net is proposed and trained, aiming to achieve clearer fusion results around the FDB. More specifically, the MMF-Net consists of two cascaded sub-nets for initial fusion and boundary fusion, respectively; these two sub-nets are designed to first obtain a guidance map of FDB and then refine the fusion near the FDB. Experiments demonstrate that with the help of the new α-matte boundary defocus model, the proposed MMF-Net outperforms the state-of-the-art methods both qualitatively and quantitatively.
* 10 pages, 8 figures, journal. Unfortunately, I cannot spell one of the authors' names correctly
##### CariGAN: Caricature Generation through Weakly Paired Adversarial Learning
Nov 01, 2018
Wenbin Li, Wei Xiong, Haofu Liao, Jing Huo, Yang Gao, Jiebo Luo
Caricature generation is an interesting yet challenging task. The primary goal is to generate plausible caricatures with reasonable exaggerations given face images. Conventional caricature generation approaches mainly use low-level geometric transformations such as image warping to generate exaggerated images, which lack richness and diversity in terms of content and style. The recent progress in generative adversarial networks (GANs) makes it possible to learn an image-to-image transformation from data, so that richer contents and styles can be generated. However, directly applying the GAN-based models to this task leads to unsatisfactory results because there is a large variance in the caricature distribution. Moreover, some models require strictly paired training data which largely limits their usage scenarios. In this paper, we propose CariGAN to overcome these problems. Instead of training on paired data, CariGAN learns transformations only from weakly paired images. Specifically, to enforce reasonable exaggeration and facial deformation, facial landmarks are adopted as an additional condition to constrain the generated image. Furthermore, an attention mechanism is introduced to encourage our model to focus on the key facial parts so that more vivid details in these regions can be generated. Finally, a Diversity Loss is proposed to encourage the model to produce diverse results to help alleviate the 'mode collapse' problem of the conventional GAN-based models. Extensive experiments on a new large-scale 'WebCaricature' dataset show that the proposed CariGAN can generate more plausible caricatures with larger diversity compared with the state-of-the-art models.
* 12
# A gentle introduction to CFT [closed]
1) Which is the definition of a conformal field theory?
2) Which are the physical prerequisites one would need to start studying conformal field theories? (i.e Does one need to know supersymmetry? Does one need non-perturbative effects such as solitons, instantons etc?)
3) Which are the mathematical prerequisites one would need to start studying conformal field theories? (i.e how much complex analysis should one know? Does one need the theory of Riemann Surfaces? Does one need algebraic topology or algebraic geometry? And how much?)
4) Which are the best/most common books, or review articles, for a gentle introduction on the topic, at second/third year graduate level?
5) Do CFT models have an application in real world (already experimentally tested) physics? (Also outside the high energy framework, maybe in condensed matter, etc.)
## closed as too broad by Carlo Beenakker, Ricardo Andrade, Qiaochu Yuan, Andres Caicedo, Kevin Walker Oct 28 '13 at 23:44
There are either too many possible answers, or good answers would be too long for this format. Please add details to narrow the answer set or to isolate an issue that can be answered in a few paragraphs. If this question can be reworded to fit the rules in the help center, please edit the question.
@Federico. You wrongly assume that there is a unique mathematical definition of CFT. There are many (I know three). Probably all good. And people don't know how to compare them (at a mathematical level of rigor). The mathematical prerequisites are very different depending on which approach to CFT you decide to study. Also, there is an important distinction between "chiral CFT" and "full CFT": those are two completely different things (but not unrelated). – André Henriques Oct 28 '13 at 23:40
This is probably irrelevant since this question is already closed, but I'll just note that it's a cross-post of a question which was put on hold on Physics SE. – Logan Maingi Oct 29 '13 at 1:04
@Logan Maingi I believed that different places have different rules on what can or can not be posted. Is it really bad to crosspost if it did not work there? Please I am not trying to be arrogant, just trying to understand, since it is my first time in both of these places – Federico Carta Oct 29 '13 at 1:47
What's with the hate for this question? Regarding (1), there's a paper of Segal titled "The definition of conformal field theory" that might be relevant. As for (2), (3), (4), I would very much like an answer myself. I also think (2), (3), (4) are just reformulations / aspects of / clarifications of the same underlying question, so the complaint that there's too many questions here is just absurd. – Vivek Shende Oct 29 '13 at 5:06
@Federico: For references try Gaberdiel's review paper on conformal field theories. For commented pointers to the literature see here ncatlab.org/nlab/show/conformal+field+theory Or for one rigorous definition and derivation of the full theory from the first principles see the book by P. Di Francesco, P. Mathieu and D. Senechal, Conformal Field Theory (Springer, 1997) (and this is my source). – Irina Oct 29 '13 at 10:38
# 3.5: Parallel and Perpendicular Lines in the Coordinate Plane
Difficulty Level: At Grade Created by: CK-12
## Learning Objectives
• Compute slope.
• Determine the equation of parallel and perpendicular lines to a given line.
• Graph parallel and perpendicular lines in slope-intercept and standard form.
## Review Queue
Find the slope between the following points.
1. (-3, 5) and (2, -5)
2. (7, -1) and (-2, 2)
3. Is $x = 3$ horizontal or vertical? How do you know?
Graph the following lines on an $x$-$y$ plane.
4. $y = -2x + 3$
5. $y = \frac{1}{4}x - 2$
Know What? The picture to the right is the California Incline, a short piece of road that connects Highway 1 with the city of Santa Monica. The length of the road is 1532 feet and has an elevation of 177 feet. You may assume that the base of this incline is sea level, or zero feet. Can you find the slope of the California Incline?
HINT: You will need to use the Pythagorean Theorem, which has not been introduced in this class, but you may have seen it in a previous math class.
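One way to check your Know What? answer (a sketch added here for illustration, not part of the original lesson): treat the 1532-foot road as the hypotenuse of a right triangle, use the Pythagorean Theorem to find the horizontal run, and divide the 177-foot rise by that run.

```python
# Slope of the California Incline: rise = 177 ft, road length (hypotenuse) = 1532 ft.
import math

rise = 177.0
hypotenuse = 1532.0
run = math.sqrt(hypotenuse**2 - rise**2)   # horizontal distance from the Pythagorean Theorem

slope = rise / run
print(f"run   = {run:.1f} ft")     # about 1521.7 ft
print(f"slope = {slope:.3f}")      # about 0.116, roughly a 12% grade
```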
## Slope in the Coordinate Plane
Recall from Algebra I: the slope of the line between two points $(x_1, y_1)$ and $(x_2, y_2)$ is $m = \frac{y_2 - y_1}{x_2 - x_1}$.
Different Types of Slope:
Example 1: What is the slope of the line through (2, 2) and (4, 6)?
Solution: Use the slope formula to determine the slope. Use (2, 2) as $(x_1, y_1)$ and (4, 6) as $(x_2, y_2)$.
$m = \frac{6-2}{4-2} = \frac{4}{2} = 2$
Therefore, the slope of this line is 2.
This slope is positive. Recall that slope can also be the “rise over run.” In this case we “rise,” or go up 2, and “run” 1 in the positive direction.
Example 2: Find the slope between (-8, 3) and (2, -2).
Solution: $m = \frac{-2-3}{2-(-8)} = \frac{-5}{10} = -\frac{1}{2}$
This is a negative slope. Instead of “rising,” the negative slope means that you would “fall,” when finding points on the line.
Example 3: Find the slope between (-5, -1) and (3, -1).
Solution:
$m = \frac{-1-(-1)}{3-(-5)} = \frac{0}{8} = 0$
Therefore, the slope of this line is 0, which means that it is a horizontal line. Horizontal lines always cross the $y$-axis. Notice that the $y$-coordinate for both points is -1. In fact, the $y$-coordinate for any point on this line is -1. This means that the horizontal line must cross $y = -1$.
Example 4: What is the slope of the line through (3, 2) and (3, 6)?
Solution:
$m = \frac{6-2}{3-3} = \frac{4}{0} = \text{undefined}$
Therefore, the slope of this line is undefined, which means that it is a vertical line. Vertical lines always cross the $x$-axis. Notice that the $x$-coordinate for both points is 3. In fact, the $x$-coordinate for any point on this line is 3. This means that the vertical line must cross $x = 3$.
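The four examples above can be verified with a short helper that applies the slope formula and classifies the result; this is an illustrative aid rather than part of the CK-12 text.

```python
# Compute and classify the slope between two points, as in Examples 1-4.
def slope(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    if x2 - x1 == 0:
        return None                      # vertical line: slope undefined
    return (y2 - y1) / (x2 - x1)

for p1, p2 in [((2, 2), (4, 6)), ((-8, 3), (2, -2)), ((-5, -1), (3, -1)), ((3, 2), (3, 6))]:
    m = slope(p1, p2)
    if m is None:
        kind = "undefined (vertical line)"
    elif m == 0:
        kind = "zero (horizontal line)"
    elif m > 0:
        kind = "positive"
    else:
        kind = "negative"
    print(p1, p2, "->", m, "-", kind)
```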
## Slopes of Parallel Lines
Recall from earlier in the chapter that the definition of parallel is two lines that never intersect. In the coordinate plane, that would look like this:
If we take a closer look at these two lines, we see that the slopes of both are $\frac{2}{3}$.
This can be generalized to any pair of parallel lines in the coordinate plane.
Parallel lines have the same slope.
Example 5: Find the equation of the line that is parallel to \begin{align*}y=-\frac{1}{3}x+4\end{align*} and passes through (9, -5).
Recall that the equation of a line in this form is called the slope-intercept form and is written as \begin{align*}y = mx + b\end{align*} where \begin{align*}m\end{align*} is the slope and \begin{align*}b\end{align*} is the \begin{align*}y-\end{align*}intercept. Here, \begin{align*}x\end{align*} and \begin{align*}y\end{align*} represent any coordinate pair, \begin{align*}(x, \ y)\end{align*} on the line.
Solution: We know that parallel lines have the same slope, so the line we are trying to find also has \begin{align*}m=-\frac{1}{3}\end{align*}. Now, we need to find the \begin{align*}y-\end{align*}intercept. 4 is the \begin{align*}y-\end{align*}intercept of the given line, not our new line. We need to plug in 9 for \begin{align*}x\end{align*} and -5 for \begin{align*}y\end{align*} (this is our given coordinate pair that needs to be on the line) to solve for the new \begin{align*}y-\end{align*}intercept \begin{align*}(b)\end{align*}.
\begin{align*}-5 & = -\frac{1}{3}(9)+b\\ -5 & = -3 + b \qquad \text{Therefore, the equation of line is} \ y=-\frac{1}{3}x-2.\\ -2 & = b\end{align*}
Reminder: the final equation contains the variables \begin{align*}x\end{align*} and \begin{align*}y\end{align*} to indicate that the line contains an infinite number of points, or coordinate pairs, that satisfy the equation.
Parallel lines always have the same slope and different \begin{align*}y-\end{align*}intercepts.
## Slopes of Perpendicular Lines
Recall from Chapter 1 that the definition of perpendicular is two lines that intersect at a \begin{align*}90^\circ\end{align*}, or right, angle. In the coordinate plane, that would look like this:
If we take a closer look at these two lines, we see that the slope of one is -4 and the other is \begin{align*}\frac{1}{4}\end{align*}.
This can be generalized to any pair of perpendicular lines in the coordinate plane.
The slopes of perpendicular lines are opposite in sign and reciprocals of each other.
Example 6: Find the slope of the perpendicular lines to the lines below.
a) \begin{align*}y=2x+3\end{align*}
b) \begin{align*}y=-\frac{2}{3}x-5\end{align*}
c) \begin{align*}y=x+2\end{align*}
Solution: We are only concerned with the slope for each of these.
a) \begin{align*}m = 2\end{align*}, so \begin{align*}m_\perp\end{align*} is the reciprocal and negative, \begin{align*}m_\perp=-\frac{1}{2}\end{align*}.
b) \begin{align*}m=-\frac{2}{3}\end{align*}, take the reciprocal and make the slope positive, \begin{align*}m_\perp=\frac{3}{2}\end{align*}.
c) Because there is no number in front of \begin{align*}x\end{align*}, the slope is 1. The reciprocal of 1 is 1, so the only thing to do is make it negative, \begin{align*}m_\perp=-1\end{align*}.
Example 7: Find the equation of the line that is perpendicular to \begin{align*}y=-\frac{1}{3}x+4\end{align*} and passes through (9, -5).
Solution: First, the slope is the reciprocal and opposite sign of \begin{align*}-\frac{1}{3}\end{align*}. So, \begin{align*}m = 3\end{align*}. Now, we need to find the \begin{align*}y-\end{align*}intercept. 4 is the \begin{align*}y-\end{align*}intercept of the given line, not our new line. We need to plug in 9 for \begin{align*}x\end{align*} and -5 for \begin{align*}y\end{align*} to solve for the new \begin{align*}y-\end{align*}intercept \begin{align*}(b)\end{align*}.
\begin{align*}-5 & = 3(9)+b\\ -5 & = 27 + b \qquad \text{Therefore, the equation of line is} \ y=3x-32.\\ -32 & = b\end{align*}
## Graphing Parallel and Perpendicular Lines
Example 8: Find the equations of the lines below and determine if they are parallel, perpendicular or neither.
Solution: To find the equation of each line, start with the \begin{align*}y-\end{align*}intercept. The top line has a \begin{align*}y-\end{align*}intercept of 1. From there, determine the slope triangle, or the “rise over run.” From the \begin{align*}y-\end{align*}intercept, if you go up 1 and over 2, you hit the line again. Therefore, the slope of this line is \begin{align*}\frac{1}{2}\end{align*}. The equation is \begin{align*}y=\frac{1}{2}x+1\end{align*}. For the second line, the \begin{align*}y-\end{align*}intercept is -3. Again, start here to determine the slope and if you “rise” 1 and “run” 2, you run into the line again, making the slope \begin{align*}\frac{1}{2}\end{align*}. The equation of this line is \begin{align*}y=\frac{1}{2}x-3\end{align*}. The lines are parallel because they have the same slope.
Example 9: Graph \begin{align*}3x-4y=8\end{align*} and \begin{align*}4x+3y=15\end{align*}. Determine if they are parallel, perpendicular, or neither.
Solution: First, we have to change each equation into slope-intercept form. In other words, we need to solve each equation for \begin{align*}y\end{align*}.
\begin{align*}3x-4y & = 8 && 4x+3y=15\\ -4y & = -3x+8 && 3y = -4x + 15\\ y & = \frac{3}{4}x-2 && y = -\frac{4}{3}x+5\end{align*}
Now that the lines are in slope-intercept form (also called \begin{align*}y-\end{align*}intercept form), we can tell they are perpendicular because their slopes are opposite in sign and reciprocals of each other.
To graph the two lines, plot the \begin{align*}y-\end{align*}intercept on the \begin{align*}y-\end{align*}axis. From there, use the slope to rise and then run. For the first line, you would plot -2 and then rise 3 and run 4, making the next point on the line (4, 1). For the second line, plot 5 and then fall (because the slope is negative) 4 and run 3, making the next point on the line (3, 1).
Know What? Revisited In order to find the slope, we need to first find the horizontal distance in the triangle to the right. This triangle represents the incline and the elevation. To find the horizontal distance, or the run, we need to use the Pythagorean Theorem, \begin{align*}a^2+b^2=c^2\end{align*}, where \begin{align*}c\end{align*} is the hypotenuse.
\begin{align*}177^2 +run^2 & = 1532^2\\ 31,329+run^2 & = 2,347,024\\ run^2 & = 2,315,695\\ run & \approx 1521.75\end{align*}
The slope is then \begin{align*}\frac{177}{1521.75}\end{align*}, which is roughly \begin{align*}\frac{3}{25}\end{align*}.
## Review Questions
Find the slope between the two given points.
1. (4, -1) and (-2, -3)
2. (-9, 5) and (-6, 2)
3. (7, 2) and (-7, -2)
4. (-6, 0) and (-1, -10)
5. (1, -2) and (3, 6)
6. (-4, 5) and (-4, -3)
Determine if each pair of lines are parallel, perpendicular, or neither. Then, graph each pair on the same set of axes.
1. \begin{align*}y=-2x+3\end{align*} and \begin{align*}y=\frac{1}{2}x+3\end{align*}
2. \begin{align*}y=4x-2\end{align*} and \begin{align*}y=4x+5\end{align*}
3. \begin{align*}y=-x+5\end{align*} and \begin{align*}y=x+1\end{align*}
4. \begin{align*}y=-3x+1\end{align*} and \begin{align*}y=3x-1\end{align*}
5. \begin{align*}2x-3y=6\end{align*} and \begin{align*}3x+2y=6\end{align*}
6. \begin{align*}5x+2y=-4\end{align*} and \begin{align*}5x+2y=8\end{align*}
7. \begin{align*}x-3y=-3\end{align*} and \begin{align*}x+3y=9\end{align*}
8. \begin{align*}x+y=6\end{align*} and \begin{align*}4x+4y=-16\end{align*}
Determine the equation of the line that is parallel to the given line, through the given point.
1. \begin{align*}y=-5x+1; \ (-2, \ 3)\end{align*}
2. \begin{align*}y=\frac{2}{3}x-2; \ (9, 1)\end{align*}
3. \begin{align*}x-4y=12; \ (-16, \ -2)\end{align*}
4. \begin{align*}3x+2y=10; \ (8, \ -11)\end{align*}
5. \begin{align*}2x - y = 15; \ (3, \ 7)\end{align*}
6. \begin{align*}y = x - 5; \ (9, \ -1)\end{align*}
Determine the equation of the line that is perpendicular to the given line, through the given point.
1. \begin{align*}y=x-1; \ (-6, \ 2)\end{align*}
2. \begin{align*}y=3x+4; \ (9, \ -7)\end{align*}
3. \begin{align*}5x-2y=6; \ (5, \ 5)\end{align*}
4. \begin{align*}y = 4; \ (-1, \ 3)\end{align*}
5. \begin{align*}x = -3; \ (1, \ 8)\end{align*}
6. \begin{align*}x - 3y = 11; \ (0, \ 13)\end{align*}
Find the equation of the two lines in each graph below. Then, determine if the two lines are parallel, perpendicular or neither.
For the line and point below, find:
a) A parallel line, through the given point.
b) A perpendicular line, through the given point.
1. \begin{align*}m = \frac{-5-5}{2 + 3} = \frac{-10}{5} = -2\end{align*}
2. \begin{align*}m = \frac{2 + 1}{-2-7} = \frac{3}{-9} = - \frac{1}{3}\end{align*}
3. Vertical, because it must pass through \begin{align*}x = 3\end{align*} on the \begin{align*}x-\end{align*}axis and never crosses the \begin{align*}y-\end{align*}axis.
WGCNA HELP - Error: REAL() can only be applied to a 'numeric', not a 'integer'
sm15766
Hi,
I've been analysing gene expression networks from my RNA-seq dataset using the WGCNA software, using the following code on a single .csv file with individuals making up the columns and different genes making up the rows. The gene expression values are FPKM from the Tuxedo pipeline:
setwd("/home/user/edgeR")
library(WGCNA)
options(stringsAsFactors = FALSE)
nSets = 1
setLabels = c("alpha")
shortLabels = c("alpha")
multiExpr = vector(mode = "list", length = nSets)
names(multiExpr[[1]]$data) = alphaData$substanceBXH;
rownames(multiExpr[[1]]$data) = names(alphaData)[-c(1:8)];
exprSize = checkSets(multiExpr)
exprSize
gsg = goodSamplesGenesMS(multiExpr, verbose = 3);
gsg$allOK
sampleTrees = list()
for (set in 1:nSets)
{
sampleTrees[[set]] = hclust(dist(multiExpr[[set]]$data), method = "average")
}
pdf(file = "Plots/SampleClustering.pdf", width = 12, height = 12);
par(mfrow=c(2,1))
par(mar = c(0, 4, 2, 0))
for (set in 1:nSets)
plot(sampleTrees[[set]], main = paste("Sample clustering on all genes in", setLabels[set]),
xlab="", sub="", cex = 0.7);
dev.off()
save(multiExpr, setLabels, shortLabels, file = "Consensus-dataInput.RData")
enableWGCNAThreads()
lnames = load(file = "Consensus-dataInput.RData");
lnames
nSets = checkSets(multiExpr)$nSets
powers = c(seq(4,10,by=1), seq(12,20, by=2));
powerTables = vector(mode = "list", length = nSets);
for (set in 1:nSets)
powerTables[[set]] = list(data = pickSoftThreshold(multiExpr[[set]]$data, powerVector=powers, verbose = 2)[[2]]);
collectGarbage();
colors = c("black", "red")
plotCols = c(2,5,6,7)
colNames = c("Scale Free Topology Model Fit", "Mean connectivity", "Median connectivity", "Max connectivity");
ylim = matrix(NA, nrow = 2, ncol = 4);
for (set in 1:nSets)
{
for (col in 1:length(plotCols)) {
ylim[1, col] = min(ylim[1, col], powerTables[[set]]$data[, plotCols[col]], na.rm = TRUE);
ylim[2, col] = max(ylim[2, col], powerTables[[set]]$data[, plotCols[col]], na.rm = TRUE);
}
}
sizeGrWindow(8, 6)
par(mfcol = c(2,2));
par(mar = c(4.2, 4.2, 2.2, 0.5))
cex1 = 0.7;
for (col in 1:length(plotCols)) for (set in 1:nSets)
{
if (set==1)
{
plot(powerTables[[set]]$data[,1], -sign(powerTables[[set]]$data[,3])*powerTables[[set]]$data[,2],
xlab="Soft Threshold (power)",ylab=colNames[col],type="n", ylim = ylim[, col],
main = colNames[col]);
}
if (col==1)
{
text(powerTables[[set]]$data[,1], -sign(powerTables[[set]]$data[,3])*powerTables[[set]]$data[,2],
labels=powers,cex=cex1,col=colors[set]);
} else
text(powerTables[[set]]$data[,1], powerTables[[set]]$data[,plotCols[col]],
labels=powers,cex=cex1,col=colors[set]);
if (col==1)
{
legend("bottomright", legend = setLabels, col = colors, pch = 20) ;
} else
legend("topright", legend = setLabels, col = colors, pch = 20) ;
}
net = blockwiseConsensusModules(
multiExpr, power = 6, minModuleSize = 30, deepSplit = 2,
pamRespectsDendro = FALSE,
mergeCutHeight = 0.25, numericLabels = TRUE,
minKMEtoStay = 0,
saveTOMs = TRUE, verbose = 5)
which returns the error
Could not find a suitable move to improve the clustering.
..merging smaller clusters...
..Working on block 1 .
....Working on set 1
Error: REAL() can only be applied to a 'numeric', not a 'integer'
Any ideas on what might be causing this? Thanks!
wgcna rnaseq R bioconductor
Peter Langfelder
Aaron is correct, the problem arises because some internal functions expect real numbers, not integers. This can be fixed easily as Keith suggested.
More generally, I don't recommend applying WGCNA directly to integer (count) data. See WGCNA FAQ at https://labs.genetics.ucla.edu/horvath/CoexpressionNetwork/Rpackages/WGCNA/faq.html, point 3 (Working with RNA-seq data) for some advice on working with RNA-seq data (the most common integer data).
Hey,
I transformed the data using log2(myarray+1) and then exported it back into a text file, which looked like this:
X10G48 X35Y87 X36W26 X23Y79 X2B84 X12Y30 X10B70 X10G87 X36W62 X23Y70 X2UNA X12Y47 X10R99 X10G17 X35Y44 X36W35 X23Y59 X2B82 X12Y51 5.0874628413 5.1292830169 5.3219280949 6.2288186905 5.4594316186 5.4594316186 5.1292830169 4.8579809951 5.4262647547 6.3750394313 5.2854022189 4.8579809951 4.5849625007 5.3575520046 5.3575520046 5.2854022189 4.0874628413 5.4262647547 4.2479275134 4.1699250014 5.4262647547 5.4918530963 6.0874628413 5.2854022189 5.8826430494 5.3923174228 4.9068905956 4.7004397181 5.8826430494 5.7548875022 4.8579809951 4.8073549221 5.3219280949 4.1699250014 5 4.9541963104 4.8073549221 2.3219280949 0 0 0 0 0 0 2.8073549221 0 0 0 0 0 3.7004397181 4.0874628413 0 2.3219280949 2.8073549221 0 3 0 0 0 1.5849625007 0 0 3.4594316186 0 0 0 0 2.3219280949 1.5849625007 4.9541963104 0 1.5849625007 0 0 2.5849625007 5.4918530963 5.8579809951 5.2479275134 6.1699250014 5.5235619561 5.5849625007 5.6438561898 5.7548875022 5.2094533656 5.5849625007 6.0874628413 5.6147098441 5.3219280949 6.2854022189 4.1699250014 5.3219280949 5.3923174228 6.022367813 5.2094533656 5.4594316186 5.5545888517 5.4262647547 6.1699250014 5.7548875022 5.7548875022 5.6438561898 5.6147098441 4.2479275134 5.7548875022 5.9541963104 5.2479275134 4.7548875022 6.3037807482 5.4594316186 4.9068905956 5.7548875022 5.5849625007 4.5235619561 10.2131042196 10.7598881832 10.3264294871 9.9351650496 9.6474584265 9.5313814605 9.6438561898 10.0112272554 9.8856963733 9.5313814605 10.1305705628 10.342074668 10.0620461377 10.4283601727 10.7944158664 10.2807707701 10.9701058906 10.4252159033 9.9985904297 10.2215871213 10.822570831 10.3793783671 10.0042204663 9.6147098441 9.5468944599 9.7262181593 10.0927571409 9.8041310212 9.5468944599 10.2288186905 10.401946124 10.0953970228 10.3912435894 10.7540523675 10.3286749273 11.048486874 10.4726908392 10.0714623626
But the same error came up again as before:
Error: REAL() can only be applied to a 'numeric', not a 'integer'
Am I doing something wrong?
Keith Hughitt
Not sure what the underlying cause is, but I've found that some methods seem not to like discrete data, and simply casting it to float may help, e.g.:
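For example, something along these lines (a sketch rather than the exact snippet, since the original did not survive in this copy of the thread; multiExpr is the object from the question, and countMatrix is just a placeholder name):

# coerce every column of the expression data frame to double precision
multiExpr[[1]]$data[] <- lapply(multiExpr[[1]]$data, as.numeric)
# or, for a plain integer matrix of counts:
# storage.mode(countMatrix) <- "double"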
Keith
From the developer side, the error probably comes from the C-level API, where the REAL macro expects a SEXP of double-precision values rather than integers. I suspect that someone, somewhere, didn't coerce the user inputs to double precision, e.g., via as.double or with coerceVector. This results in the observed error upon entry into C.
OK thanks, that sounds reasonable. I am very much a rookie when it comes to R coding and such. Could you recommend a way to fix it? Thanks!
# 5.4: A Population Proportion
During an election year, we see articles in the newspaper that state confidence intervals in terms of proportions or percentages. For example, a poll for a particular candidate running for president might show that the candidate has 40% of the vote within three percentage points (if the sample is large enough). Often, election polls are calculated with 95% confidence, so, the pollsters would be 95% confident that the true proportion of voters who favored the candidate would be between 0.37 and 0.43: (0.40 – 0.03,0.40 + 0.03).
Investors in the stock market are interested in the true proportion of stocks that go up and down each week. Businesses that sell personal computers are interested in the proportion of households in the United States that own personal computers. Confidence intervals can be calculated for the true proportion of stocks that go up or down each week and for the true proportion of households in the United States that own personal computers.
The procedure to find the confidence interval, the sample size, the error bound, and the confidence level for a proportion is similar to that for the population mean, but the formulas are different. How do you know you are dealing with a proportion problem? First, the underlying distribution is a binomial distribution. (There is no mention of a mean or average.) If $$X$$ is a binomial random variable, then
$X \sim B(n, p)\nonumber$
where $$n$$ is the number of trials and $$p$$ is the probability of a success.
To form a proportion, take $$X$$, the random variable for the number of successes and divide it by $$n$$, the number of trials (or the sample size). The random variable $$P′$$ (read "P prime") is that proportion,
$P' = \dfrac{X}{n}\nonumber$
(Sometimes the random variable is denoted as $$\hat{P}$$, read "P hat".)
When $$n$$ is large and $$p$$ is not close to zero or one, we can use the normal distribution to approximate the binomial.
$X \sim N(np, \sqrt{npq})\nonumber$
If we divide the random variable, the mean, and the standard deviation by $$n$$, we get a normal distribution of proportions with $$P′$$, called the estimated proportion, as the random variable. (Recall that a proportion is the number of successes divided by $$n$$.)
$\dfrac{X}{n} = P' \sim N\left(\dfrac{np}{n}, \dfrac{\sqrt{npq}}{n}\right)\nonumber$
Using algebra to simplify:
$\dfrac{\sqrt{npq}}{n} = \sqrt{\dfrac{pq}{n}}\nonumber$
P′ follows a normal distribution for proportions:
$\dfrac{X}{n} = P' \sim N\left(p, \sqrt{\dfrac{pq}{n}}\right)\nonumber$
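As a quick, purely illustrative check on this approximation (not part of the original text), you can simulate many sample proportions in R and compare their spread with $$\sqrt{\dfrac{pq}{n}}$$:

set.seed(1)
n <- 500; p <- 0.842
phat <- rbinom(10000, size = n, prob = p) / n   # 10,000 simulated sample proportions
sd(phat)                                        # empirical spread, about 0.016
sqrt(p * (1 - p) / n)                           # theoretical value, also about 0.016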
The confidence interval has the form
$(p′ – EBP, p′ + EBP).\nonumber$
where
• $$EBP$$ is error bound for the proportion.
• $$p′ = \dfrac{x}{n}$$
• $$p′ =$$ the estimated proportion of successes (p′ is a point estimate for p, the true proportion.)
• $$x =$$ the number of successes
• $$n =$$ the size of the sample
The error bound (EBP) for a proportion is
$EBP = \left(z_{\frac{\alpha}{2}}\right)\left(\sqrt{\dfrac{p'q'}{n}}\right)\nonumber$
where $$q' = 1 - p'$$.
This formula is similar to the error bound formula for a mean, except that the "appropriate standard deviation" is different. For a mean, when the population standard deviation is known, the appropriate standard deviation that we use is $$\dfrac{\sigma}{\sqrt{n}}$$. For a proportion, the appropriate standard deviation is
$\sqrt{\dfrac{pq}{n}}.\nonumber$
However, in the error bound formula, we use
$\sqrt{\dfrac{p'q'}{n}}\nonumber$
as the standard deviation, instead of
$\sqrt{\dfrac{pq}{n}}.\nonumber$
In the error bound formula, the sample proportions p′ and q′ are estimates of the unknown population proportions p and q. The estimated proportions $$p′$$ and $$q′$$ are used because $$p$$ and $$q$$ are not known. The sample proportions $$p′$$ and $$q′$$ are calculated from the data: $$p′$$ is the estimated proportion of successes, and $$q′$$ is the estimated proportion of failures.
The confidence interval can be used only if the number of successes $$np′$$ and the number of failures $$nq′$$ are both greater than five.
Normal Distribution of Proportions
For the normal distribution of proportions, the $$z$$-score formula is as follows.
If
$P' \sim N\left(p, \sqrt{\dfrac{pq}{n}}\right)$
then the $$z$$-score formula is
$z = \dfrac{p'-p}{\sqrt{\dfrac{pq}{n}}}$
Example $$\PageIndex{1}$$
Suppose that a market research firm is hired to estimate the percent of adults living in a large city who have cell phones. Five hundred randomly selected adult residents in this city are surveyed to determine whether they have cell phones. Of the 500 people surveyed, 421 responded yes - they own cell phones. Using a 95% confidence level, compute a confidence interval estimate for the true proportion of adult residents of this city who have cell phones.
Solution A
• The first solution is step-by-step (Solution A).
• The second solution uses a function of the TI-83, 83+ or 84 calculators (Solution B).
Let $$X =$$ the number of people in the sample who have cell phones. $$X$$ is binomial.
$X \sim B(500,\dfrac{421}{500}).\nonumber$
To calculate the confidence interval, you must find $$p′$$, $$q′$$, and $$EBP$$.
• $$n = 500$$
• $$x =$$ the number of successes $$= 421$$
$p′ = \dfrac{x}{n} = \dfrac{421}{500} = 0.842\nonumber$
• $$p′ = 0.842$$ is the sample proportion; this is the point estimate of the population proportion.
$q′ = 1 – p′ = 1 – 0.842 = 0.158\nonumber$
Since $$CL = 0.95$$, then
$\alpha = 1 - CL = 1 - 0.95 = 0.05 \qquad \dfrac{\alpha}{2} = 0.025.\nonumber$
Then
$z_{\dfrac{\alpha}{2}} = z_{0.025} = 1.96\nonumber$
Use the TI-83, 83+, or 84+ calculator command invNorm(0.975,0,1) to find $$z_{0.025}$$. Remember that the area to the right of $$z_{0.025}$$ is $$0.025$$ and the area to the left of $$z_{0.025}$$ is $$0.975$$. This can also be found using appropriate commands on other calculators, using a computer, or using a Standard Normal probability table.
$EBP = \left(z_{\dfrac{\alpha}{2}}\right)\sqrt{\dfrac{p'q'}{n}} = (1.96)\sqrt{\dfrac{(0.842)(0.158)}{500}} = 0.032\nonumber$
$p' – EBP = 0.842 – 0.032 = 0.81\nonumber$
$p′ + EBP = 0.842 + 0.032 = 0.874\nonumber$
The confidence interval for the true binomial population proportion is $$(p′ – EBP, p′ +EBP) = (0.810, 0.874)$$.
Interpretation
We estimate with 95% confidence that between 81% and 87.4% of all adult residents of this city have cell phones.
Explanation of 95% Confidence Level
Ninety-five percent of the confidence intervals constructed in this way would contain the true value for the population proportion of all adult residents of this city who have cell phones.
Solution B
Press STAT and arrow over to TESTS.
Arrow down to A:1-PropZint. Press ENTER.
Arrow down to $$x$$ and enter 421.
Arrow down to $$n$$ and enter 500.
Arrow down to C-Level and enter .95.
Arrow down to Calculate and press ENTER.
The confidence interval is (0.81003, 0.87397).
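If you prefer to verify the arithmetic in software, the short R sketch below (not part of the original text) reproduces Solution A:

n <- 500; x <- 421
p <- x / n                         # 0.842
z <- qnorm(0.975)                  # about 1.96 for a 95% confidence level
ebp <- z * sqrt(p * (1 - p) / n)   # error bound, about 0.032
c(p - ebp, p + ebp)                # approximately (0.810, 0.874)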
Exercise $$\PageIndex{1}$$
Suppose 250 randomly selected people are surveyed to determine if they own a tablet. Of the 250 surveyed, 98 reported owning a tablet. Using a 95% confidence level, compute a confidence interval estimate for the true proportion of people who own tablets.
(0.3315, 0.4525)
Example $$\PageIndex{2}$$
For a class project, a political science student at a large university wants to estimate the percent of students who are registered voters. He surveys 500 students and finds that 300 are registered voters. Compute a 90% confidence interval for the true percent of students who are registered voters, and interpret the confidence interval.
• The first solution is step-by-step (Solution A).
• The second solution uses a function of the TI-83, 83+, or 84 calculators (Solution B).
Solution A
• $$x = 300$$ and
• $$n = 500$$
$p' = \dfrac{x}{n} = \dfrac{300}{500} = 0.600\nonumber$
$q′ = 1 − p′ = 1 − 0.600 = 0.400\nonumber$
Since $$CL = 0.90$$, then
$\alpha = 1 - CL = 1 - 0.90 = 0.10 \qquad \dfrac{\alpha}{2} = 0.05$
$z_{\dfrac{\alpha}{2}} = z_{0.05} = 1.645\nonumber$
Use the TI-83, 83+, or 84+ calculator command invNorm(0.95,0,1) to find $$z_{0.05}$$. Remember that the area to the right of $$z_{0.05}$$ is 0.05 and the area to the left of $$z_{0.05}$$ is 0.95. This can also be found using appropriate commands on other calculators, using a computer, or using a standard normal probability table.
$EBP = \left(z_{\dfrac{\alpha}{2}}\right)\sqrt{\dfrac{p'q'}{n}} = (1.645)\sqrt{\dfrac{(0.60)(0.40)}{500}} = 0.036\nonumber$
$p′ – EBP = 0.60 − 0.036 = 0.564\nonumber$
$p′ + EBP = 0.60 + 0.036 = 0.636\nonumber$
The confidence interval for the true binomial population proportion is $$(p′ – EBP, p′ +EBP) = (0.564,0.636)$$.
Interpretation
• We estimate with 90% confidence that the true percent of all students that are registered voters is between 56.4% and 63.6%.
• Alternate Wording: We estimate with 90% confidence that between 56.4% and 63.6% of ALL students are registered voters.
Explanation of 90% Confidence Level
Ninety percent of all confidence intervals constructed in this way contain the true value for the population percent of students that are registered voters.
Solution B
Press STAT and arrow over to TESTS.
Arrow down to A:1-PropZint. Press ENTER.
Arrow down to $$x$$ and enter 300.
Arrow down to $$n$$ and enter 500.
Arrow down to C-Level and enter 0.90.
Arrow down to Calculate and press ENTER.
The confidence interval is (0.564, 0.636).
Exercise $$\PageIndex{2}$$
A student polls his school to see if students in the school district are for or against the new legislation regarding school uniforms. She surveys 600 students and finds that 480 are against the new legislation.
1. Compute a 90% confidence interval for the true percent of students who are against the new legislation, and interpret the confidence interval.
2. In a sample of 300 students, 68% said they own an iPod and a smart phone. Compute a 97% confidence interval for the true percent of students who own an iPod and a smartphone.
(0.7731, 0.8269); We estimate with 90% confidence that the true percent of all students in the district who are against the new legislation is between 77.31% and 82.69%.
Sixty-eight percent (68%) of students own an iPod and a smart phone.
$p′ = 0.68\nonumber$
$q′ = 1–p′ = 1 – 0.68 = 0.32\nonumber$
Since $$CL = 0.97$$, we know
$\alpha = 1 – 0.97 = 0.03\nonumber$
and
$\dfrac{\alpha}{2} = 0.015.\nonumber$
The area to the right of $$z_{0.015}$$ is 0.015, and the area to the left of $$z_{0.015}$$ is 1 – 0.015 = 0.985.
Using the TI 83, 83+, or 84+ calculator function InvNorm(0.985,0,1),
$z_{0.015} = 2.17\nonumber$
$EPB = \left(z_{\dfrac{\alpha}{2}}\right)\sqrt{\dfrac{p'q'}{n}} = 2.17\sqrt{\dfrac{0.68(0.32)}{300}} \approx 0.0584\nonumber$
$p′ – EPB = 0.68 – 0.0584 = 0.6216\nonumber$
$p′ + EPB = 0.68 + 0.0584 = 0.7384\nonumber$
We are 97% confident that the true proportion of all students who own an iPod and a smart phone is between 0.6216 and 0.7384.
Calculator
Press STAT and arrow over to TESTS.
Arrow down to A:1-PropZint. Press ENTER.
Arrow down to x and enter 300*0.68.
Arrow down to n and enter 300.
Arrow down to C-Level and enter 0.97.
Arrow down to Calculate and press ENTER.
The confidence interval is (0.6216, 0.7384).
## "Plus Four" Confidence Interval for $$p$$
There is a certain amount of error introduced into the process of calculating a confidence interval for a proportion. Because we do not know the true proportion for the population, we are forced to use point estimates to calculate the appropriate standard deviation of the sampling distribution. Studies have shown that the resulting estimation of the standard deviation can be flawed.
Fortunately, there is a simple adjustment that allows us to produce more accurate confidence intervals. We simply pretend that we have four additional observations. Two of these observations are successes and two are failures. The new sample size, then, is $$n + 4$$, and the new count of successes is $$x + 2$$. Computer studies have demonstrated the effectiveness of this method. It should be used when the confidence level desired is at least 90% and the sample size is at least ten.
Example $$\PageIndex{3}$$
A random sample of 25 statistics students was asked: “Have you smoked a cigarette in the past week?” Six students reported smoking within the past week. Use the “plus-four” method to find a 95% confidence interval for the true proportion of statistics students who smoke.
Solution A
Six students out of 25 reported smoking within the past week, so $$x = 6$$ and $$n = 25$$. Because we are using the “plus-four” method, we will use $$x = 6 + 2 = 8$$ and $$n = 25 + 4 = 29$$.
$p' = \dfrac{x}{n} = \dfrac{8}{29} \approx 0.276\nonumber$
$q′ = 1 – p′ = 1 – 0.276 = 0.724\nonumber$
Since $$CL = 0.95$$, we know $$\alpha = 1 – 0.95 = 0.05$$ and $$\dfrac{\alpha}{2} = 0.025$$.
$z_{0.025} = 1.96\nonumber$
$$EPB = \left(z_{\dfrac{\alpha}{2}}\right)\sqrt{\dfrac{p'q'}{n}} = (1.96)\sqrt{\dfrac{0.276(0.724)}{29}} \approx 0.163$$
$p′ – EPB = 0.276 – 0.163 = 0.113\nonumber$
$p′ + EPB = 0.276 + 0.163 = 0.439\nonumber$
We are 95% confident that the true proportion of all statistics students who smoke cigarettes is between 0.113 and 0.439.
Solution B
Press STAT and arrow over to TESTS.
Arrow down to A:1-PropZint. Press ENTER.
REMINDER
Remember that the plus-four method assumes an additional four trials: two successes and two failures. You do not need to change the process for calculating the confidence interval; simply update the values of x and n to reflect these additional trials.
Arrow down to $$x$$ and enter eight.
Arrow down to $$n$$ and enter 29.
Arrow down to C-Level and enter 0.95.
Arrow down to Calculate and press ENTER.
The confidence interval is (0.113, 0.439).
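To check the plus-four arithmetic in software, a brief R sketch (again, not part of the original text) reproduces this interval:

x <- 6 + 2; n <- 25 + 4            # plus-four adjustment: two extra successes, two extra failures
p <- x / n                         # about 0.276
z <- qnorm(0.975)                  # about 1.96
ebp <- z * sqrt(p * (1 - p) / n)   # about 0.163
c(p - ebp, p + ebp)                # approximately (0.113, 0.439)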
Exercise $$\PageIndex{3}$$
Out of a random sample of 65 freshmen at State University, 31 students have declared a major. Use the “plus-four” method to find a 96% confidence interval for the true proportion of freshmen at State University who have declared a major.
Solution A
Using “plus four,” we have $$x = 31 + 2 = 33$$ and $$n = 65 + 4 = 69$$.
$p′ = \dfrac{33}{69} \approx 0.478\nonumber$
$q′ = 1 – p′ = 1 – 0.478 = 0.522\nonumber$
Since $$CL = 0.96$$, we know $$\alpha = 1 – 0.96 = 0.04$$ and $$\dfrac{\alpha}{2} = 0.02$$.
$z_{0.02} = 2.054\nonumber$
$EPB = \left(z_{\dfrac{\alpha}{2}}\right)\sqrt{\dfrac{p'q'}{n}} = (2.054)\left(\sqrt{\dfrac{(0.478)(0.522)}{69}}\right) \approx 0.124\nonumber$
$p′ – EPB = 0.478 – 0.124 = 0.354\nonumber$
$p′ + EPB = 0.478 + 0.124 = 0.602\nonumber$
We are 96% confident that between 35.4% and 60.2% of all freshmen at State U have declared a major.
Solution B
Press STAT and arrow over to TESTS.
Arrow down to A:1-PropZint. Press ENTER.
Arrow down to $$x$$ and enter 33.
Arrow down to $$n$$ and enter 69.
Arrow down to C-Level and enter 0.96.
Arrow down to Calculate and press ENTER.
The confidence interval is (0.355, 0.602).
Example $$\PageIndex{4}$$
The Berkman Center for Internet & Society at Harvard recently conducted a study analyzing the privacy management habits of teen internet users. In a group of 50 teens, 13 reported having more than 500 friends on Facebook. Use the “plus four” method to find a 90% confidence interval for the true proportion of teens who would report having more than 500 Facebook friends.
Solution A
Using “plus-four,” we have $$x = 13 + 2 = 15$$ and $$n = 50 + 4 = 54$$.
$p′ = \dfrac{15}{54} \approx 0.278\nonumber$
$q′ = 1 – p′ = 1 − 0.278 = 0.722\nonumber$
Since $$CL = 0.90$$, we know $$\alpha = 1 – 0.90 = 0.10$$ and $$\dfrac{\alpha}{2} = 0.05$$.
$z_{0.05} = 1.645\nonumber$
$EPB = \left(z_{\dfrac{\alpha}{2}}\right)\left(\sqrt{\dfrac{p'q'}{n}}\right) = (1.645)\left(\sqrt{\dfrac{(0.278)(0.722)}{54}}\right) \approx 0.100\nonumber$
$p′ – EPB = 0.278 – 0.100 = 0.178\nonumber$
$p′ + EPB = 0.278 + 0.100 = 0.378\nonumber$
We are 90% confident that between 17.8% and 37.8% of all teens would report having more than 500 friends on Facebook.
Solution B
Press STAT and arrow over to TESTS.
Arrow down to A:1-PropZint. Press ENTER.
Arrow down to $$x$$ and enter 15.
Arrow down to $$n$$ and enter 54.
Arrow down to C-Level and enter 0.90.
Arrow down to Calculate and press ENTER.
The confidence interval is (0.178, 0.378).
Exercise $$\PageIndex{4}$$
The Berkman Center Study referenced in Example talked to teens in smaller focus groups, but also interviewed additional teens over the phone. When the study was complete, 588 teens had answered the question about their Facebook friends with 159 saying that they have more than 500 friends. Use the “plus-four” method to find a 90% confidence interval for the true proportion of teens that would report having more than 500 Facebook friends based on this larger sample. Compare the results to those in Example.
Solution A
Using “plus-four,” we have $$x = 159 + 2 = 161$$ and $$n = 588 + 4 = 592$$.
$p′ = \dfrac{161}{592} \approx 0.272\nonumber$
$q′ = 1 – p′ = 1 – 0.272 = 0.728\nonumber$
Since CL = 0.90, we know $$\alpha = 1 – 0.90 = 0.10$$ and $$\dfrac{\alpha}{2} = 0.05$$, so $$z_{0.05} = 1.645$$.
$EPB = \left(z_{\dfrac{\alpha}{2}}\right)\left(\sqrt{\dfrac{p'q'}{n}}\right) = (1.645)\left(\sqrt{\dfrac{(0.272)(0.728)}{592}}\right) \approx 0.030\nonumber$
$p′ – EPB = 0.272 – 0.030 = 0.242\nonumber$
$p′ + EPB = 0.272 + 0.030 = 0.302\nonumber$
We are 90% confident that between 24.2% and 30.2% of all teens would report having more than 500 friends on Facebook.
Solution B
• Press STAT and arrow over to TESTS.
• Arrow down to A:1-PropZint. Press ENTER.
• Arrow down to $$x$$ and enter 161.
• Arrow down to $$n$$ and enter 592.
• Arrow down to C-Level and enter 0.90.
• Arrow down to Calculate and press ENTER.
• The confidence interval is (0.242, 0.302).
Conclusion: The confidence interval for the larger sample is narrower than the interval from Example. Larger samples will always yield more precise confidence intervals than smaller samples. The “plus four” method has a greater impact on the smaller sample. It shifts the point estimate from 0.26 (13/50) to 0.278 (15/54). It has a smaller impact on the EPB, changing it from 0.102 to 0.100. In the larger sample, the point estimate undergoes a smaller shift: from 0.270 (159/588) to 0.272 (161/592). It is easy to see that the plus-four method has the greatest impact on smaller samples.
## Calculating the Sample Size $$n$$
If researchers desire a specific margin of error, then they can use the error bound formula to calculate the required sample size. The error bound formula for a population proportion is
$EBP = \left(z_{\frac{\alpha}{2}}\right)\left(\sqrt{\dfrac{p'q'}{n}}\right)\nonumber$
Solving for $$n$$ gives you an equation for the sample size.
$n = \dfrac{\left(z_{\frac{\alpha}{2}}\right)^{2}(p'q')}{EBP^{2}}\nonumber$
Example $$\PageIndex{5}$$
Suppose a mobile phone company wants to determine the current percentage of customers aged 50+ who use text messaging on their cell phones. How many customers aged 50+ should the company survey in order to be 90% confident that the estimated (sample) proportion is within three percentage points of the true population proportion of customers aged 50+ who use text messaging on their cell phones.
From the problem, we know that $$\bf{EBP = 0.03}$$ (3% = 0.03) and $$z_{\dfrac{\alpha}{2}} = z_{0.05} = 1.645$$ because the confidence level is 90%.
However, in order to find $$n$$, we need to know the estimated (sample) proportion $$p′$$. Remember that $$q′ = 1 – p′$$. But, we do not know $$p′$$ yet. Since we multiply $$p′$$ and $$q′$$ together, we make them both equal to 0.5 because $$p′q′ = (0.5)(0.5) = 0.25$$ results in the largest possible product. (Try other products: $$(0.6)(0.4) = 0.24$$; $$(0.3)(0.7) = 0.21$$; $$(0.2)(0.8) = 0.16$$ and so on). The largest possible product gives us the largest $$n$$. This gives us a large enough sample so that we can be 90% confident that we are within three percentage points of the true population proportion. To calculate the sample size $$n$$, use the formula and make the substitutions.
$n = \dfrac{z^{2}p'q'}{EBP^{2}}\nonumber$
gives
$n = \dfrac{1.645^{2}(0.5)(0.5)}{0.03^{2}} = 751.7\nonumber$
Round the answer to the next higher value. The sample size should be 752 cell phone customers aged 50+ in order to be 90% confident that the estimated (sample) proportion is within three percentage points of the true population proportion of all customers aged 50+ who use text messaging on their cell phones.
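The same calculation is easy to script; this R sketch (not part of the original text) mirrors the substitution above:

z <- qnorm(0.95)                   # about 1.645 for a 90% confidence level
n <- z^2 * 0.5 * 0.5 / 0.03^2      # about 751.5 to 751.7, depending on how z is rounded
ceiling(n)                         # round up: 752 customers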
Exercise $$\PageIndex{5}$$
Suppose an internet marketing company wants to determine the current percentage of customers who click on ads on their smartphones. How many customers should the company survey in order to be 90% confident that the estimated proportion is within five percentage points of the true population proportion of customers who click on ads on their smartphones?
Binomial Distribution: a discrete random variable (RV) which arises from Bernoulli trials; there are a fixed number, $$n$$, of independent trials. “Independent” means that the result of any trial (for example, trial 1) does not affect the results of the following trials, and all trials are conducted under the same conditions. Under these circumstances the binomial RV $$X$$ is defined as the number of successes in $$n$$ trials. The notation is: $$X \sim B(\mathbf{n},\mathbf{p})$$. The mean is $$\mu = np$$ and the standard deviation is $$\sigma = \sqrt{npq}$$. The probability of exactly $$x$$ successes in $$n$$ trials is $$P(X = x) = \binom{n}{x}p^{x}q^{n-x}$$.
Error Bound for a Population Proportion ($$EBP$$): the margin of error for a proportion; it depends on the confidence level, the sample size, and the estimated (from the sample) proportion of successes.
• ### On Algorithms
So, generally speaking, I’ve typically adhered to the rule that those who develop software should be aware of various classes of algorithms and data structures, but should avoid implementing them if at all possible. The reasoning here is pretty simple, and I think pretty common:
1. You’re reinventing the wheel. Stop that, we have enough wheels.
2. You’re probably reinventing it badly.
So just go find yourself the appropriate wheel to solve your problem and move on.
Ah, but there’s a gotcha, here: Speaking for myself, I never truly understand an algorithm or data structure, both theoretically (ie, how it works in the abstract, complexity, etc) and practically (ie, how you’d actually implement the thing) until I try to implement it. After all, these things in the abstract can be tricky to grok, and when actually implemented you discover there’s all kinds of details and edge cases that you need to deal with.
Now, I’ve spent a lot of my free time learning about programming languages (the tools of our trade that we use to express our ideas), and about software architecture and design, the “blueprints”, if you will. But if languages are the tools and the architecture and design are the blueprints, algorithms and data structures are akin to the templates carpenters use for building doors, windows, etc. That is, they provide a general framework for solving various classes of problems that we as developers encounter day-to-day.
And, like a framer, day-to-day we may very well make use of various prefabbed components to get our jobs done more quickly and efficiently. But without understanding how and why those components are built the way they are, it can be very easy to misuse or abuse them. Plus, it can’t hurt if, when someone comes along and asks you to show off your mad skillz, you can demonstrate your ability to build one of those components from scratch.
Consequently, I plan to kick off a round of posts wherein I explore various interesting algorithms and data structures that happen to catch my attention. So far I have a couple on the list that look interesting, either because I don’t know them, or because it’s been so long that I’ve forgotten them…
Data Structures
1. Skip list
2. Fibonacci heap
3. Red-Black tree
4. Tries
   1. Suffix Tries
5. Bloom filter
Algorithms
1. Various streaming algorithms (computations over read-once streams of data):
1. Heavy hitters (finding elements that appear more often than a prescribed frequency)
2. Counting distinct elements
3. Computing entropy
2. Topological sort
And I guarantee there’s more that belong on this list, but this is just an initial roadmap… assuming I follow through, anyway.
• ### Hosting Git on Windows
Using Git to push changes upstream to servers is incredibly handy. In essence, you set up a bare repository on the target server, configure git to use the production application path as the git working directory, and then set up hooks to automatically update the working directory when changes are pushed into the repository. The result is dead easy code deployment, as you can simply push from your repository to the remote on the server.
But making this work when the Git repository is being hosted on Windows is a bit tricky. Normally ssh is the default transport for git, but making that work on Windows is an enormous pain. As such, this little writeup assumes the use of HTTP as the transport protocol.
### Installation
So, first up we need to install a couple components:
1. msysgit
2. Apache
Note: When installing msysgit, make sure to select the option that installs git in your path! After installation the system path should include the following [1]:
C:\Program Files\Git\cmd;C:\Program Files\Git\bin;C:\Program Files\Git\libexec\git-core
Now, in addition, we’ll be using git-http-backend to serve up our repository, and it turns out the msysgit installation of this tool is broken such that one of its required DLLs is not in the directory where it’s installed. As such, you need to copy:
C:\Program Files\Git\bin\libiconv-2.dll
to
C:\Program Files\Git\libexec\git-core\
### Repository Initialization
Once you have the software installed, create your bare repository by firing up Git Bash and running something like:
$ mkdir -p /c/git/project.git
$ cd /c/git/project.git
$ git init --bare
$ git config core.worktree c:/path/to/webroot
$ git config http.receivepack true
$ touch git-daemon-export-ok
Those last three commands are vital and will ensure that we can push to the repository, and that the repository uses our web root as the working tree.
### Configuring Apache
Next up, add the following lines to your httpd.conf:
SetEnv GIT_PROJECT_ROOT c:/git
ScriptAlias /git/ "C:/Program Files/Git/libexec/git-core/git-http-backend.exe/"
<Directory "C:/Program Files/Git/libexec/git-core/">
Allow From All
</Directory>
Note, I’ve omitted any security, here. You’ll probably want to enable some form of HTTP authentication.
In addition, in order to make hooks work, you need to reconfigure the Apache daemon to run as a normal user. Obviously this user should have permissions to read from/write to the git repository folder and web root.
Oh, and last but not least, don’t forget to restart Apache at this point.
### Pushing the Base Repository
So, we now have our repository exposed, let’s try to push to it. Assuming you have an already established repository ready to go and it’s our master branch we want to publish, we just need to do a:
git remote add server http://myserver/git/project.git
git push server master
In theory, anyway.
Note: After the initial push, in at least one instance I’ve found that “logs/refs” wasn’t present in the server bare repository. This breaks, among other things, git stash. To remedy this I simply created that folder manually.
Lastly, you can pop over to your server, fire up Git Bash, and:
$ cd /c/git/project.git
$ git checkout master
### Our Hooks
So, about those hooks. I use two, one that triggers before a new update comes to stash any local changes, and then another after a pack is applied to update the working tree and then unstash those local changes. The first is a pre-receive hook:
#!/bin/sh
export GIT_DIR=$(pwd)                    # hooks run from the bare repository
cd $(git config --get core.worktree)     # switch to the deployed working tree
git stash save --include-untracked
The second is a post-update hook:
#!/bin/sh
export GIT_DIR=$(pwd)                    # hooks run from the bare repository
cd $(git config --get core.worktree)     # switch to the deployed working tree
git checkout -f
git reset --hard HEAD
git stash pop
Obviously you can do whatever you want, here. This is just something I slapped together for a test server I was working with.
[1] Obviously any paths, here, would need to be tweaked on a 64-bit server with a 32-bit Git.
# The Joy of Generating C Code from MATLAB
By Bill Chou, MathWorks
Engineers have translated low-level languages like C into machine code for decades using compilers. But is it possible to translate a high-level language like MATLAB® to C using coders? Most engineers would agree that it’s possible in theory—but does it work in practice? Is the generated code readable or spaghetti? Efficient or bloated? Fast or slow? And does it support industrial workflows, or just R&D?
This article addresses these concerns head-on. It provides tips and best practices for working with MATLAB Coder™, as well as industry examples of successful applications of generated code by companies such as Delphi, Baker Hughes, iSonea, and dorsaVi.
## Comparing MATLAB and C Code: A Multiplication Example
The simple MATLAB function below multiplies two inputs.
Given scalar inputs, MATLAB Coder generates the following C code:
As you can see, the generated code maps clearly back to the MATLAB code.
The same piece of MATLAB code, when given two matrix inputs, generates three nested for-loops in C:
## Recommended Three-Step Iterative Workflow
The simple function shown above can be implemented in a single step. But for more substantial projects, we recommend a structured approach using a three-step iterative workflow (Figure 1):
1. Prepare your algorithm for code generation. Examine and modify the MATLAB code to introduce implementation considerations needed for low-level C code, and use the MATLAB language and functions that support code generation.
2. Test the MATLAB code’s readiness for code generation using default settings. Check for run-time errors by generating and executing a MEX file. If successful, move to the next step. If not, repeat step 1 until you can generate a MEX function.
3. Generate C code or keep the MEX function from step 2. You can iterate on the MATLAB code to optimize either the generated C code (for look and feel, memory, and speed) or the MEX function (for performance).
Figure 1. Three-step iterative workflow for generating code.
The MATLAB Coder app guides you through this iterative process while enabling you to stay within the MATLAB environment. It analyzes your MATLAB code to propose data types and sizes for your inputs. It tests whether your MATLAB code is ready for code generation by generating a MEX function, then executes the MEX function to check for run-time errors (Figure 2). Equivalent command-line functions provide the same functionality so you can generate code as part of a script or function.
Figure 2. Left: Automated checks for features and functions not supported for code generation. Right: Automated analysis and proposal for input data types and sizes.
The video below illustrates these steps with an example of generating a Kalman filter to predict the trajectory of a bouncing ball. You’ll see that the three-step iterative process enables us to generate code that closely matches the original MATLAB results and satisfies its tracking requirements.
## Implementation Constraints
As you prepare your MATLAB algorithm for code generation, you need to take account of implementation constraints resulting from the differences between MATLAB and C code. These include:
• Memory allocation. In MATLAB, memory allocation is automatic. In C code, memory allocation is manual—it is allocated either statically (using static), dynamically (using malloc), or on the stack (using local variables).
• Array-based language. MATLAB provides a rich set of array operations that allow concise coding of numerical algorithms. C code requires explicit for-loops to express the same algorithms.
• Dynamic typing. MATLAB automatically determines the data types and sizes as your code runs. C requires explicit type declarations on all variables and functions.
• Polymorphism. MATLAB functions can support many different input types, while C requires fixed type declarations. At the top level, you must specify the intended C function declaration.
Let’s take a closer look at polymorphism. Polymorphism can give a single line of MATLAB code different meanings depending on your inputs. For example, the function shown in Figure 3 could mean scalar multiplication, dot product, or matrix multiplication. In addition, your inputs could be of different data types (logical, integer, floating-point, fixed-point), and they could be real or complex numbers.
Figure 3. Polymorphism example.
MATLAB is a powerful algorithm development environment precisely because you don’t need to worry about implementation details as you create algorithms. However, for the equivalent C code, you have to specify what operations mean. For example, the line of MATLAB code shown above could be translated into this single line of C code that returns B*C:
Or, it could be translated into 11 lines of C code with 3 for-loops that multiply two matrices:
The video below uses a Newton-Raphson algorithm to illustrate the concept of taking implementation constraints into account. You’ll see that code generated using the three-step iterative workflow exactly matches the original MATLAB results.
## Working with the Generated Code: Four Use Cases
Once you have generated readable and portable C/C++ code from MATLAB algorithms using MATLAB Coder, you have several options for using it. For example, you can:
• Integrate your MATLAB algorithms as source code or libraries into a larger software project such as custom simulators or software packages running on PCs and servers (watch video (4:17))
• Implement and verify your MATLAB algorithms on embedded processors such as ARM® processors and mobile devices (watch video (0:26))
• Prototype your MATLAB algorithms as a standalone executable on PCs (watch video (2:57))
• Accelerate computationally intensive portions of your MATLAB code by generating a MEX function that calls the compiled C/C++ code (watch video (4:21))
## Industry Success Stories
• Baker Hughes’ Dynamics & Telemetry group generated a DLL from sequence prediction algorithms and integrated it into surface decoding software running on a PC that enables downhole data to be decoded quickly and reliably during drilling operations.
• dorsaVi generated C++ code from motion analysis algorithms and compiled it into a DLL, which was then integrated into their C# application running on a PC that analyzes the athlete’s movements to diagnose injury.
• VivaQuant generated fixed-point C code from heart rhythm monitoring algorithms and compiled it for an ARM Cortex-M processor.
• Delphi generated C code for an automotive radar sensor alignment algorithm and compiled it for an ARM10 processor.
• Respiri generated C code from acoustic respiratory monitoring algorithms and compiled it for an iPhone app, an Android app, and cloud-based server software.
## Multicore-Capable Code Generation and Other Optimization Methods
In MATLAB, for-loops whose iterations are independent of each other can be run in parallel simply by replacing for with parfor. MATLAB Coder uses the Open Multiprocessing (OpenMP) application interface to support shared-memory, multicore code generation from parfor loops. OpenMP is supported by many C compilers (for example, Microsoft® Visual Studio® Professional).
You can use MATLAB Coder with Embedded Coder® to further optimize code efficiency and customize the generated code. Embedded Coder provides optimizations for fine-grained control of the generated code’s functions, files, and data. For example, you can use storage classes to control the declaration and definition of a global variable in the generated code, and use code generation templates to customize banners and comments in the generated code. Embedded Coder also improves code efficiency by using code replacement libraries, which replace certain operators and functions with implementations optimized for popular processors like ARM Cortex®-A and ARM Cortex-M.
## Testing the Generated Code
As you develop your MATLAB algorithm, you can create unit tests to verify that the algorithm produces the results you expect. Tests written using the MATLAB unit testing framework can be reused to verify that the generated code behaves the same way as your MATLAB algorithm. The videos below show how you can reuse the unit tests in Embedded Coder in combination with software-in-the-loop (SIL) and processor-in-the-loop (PIL) tests on the generated standalone code or library (Figure 4).
## An Automated Workflow
MATLAB Coder enables an automated workflow for translating MATLAB algorithms into C code. With this workflow you spend less time writing and debugging low-level C code and more time developing, testing, and tuning designs. By maintaining one golden reference in MATLAB, including the algorithm and test benches, you can propagate algorithmic changes to your C code more quickly. Automated tools like the MATLAB unit testing framework and the Embedded Coder SIL and PIL testing framework let you test both the MATLAB code and the C code thoroughly and systematically. Whether you are implementing designs running on traditional PCs, web servers, mobile devices, or embedded processors, MATLAB Coder will help you get from MATLAB to C code faster and with fewer manual translation errors.
Article featured in MathWorks News & Notes
Published 2016 - 92987v00 |
## Mathalicious Post: Most Expensive. Collectibles. Ever.
Hey y'all. My most recent post on the Mathalicious blog has been live for a while, but in case you missed it, I’d encourage you to go check it out! Consider it a Simpsons themed cautionary tale for collectors on a budget. Here’s a sample:
One of the more recent trends in the world of Simpsons memorabilia is the advent of the Mini-Figure collections, produced by Kidrobot. Each series (there have been two so far) consists of around 25 small Simpsons figures, each with his or her own accessories. The figures cost around $10 each ($9.95, to be precise), so an avid collector would need to spend something like $250 to complete each of the two collections, right?
Well, not quite. When you buy one of these figures, you have no idea which one you’ll get, because the box containing the figure doesn’t indicate what’s inside. All you know are the probabilities for each figure, and even those are sometimes missing…
Given this information, here’s a natural question: how many of these boxes should you expect to buy if you want to complete the set, and how much will it cost you? |
# Contents
## Idea
From the Wikipedia article on the subject:
In fluid mechanics, the Reynolds number (Re) is a dimensionless number that gives a measure of the ratio of inertial forces to viscous forces and consequently quantifies the relative importance of these two types of forces for given flow conditions.
(An animation illustrating flow behaviour across a wide range of Reynolds numbers, up to about 10^7, appeared here.)
Reynolds numbers frequently arise when performing dimensional analysis of fluid dynamics problems, and as such can be used to determine dynamic similitude between different experimental cases.
They are also used to characterize different flow regimes, such as laminar or turbulent flow: laminar flow occurs at low Reynolds numbers, where viscous forces are dominant, and is characterized by smooth, constant fluid motion; turbulent flow occurs at high Reynolds numbers, where inertial forces dominate, and is characterized by chaotic eddies, vortices, and other flow instabilities.
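To make the orders of magnitude concrete, here is a small illustrative calculation, assuming the standard definition Re = ρuL/μ and rough property values for water at room temperature:

rho <- 998                # density of water, kg/m^3
mu  <- 1.0e-3             # dynamic viscosity of water, Pa*s
u   <- 1                  # mean flow speed, m/s
L   <- 0.05               # characteristic length (pipe diameter), m
Re  <- rho * u * L / mu   # about 5e4, well into the turbulent regime for pipe flow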
# 10.43. Text Input Problem
Note
EdX offers full support for this problem type.
The text input problem type is a core problem type that can be added to any course. At a minimum, text input problems include a question or prompt and a response field for free form answer text. By adding hints, feedback, or both, you can give learners guidance and help when they work on a problem.
For more information about the core problem types, see Working with Problem Components.
## 10.43.1. Overview
In text input problems, learners enter text into a response field. The response can include numbers, letters, and special characters such as punctuation marks. Because the text that the learner enters must match the instructor’s specified answer exactly, including spelling and punctuation, edX recommends that you specify more than one correct answer for text input problems to allow for differences in capitalization and typographical errors.
### 10.43.1.1. Example Text Input Problem
In the LMS, learners enter a value into a response field to complete a text input problem. An example of a completed text input problem follows.
To add the example problem illustrated above, in Studio you use the simple editor to enter the following text and Markdown formatting.
>>What was the first post-secondary school in China to allow both male and female students?||Answer with a name from the modern period.<<
= Nanjing University
or= National Central University
or= Nanjing Higher Normal Institute
or= Nanking University
[explanation]
Nanjing University first admitted female students in 1920.
[explanation]
The OLX (open learning XML) markup for this example text input problem follows.
<problem>
<stringresponse answer="Nanjing University" type="ci">
<label>What was the first post-secondary school in China to allow both
male and female students?</label>
<description>Answer with a name from the modern period.</description>
<textline size="20"/>
<solution>
<div class="detailed-solution">
<p>Explanation</p>
<p>Nanjing University first admitted female students in 1920.</p>
</div>
</solution>
</stringresponse>
</problem>
### 10.43.1.2. Analyzing Performance on Text Input Problems
For the text input problems in your course, you can use edX Insights to review aggregated learner performance data and examine submitted answers. For more information, see Using edX Insights.
## 10.43.2. Adding a Text Input Problem
You add text input problems in Studio by selecting the Problem component type and then using either the simple editor or the advanced editor to specify the prompt and the acceptable answer or answers.
Note
You can begin work on the problem in the simple editor, and then switch to the advanced editor. However, after you save any changes you make in the advanced editor, you cannot switch back to the simple editor.
### 10.43.2.1. Use the Simple Editor to Add a Text Input Problem
When you add a text input problem, you can choose one of these templates.
• Text Input
• Text Input with Hints and Feedback
These templates include the Markdown formatting that you use in the simple editor to add a problem without, or with, hints and feedback.
To use the simple editor to add a problem, follow these steps.
1. In the unit where you want to create the problem, under Add New Component select Problem.
2. From the list of Common Problem Types, select the type of problem you want to add. Studio adds a template for the problem to the unit.
3. Select Edit. The simple editor opens to a template that shows the Markdown formatting that you use for this problem type.
4. Replace the guidance provided by the template to add your own text for the question or prompt, answer options, explanation, and so on.
To format equations, you can use MathJax. For more information, see Using MathJax for Mathematics.
5. Select Settings to provide an identifying Display Name and define settings for the problem. For more information, see Defining Settings for Problem Components.
6. Select Save.
### 10.43.2.2. Use the Advanced Editor to Add a Text Input Problem¶
You can use the advanced editor to identify the elements of a text input problem with OLX. For more information, see Text Input Problem XML Reference.
To use the advanced editor to add a problem, follow these steps.
1. Follow steps 1-3 for creating the problem in the simple editor.
2. Select Advanced Editor. The advanced editor opens the template and shows the OLX markup that you can use for this problem type.
3. Replace the guidance provided by the template to add your own text. For example, replace the question or prompt, answer options, and explanation.
To format equations, you can use MathJax. For more information, see Using MathJax for Mathematics.
4. Update the OLX to add optional elements and attributes required for your problem.
5. Select Settings to provide an identifying Display Name and define settings for the problem. For more information, see Defining Settings for Problem Components.
6. Select Save.
## 10.43.3. Adding Multiple Correct Responses¶
You can specify more than one correct response for text input problems. For example, instead of requiring learners to enter an answer of “Dr. Martin Luther King, Junior” exactly, you can also allow answers of “Martin Luther King, Jr.” “Doctor Martin Luther King,” and other variations. To do this, you can use the simple editor or the advanced editor.
### 10.43.3.1. Add Multiple Correct Responses in the Simple Editor¶
To specify additional correct responses in the simple editor, include or= before each additional correct response.
>>What African-American led the United States civil rights movement during the 1960s?<<
=Dr. Martin Luther King, Jr.
or=Dr. Martin Luther King, Junior
or=Martin Luther King, Jr.
or=Martin Luther King
### 10.43.3.2. Add Multiple Correct Responses in the Advanced Editor¶
To specify an additional correct response in the advanced editor, within the <stringresponse> element add the <additional_answer /> element with an answer="" attribute value.
<problem>
<stringresponse answer="Dr. Martin Luther King, Jr." type="ci" >
<label>What African-American led the United States civil rights movement during the 1960s?</label>
<additional_answer answer="Dr. Martin Luther King, Junior"/>
<additional_answer answer="Martin Luther King, Jr."/>
<additional_answer answer="Martin Luther King"/>
<textline size="30"/>
</stringresponse>
</problem>
## 10.43.4. Adding Feedback to a Text Input Problem¶
For an overview of feedback in problems, see Adding Feedback and Hints to a Problem. In text input problems, you can provide feedback for the correct answer or for a specified incorrect answer. Use feedback on incorrect answers as an opportunity to address common learner misconceptions. Feedback for text input questions should also provide guidance to the learner on how to arrive at the correct answer.
If you define multiple correct responses for the question, you can define feedback for each response.
### 10.43.4.1. Configure Feedback in the Simple Editor¶
You can configure feedback in the simple editor. When you add a text input problem, select the template Text Input with Hints and Feedback. This template has example formatted feedback that you can replace with your own text.
In the simple editor, you configure feedback for a text input problem with the following Markdown formatting.
=Correct Answer {{Feedback for learners who enter this answer.}}
not=Incorrect Answer {{Feedback for learners who enter this answer.}}
For example, the following problem has feedback for the correct answer and two common incorrect answers.
>>What is the largest state in the U.S. in terms of land area?<<
=Alaska {{Alaska is the largest state in the U.S. in terms of not only land
area, but also total area and water area. Alaska is 576,400 square miles,
more than double the land area of the second largest state, Texas.}}
not=Texas {{While many people think Texas is the largest state in terms of
land area, it is actually the second largest and contains 261,797 square
miles.}}
not=California {{California is the third largest state and contains 155,959
square miles.}}
### 10.43.4.2. Configure Feedback in the Advanced Editor¶
In the advanced editor, you configure answer feedback with the following syntax.
<problem>
<stringresponse answer="Correct answer" type="ci">
<label>Question text</label>
<correcthint>Feedback for the correct answer</correcthint>
<stringequalhint answer="Incorrect answer">Feedback for the specified incorrect answer</stringequalhint>
<textline size="20"/>
</stringresponse>
</problem>
For example, the following problem has feedback for the correct answer and two common incorrect answers.
<problem>
<stringresponse answer="Alaska" type="ci">
<label>What is the largest state in the U.S. in terms of land area?</label>
<correcthint>Alaska is the largest state in the U.S. in terms of not
only land area, but also total area and water area. Alaska is 576,400
square miles, more than double the land area of the second largest
state, Texas.</correcthint>
<stringequalhint answer="Texas">While many people think Texas is the
largest state in terms of land area, it is actually the second
largest and contains 261,797 square miles.</stringequalhint>
<stringequalhint answer="California">California is the third largest
state and contains 155,959 square miles.</stringequalhint>
<textline size="20"/>
</stringresponse>
</problem>
### 10.43.4.3. Customizing Feedback Labels¶
By default, the feedback labels shown to learners are Correct and Incorrect. If you do not define feedback labels, learners see these terms when they submit an answer, as in the following example.
Incorrect: California is the third largest state and contains 155,959 square
miles.
You can configure the problem to override the default labels. For example, you can configure a custom label for a specific wrong answer.
Close but wrong: California is the third largest state and contains 155,959
square miles.
Note
The default labels Correct and Incorrect display in the learner’s requested language. If you provide custom labels, they display as you define them to all learners. They are not translated into different languages.
#### 10.43.4.3.1. Customize a Feedback Label in the Simple Editor¶
In the simple editor, you configure custom feedback labels with the following syntax.
not=Answer {{Label:: Feedback}}
That is, you provide the label text, followed by two colon (:) characters, before the feedback text.
For example, the following feedback is configured to use a custom label.
not=Texas {{Close but wrong:: While many people think Texas is the largest
state in terms of land area, it is actually the second largest of the 50 U.S.
states, containing 261,797 square miles.}}
#### 10.43.4.3.2. Customize a Feedback Label in the Advanced Editor¶
In the advanced editor, you configure custom feedback labels with the following syntax.
<correcthint label="Custom Label">Feedback</correcthint>
For example, the following feedback is configured to use custom labels.
<correcthint label="Right you are">Alaska is the largest state in the U.S.
in terms of not only land area, but also total area and water area. Alaska
is 576,400 square miles, more than double the land area of the second
largest state, Texas.</correcthint>
<stringequalhint answer="Texas" label="Close but wrong">While many people
think Texas is the largest state in terms of land area, it is actually the
second largest of the 50 U.S. states containing 261,797 square miles.</stringequalhint>
## 10.43.5. Adding Hints to a Text Input Problem¶
You can add hints to a text input problem using the simple editor or the advanced editor. For an overview of hints in problems, see Adding Feedback and Hints to a Problem.
### 10.43.5.1. Configure Hints in the Simple Editor¶
In the simple editor, you configure hints with the following syntax.
||Hint 1||
||Hint 2||
||Hint n||
Note
You can configure any number of hints. The learner views one hint at a time and views the next one by selecting Hint again.
For example, the following problem has two hints.
||A fruit is the fertilized ovary from a flower.||
||A fruit contains seeds of the plant.||
### 10.43.5.2. Configure Hints in the Advanced Editor¶
In the advanced editor, you add the <demandhint> element immediately before the closing </problem> tag, and then configure each hint using the <hint> element.
.
.
.
<demandhint>
<hint>Hint 1</hint>
<hint>Hint 2</hint>
<hint>Hint 3</hint>
</demandhint>
</problem>
For example, the following OLX for a multiple choice problem shows two hints.
.
.
.
</multiplechoiceresponse>
<demandhint>
<hint>A fruit is the fertilized ovary from a flower.</hint>
<hint>A fruit contains seeds of the plant.</hint>
</demandhint>
</problem>
## 10.43.6. Adding Text after the Response Field¶
You might want to include a word, phrase, or sentence after the response field in a text input problem to help guide your learners or resolve ambiguity.
To do this, you use the advanced editor.
In the problem, locate the textline element. This element creates the response field for the problem and is a child of the stringresponse element. An example follows.
<problem>
<stringresponse answer="Ashmun Institute" type="ci">
<label>What Pennsylvania school was founded in 1854 to provide
educational opportunities for African-Americans?</label>
<textline size="20"/>
</stringresponse>
</problem>
To add text after the response field, add the trailing_text attribute together with the text that you want to use inside the textline element.
<problem>
<stringresponse answer="Ashmun" type="ci">
<label>What Pennsylvania school was founded in 1854 to provide
educational opportunities for African-Americans?</label>
<textline size="20" trailing_text="Institute"/>
</stringresponse>
</problem>
## 10.43.7. Case Sensitivity and Text Input Problems¶
By default, text input problems do not require a case sensitive response. You can change this default to require a case sensitive answer.
To make a text input response case sensitive, you use the advanced editor.
In the advanced editor, the stringresponse element has a type attribute. By default, the value for this attribute is set to ci, for “case insensitive”. An example follows.
<problem>
<stringresponse answer="Paris" type="ci">
.
.
.
</stringresponse>
</problem>
Learners who submit an answer of either “Paris” or “paris” are scored as correct.
To make the response case sensitive, change the value of the type attribute to cs.
<problem>
<stringresponse answer="Paris" type="cs">
.
.
.
</stringresponse>
</problem>
Learners who submit an answer of “Paris” are scored as correct, but learners who submit an answer of “PARIS” are scored as incorrect.
## 10.43.8. Response Field Length in Text Input Problems¶
By default, the response field for text input problems is 20 characters long.
You should preview the unit to ensure that the length of the response input field accommodates the correct answer, and provides extra space for possible incorrect answers.
If the default response field is not long enough, you can change it using the advanced editor.
In the advanced editor, the textline element has a size attribute. By default, the value for this attribute is set to 20. An example follows.
<problem>
<stringresponse answer="Democratic Republic of the Congo" type="ci">
.
.
.
<textline size="20"/>
</stringresponse>
</problem>
To change the response field length, change the value of the size attribute.
<problem>
<stringresponse answer="Democratic Republic of the Congo" type="ci">
.
.
.
<textline size="40" />
</stringresponse>
</problem>
## 10.43.9. Allowing Regular Expressions as Answers for Text Input Problems¶
You can configure a text input problem to allow a regular expression as an answer. Allowing learners to answer with a regular expression can minimize the number of distinct correct responses that you need to define for the problem: if a learner responds with the correct answer formed as a plural instead of a singular noun, or a verb in the past tense instead of the present tense, the answer is marked as correct.
To do this, you use the advanced editor.
In the advanced editor, the stringresponse element has a type attribute. You can set the value for this attribute to regexp, with or without also including ci or cs for a case insensitive or case sensitive answer. An example follows.
<problem>
<stringresponse answer="string pattern" type="regexp ci">
.
.
.
</stringresponse>
</problem>
The answer that the learner enters is marked correct if it contains, in whole or in part, a match for the regular expression that you specify as the answer.
In this example, learners who submit an answer of “string pattern”, “String Patterns”, “string patterned”, or “STRING PATTERNING” are all scored as correct, but learners who submit an answer of “Strings Pattern” or “string patern” are scored as incorrect.
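As a rough illustration of this matching behavior, the check behaves like a case-insensitive regular-expression search inside the learner's response. The sketch below is illustrative only; `is_correct` is a made-up helper, not part of the edX platform.

```python
import re

# Illustrative sketch of the documented behavior for type="regexp ci";
# is_correct is a hypothetical helper, not edX's actual grading code.
def is_correct(instructor_pattern: str, learner_response: str) -> bool:
    return re.search(instructor_pattern, learner_response, flags=re.IGNORECASE) is not None

assert is_correct("string pattern", "STRING PATTERNING")      # scored correct
assert not is_correct("string pattern", "Strings Pattern")    # scored incorrect
```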
## 10.43.10. Text Input Problem XML Reference¶
### 10.43.10.1. Template¶
<problem>
<stringresponse answer="Correct answer 1" type="ci">
<label>Question text</label>
<description>Optional tip</description>
<additional_answer answer="Correct answer 2"/>
<correcthint>Provides feedback when learners submit the correct
response.</correcthint>
<stringequalhint answer="Incorrect answer 1">Provides feedback when
learners submit the specified incorrect response.</stringequalhint>
<stringequalhint answer="Incorrect answer 2">Provides feedback when
learners submit the specified incorrect response.</stringequalhint>
<textline size="20" />
</stringresponse>
<demandhint>
<hint>The first text string to display when learners request a hint.</hint>
<hint>The second text string to display when learners request a hint.</hint>
</demandhint>
</problem>
### 10.43.10.2. Elements¶
For text input problems, the <problem> element can include this hierarchy of child elements.
<stringresponse>
<label>
<description>
<additional_answer>
<correcthint>
<stringequalhint>
<textline>
<solution>
<demandhint>
<hint>
In addition, standard HTML tags can be used to format text.
#### 10.43.10.2.1. <stringresponse>¶
Required. Indicates that the problem is a text input problem.
##### 10.43.10.2.1.1. Attributes¶
| Attribute | Description |
|---|---|
| answer (required) | Specifies the correct answer. Note that if you do not also add the type attribute and set it to regexp, the learner's answer must match the value for this attribute exactly. |
| type (optional) | Specifies whether the problem requires a case sensitive response and if it allows regular expressions. If type="ci", the problem is not case sensitive. If type="cs", the problem is case sensitive. If type="regexp", the problem allows regular expressions. You can also combine these values in a space separated list. For example, <stringresponse type="regexp cs"> specifies that the problem allows regular expressions and is case sensitive. |
##### 10.43.10.2.1.2. Children¶
• <label>
• <description>
• <textline>
• <additional_answer>
• <correcthint>
• <stringequalhint>
• <solution>
#### 10.43.10.2.2. <label>¶
Required. Identifies the question or prompt. You can include HTML tags within this element.
##### 10.43.10.2.2.1. Attributes¶
None.
##### 10.43.10.2.2.2. Children¶
None.
#### 10.43.10.2.3. <description>¶
Optional. Provides clarifying information about how to answer the question. You can include HTML tags within this element.
##### 10.43.10.2.3.1. Attributes¶
None.
##### 10.43.10.2.3.2. Children¶
None.
#### 10.43.10.2.4. <textline>¶
Required. Creates a response field in the LMS where the learner enters a text string.
##### 10.43.10.2.4.1. Attributes¶
| Attribute | Description |
|---|---|
| size | Optional. Specifies the size, in characters, of the response field in the LMS. Defaults to 20. |
| hidden | Optional. If set to "true", learners cannot see the response field. |
| correct_answer | Optional. Lists the correct answer to the problem. |
| trailing_text | Optional. Specifies text to appear immediately after the response field. |
##### 10.43.10.2.4.2. Children¶
None.
#### 10.43.10.2.5. <additional_answer>¶
Optional. Specifies an additional correct answer for the problem. A problem can contain an unlimited number of additional answers.
##### 10.43.10.2.5.1. Attributes¶
| Attribute | Description |
|---|---|
| answer | Required. The text of the alternative correct answer. |
##### 10.43.10.2.5.2. Children¶
<correcthint>
#### 10.43.10.2.6. <correcthint>¶
Optional. Specifies feedback to appear after the learner submits a correct answer.
##### 10.43.10.2.6.1. Attributes¶
| Attribute | Description |
|---|---|
| label | Optional. The text of the custom feedback label. |
##### 10.43.10.2.6.2. Children¶
None.
#### 10.43.10.2.7. <stringequalhint>¶
Optional. Specifies feedback to appear after the learner submits an incorrect answer.
##### 10.43.10.2.7.1. Attributes¶
| Attribute | Description |
|---|---|
| answer | Required. The text of the incorrect answer. |
| label | Optional. The text of the custom feedback label. |
##### 10.43.10.2.7.2. Children¶
None.
#### 10.43.10.2.8. <solution>¶
Optional. Identifies the explanation or solution for the problem, or for one of the questions in a problem that contains more than one question.
This element contains an HTML division <div>. The division contains one or more paragraphs <p> of explanatory text.
#### 10.43.10.2.9. <demandhint>¶
Optional. Specifies hints for the learner. For problems that include multiple questions, the hints apply to the entire problem.
##### 10.43.10.2.9.1. Attributes¶
None.
##### 10.43.10.2.9.2. Children¶
<hint>
#### 10.43.10.2.10. <hint>¶
Required. Specifies additional information that learners can access if needed.
##### 10.43.10.2.10.1. Attributes¶
None.
##### 10.43.10.2.10.2. Children¶
None.
## 10.43.11. Deprecated Hinting Method¶
The following example shows the XML format with the <hintgroup> element that you could use in the past to configure hints for text input problems. Problems using this XML format will continue to work in the edX Platform. However, edX recommends that you use the new way of configuring hints documented above.
<problem>
<label>Question text</label>
<stringresponse answer="Correct answer" type="ci">
<textline size="20" />
<hintgroup>
<stringhint answer="Incorrect answer A" type="ci" name="hintA" />
<hintpart on="hintA">
<startouttext />Text of hint for incorrect answer A<endouttext />
</hintpart >
<stringhint answer="Incorrect answer B" type="ci" name="hintB" />
<hintpart on="hintB">
<startouttext />Text of hint for incorrect answer B<endouttext />
</hintpart >
<stringhint answer="Incorrect answer C" type="ci" name="hintC" />
<hintpart on="hintC">
<startouttext />Text of hint for incorrect answer C<endouttext />
</hintpart >
</hintgroup>
</stringresponse>
<solution>
<div class="detailed-solution">
<p>Explanation or Solution Header</p>
<p>Explanation or solution text</p>
</div>
</solution>
</problem> |
# If $A$ is a square matrix of order $n$ then |adj$A$| is
$\begin{array}{ll}(1)\ |A^{2}| &(2)\ |A|^{n}\\ (3)\ |A|^{n-1}&(4)\ |A|\end{array}$
$A\,(adj\ A) = (adj\ A)\,A =|A|\,I_{n}$
$A\,(adj\ A) =\begin{bmatrix} |A| & 0 & \cdots & 0\\ 0 & |A| & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & |A| \end{bmatrix}$
Taking determinants of both sides,
$|A|\,|adj\ A| = \begin{vmatrix} |A| & 0 & \cdots & 0\\ 0 & |A| & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & |A| \end{vmatrix} = |A|^{n}$
$|adj A| =|A|^{n-1}$
Hence 3 is the correct answer. |
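A quick numerical spot-check of this result (a sketch using NumPy; it relies on the identity adj A = |A| A⁻¹ for an invertible A):

```python
import numpy as np

n = 3
A = np.random.default_rng(1).normal(size=(n, n))
adjA = np.linalg.det(A) * np.linalg.inv(A)          # adj A = |A| A^{-1}
print(np.isclose(np.linalg.det(adjA), np.linalg.det(A) ** (n - 1)))  # True
```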
# Unset variable by its name given as a string
Unlike the function Clear, the function Unset does not work for string patterns. Given the name of a variable as a string, how can you unset the corresponding variable? Example:
Given:
x = 10;
trying to unset later:
Unset[Symbol["x"]]
results in
Unset::write: Tag Symbol in Symbol[x] is Protected. >>
Using With:
With[{var = Symbol["x"]}, Unset[var]]
results in
Unset::usraw: Cannot unset raw object 10. >>
Note that Clear["x"] is not an option, because it also removes all DownValues associated with x.
Dumb solution: ToExpression["Unset[" <> "x" <> "]"]. – J. M. May 7 '13 at 12:27
You can use the third argument of ToExpression to do this in a structured way:
ToExpression["x", InputForm, Unset]
+1. I remember that you have a long history of using this construct, which I also use often. @sakra An alternative: Unset @@ ToHeldExpression["x"] – Leonid Shifrin May 7 '13 at 16:47
Yes, it's useful when trying to do something with some symbols I get from Names["...*"]. The usual form one ends up with is ToExpression["x", InputForm, Function[expr, ..., {HoldAll}]], which is rather cumbersome. I did not know about ToHeldExpression. It might make some things simpler. – Szabolcs May 7 '13 at 16:56
@Szabolcs ToHeldExpression is deprecated and undocumented, but I am sure it will stay. – Leonid Shifrin May 7 '13 at 17:40
As noted here, ToHeldExpression[] was deprecated in version three. Still, it remains in the kernel, much like that other function Release[]... – J. M. May 7 '13 at 20:51 |
# Converse of Gold's theorem and necessary condition for unlearnability
### Background
A class of languages $C$ has Gold's Property if $C$ contains
1. a countable infinite number of languages $L_i$ such that $L_i \subsetneq L_{i + 1}$ for all $i > 0$
2. a further language $L_\infty$ such that for any $i > 0$, if $x \in L_i$ then $x \in L_\infty$.
Then, Gold's theorem is:
Any class of languages with Gold's Property is unlearnable
In other words, Gold's Property is a sufficient condition for unlearnability.
### Question
What is the weakest (natural) necessary condition? In other words, I want the weakest property $P$ such that:
Any unlearnable class of languages has property $P$
In particular: is Gold's Property such a property? Can Gold's theorem be strengthened to an if and only if?
Alternatively, as @TsuyoshiIto pointed out in the comments:
What is a sufficient condition for a class of languages to be learnable?
I would ask for sufficient conditions for learnability rather than necessary conditions for unlearnability because the latter sounds confusing. – Tsuyoshi Ito Feb 3 '12 at 11:54
@TsuyoshiIto thank you for the suggestion, I added it to the question. – Artem Kaznatcheev Feb 3 '12 at 12:10
Could you point to a definition of "learnable"/"unlearnable"? – Henning Makholm Feb 3 '12 at 13:37
You may want to take the question to brand-new cs.SE! – Raphael Mar 23 '12 at 23:37
## 1 Answer
Gold's property does not characterize languages learnable in the limit from positive examples. However, Angluin (1980; pdf) does give a property that is both necessary and sufficient:
An indexed family $C$ of nonempty languages $\{L_1,L_2,..\}$ has Angluin's Property if there exists a Turing Machine which on any input $i \geq 1$ enumerates a finite set of strings $T_i$ such that $T_i \subseteq L_i$ and for all $j \geq 1$ if $T_i \subseteq L_j$ then $L_j$ is not a proper subset of $L_i$.
Informally, this property says that for every language $L$ in the family, there exists a "telltale" finite subset $T$ of $L$ such that if another language $L' \neq L$ in the family contains $T$ then there is some positive example $x \not\in L$ but in $L'$.
Angluin (1980) proves that:
An indexed family of nonempty recursive languages is learnable from positive data (in the sense of Gold) if and only if it has Angluin's Property.
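As a toy illustration of the telltale condition (languages represented as finite Python sets; this is only a finite sanity check of the definition, not a construction from Angluin's paper):

```python
def is_telltale(T, L, family):
    """True if the finite set T witnesses Angluin's condition for L in `family`:
    T is a subset of L, and no member of the family containing T is a proper subset of L."""
    return set(T) <= set(L) and all(
        not (set(T) <= M and M < set(L)) for M in map(set, family)
    )

chain = [{1}, {1, 2}, {1, 2, 3}, {1, 2, 3, 4}]
print(is_telltale({1, 2, 3}, {1, 2, 3}, chain))  # True: the language is its own telltale
print(is_telltale({1}, {1, 2, 3}, chain))        # False: {1} is contained in the proper subset {1}
```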
## Calculate the molecular weight of a dibasic acid, 0.56 g of which requires 250 mL of N/20 sodium hydroxide solution for neutralization.
Question
Calculate the molecular weight of a dibasic acid, 0.56 g of which requires 250 mL of N/20 sodium hydroxide solution for neutralization.
1. Answer: The molecular weight of the dibasic acid is 89.6 g/mol
Explanation:
Normality is defined as the number of gram equivalents of solute present per liter of solution. The units of normality are eq/L. The formula used to calculate normality is:
$\text{Normality} = \dfrac{\text{given mass of solute} \times 1000}{\text{equivalent weight of solute} \times \text{volume of solution (in mL)}}$ ....(1)
We are given:
Normality of solution = N/20 = 0.05 eq/L
Given mass of solute = 0.56 g
Volume of solution = 250 mL
Putting values in equation 1, we get:
$0.05 = \dfrac{0.56 \times 1000}{\text{equivalent weight of acid} \times 250}$, so the equivalent weight of the acid is 44.8 g/eq.
Equivalent weight of an acid is calculated by using the equation:
$\text{Equivalent weight of acid} = \dfrac{\text{molecular weight of acid}}{\text{basicity}}$ ....(2)
Equivalent weight of acid = 44.8 g/eq
Basicity of a dibasic acid = 2 eq/mol
Putting values in equation 2, we get:
$\text{Molecular weight of acid} = 44.8 \times 2 = 89.6\ \text{g/mol}$
Hence, the molecular weight of the dibasic acid is 89.6 g/mol |
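For completeness, the same arithmetic as a quick script (values taken from the problem statement):

```python
normality = 1 / 20            # eq/L of NaOH
volume_L = 250 / 1000         # L
mass_acid = 0.56              # g
equivalents = normality * volume_L        # 0.0125 eq of base = eq of acid
eq_weight = mass_acid / equivalents       # 44.8 g/eq
mol_weight = eq_weight * 2                # dibasic acid: basicity = 2
print(eq_weight, mol_weight)              # 44.8 89.6
```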
# Model help:DeltaNorm
## DeltaNorm
This is a calculator for evolution of long profile of a river ending in a 1D migrating delta, using the normal flow approximation.
## Model introduction
This program calculates the bed surface evolution for a narrowly channelized 1D fan-delta prograding into standing water, as well as calculating the initial and final amounts of sediment in the system.
## Model parameters
| Parameter | Description | Unit |
|---|---|---|
| Input directory | path to input files |  |
| Site prefix | Site prefix for Input/Output files |  |
| Case prefix | Case prefix for Input/Output files |  |

| Parameter | Description | Unit |
|---|---|---|
| Chezy or Manning | Chezy-1 or Manning-2 |  |

| Parameter | Description | Unit |
|---|---|---|
| Flood discharge (q) |  | m2 / s |
| Intermittency (I) | flood intermittency | - |
| Upstream bed material sediment feed rate during flood (Q) |  | m2 / m |
| Grain size of bed material (D) |  | mm |
| Chezy resistance coefficient (C) | coefficient in the Chezy relation | - |
| Exponent in load relation (n) |  | - |
| Critical Shields stress in load relation (T) |  | - |
| Elevation of top of foreset (E) |  | m |
| Initial elevation of foreset bottom (e) |  | m |
| Initial fluvial bed slope (f) |  | - |
| Subaqueous basement slope (b) |  | - |
| Initial length of fluvial zone (s) |  | m |
| Slope of foreset face (Sa) |  | - |
| Submerged specific gravity of sediment (R) |  | - |
| Bed porosity (L) |  | - |
| Manning-Strickler coefficient (k) | coefficient in the Manning-Strickler relation | - |
| Manning-Strickler coefficient (r) | coefficient in the Manning-Strickler relation | - |
| Coefficient in total bed material relation (a) |  | - |
| Number of fluvial nodes (M) |  | - |
| Time step (t) |  | days |
| Number of printouts after initial one |  | - |
| Iterations per each printout |  | - |

| Parameter | Description | Unit |
|---|---|---|
| Model name | name of the model | - |
| Author name | name of the model author | - |
## Uses ports
This will be something that the CSDMS facility will add
## Provides ports
This will be something that the CSDMS facility will add
## Main equations
• Water surface elevation
$\displaystyle{ \eta = \eta_{f}[s_{s} \left (t\right ), t] - S_{a}[x - s_{s}\left ( t \right )] }$ (1)
• Exner equation for shock condition
$\displaystyle{ \left ( 1 - \lambda_{p} \right ) \int_{s_{s}\left (t\right )}^{s_{b}\left (t\right )} {\frac{\partial \eta}{\partial t}}\, d x = I_{f} \{q_{t}[s_{s}\left (t \right ), t] - q_{t} [s_{b}\left (t\right ),t] \} }$ (2)
$\displaystyle{ \dot{s_{s}} = {\frac{1}{\left (S_{a} - S_{s} \right )}}[{\frac{I_{f} q_{ts}}{\left ( 1 - \lambda_{p}\right ) \left (s_{b} - s_{s} \right )}} - {\frac{\partial \eta _{f}}{\partial t}}|_{s_{s}}] }$ (3)
$\displaystyle{ \left (S_{a} - S_{b} \right ) \dot{s}_{b} = \left (S_{a} - S_{s}\right ) \dot{s}_{s} + {\frac{\partial \eta _{f}}{\partial t}}|_{s_{s}} }$ (4)
• Moving boundary coordinate
$\displaystyle{ \hat{x} = {\frac{x}{s_{s}\left (t\right )}} }$ (5)
$\displaystyle{ \hat{t} = t }$ (6)
• Exner equation for moving-boundary coordinate
$\displaystyle{ \left ( 1 - \lambda_{p} \right ) \left [{\frac{\partial \eta_{f}}{\partial \hat{t}}} - {\frac{\dot{s}_{s}}{s_{s}}} \hat{x} {\frac{\partial \eta_{f}}{\partial \hat{x}}}\right ] = - {\frac{1}{s_{s}}} I_{f} {\frac{\partial q_{t}}{\partial \hat{x}}} }$ (7)
• Shock condition for moving-boundary coordinate
$\displaystyle{ \left (s_{b} - s_{s} \right )[{\frac{\partial \eta_{f}}{\partial \hat{t}}}|_{\hat{x} = 1} + S_{a} \dot{s}_{s}] = {\frac{I_{f} q_{t} \left (1, \hat{t}\right )}{\left ( 1 - \lambda_{p}\right )}} }$ (8)
• Continuity condition for moving-boundary coordinate
$\displaystyle{ \dot{s}_{b} = {\frac{S_{a} \dot{s}_{s} + {\frac{\partial \eta _{f}}{\partial \hat{t}}}|_{\hat{x} = 1}}{\left ( S_{a} - S_{b}\right )}} }$ (9)
• Sediment transport relation
1) Total bed material transport
$\displaystyle{ q_{t} = \sqrt{R g D} D q_{t} ^* }$ (10)
$\displaystyle{ q_{t}^* = \alpha_{t}[\tau^* - \tau_{c}^*]^{n_{t}} }$ (11)
• Normal flow approximation
$\displaystyle{ \tau^* = \left ( {\frac{C_{f} q_{w}^2}{g}}\right )^{1/3} {\frac{S^{2/3}}{R D}} }$ (12)
$\displaystyle{ C_{f} = Cz^{-2} }$ (13)
$\displaystyle{ S = - {\frac{1}{s_{s}}} {\frac{\partial \eta _{f}}{\partial \hat{x}}} }$ (14)
• Boundary conditions
$\displaystyle{ s_{s} \left ( \hat{t} + \Delta \hat{t} \right ) = s_{s} \left (\hat{t}\right ) + \dot{s}_{s} \Delta \hat{t} }$ (15)
$\displaystyle{ s_{b} \left ( \hat{t} + \Delta \hat{t} \right ) = s_{b} \left (\hat{t}\right ) + \dot{s}_{b} \Delta \hat{t} }$ (16)
$\displaystyle{ \eta_{b} \equiv \eta [S_{b} \left (\hat{t} \right ), \hat{t}] = \eta_{d} - S_{s} \left ( s_{b} - s_{s}\right ) }$ (17)
• Calculation of derivatives
$\displaystyle{ {\frac{\partial \eta}{\partial \hat{x}}}|_{i} = \left\{\begin{matrix} {\frac{\eta_{i+1} - \eta_{i}}{\Delta \hat{x}}} & i = 1 \\ {\frac{\eta_{i+1} - \eta_{i-1}}{2 \Delta \hat{x}}} & i = 2...M \\ {\frac{\eta_{i} - \eta_{i-1}}{\Delta \hat{x}}} & i = M+1 \end{matrix}\right. }$ (18)
$\displaystyle{ {\frac{\partial q_{t}}{\partial \hat{x}}}|_{i} = \left\{\begin{matrix} {\frac{q_{t,i+1} - q_{tf}}{2 \Delta \hat{x}}} & i = 1 \\ {\frac{q_{t,i} - q_{t,i-1}}{2 \Delta \hat{x}}} & 2 \leq i \leq M \end{matrix}\right. }$ (19)
## Notes
This model is used to calculate for 1D Subaerial Fluvial Fan-Delta with Channel of Constant Width. This model assumes a narrowly channelized 1D fan-delta prograding into standing water. The model uses a single grain size D, a generic total bed material load relation and a constant bed resistance coefficient. The channel is assumed to have a constant width. Water and sediment discharge are specified per unit width. The fan builds outward by forming a prograding delta front with an assigned foreset slope. The code employs the normal flow approximation rather than a full backwater calculation.
In the normal flow formulation, for any given time t = t^:
a) Specify the downstream bed elevation ηd
b) Calculate the bed slope S everywhere, and use this to find H everywhere.
c) Use this to evaluate qt everywhere, including qts at x^ = 1.
d) Implement the shock condition to find dot{s}s. This shock condition requires knowledge of the term d ηf / d t^ |x^ = 1 = d ηd / d t^, which can be directly computed in terms of the imposed function ηd(t^).
e) Solve Exner everywhere except the last node (where bed elevation is specified) to find new bed elevations at time Δt^ later.
f) Use continuity condition to find dot{s}b.
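A minimal sketch of one such time step follows (Python/NumPy). This is only an illustration of steps a)–f) and of equations (3), (7), (9) and (10)–(14), not the CSDMS DeltaNorm source; the function name, argument names, default values, and the simple `np.gradient` differencing are all assumptions made for the sketch. Here `eta` is the array of bed elevations on the moving grid x^ in [0, 1], and the downstream bed elevation is held constant so that d ηd / d t^ = 0.

```python
import numpy as np

# Illustrative sketch only (not the CSDMS DeltaNorm code): one explicit time step
# of the normal-flow formulation on the moving grid x^ in [0, 1].
def normal_flow_step(eta, ss, sb, dt, qw, qtf, Sa, Sbase, Cf, R, D,
                     g=9.81, lam=0.4, alpha_t=4.0, nt=1.5, tauc_star=0.0, If=1.0):
    M = len(eta) - 1
    dxh = 1.0 / M
    xh = np.linspace(0.0, 1.0, M + 1)

    # (a)-(b): downstream elevation held fixed; slope and Shields stress (eqs 12-14)
    S = -np.gradient(eta, dxh) / ss
    tau_star = (Cf * qw**2 / g) ** (1.0 / 3.0) * np.maximum(S, 1e-12) ** (2.0 / 3.0) / (R * D)

    # (c): total bed material load (eqs 10-11), with the upstream feed imposed at node 1
    qt = np.sqrt(R * g * D) * D * alpha_t * np.maximum(tau_star - tauc_star, 0.0) ** nt
    qt[0] = qtf

    # (d): shock condition (eq 3), with d(eta_d)/dt = 0
    ss_dot = (If * qt[-1] / ((1.0 - lam) * (sb - ss)) - 0.0) / (Sa - S[-1])

    # (e): Exner equation in moving-boundary coordinates (eq 7)
    deta_dt = (ss_dot / ss) * xh * np.gradient(eta, dxh) \
              - If * np.gradient(qt, dxh) / ((1.0 - lam) * ss)
    deta_dt[-1] = 0.0                      # bed elevation specified at the last node

    # (f): continuity condition for the foreset toe (eq 9)
    sb_dot = (Sa * ss_dot + deta_dt[-1]) / (Sa - Sbase)

    return eta + dt * deta_dt, ss + dt * ss_dot, sb + dt * sb_dot
```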
• Note on model running
This model assumes a uniform grain size.
The fan builds outward by forming a prograding delta front with an assigned foreset slope.
The water depth is calculated using a Chézy formulation when only the Chézy coefficient is present in the input text file, and with the Manning-Strickler formulation when only the roughness height, kc, is present. When both are present, the program asks the user which formulation they would like to use.
## Examples
An example run with input parameters, BLD files, as well as a figure / movie of the output
Follow the next steps to include images / movies of simulations: |
# Relation between Schrödinger's equation and the path integral formulation of quantum mechanics
This article relates the Schrödinger equation with the path integral formulation of quantum mechanics using a simple nonrelativistic one-dimensional single-particle Hamiltonian composed of kinetic and potential energy.
## Background
### Schrödinger's equation
Schrödinger's equation, in bra–ket notation, is
$i\hbar {d\over dt} |\psi\rangle = \hat H |\psi\rangle$
where $\hat H$ is the Hamiltonian operator. We have assumed for simplicity that there is only one spatial dimension.
The Hamiltonian operator can be written
$\hat H = {\hat{p}^2 \over 2m} + V(\hat q )$
where $V(\hat q )$ is the potential energy, m is the mass and we have assumed for simplicity that there is only one spatial dimension q.
The formal solution of the equation is
$|\psi(t)\rangle = \exp\left({- {i \over \hbar } \hat H t}\right) |q_0\rangle \equiv \exp\left({- {i \over \hbar } \hat H t}\right) |0\rangle$
where we have assumed the initial state is a free-particle spatial state $|q_0\rangle$.
The transition probability amplitude for a transition from an initial state $|0\rangle$ to a final free-particle spatial state $| F \rangle$ at time T is
$\langle F |\psi(t)\rangle = \langle F | \exp\left({- {i \over \hbar } \hat H T}\right) |0\rangle.$
### Path integral formulation
The path integral formulation states that the transition amplitude is simply the integral of the quantity
$\exp\left( {i\over \hbar} S \right)$
over all possible paths from the initial state to the final state. Here S is the classical action.
The reformulation of this transition amplitude, originally due to Dirac[1] and conceptualized by Feynman,[2] forms the basis of the path integral formulation.[3]
## From Schrödinger's equation to the path integral formulation
Note: the following derivation is heuristic (it is valid in cases in which the potential, $V(q)$, commutes with the momentum, $p$). Following Feynman, this derivation can be made rigorous by writing the momentum, $p$, as the product of mass, $m$, and a difference in position at two points, $x_a$ and $x_b$, separated by a time difference, $\delta t$, thus quantizing distance.
$p = m \left(\frac{x_b - x_a}{\delta t}\right)$
Note 2: There are two errata on page 11 in Zee, both of which are corrected here.
We can divide the time interval from 0 to T into N segments of length
$\delta t = {T \over N}.$
The transition amplitude can then be written
$\langle F | \exp\left({- {i \over \hbar } \hat H T}\right) |0\rangle = \langle F | \exp\left( {- {i \over \hbar } \hat H \delta t} \right) \exp\left( {- {i \over \hbar } \hat H \delta t} \right) \cdots \exp\left( {- {i \over \hbar } \hat H \delta t} \right) |0\rangle$.
We can insert the identity
$I = \int dq |q\rangle \langle q |$
matrix N-1 times between the exponentials to yield
$\langle F | \exp\left({- {i \over \hbar } \hat H T}\right) |0\rangle = \left( \prod_{j=1}^{N-1} \int dq_j \right) \langle F | \exp\left( {- {i \over \hbar } \hat H \delta t} \right) |q_{N-1}\rangle \langle q_{N-1} | \exp\left( {- {i \over \hbar } \hat H \delta t} \right) |q_{N-2}\rangle \cdots \langle q_{1} | \exp\left( {- {i \over \hbar } \hat H \delta t} \right) |0\rangle$.
Each individual transition probability can be written
$\langle q_{j+1} | \exp\left( {- {i \over \hbar } \hat H \delta t} \right) |q_j\rangle = \langle q_{j+1} | \exp\left( {- {i \over \hbar } { {\hat p}^2 \over 2m} \delta t} \right) \exp\left( {- {i \over \hbar } V \left( q_j \right) \delta t} \right)|q_j\rangle$.
We can insert the identity
$I = \int { dp \over 2\pi } |p\rangle \langle p |$
into the amplitude to yield
$\langle q_{j+1} | \exp\left( {- {i \over \hbar } \hat H \delta t} \right) |q_j\rangle = \exp\left( {- {i \over \hbar } V \left( q_j \right) \delta t} \right) \int { dp \over 2\pi } \langle q_{j+1} | \exp\left( {- {i \over \hbar } { { p}^2 \over 2m} \delta t} \right) |p\rangle \langle p |q_j\rangle$
$= \exp\left( {- {i \over \hbar } V \left( q_j \right) \delta t} \right) \int { dp \over 2\pi } \exp\left( {- {i \over \hbar } { { p}^2 \over 2m} \delta t} \right) \langle q_{j+1} |p\rangle \langle p |q_j\rangle$
$= \exp\left( {- {i \over \hbar } V \left( q_j \right) \delta t} \right) \int { dp \over 2\pi } \exp\left( {- {i \over \hbar } { { p}^2 \over 2m} \delta t} -{i\over \hbar} p \left( q_{j+1} - q_{j} \right) \right)$
where we have used the fact that the free particle wave function is
$\langle p |q_j\rangle = \frac{\exp\left( {i\over \hbar} p q_{j} \right)}{\sqrt{\hbar}}$.
The integral over p can be performed (see Common integrals in quantum field theory) to obtain
$\langle q_{j+1} | \exp\left( {- {i \over \hbar } \hat H \delta t} \right) |q_j\rangle = \left( {-i m \over 2\pi \delta t \hbar } \right)^{1\over 2} \exp\left[ {i\over \hbar} \delta t \left( {1\over 2} m \left( {q_{j+1}-q_j \over \delta t } \right)^2 - V \left( q_j \right) \right) \right]$
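For reference, the Gaussian momentum integral evaluated in this step (the two factors of $1/\sqrt{\hbar}$ from the wave functions combine with the measure $dp/2\pi$; here $\Delta q \equiv q_{j+1}-q_j$) is the standard result

$\int { dp \over 2\pi \hbar } \exp\left( {- {i \over \hbar } { { p}^2 \over 2m} \delta t} -{i\over \hbar} p \,\Delta q \right) = \left( {-i m \over 2\pi \delta t \hbar } \right)^{1\over 2} \exp\left( {i \over \hbar}{ m \,\Delta q^2 \over 2\, \delta t} \right),$

obtained by completing the square in $p$.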
The transition amplitude for the entire time period is
$\langle F | \exp\left( {- {i \over \hbar } \hat H T} \right) |0\rangle = \left( {-i m \over 2\pi \delta t \hbar } \right)^{N\over 2} \left( \prod_{j=1}^{N-1} \int dq_j \right) \exp\left[ {i\over \hbar} \sum_{j=0}^{N-1} \delta t \left( {1\over 2} m \left( {q_{j+1}-q_j \over \delta t } \right)^2 - V \left( q_j \right) \right) \right]$.
If we take the limit of large N the transition amplitude reduces to
$\langle F | \exp\left( {- {i \over \hbar } \hat H T} \right) |0\rangle = \int Dq(t) \exp\left[ {i\over \hbar} S \right]$
where S is the classical action given by
$S = \int_0^T dt L\left( q(t), \dot{q}(t) \right)$
and L is the classical Lagrangian given by
$L\left( q, \dot{q} \right) = {1\over 2} m {\dot{q}}^2 - V \left( q \right)$.
Any possible path of the particle, going from the initial state to the final state, is approximated as a broken line and included in the measure of the integral
$\int Dq(t) = \lim_{N\to\infty}\left( {-i m \over 2\pi \delta t \hbar } \right)^{N\over 2} \left( \prod_{j=1}^{N-1} \int dq_j \right)$
This expression actually defines the manner in which the path integrals are to be taken. The coefficient in front is needed to ensure that the expression has the correct dimensions, but it has no actual relevance in any physical application.
This recovers the path integral formulation from Schrödinger's equation.
## References
1. ^ Dirac, P. A. M. (1958). The Principles of Quantum Mechanics, Fourth Edition. Oxford. ISBN 0-19-851208-2.
2. ^ Richard P. Feynman (1958). Feynman's Thesis: A New Approach to Quantum Theory. World Scientific. ISBN 981-256-366-0.
3. ^ A. Zee (2003). Quantum Field Theory in a Nutshell. Princeton University. ISBN 0-691-01019-6. |
## 31.7 Expected Value
Probability is the percentage chance that something will occur. For example, there is a 50 percent chance that a tossed coin will come up heads. We say that the probability of getting the outcome “heads” is 1/2. There are five things you need to know about probability:
1. The list of possible outcomes must be complete.
2. The list of possible outcomes must not overlap.
3. If an outcome is certain to occur, it has probability 1.
4. If an outcome is certain not to occur, it has probability 0.
5. If we add together the probabilities for all the possible outcomes, the total must equal 1.
The expected value of a situation with financial risk is a measure of how much you would expect to win (or lose) on average if the situation were to be replayed a large number of times. You can calculate expected value as follows:
• For each outcome, multiply the probability of that outcome by the amount you will receive.
• Add together these amounts over all the possible outcomes.
For example, suppose you are offered the following proposal. Roll a six-sided die. If it comes up with 1 or 2, you get $90. If it comes up 3, 4, 5, or 6, you get $30. The expected value is
(1/3) × $90 + (2/3) × $30 = $50.
Most people dislike risk. They prefer a fixed sum of money to a gamble that has the same expected value. Risk aversion is a measure of how much people want to avoid risk. In the example we just gave, most people would prefer a sure $50 to the uncertain proposal with the expected value of $50. Suppose we present an individual with the following gamble:
• With 99 percent probability, you lose nothing.
• With 1 percent probability, you lose $1,000.
The expected value of this gamble is −$10. Now ask the individual how much she would pay to avoid this gamble. Someone who is risk-neutral would be willing to pay only $10. Someone who is risk-averse would be willing to pay more than $10. The more risk-averse an individual, the more the person would be willing to pay.
The fact that risk-averse people will pay to shed risk is the basis of insurance. If people have different attitudes toward risky gambles, then the less risk-averse individual can provide insurance to the more risk-averse individual. There are gains from trade. Insurance is also based on diversification, which is the idea that people can share their risks so it is much less likely that any individual will face a large loss.
## Key Insights
• Expected value is the sum of the probability of an event times the gain/loss if that event occurs.
• Risk-averse people will pay to avoid risk. This is the basis of insurance.
## More Formally
Consider a gamble where there are three and only three possible outcomes (x, y, z) that occur with probabilities Pr(x), Pr(y), and Pr(z). Think of these outcomes as the number of dollars you get in each case. First, we know that
Pr(x) + Pr(y) + Pr(z) = 1.
Second, the expected value of this gamble is
EV = (Pr(x)*x) + (Pr(y)*y) + (Pr(z)*z).
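As a quick illustration, a small script using the numbers from the examples above:

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * x for p, x in outcomes)

print(expected_value([(1/3, 90), (2/3, 30)]))       # 50.0  (the die proposal)
print(expected_value([(0.99, 0), (0.01, -1000)]))   # -10.0 (the insurance gamble)
```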
## The Main Uses of This Tool
Computer Science , 2015, Abstract: We study the Optimal Linear Arrangement (OLA) problem of Halin graphs, one of the simplest classes of non-outerplanar graphs. We present several properties of OLA of general Halin graphs. We prove a lower bound on the cost of OLA of any Halin graph, and define classes of Halin graphs for which the cost of OLA matches this lower bound. We show for these classes of Halin graphs, OLA can be computed in O(n log n), where n is the number of vertices.
Computer Science , 2011, Abstract: We show that there exists a linear-time algorithm that computes the strong chromatic index of Halin graphs.
Communications and Network (CN) , 2011, DOI: 10.4236/cn.2011.32009 Abstract: The optimal semi-matching problem is one relaxing form of the maximum cardinality matching problems in bipartite graphs, and finds its applications in load balancing. Ordered binary decision diagram (OBDD) is a canonical form to represent and manipulate Boolean functions efficiently. OBDD-based symbolic algorithms appear to give improved results for large-scale combinatorial optimization problems by searching nodes and edges implicitly. We present novel symbolic OBDD formulation and algorithm for the optimal semi-matching problem in bipartite graphs. The symbolic algorithm is initialized by heuristic searching initial matching and then iterates through generating residual network, building layered network, backward traversing node-disjoint augmenting paths, and updating semi-matching. It does not require explicit enumeration of the nodes and edges, and therefore can handle many complex executions in each step. Our simulations show that symbolic algorithm has better performance, especially on dense and large graphs.
Computer Science , 2014, Abstract: Let $T$ be a tree with no degree 2 vertices and $L(T)=\{l_1,\ldots,l_r\}, r \geq 2$ denote the set of leaves in $T$. An Halin graph $G$ is a graph obtained from $T$ such that $V(G)=V(T)$ and $E(G)=E(T) \cup \{\{l_i,l_{i+1}\} ~|~ 1 \leq i \leq r-1\} \cup \{l_1,l_r\}$. In this paper, we investigate combinatorial problems such as, testing whether a given graph is Halin or not, chromatic bounds, an algorithm to color Halin graphs with the minimum number of colors. Further, we present polynomial-time algorithms for testing and coloring problems.
Computer Science , 2015, Abstract: The quadratic traveling salesman problem (QTSP) is to find a least cost Hamiltonian cycle in an edge-weighted graph. We define a restricted version of QTSP, the $k$-neighbour TSP (TSP($k$)), and give a linear time algorithm to solve TSP($k$) on a Halin graph for $k\leq 3$. This algorithm can be extended to solve TSP($k$) on any fully reducible class of graphs for any fixed $k$ in polynomial time. This result generalizes corresponding results for the standard TSP. Further, these results are useful in establishing approximation bounds based on domination analysis. TSP($k$) can be used to model various machine scheduling problems including an optimal routing problem for UAVs.
Computer Science , 2015, Abstract: For a connected labelled graph $G$, a {\em spanning tree} $T$ is a connected and an acyclic subgraph that spans all the vertices of $G$. In this paper, we consider a classical combinatorial problem which is to list all spanning trees of $G$. A Halin graph is a graph obtained from a tree with no degree two vertices and by joining all leaves with a cycle. We present a sequential and parallel algorithm to enumerate all spanning trees in Halin graphs. Our approach enumerates without repetitions and we make use of $O((2pd)^{p})$ processors for parallel algorithmics, where $d$ and $p$ are the depth, the number of leaves, respectively, of the Halin graph. We also prove that the number of spanning trees in Halin graphs is $O((2pd)^{p})$.
Mathematics , 2007, Abstract: A k-dimensional box is the Cartesian product R_1 x R_2 x ... x R_k where each R_i is a closed interval on the real line. The boxicity of a graph G, denoted as box(G) is the minimum integer k such that G is the intersection graph of a collection of k-dimensional boxes. Halin graphs are the graphs formed by taking a tree with no degree 2 vertex and then connecting its leaves to form a cycle in such a way that the graph has a planar embedding. We prove that if G is a Halin graph that is not isomorphic to K_4, then box(G)=2. In fact, we prove the stronger result that if G is a planar graph formed by connecting the leaves of any tree in a simple cycle, then box(G)=2 unless G is isomorphic to K_4 (in which case its boxicity is 1).
Computer Science , 2011, DOI: 10.1145/2332432.2332464 Abstract: We study distributed algorithms that find a maximal matching in an anonymous, edge-coloured graph. If the edges are properly coloured with $k$ colours, there is a trivial greedy algorithm that finds a maximal matching in $k-1$ synchronous communication rounds. The present work shows that the greedy algorithm is optimal in the general case: any algorithm that finds a maximal matching in anonymous, $k$-edge-coloured graphs requires $k-1$ rounds. If we focus on graphs of maximum degree $\Delta$, it is known that a maximal matching can be found in $O(\Delta + \log^* k)$ rounds, and prior work implies a lower bound of $\Omega(\polylog(\Delta) + \log^* k)$ rounds. Our work closes the gap between upper and lower bounds: the complexity is $\Theta(\Delta + \log^* k)$ rounds. To our knowledge, this is the first linear-in-$\Delta$ lower bound for the distributed complexity of a classical graph problem.
Computer Science , 2011, Abstract: We show that there exist linear-time algorithms that compute the strong chromatic index of Halin graphs, of maximal outerplanar graphs and of distance-hereditary graphs.
Mathematics , 2014, Abstract: A spanning tree with no vertices of degree 2 is called a Homeomorphically irreducible spanning tree\,(HIST). Based on a HIST embedded in the plane, a Halin graph is formed by connecting the leaves of the tree into a cycle following the cyclic order determined by the embedding. Both of the determination problems of whether a graph contains a HIST or whether a graph contains a spanning Halin graph are shown to be NP-complete. It was conjectured by Albertson, Berman, Hutchinson, and Thomassen in 1990 that a {\it every surface triangulation of at least four vertices contains a HIST}\,(confirmed). And it was conjectured by Lov\'asz and Plummer that {\it every 4-connected plane triangulation contains a spanning Halin graph}\,(disproved). Balancing the above two facts, in this paper, we consider generalized Halin graphs, a family of graph structures which are "stronger" than HISTs but "weaker" than Halin graphs in the sense of their construction constraints. To be exact, a generalized Halin graph is formed from a HIST by connecting its leaves into a cycle. Since a generalized Halin graph needs not to be planar, we investigate the minimum degree condition for a graph to contain it as a spanning subgraph. We show that there exists a positive integer $n_0$ such that any 3-connected graph with $n\ge n_0$ vertices and minimum degree at least $(2n+3)/5$ contains a spanning generalized Halin graph. As an application, the result implies that under the same condition, the graph $G$ contains a wheel-minor of order at least $n/2$. The minimum degree condition in the result is best possible.
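For readers unfamiliar with the construction used throughout these abstracts, here is a small sketch (using networkx and an assumed example tree, chosen here for illustration) that builds a Halin graph from a tree with no degree-2 vertices by joining its leaves in a cycle:

```python
import networkx as nx

# A tree with no degree-2 vertices: root 0 with children 1, 2, 3; node 1 with leaves 4, 5, 6.
T = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 4), (1, 5), (1, 6)])
assert all(d != 2 for _, d in T.degree)

leaves = [v for v, d in T.degree if d == 1]              # [2, 3, 4, 5, 6]: a valid planar cyclic order here
H = T.copy()
H.add_edges_from(zip(leaves, leaves[1:] + leaves[:1]))   # close the leaf cycle
print(sorted(H.edges))
```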
# If this is Gaussian beam?
1. Jan 16, 2010
### KFC
In the book "Fundamentals of Photonics", the form of the Gaussian beam is written as
$$I(\rho,z) = I_0 \left(\frac{W_0}{W(z)}\right)^2\exp\left[-\frac{2\rho^2}{W^2(z)}\right]$$
where $$\rho = \sqrt{x^2 + y^2}$$
However, in some books (I forgot which one), the author use the following form
$$I(R) = I_0 \exp\left[-\frac{R^2W_0^2}{W^2}\right]$$
where
$$R = \rho/W_0, \qquad \rho=\sqrt{x^2+y^2}$$
In the second expression, I don't know why there is no $$\left(W_0/W(z)\right)^2$$ in the amplitude, and why he wants to define R instead of using $$\rho$$ directly. And what about $$W_0$$ and $$W$$ in the second expression? Do they have the same meaning as in the first one?
I forgot which book using such form, if you know any information, could you please tell me the title and author of the book? Thanks.
2. Jan 17, 2010
### mathman
Off hand (I'll admit I know nothing about the physics involved) it looks like the two expressions are equivalent, except for the 2 in the numerator of the first expression. W and W0 look like they are the same in both. The coefficient of both is I0. In one case the argument seems to be expressed, while the other may be implicit - again I don't know what any of this is supposed to be physically.
3. Jan 17, 2010
### KFC
Thanks. I am thinking about one physical aspect. Since energy is conserved (the total energy of the input beam should be conserved after being transported some distance), if the intensity is not inversely proportional to the waist, how can the energy be conserved? Please show me if I am wrong :)
4. Jan 18, 2010
### Bob S
I would first write the cross section of the beam traveling in the z direction in cartesian coordinates:
I(z) = I0 exp[-x²/2σx(z)²] exp[-y²/2σy(z)²]
where σx(z) and σy(z) are the rms widths of the Gaussian beam in the x and y directions at z.
This may be rewritten as
I(z) = I0 exp[-x²/2σx(z)² - y²/2σy(z)²]
and finally as
I(z) = I0 exp[-ρ²/2σ(z)²]
if σx(z) = σy(z), where ρ² = x² + y².
Bob S |
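For what it is worth, a quick numerical check (a sketch, not taken from the book) confirms KFC's energy argument for the first form: the total power P(z) = ∫ I(ρ,z) 2πρ dρ is independent of z precisely because of the (W0/W(z))² prefactor.

```python
import numpy as np
from scipy.integrate import quad

I0, W0 = 1.0, 1.0

def power(W):
    integrand = lambda rho: I0 * (W0 / W) ** 2 * np.exp(-2 * rho ** 2 / W ** 2) * 2 * np.pi * rho
    return quad(integrand, 0, np.inf)[0]

print(power(W0), power(2 * W0), power(5 * W0))   # all ≈ pi * I0 * W0**2 / 2 ≈ 1.5708
```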
# Is overfitting… good?
Published:
Conventional wisdom in Data Science/Statistical Learning tells us that when we try to fit a model that is able to learn from our data and generalize what it learned to unseen data, we must keep in mind the bias/variance trade-off. This means that as we increase the complexity of our models (let us say, the number of learnable parameters), it is more likely that they will just memorize the data and will not be able to generalize well to unseen data. On the other hand, if we keep the complexity low, our models will not be able to learn too much from our data and will not do well either. We are told to find the sweet spot in the middle. But is this paradigm about to change? In this article we review new developments that suggest that this might be the case.
### Revising bias and variance
While we framed it as conventional wisdom, the bias/variance trade-off is based on the observation that the loss functions that we minimize when fitting models to our data can be decomposed as
$\operatorname{loss} = \operatorname{bias}^2 + \operatorname{variance} + \operatorname{error}$
where the error term is due to random fluctuations of our data with respect to the true function which determines responses from features, or in other words, we can think of it as noise on which we do not have control. For the purpose of this entry, we will suppose that the error term can be taken to be zero.
When we fit models to our data, we try to minimize the loss function on the training data, while keeping track of it evaluated on the test data. The bias/variance trade-off means that when we look at the test loss as a function of complexity, we try to keep it minimal by balancing both the bias and the variance. This is normally explained with the conventional U-shaped plot of complexity vs loss:
We see that low complexity models do not generalize well, as they are not strong enough to learn the patterns in our data, while high complexity models do not generalize well either, as they learn patterns which are not really representative of the underlying relationship between features and responses. Thus, we look for the middle ground. There have been signs (which can be traced back to Leo Breiman, Reflections after refereeing papers for nips 1995) that this paradigm must be re-thought and that there might be more to it. I am by no means giving an extensive literature review on this, but I will point out some articles which seem particularly interesting in this regard.
### Understanding deep learning requires re-thinking generalization
The article Understanding deep learning requires re-thinking generalization by Zhang et al suggested that we should revise our current notions of generalization to unseen data, as they can become meaningless in the examples they exhibit. More concretely, they look at the (empirical) Rademacher complexity, and examine if it is a viable candidate measure to explain the incredible generalization capabilities of state-of-the-art neural networks.
Suppose that our data has points $\{ (x_k,y_k) : k = 1\dots n\}$ and we are trying to fit functions from a class $\mathcal{H}$, that is, we are trying to find $h\in\mathcal{H}$ such that $h(x_k)$ is as close to $y_k$ as possible, while keeping the error on new samples low. In order to see how flexible for learning the class $\mathcal{H}$ is, consider Bernoulli random variables $\sigma_1,\dots,\sigma_n \sim \operatorname{Ber}(1/2)$ taking values on $\{-1,1 \}$, which we think of as a realization of labels for our points $\{x_1,\dots,x_n \}$. A fixed function $h\in\mathcal{H}$ performs well if it outputs the correct label for most points, and since the outputs are $1$ and $-1$ with equal distribution, a good performance would mean that the average
$\dfrac{1}{n}\sum_{k=1}^n \sigma_k h(x_k)$
is close to one, as a correct prediction adds one to the sum (both $\sigma_k$ and $h(x_k)$ have the same sign in this case), while an incorrect prediction subtracts one to the sum (the factors have opposite signs). We can find the optimal function in the class by simply taking the supremum over $h$:
$r_n(\mathcal{H},\sigma) = \sup_{h\in\mathcal{H}}\dfrac{1}{n}\sum_{k=1}^n \sigma_k h(x_k).$
The above number quantifies how well the class $\mathcal{H}$ is able to learn a particular realization of labels $\sigma_1,\dots,\sigma_n$. To measure the overall generalizing power of the class $\mathcal{H}$, we look at the average of $r_n(\mathcal{H},\sigma)$ over all realizations, so we can measure how well the class $\mathcal{H}$ is able to perform over all balanced problems:
$\mathfrak{R}_n(\mathcal{H}) = \mathbb{E}_\sigma\left[ \sup_{h\in\mathcal{H}}\dfrac{1}{n}\sum_{k=1}^n \sigma_k h(x_k) \right].$
We call this number the Rademacher complexity of $\mathcal{H}$. Zhang et al put this measure to a test: they found that multiple modern neural networks architectures are able to easily fit randomized labels. In other words, they took data $\{(x_k,y_k), k = 1,\dots, n \}$ and applied permutations $\phi\in S_n$ to the labels and then trained neural networks on the new data $\{(x_k,y_{\phi(k)}), k = 1,\dots, n \}$, which was capable of achieving near zero train error. These were the same networks that yield state-of-the-art results, so their conclusion is that Rademacher complexity is not a measure capable of explaining the great generalization potential of these networks. In their words:
This situation poses a conceptual challenge to statistical learning theory as traditional measures of model complexity struggle to explain the generalization ability of large artificial neural networks.
They also looked at other measures, such as VC dimension and fat-shattering dimension, and reached the same conclusions. We therefore need to re-think how to measure the complexity of our models.
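As a toy illustration of this label-randomization test (not the architectures or datasets of the paper), a sufficiently flexible off-the-shelf model will drive the training error to nearly zero even on permuted labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
y_shuffled = rng.permutation(y)                   # destroy any real signal

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y_shuffled)
print(model.score(X, y_shuffled))                 # ~1.0: the random labels are memorized
```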
### To understand deep learning we need to understand kernel learning
The follow-up article To understand deep learning we need to understand kernel learning by Belkin et al. dives deeper into the phenomenon studied in the previous article, and provides evidence that overfitted models with good generalization are not exclusive to deep neural networks; it is in fact something we can see in much shallower models as well. We will not go into further details of this paper, as we just want to remark that the phenomenon is wider than deep learning.
### Reconciling modern machine learning practice and the bias-variance trade-off
In Reconciling modern machine learning practice and the bias-variance trade-off by Belkin et al., we again see strong signs that the classical bias/variance trade-off picture is incomplete. In the paper, the authors show evidence that (shallow) neural networks and decision trees (and their respective ensembles) exhibit behaviour different from the classical U-shaped curve of test error as a function of model complexity (number of parameters). There is a threshold point (called the interpolation threshold) at which the training error becomes zero (that is, the model has effectively memorized the data) while the test error is high. Typically this point, when shown at all, sits at the rightmost end of the test-error-versus-complexity graph. In the paper, the authors show that beyond the interpolation threshold the test error starts decreasing again, while the training error stays minimal.
We can see that after the interpolation threshold, the test error decreases again. The x axis represents the number of features ($\times 10^3$).
The U shape is only the first part of a larger picture.
The authors explain this new regime in terms of regularization: once we pass the interpolation threshold, the interpolating solutions are no longer unique. In order to choose a solution which is optimal in some sense, we can pick the one which gives minimal $\ell^2$ norm to the vector of parameters defining the model. In the case of a shallow neural network, this corresponds to minimizing the norm in the reproducing kernel Hilbert space corresponding to the Gaussian kernel (i.e., an infinite-dimensional model). As we expand the class of functions over which we look for our solution, there is more room to find a candidate with smaller $\ell^2$ norm of the coefficients, which acts as a regularizer for the solutions interpolating the data. This mechanism turns out to be an inductive bias, meaning that we strengthen the bias of our already overfitting models.
### Benign Overfitting in Linear Regression
The previous paper showed strong evidence for a double descent behaviour in the test-error-versus-complexity curve: after the interpolation threshold, the test error goes down again. In Benign Overfitting in Linear Regression, Bartlett et al. characterize the phenomenon in the setting of linear regression. More precisely, for i.i.d. points $(x_1,y_1),\dots,(x_n,y_n),(x,y)$, we want to find $\theta^*$ such that $\mathbb{E}\left[(x^T\theta^* - y)^2\right] = \min_{\theta}\mathbb{E}\left[(x^T\theta - y)^2\right]$. For any $\theta$, the excess risk is
$R(\theta) = \mathbb{E}\left[ (x^T\theta - y)^2 - (x^T\theta^* -y)^2 \right],$
which measures the average quadratic error of the estimates based on $\theta$ relative to the optimal estimates. The article gives upper and lower bounds for the excess risk of the minimum-norm estimator $\hat\theta$, defined by
$\min_\theta \| \theta \|^2 \quad \text{ such that } \quad \|X\theta - Y\| = \min_\beta\|X\beta - Y\|,$
where $X$ is the matrix containing all the realizations $x_1,\dots,x_n$ of $x$ as rows and $Y$ is defined similarly for $y$. The bounds in the paper depend on the notion of effective ranks of the covariance matrix of $x$, $\Sigma = \mathbb{E}[xx^T]$, defined by
$r_{k}(\Sigma)=\frac{\sum_{i>k} \lambda_{i}}{\lambda_{k+1}} \quad , \quad R_{k}(\Sigma)=\frac{\left(\sum_{i>k} \lambda_{i}\right)^{2}}{\sum_{i>k} \lambda_{i}^{2}}$
where $\lambda_i$ are the eigenvalues of $\Sigma$ in decreasing order.
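As a small illustration of these definitions (a sketch under my own naming, not code from the paper): the minimum-norm least-squares solution is exactly what the pseudoinverse returns in the overparametrized regime, and the effective ranks are simple functions of the spectrum of $\Sigma$.

```python
import numpy as np

def min_norm_estimator(X, Y):
    """Minimum l2-norm solution of min ||X b - Y|| (the pseudoinverse solution)."""
    return np.linalg.pinv(X) @ Y

def effective_ranks(Sigma, k):
    """Effective ranks r_k and R_k computed from the eigenvalues of Sigma."""
    lam = np.sort(np.linalg.eigvalsh(Sigma))[::-1]   # eigenvalues in decreasing order
    tail = lam[k:]                                   # lambda_{k+1}, lambda_{k+2}, ...
    r_k = tail.sum() / tail[0]
    R_k = tail.sum() ** 2 / (tail ** 2).sum()
    return r_k, R_k

# Toy overparametrized regression: n = 20 samples, p = 100 features.
rng = np.random.default_rng(0)
n, p = 20, 100
Sigma = np.diag(1.0 / np.arange(1, p + 1))           # decaying spectrum
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
theta_true = rng.normal(size=p)
Y = X @ theta_true + 0.1 * rng.normal(size=n)
theta_hat = min_norm_estimator(X, Y)
print(np.allclose(X @ theta_hat, Y))   # True: the estimator interpolates the training data
print(effective_ranks(Sigma, k=5))
```

The point of the toy run is only to show that the minimum-norm estimator drives the training error to zero while remaining well defined; the paper's bounds then say when this interpolation is benign.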
### Final words
We have seen throughout this article that the notions of complexity need to be re-thought, as does the traditional idea that the bias/variance trade-off is a static picture. There is hard evidence, both theoretical and practical, from experiments as well as the daily practice of many ML practitioners, that overfitting is not necessarily a bad thing, as long as we keep control of the test error and observe it decaying after the interpolation threshold. It is worth pointing out that for this to happen, it is necessary to have a number of parameters much higher than the number of points in the dataset (taking into account the dimension of each point), which can become computationally expensive. We may see practices like this more often in the future, and maybe one day the books will have to be changed and the bias/variance trade-off will be a thing of the past. In pragmatic terms, as long as it works, we can keep doing it!
Images from: 1 Elements of Statistical learning,
2, 3 Reconciling modern machine learning practice and the bias-variance trade-off
Thanks to Juan Pablo Vigneaux and Bertrand Nortier for pointing my interest to some of the papers discussed here. |
# Value Types and Operators
In compliance with the Recombee Recommender API, there are 6 value types, which correspond to the possible domains of item/user properties:
• int – signed integer (currently 64bit),
• double – double-precision floating-point number (IEEE 754 compliant),
• timestamp – UTC timestamp, similar to double,
• string – sequence of Unicode characters,
• boolean – binary data type of two possible values: true or false,
• set – unordered collection of values.
Except for set, all of the types include a special null value, which, again, corresponds to the fact that null is an allowed (and also the default) value for property values in the API.
## Numbers

### Notation

| Expression | Equivalent | Comment |
| --- | --- | --- |
| 0123.000 | 123.0 | Leading and trailing zeros are ignored. |
| 1.23e+3 | 1230.0 | Exponential notation may be used. |
| 1e9 | 1000000000 | Using simple exponential notation for huge numbers. |
| 123E-2 | 1.23 | Negative exponents may also be used. Case of the e character does not matter. |
### Operations

| Expression | Result | Comment |
| --- | --- | --- |
| 1 + 2 | 3 | Addition. |
| 1 + 2 + 3 + 4 | 10 | Chain of additions. |
| 1 - 2 | -1 | Subtraction. |
| 1 - 2 - 3 - 4 | -8 | Chain of subtractions. |
| -(1 + 2) | -3 | Unary minus. |
| 2 * 3 | 6 | Multiplication. |
| 1 + 2 * 3 - 4 | 3 | Standard operator precedence. |
| (1 + 2) * (3 - (4 + 5)) | -18 | Bracketing. |
| 10 / 5 | 2.0 | Division. |
| 1 / 2 | 0.5 | Division always results in double, even if the operands are integers! |
| 5 / 0 | NaN | If the divisor is 0, the result is NaN. |
| 9 % 4 | 1 | Modulo division. |
| 3.14 % 2.5 | 0.64 | Modulo division also works for doubles. |
| 5 % 0 | NaN | If the divisor is 0, the result is NaN. |
### Comparison

| Expression | Result | Comment |
| --- | --- | --- |
| 1 < 2.0 | true | Integers, doubles, and timestamps may be compared using standard comparison operators. |
| 1 < 2 <= 2 == 2 != 1 >= 1 > 0 | true | Comparison operators may be arbitrarily chained. |
| 1 < 2 <= 2 == 3 != 1 >= 1 > 0 | false | Chain of comparisons returns true if and only if all the individual comparisons are true. |
| 2 == 2.0 | true | In comparison, there is no difference between integers, doubles, and timestamps. |
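As an informal aside (an analogy only, not part of the Recombee documentation): most of the numeric rules above behave like Python 3 expressions, which makes them easy to experiment with interactively. The one notable difference is division by zero, which yields NaN here but raises an exception in Python.

```python
# Informal Python 3 analogy for the numeric rules above (illustration only,
# not the ReQL implementation).
print(1 / 2)                              # 0.5  -- division always yields a float
print(3.14 % 2.5)                         # ~0.64 (floating point) -- modulo works for floats
print(1 < 2 <= 2 == 2 != 1 >= 1 > 0)      # True  -- chained comparisons
print(1 < 2 <= 2 == 3 != 1 >= 1 > 0)      # False -- one failing link breaks the chain
# Caveat: ReQL evaluates 5 / 0 to NaN, while Python raises ZeroDivisionError.
```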
## Strings

### Notation

| Expression | Comment |
| --- | --- |
| "foo" | String constants are enclosed in double quotes. |
| "" | Empty string. |
| "she said \"hello\"" | Double quotes must be escaped. |
| "she said 'hello'" | Single quotes needn't be escaped. |
### Comparison

| Expression | Result | Comment |
| --- | --- | --- |
| "Alice" < "Bob" | true | Strings are ordered lexicographically. |
| "Alice" < "Bob" < "Carol" < "Dan" | true | Comparisons may be chained arbitrarily. |
| "Alice" < "Bob" <= "Carol" != "Dan" | true | Comparisons in the chain may be of different types. |
| "Alice" < "Bob" >= "Carol" != "Dan" | false | All the comparisons must hold for the chain to return true. |
| "Alice" < 5 | error | Strings are only comparable with strings. |
| "Alice" ~ "A[a-z]+" | true | Strings can be matched with regular expressions (regex). |
### Containment

| Expression | Result | Comment |
| --- | --- | --- |
| "ice" in "Alice" | true | The in operator between strings tests whether the first string is contained in the second string. |
| "Ice" in "Alice" | false | Containment test is case sensitive. |
| "ice" not in "Alice" | false | The in operator may be negated for better readability. |
| "" in "abc" | true | The empty string is contained in every string. |
| "abc" in "" | false | No non-empty string is contained in the empty string. |
| 5 in "abc" | error | Both operands must be strings for string containment testing. |
### Concatenation

| Expression | Result | Comment |
| --- | --- | --- |
| "foo" + "bar" | "foobar" | Strings can be concatenated using the + operator. |
| "" + "foo" + "" | "foo" | The empty string is the neutral element for concatenation. |
| "foo" + 123 | "foo123" | Strings can be concatenated with integers. |
| "foo" + 123.0 | "foo123.0" | Strings can be concatenated with numbers. |
## Sets

### Notation

| Expression | Comment |
| --- | --- |
| {} | Empty set. |
| {1, 2, 3} | Set containing three integers. |
| {1, 2.0, false, "foo", null} | Sets may contain values of different types. This is an extension to sets in the API, which may only contain strings. |
| {{1,2}, {2,3}} | Sets may be nested. |
### Properties

| Expression | Result | Comment |
| --- | --- | --- |
| { 1, 1, 1, 2 } | { 1, 2 } | Sets only contain unique elements. |
| { 1, 1.0 } | { 1.0 } | Integers, doubles, and timestamps are merged. |
| { {1,2}, {2,1} } | { {1,2} } | Merging also works for nested sets. |
### Value Containment

| Expression | Result | Comment |
| --- | --- | --- |
| 2 in { 1, 2, 3 } | true | Using the in operator, you may test whether a value is contained in a given set (the ∈ relation). |
| 4 not in { 1, 2, 3 } | true | The in operator may be negated for better readability (the ∉ relation). |
| 2.0 in { 1, 2, 3 } | true | There is no difference between integers, doubles, and timestamps when testing containment. |
| "2" in { 1, 2, 3 } | false | There is a difference between numbers and strings. |
| { 1, 2 } in { 1, 2, 3 } | false | in stands for ∈, not ⊆! |
| { 1, 2 } in { {1,2}, {3,4} } | true | in stands for ∈. |
### Comparison

| Expression | Result | Comment |
| --- | --- | --- |
| { 1, 2 } < { 1, 2, 3 } | true | Using the < operator, you may test whether one set is a proper subset of another set (⊂ operator in set algebra). |
| { 1, 2 } < { 1, 2 } | false | No set is a proper subset of itself. |
| {} < { 1, 2 } | true | The empty set is a proper subset of every non-empty set. |
| {} < {} | false | The empty set is not a proper subset of itself. |
| { 1, 2 } <= { 1, 2, 3 } | true | Using the <= operator, you may test whether one set is a subset of another set (⊆ operator in set algebra). |
| { 1, 2 } <= { 1, 2 } | true | Every set is a subset of itself. |
| { 1, 2 } == { 1, 2 } | true | == tests whether two sets are identical. |
| { 1, 2 } != { 1, 2 } | false | != tests whether two sets are different. |
| { 1, 2, 3 } >= { 1, 2 } | true | The >= operator tests whether one set is a superset of another set (⊇ operator in set algebra). |
| { 1, 2 } >= { 1, 2 } | true | Every set is a superset of itself. |
| { 1, 2, 3 } > { 1, 2 } | true | The > operator tests whether one set is a proper superset of another set (⊃ operator in set algebra). |
| { 1, 2 } > { 1, 2 } | false | A non-empty set is not a proper superset of itself. |
| { 1, 2 } > {} | true | Every non-empty set is a proper superset of the empty set. |
| {} > {} | false | The empty set is not a proper superset of itself. |
### Union

| Expression | Result | Comment |
| --- | --- | --- |
| { 1, 2 } + { 2, 3 } | { 1, 2, 3 } | Sets may be unified using the + operator (∪ in set algebra). |
| { 1, 2.0 } + { 2, 3 } | { 1, 2.0, 3 } | Integers, doubles, and timestamps are merged when unifying sets. |
| { 1, 2 } + { 2, 3 } + { 4 } | { 1, 2, 3, 4 } | Unions may be chained. |
| { 1, 2 } + {} | { 1, 2 } | Unification with the empty set has no effect on the original set. |
| { 1, 2 } + { "2", "3" } | { 1, 2, "2", "3" } | Strings and numbers are handled as different values. |
### Difference

| Expression | Result | Comment |
| --- | --- | --- |
| { 1, 2 } - { 2, 3 } | { 1 } | Set difference may be obtained using the - operator (∖ in set algebra). |
| { 1, 2 } - { 2.0, 3.0 } | { 1 } | Integers, doubles, and timestamps are considered equal if their values are equal. |
| { 1, 2 } - {} | { 1, 2 } | Subtracting an empty set has no effect. |
| { 1, 2 } - { 1 } - { 2 } | {} | Chaining of set subtractions works from left to right. |
| { 1, 2 } - ({ 1, 2 } - { 2 }) | { 2 } | Parenthesizing also works. |
### Intersection

| Expression | Result | Comment |
| --- | --- | --- |
| { 1, 2 } & { 2, 3 } | { 2 } | Set intersection may be obtained using the & operator. |
| { 1, 2 } & { 2.0, 3.0 } | { 2 } | Integers, doubles, and timestamps are considered equal if their values are equal. |
| { 1, 2 } & {"1", "2"} | {} | Strings and numbers are handled as different values. |
| {"a", { 1, 2 }} & {"b", { 1, 2 }} | {{1,2}} | Also works with nested sets as elements. |
### Symmetric difference

| Expression | Result | Comment |
| --- | --- | --- |
| { 1, 2 } / { 2, 3 } | { 1, 3 } | Symmetric difference of sets may be obtained using the / operator. |
| { 1, 2 } / { 2.0, 3.0 } | { 1, 3 } | Integers, doubles, and timestamps are considered equal if their values are equal. |
| { 1, 2 } / {"1", "2"} | {1, 2, "1", "2"} | Strings and numbers are handled as different values. |
| {"a", { 1, 2 }} / {"b", { 1, 2 }} | {"a", "b"} | Also works with nested sets as elements. |
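Another informal aside (again just an analogy, not the API itself): the set operators above map almost one-to-one onto Python's built-in set type, except that the symmetric difference is written / here where Python uses ^.

```python
# Python analogy for the ReQL set operators (illustrative only).
a, b = {1, 2}, {2, 3}
print(a | b)        # {1, 2, 3}  -- union                 (ReQL: +)
print(a - b)        # {1}        -- difference            (ReQL: -)
print(a & b)        # {2}        -- intersection          (ReQL: &)
print(a ^ b)        # {1, 3}     -- symmetric difference  (ReQL: /)
print(a < {1, 2, 3}, a <= a, a > set())   # True True True -- proper subset, subset, proper superset
```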
## Logical Operators

### Negation (NOT)

| Expression | Comment |
| --- | --- |
| not 'a' == 'b' | 'a' != 'b' |
| not 'a' > 'b' | 'a' <= 'b' |
| not true | false |
| not false | true |
Implicit conversion to boolean (for advanced uses only!):
| Expression | Result | Comment |
| --- | --- | --- |
| not -1 | false | Negative numbers are truthy. |
| not 0 | true | Zero numbers are falsy. |
| not 1.23 | false | Positive numbers are truthy. |
| not "" | true | Empty strings are falsy. |
| not "foo" | false | Non-empty strings are truthy. |
| not {} | true | Empty sets are falsy. |
| not {1,2,3} | false | Non-empty sets are truthy. |
| not null | true | null is falsy. |
### Disjunction (OR)

| Expression | a | b | c | Result | Comment |
| --- | --- | --- | --- | --- | --- |
| 'a' > 'b' or 'a' > 'c' | 1 | 2 | 3 | false | If both operands are false, false is returned. |
| 'a' > 'b' or 'a' > 'c' | 2 | 1 | 3 | true | If at least one of the boolean operands is true, the result is true. |
| 'a' > 'b' or 'a' > 'c' | 2 | 3 | 1 | true | If at least one of the boolean operands is true, the result is true. |
| 'a' > 'b' or 'a' > 'c' | 3 | 1 | 2 | true | If both the operands are true, the result is true. |
Advanced uses: Implicit conversion to boolean.
| Expression | Result | Comment |
| --- | --- | --- |
| "foo" or "bar" | "foo" | If the first operand is truthy, it is returned. |
| "" or false | false | If the first operand is falsy, the second operand is returned. |
| false or "" | "" | If the first operand is falsy, the second operand is returned. |
### Conjunction (AND)

| Expression | a | b | c | Result | Comment |
| --- | --- | --- | --- | --- | --- |
| 'a' > 'b' and 'a' > 'c' | 1 | 2 | 3 | false | If both operands are false, false is returned. |
| 'a' > 'b' and 'a' > 'c' | 2 | 1 | 3 | false | If at least one of the boolean operands is false, the result is false. |
| 'a' > 'b' and 'a' > 'c' | 2 | 3 | 1 | false | If at least one of the boolean operands is false, the result is false. |
| 'a' > 'b' and 'a' > 'c' | 3 | 1 | 2 | true | If both the operands are true, the result is true. |
Advanced uses: Implicit conversion to boolean.
| Expression | Result | Comment |
| --- | --- | --- |
| "foo" and "bar" | "bar" | If the first operand is truthy, the second operand is returned. |
| "" and false | "" | If the first operand is falsy, it is returned. |
| false and "" | false | If the first operand is falsy, it is returned. |
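One more informal aside: the way or and and return one of their operands (rather than a strict boolean) mirrors Python's short-circuit operators, so the rules above can be checked interactively.

```python
# Python analogy for the short-circuit rules above (illustrative only).
print(repr("foo" or "bar"))   # 'foo'  -- the first truthy operand is returned
print(repr("" or False))      # False  -- first operand falsy, so the second is returned
print(repr("foo" and "bar"))  # 'bar'  -- first operand truthy, so the second is returned
print(repr("" and False))     # ''     -- first operand falsy, so it is returned
print(bool(""), bool(set()), bool(None))  # False False False -- falsy values, as in ReQL
```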
## Conditional Operator

| Expression | a | b | Result | Comment |
| --- | --- | --- | --- | --- |
| if 'a' > 'b' then "foo" else "bar" | 10 | 5 | "foo" | then-value is returned if the condition is satisfied. |
| if 'a' < 'b' then "foo" else "bar" | 10 | 5 | "bar" | else-value is returned if the condition is not satisfied. |
| if 'a' < 'b' then "foo" | | | error | else clause must always be present. |
| if 'a' < 'b' then "foo" else (if 'a' > 'b' then "bar" else "bah") | 5 | 5 | "bah" | if-else statements may be nested using parentheses. |
| Expression | Result | Comment |
| --- | --- | --- |
| if -1 then "foo" else "bar" | "foo" | Negative numbers are truthy. |
| if 0 then "foo" else "bar" | "bar" | Zero numbers are falsy. |
| if 1.23 then "foo" else "bar" | "foo" | Positive numbers are truthy. |
| if "" then "foo" else "bar" | "bar" | Empty strings are falsy. |
| if "bah" then "foo" else "bar" | "foo" | Non-empty strings are truthy. |
| if {} then "foo" else "bar" | "bar" | Empty sets are falsy. |
| if {1,2,3} then "foo" else "bar" | "foo" | Non-empty sets are truthy. |
| if null then "foo" else "bar" | "bar" | null is falsy. |
# Group cohomology of an abelian group with nontrivial action
How do I compute the group cohomology $H^2(G,A)$ if $G$ is a finite abelian group acting nontrivially on a finite abelian group $A$?
-
Type it into a computer. Seriously. Magma will definitely do it. – Kevin Buzzard May 30 '11 at 19:44
GAP too........ – Fernando Muro Sep 14 '11 at 12:10
If $G$ is any group and $A$ is any $G$-module, then $H^2(G,A)$ can be identified with the set of equivalence classes of extensions $$1\to A\to H\to G\to 1$$
such that the action of $G$ on $A$ is the given action. Two extensions $H_1,H_2$ are said to be equivalent if there is an isomorphism $H_1\to H_2$ that is compatible with the two exact sequences. See K. Brown, Cohomology of Groups, Chapter IV.
-
One can do the calculation using the Künneth theorem and the cohomology of cyclic groups.
See eqn J18 and appendix J.6 and J.7 in a physics paper http://arxiv.org/pdf/1106.4772v2
-
I don't think this works so easily with non-trivial coefficients. – Fernando Muro Sep 14 '11 at 12:12
Dear Fernando: Eqn. J60 - eqn. J70 in the above paper give some explicit results for non-trivial coefficients, for some simple Abelian groups. But do you suggest that I cannot use the Künneth theorem for non-trivial coefficients? – Xiao-Gang Wen Sep 14 '11 at 16:02
I agree with Fernando. At least the derivation of (J54) is doubtful: It's based on (J43), but in (J43) one has $H^i(G_1;M)\otimes_M H^{n-i}(G_2;M)$ while by setting $G_1 := Z_2^T, G_2 := Z_n$, (J54) reads: $H^i(G_1;Z_T) \otimes_Z H^{d-i}(Z_n;Z)$, i.e. in both components of the tensor product in (J43) the coefficients are equal, while in (J54) they differ! Also be aware of the Wikipedia references for Künneth formulas: they require trivial coefficients! (Otherwise Wikipedia would have to use (co)homology with local coefficients, which it doesn't.) – Ralph Sep 14 '11 at 20:58
Dear Ralph: Thanks for the comments. I agree with you that eqn. J43 from a webpage is intended for trivial coefficients. But if the Künneth formula only depends on the cohomological structure algebraically, should it also apply to non-trivial coefficients, provided that the group action "splits" in some way? Here $Z_T$ is the same as $Z$, just that $G_1$ has a non-trivial action on $Z_T$. In fact, $G_1\times G_2$ acts "naturally" on $H^i(G_1,Z_T)\otimes_Z H^{d-i}(G_2,Z)$. Let $a\in H^i(G_1,Z_T)$ and $b\in H^{d-i}(G_2,Z)$. We have a group action $(g_1,g_2) \cdot (a\otimes b)=(g_1\cdot a)\otimes b$. – Xiao-Gang Wen Sep 14 '11 at 22:31
In the case that I am interested in, the action does not split in any way at all. To make things more precise, in my case $G$ is a transitive abelian subgroup of $S_n$ acting on $A=(Q/Z)\times\ldots \times (Q/Z)$ by permuting factors (in the concrete problem I actually have the multiplicative group of complex numbers without $0$ instead of $Q/Z$). – Mitja Sep 15 '11 at 12:27
You can compute it using the Bar resolution, see [Weibel, H-book].
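For concreteness (standard textbook material, added here as a sketch rather than part of the original answer): with the bar resolution, a class in $H^2(G,A)$ is represented by a function $f\colon G\times G\to A$ satisfying the 2-cocycle condition $$g\cdot f(h,k) - f(gh,k) + f(g,hk) - f(g,h) = 0 \qquad \text{for all } g,h,k\in G,$$ and two cocycles give the same class exactly when they differ by a coboundary $(\delta c)(g,h) = g\cdot c(h) - c(gh) + c(g)$ for some $c\colon G\to A$. For finite $G$ and finite $A$ this reduces $H^2(G,A)$ to a finite computation with finitely generated abelian groups (essentially Smith normal form), which is the kind of linear-algebra problem that systems such as GAP or Magma automate.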
-
I know exactly how it is related to extensions and how to compute it explicitly if the action is trivial (using the Künneth formula to translate the problem to cyclic groups). I was just trying to find out if there is some relatively easy procedure that gives you an explicit answer for nontrivial actions (perhaps by somehow using the long exact sequence to translate the problem to that of trivial actions). – Mitja May 30 '11 at 21:06
I don't think it's a good idea to use the bar resolution here since it's way too large. You'll get a projective resolution of minimal complexity by tensoring the periodic resolutions of the cyclic summands of $G$. – Ralph May 31 '11 at 4:14
Why is ancient astronomy right?
Let's be honest...Why were planets visible to the ancient people and not to us humans any longer? Is it because the planets distanced? How can some planets distance themselves while others don't? It's said that light pollution has affected the visibility of planets in the sky, whereas there are still reports of people seeing at least one planet in the sky. So how come ancient astrology got it right before the telescopes?
Orodruin
Staff Emeritus
Homework Helper
Gold Member
Why were planets visible to the ancient people and not to us humans any longer?
This is an incorrect supposition. Many planets are visible to the naked eye - brighter than the stars. Most notably Venus and Mars, but I regularly see Mercury, Jupiter, and Saturn too.
Is it because the planets distanced? How can some planets distance themselves while others don't?
It is unclear what you mean by this. The planets all have different orbits, resulting in different periods and therefore varying distances.
So how come ancient astrology got it right before the telescopes?
You have to separate astrology from astronomy. The latter is an empirical science, the former is hokum.
Ancient astronomers could easily measure the movement of the planets. Saturn and closer planets are visible to the eye without problem. Uranus was not discovered until the 18th century and Neptune 19th.
This is an incorrect supposition. Many planets are visible to the naked eye - brighter than the stars. Most notably Venus and Mars, but I regularly see Mercury, Jupiter, and Saturn too.
It is unclear what you mean by this. The planets all have different orbits, resulting in different periods and therefore varying distances.
You have to separate astrology from astronomy. The latter is an empirical science, the former is hokum.
Ancient astronomers could easily measure the movement of the planets. Saturn and closer planets are visible to the eye without problem. Uranus was not discovered until the 18th century and Neptune 19th.
How come i haven't been capable of pointing out any planets on my own? And the only planet that I was pointed to looked a lot like a normal star, in the same white-seeming color. What colors do you see the planets you pointed out in?
phinds
Gold Member
2019 Award
How come i haven't been capable of pointing out any planets on my own?
I don't see how you can expect us to know why you fail to do what most of the rest of us do easily.
mfb
Mentor
To the naked eye, planets look very similar to bright stars. The most notable difference is their position in the sky which varies a bit from day to day. They also tend to flicker a bit less in turbulent air.
I don't see how you can expect us to know why you fail to do what most of the rest of us do easily.
I'm being honest, I don't see any planet, and the only one shown to me didn't look like a planet; it looked more like a star. I don't need your insult at all.... how long does one have to look in the sky without a telescope until they notice the orbit of a planet?
fresh_42
Mentor
Light pollution plays a crucial role, especially if you live in an urban area. And unlike phinds, I doubt that I could find or see Mercury. Venus isn't difficult. She is a little brighter than stars and changes position within hours. Sometimes Mars is easy, too, if it can be seen at all, because it really appears to be a little orange and, like Venus, is slowly moving. I haven't seen Jupiter or Saturn either, although I tried. But I guess my main problem (beside light pollution) was that I didn't know very well where to search. And this might be another essential point: you have to know where and also when to look. If you have plenty of time, as your ancient astronomers probably had, you could watch and see which points are slowly moving (differences over half an hour or so). In addition you have to rule out high-flying aircraft, the ISS, and some other man-made satellites.
Astronuc
Staff Emeritus
As Orodruin mentioned, the planets, notably Venus and Mars, as well as Mercury, Jupiter, and Saturn, are readily visible. It helps when they are on our side of the sun. Mercury might be a challenge given its proximity to the sun, but Venus and Mars are readily visible.
how long does one have to look in the sky without a telescope until they notice the orbit of a planet?
If Venus is visible, it's readily apparent. Venus has been incorrectly called the 'Evening Star' or 'Morning Star'. It is very bright, and as mfb indicated, the planets do not flicker like the stars do.
https://stardate.org/astro-guide/ssguide/venus
There are numerous astronomical resources, which provide information on the planets, their positions and when they are visible.
For example,
You can see all five bright planets in the evening this month! But you’ll have to look hard for two of them. First, the easy ones … Jupiter, Mars and Saturn pop out as darkness falls in July 2016. Jupiter, brightest of the bunch, is found in the western half of the sky until late evening. Mars is still a bright beacon, although fainter than Jupiter at nightfall and early evening, still in a noticeable triangle with Saturn and the bright star Antares. Jupiter, Mars and Saturn are visible throughout July. Now the more difficult planets … Mercury and Venus. In July 2016, they’re low in the glare of evening twilight, quickly following the sun below the horizon before nightfall. But, as the days pass, both Mercury and Venus get higher in the sky. By mid-July, you can start searching for them with the eye, in the west after sunset. By late July, you might be able to see all five bright planets at once, briefly, after sunset. Follow the links below to learn more about July planets in 2016.
http://earthsky.org/astronomy-essentials/visible-planets-tonight-mars-jupiter-venus-saturn-mercury
http://earthsky.org/astronomy-essen...onight-mars-jupiter-venus-saturn-mercury#mars
Combine this info http://www.space.com/5743-storied-history-word-planet.html with the fact that the "Ancients" compiled generations of observations, let me know your thoughts.
Before I do that, answer my question... how long does it take for anyone to measure the orbit of a planet without telescope?
Astronuc
Staff Emeritus
how long does it take for anyone to measure the orbit of a planet without telescope?
One would have to take several measurements over days for Venus, and perhaps weeks for Mars to months for Jupiter and Saturn, keeping careful note of date and time, and positions with respect to the coordinate system.
There are plenty of astronomical resources that map the planets' orbits. Why would one bother to repeat such measurements?
One should study some basic astronomy, including understanding about the ecliptic.
http://earthsky.org/space/what-is-the-ecliptic
fresh_42
Mentor
how long does it take for anyone to measure the orbit of a planet without telescope?
I would say it took Copernicus half a lifetime. There were column after column of notes on positions, entire books filled with them.
I don't know whether it would be easier today as we know Kepler's laws.
One would have to take several measurements over days for Venus, and perhaps weeks for Mars to months for Jupiter and Saturn, keeping careful note of date and time, and positions with respect to the coordinate system.
There are plenty of astronomical resources that map the planets' orbits. Why would one bother to repeat such measurements?
Yep...Yet it seems very tiresome to me to do all that work. Do you possess any links to the first and all records left behind by ancient people about jupiter, venus, saturn...etc?
Yet it seems very tiresome to me to do all that work.
Astronuc
Staff Emeritus
Yep...Yet it seems very tiresome to me to do all that work. Do you possess any links to the first and all records left behind by ancient people about jupiter, venus, saturn...etc?
I don't know about ancient data sources, but current sources are available from various national observatories. In the US, the US Naval Observatory publishes data on the moon and planets.
Depending on where one lives, one could probably contact a local (regional or national, that is) observatory and see if they offer a publication (either hard copy or pdf) or software to determine the position or orbits of the planets.
https://en.wikipedia.org/wiki/List_of_astronomical_observatories
Last edited:
russ_watters
Mentor
how long does it take for anyone to measure the orbit of a planet without telescope?
For the brightest planet, Venus, you really can't miss it since it is the brightest object in the night sky that isn't the moon. And if you are paying attention, you should be able to notice its movement over a period of a few weeks (or even a few days if you are really paying attention). So the problem is, quite simply, that you are spending very little time looking at the sky.
About every other year, Jupiter and Venus are very close to each other in the sky. For an ancient with nothing else to look at at night that isn't lit by a campfire, they'd pretty much have to be blind not to notice them changing position from one day to the next:
Jupiter, Mars, Venus and Mercury are so bright and therefore so noticeable that the ancients simply couldn't help but realize they weren't stars, what with Pokemon Go and Must See TV not being available yet to occupy their time at night.
what with Pokemon Go and Must See TV not being available yet to occupy their time at night.
Those were the days.
fresh_42
Mentor
So the problem is, quite simply, that you are spending very little time looking at the sky.
I also have the problem that I don't have a clear horizon nearby, and it's difficult to see objects that don't rise very far in the sky. Plus, light pollution is worse the closer you get to human housing. So sometimes it is not as easy as simply looking out of the window.
fresh_42
Mentor
davenn
Gold Member
2019 Award
Sometimes Mars is easy, too, if it can be seen at all, because it really appears to be a little orange
Mars ranges from easy to so flaming bright you can't miss it, as it has been for the recent months.
It has also been very easy to see its motion relative to the background stars. It has just finished doing a big loop into the constellation of Scorpius and back out.
Dave
I think Jupiter is the easiest planet to spot since it's very bright when near Earth, not as bright as Venus, but Venus is only visible in the sky just after the Sun sets or shortly before it rises, because it's nearer to the Sun than we are.
Jupiter is often the only bright object that can be seen in the sky at any time of night other than the Moon, if there is a thin cloud cover.
The thin clouds are enough to block almost all stars, but Jupiter still can be seen.
Chronos
Gold Member
All planets out to Saturn are easily visible to the naked eye most of the time.
sophiecentaur
Gold Member
How come i haven't been capable of pointing out any planets on my own?
Partly for the same reason that you may not be able to point out a rare summer visiting bird in your local park or an unusual version of a well known saloon car. You need to get at least partially immersed in any topic before you start to get any success.
There is a small matter of Dark Adaptation, which can help you, even in urban conditions. You need to spend several minutes, up to the best part of an hour, and you start to see stuff up there that was not there when you stepped outside your door.
It's along these lines:-
Q: "Can you play the piano?"
A: "Dunno, I've never tried."
As for the Ancients, people tend to confuse technology with intelligence and ingenuity. These old guys were just as smart as us and have only been proved 'wrong' in the light of some serious help from later technology. The lack of light pollution was a significant advantage that they had. Plus enough money to employ servants (or slaves) to allow them to devote their lives (24/7) to their study.
Last edited:
Astronuc
Staff Emeritus
To determine how long it would take to calculate or plot the orbit of a planet, one would need to determine how accurate one wishes the calculation to be. Consider the following:
Code:
Planet     Distance from Sun     Mass           Solar orbital period
           (millions km)         (10^22 kg)
Jupiter      778                 190,000         4332 days (11.86 years)
Saturn      1429                  56,900        10760 days (29.46 years)
So for the orbit of Jupiter, it would take nearly 12 years to find Jupiter in the same region of sky (against the stellar background), and nearly 30 years for Saturn. One would need a very good surveying instrument. One could make yearly measurements against the stars, then gradually construct an orbit based on the geometry between the earth and the star field.
http://www.telescope.org/nuffield/pas/solar/solar7.html
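As a rough illustration of the time scales involved (a hypothetical Python sketch added for this write-up, not part of the original post): the sidereal period says how long until a planet returns to the same spot against the stars, while the synodic period tells an Earth-bound observer how long to wait between successive oppositions.

```python
# Back-of-the-envelope periods for naked-eye planet tracking (illustrative only).
EARTH_YEAR_DAYS = 365.25

# Approximate sidereal orbital periods in days.
periods = {"Mars": 687.0, "Jupiter": 4332.6, "Saturn": 10759.2}

for planet, T in periods.items():
    sidereal_years = T / EARTH_YEAR_DAYS
    # Synodic period for an outer planet: 1/S = 1/T_earth - 1/T_planet.
    synodic_days = 1.0 / (1.0 / EARTH_YEAR_DAYS - 1.0 / T)
    print(f"{planet}: returns to the same stars in ~{sidereal_years:.1f} yr, "
          f"opposition every ~{synodic_days:.0f} days")
```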
The thin clouds are enough to block almost all stars, but Jupiter still can be seen.
Here is a good example.
# The fate of the Antennae galaxies
http://hdl.handle.net/10138/309984
#### Citation
Lahén , N , Johansson , P H , Rantala , A , Naab , T & Frigo , M 2018 , ' The fate of the Antennae galaxies ' , Monthly Notices of the Royal Astronomical Society , vol. 475 , no. 3 , pp. 3934-3958 . https://doi.org/10.1093/mnras/sty060-
Title: The fate of the Antennae galaxies
Author: Lahén, Natalia; Johansson, Peter H.; Rantala, Antti; Naab, Thorsten; Frigo, Matteo
Contributor organization: Department of Physics
Date: 2018-04
Language: eng
Number of pages: 25
Belongs to series: Monthly Notices of the Royal Astronomical Society
ISSN: 0035-8711
DOI: https://doi.org/10.1093/mnras/sty060-
URI: http://hdl.handle.net/10138/309984

Abstract: We present a high-resolution smoothed particle hydrodynamics simulation of the Antennae galaxies (NGC 4038/4039) and follow the evolution $3$ Gyr beyond the final coalescence. The simulation includes metallicity dependent cooling, star formation, and both stellar feedback and chemical enrichment. The simulated best-match Antennae reproduces well both the observed morphology and the off-nuclear starburst. We also produce for the first time a simulated two-dimensional metallicity map of the Antennae and find good agreement with the observed metallicity of off-nuclear stellar clusters; however, the nuclear metallicities are overproduced by $\sim 0.5$ dex. Using the radiative transfer code SKIRT we produce multi-wavelength observations of both the Antennae and the merger remnant. The $1$ Gyr old remnant is well fitted with a Sérsic profile of $n=4.05$, and with an $r$-band effective radius of $r_{\mathrm{e}}= 1.8$ kpc and velocity dispersion of $\sigma_{\mathrm{e}}=180$ km$/$s the remnant is located on the fundamental plane of early-type galaxies (ETGs). The initially blue Antennae remnant evolves onto the red sequence after $\sim 2.5$ Gyr of secular evolution. The remnant would be classified as a fast rotator, as the specific angular momentum evolves from $\lambda_R\approx0.11$ to $\lambda_R\approx0.14$ during its evolution. The remnant shows ordered rotation and a double peaked maximum in the mean 2D line-of-sight velocity. These kinematical features are relatively common among local ETGs and we specifically identify three local ETGs (NGC 3226, NGC 3379 and NGC 4494) in the ATLAS$^\mathrm{3D}$ sample, whose photometric and kinematic properties most resemble the Antennae remnant.

We present a high-resolution smoothed particle hydrodynamic simulation of the Antennae galaxies (NGC 4038/4039) and follow the evolution 3 Gyr beyond the final coalescence. The simulation includes metallicity-dependent cooling, star formation, and both stellar feedback and chemical enrichment. The simulated best-match Antennae reproduce well both the observed morphology and the off-nuclear starburst. We also produce for the first time a simulated two-dimensional (2D) metallicity map of the Antennae and find good agreement with the observed metallicity of off-nuclear stellar clusters; however, the nuclear metallicities are overproduced by similar to 0.5 dex. Using the radiative transfer code SKIRT, we produce multiwavelength observations of both the Antennae and the merger remnant. The 1-Gyr-old remnant is well fitted with a Sersic profile of n = 7.07, and with an r-band effective radius of r(e) = 1.6 kpc and velocity dispersion of sigma(e) = 180 km s(-1) the remnant is located on the Fundamental Plane of early-type galaxies (ETGs). The initially blue Antennae remnant evolves on to the red sequence after similar to 2.5 Gyr of secular evolution. The remnant would be classified as a fast rotator, as the specific angular momentum evolves from lambda(Re) approximate to 0.11 to 0.14 during its evolution. The remnant shows ordered rotation and a double peaked maximum in the mean 2D line-of-sight velocity. These kinematical features are relatively common amongst local ETGs and we specifically identify three local ETGs (NGC 3226, NGC 3379, and NGC 4494) in the ATLAS(3D) sample, whose photometric and kinematic properties most resemble the Antennae remnant.

Description: 27 pages, 18 figures, submitted to MNRAS
Subject: astro-ph.GA; galaxies: starburst; galaxies: evolution; galaxies: individual: NGC 4038/4039; galaxies: kinematics and dynamics; methods: numerical; RADIATIVE-TRANSFER CODE; ELLIPTIC GALAXIES; H-II REGIONS; SMOOTHED PARTICLE HYDRODYNAMICS; MASS-METALLICITY RELATION; YOUNG STAR-CLUSTERS; TO-LIGHT RATIO; DIGITAL SKY SURVEY; LENS ACS SURVEY; ULTRALUMINOUS INFRARED GALAXIES; 115 Astronomy, Space science
Peer reviewed: Yes
Usage restriction: openAccess
Self-archived version: publishedVersion
## The sub-harmonic bifurcation of Stokes waves.(English)Zbl 0962.76012
The behaviour of steady periodic water waves on water of infinite depth, which satisfy exactly the kinematic and dynamic boundary conditions on the free surface, with or without surface tension, is given by solutions of a nonlinear pseudo-differential operator equation for a $$2\pi$$-periodic function of a real variable. The study is complicated by the fact that the equation is quasilinear and involves a non-local operator in the form of a Hilbert transform. Although this equation is exact, it is quadratic with no higher order terms, and the global structure of its solution set can be studied using elements of the theory of real analytic varieties and variational techniques. The purpose of this paper is to show that a uniquely defined arc-wise connected set of solutions with prescribed minimal period bifurcates from the first eigenvalue of the linearized problem. Although the set is not necessarily maximal as a connected set of solutions, and may possibly self-intersect, it has a local real analytic parametrization that contains a wave of greatest height in its closure. The authors also examine the dependence of the solution on the Froude number in relation to Stokes waves.
### MSC:
76B15 Water waves, gravity waves; dispersion and scattering, nonlinear interaction 76E99 Hydrodynamic stability
There are a number of useful resources for R programming. I pointed out quite a few in the course syllabus and in the introduction to econ 8080 slides. The notes for today’s class mainly come from Introduction to Data Science by Rafael Irizarry. I’ll cover some introductory topics that I think are most useful.
Most Important Readings: Chapters 1 (Introduction), 2 (R Basics), 3 (Programming Basics), and 5 (Importing Data)
Secondary Readings: (please read as you have time) Chapters 4 (The tidyverse), 7 (Introduction to data visualization), 8 (ggplot2)
The remaining chapters below are just in case you are particularly interested in some topic (these are likely more than you need to know for our course):
• Data visualization - Chapters 9-12
• Data wrangling - Chapter 21-27
• Github - Chapter 40
• Reproducible Research - Chapter 41
I think you can safely ignore all other chapters.
I’m not sure if it is helpful or not, but here are the notes to myself that I used to teach our two review sessions on R.
## List of useful R packages
• AER — package containing data from Applied Econometrics with R
• wooldridge — package containing data from Wooldridge’s text book
• ggplot2 — package to produce sophisticated looking plots
• dplyr — package containing tools to manipulate data
• haven — package for loading different types of data files
• plm — package for working with panel data
• fixest — another package for working with panel data
• ivreg — package for IV regressions, diagnostics, etc.
• estimatr — package that runs regressions but with standard errors that economists often like more than the default options in R
• modelsummary — package for producing nice output of more than one regression and summary statistics
Version: [csv] [RData] [dta]
If, for some reason this doesn’t work, you can use the following code to reproduce this data
firm_data <- data.frame(name=c("ABC Manufacturing", "Martin\'s Muffins", "Down Home Appliances", "Classic City Widgets", "Watkinsville Diner"),
industry=c("Manufacturing", "Food Services", "Manufacturing", "Manufacturing", "Food Services"),
county=c("Clarke", "Oconee", "Clarke", "Clarke", "Oconee"),
employees=c(531, 6, 15, 211, 25))
# Practice Questions
Note: We’ll try to do these on our own, but if you get stuck, the solutions are here
1. Create two vectors as follows
x <- seq(2,10,by=2)
y <- c(3,5,7,11,13)
Add x and y, subtract y from x, multiply x and y, and divide x by y and report your results.
2. The geometric mean of a set of numbers is an alternative measure of central tendency to the more common “arithmetic mean” (this is the mean that we are used to). For a set of $$J$$ numbers, $$x_1,x_2,\ldots,x_J$$, the geometric mean is defined as
$(x_1 \cdot x_2 \cdot \cdots \cdot x_J)^{1/J}$
Write a function called geometric_mean that takes in a vector of numbers and computes their geometric mean. Compute the geometric mean of c(10,8,13)
3. Use the lubridate package to figure out how many days there were between Jan. 1, 1981 and Jan. 10, 2022.
4. mtcars is one of the data frames that comes packaged with base R.
1. How many observations does mtcars have?
2. How many columns does mtcars have?
3. What are the names of the columns of mtcars?
4. Print only the rows of mtcars for cars that get at least 20 mpg
5. Print only the rows of mtcars that get at least 20 mpg and have at least 100 horsepower (it is in the column called hp)
6. Print only the rows of mtcars that have 6 or more cylinders (it is in the column labeled cyl) or at least 100 horsepower
7. Recover the 10th row of mtcars
8. Sort the rows of mtcars by mpg (from highest to lowest) |
Session 58 - Gamma Rays & Cosmic Rays.
Oral session, Tuesday, January 16
Salon del Rey South, Hilton
## [58.06] EGRET Observations of Radio Bright Supernova Remnants
J. A. Esposito, P. Sreekumar (USRA NASA/GSFC), S. D. Hunter (NASA/GSFC), G. Kanbach (MPE)
Data from Phase 1 through Phase 3 of EGRET observations have been analyzed for gamma-ray emission near supernova remnants (SNRs) with radio flux greater than 1 Jy at 1 GHz. Comparison of the positions of unidentified gamma-ray sources from the second EGRET catalog (Thompson et al. 1995, ApJS, in press) with fourteen SNRs near the Galactic plane, $|b| < 10^\circ$, indicates a statistically significant correlation, with a probability of chance coincidence better than $2 \times 10^{-5}$ for five of thirty-two unidentified EGRET gamma-ray sources being spatially consistent with radio-bright SNRs.
Four of the unidentified EGRET sources studied have strong spatial correlations with radio-bright SNRs associated with nearby medium-mass ($\sim 5000\ M_\odot$) molecular clouds. In these cases the mass of the molecular cloud is substantially greater than the mass of the SNR. If the gamma-ray emission is assumed to originate from cosmic rays accelerated in the SNR and interacting with the molecular cloud, then the inferred cosmic-ray density in the vicinity of the SNRs is significantly enhanced compared to the average Galactic cosmic-ray density in the Solar neighborhood (Esposito et al. 1996, ApJ, in press). This result supports the explanation of SNRs being the dominant acceleration site, and possibly the source, of Galactic cosmic rays.
Spectral analysis has been performed on two unidentified EGRET sources, 2EG J2020+4026 and 2EG J0618+2234, which are spatially correlated with $\gamma$ Cygni and IC 443, respectively. The spectral index of both sources is consistent with a cosmic-ray source spectral index of $\sim 2$. However, the spectral analysis is photon limited above 2 GeV.
The gamma-ray intensity or upper limit for all fourteen SNRs and the spectra of 2EG J2020+4026 and 2EG J0618+2234 will be presented along with an interpretation of the results.
# Questions on a problem I am running into with a Behringer Subwoofer
#### Ih2010
Joined Mar 19, 2016
6
Hi guys! I own a Behringer eurolive b1500d-pro sub that I used to use for dj'ing gigs in college. About 18-24 months back I started having troubles where it would only play music through it 20-30% of the time. It was a type of issue where if I could get it to start playing music, it would be good for the entire night. However, getting it to start was a pain in the ass and I could not develop a reliable trick - for example I would play around with it for an hour in my apartment, get it to start playing music, and not lose signal until I turned it off. Hooray! However, I would then take it to a frat house or some such and completely fail to get it to play music. For clarification, it turns on every time and I get the blue power signal - it's the playing of actual music that is spotty. Ok that's the background.
Now, about 8-12 months ago I tore into the sub, pulled out the motherboard, and started playing around with it with a multimeter I borrowed from my electronics lab. Couldn't figure it out, it seemed complex, and I know that I barely troubleshot anything - it was just that I didn't know where to begin. Now, I've dug back into the thing and I'm ready to take another crack at it after pulling it apart this morning and examining it for an hour or two.
It is my belief that a connection was shorted somewhere and that this connection is able to work some of the time, but not very often. This is why I was able to get sound to come out of it once in a great while. I recognize that this assumption has little basis in fact and is based more on personal experience with the device from close to two years ago, but it's all I really have to work with.
By posting on here I hope not to find the answer to all my problems in a single post - but learn where to direct my attention, what to start looking at closer, what resources I may need to purchase, and overall the level of difficulty of what I'm dealing with. At the end of the day I could probably take this to a repair shop and get it fixed for $100-250, but, for many reasons, I would like to avoid that. I've always had a mild curiosity surrounding electronics and have an Amazon wish list with half a dozen things for ~$75 that would enable me to start working directly on breadboards and motherboards - I just need to press the buy button when I know I can use it productively for a project (like this one!).
Anywho, hope I'm posting this in the right place. I've got part numbers, pictures, several (possibly irrelevant, but maybe not) observations - but nothing of substance to enable me to further troubleshoot/investigate.
Cheers,
Ian
#### #12
Joined Nov 30, 2010
18,210
I've got part numbers, pictures, several (possibly irrelevant, but maybe not) observations
But we don't.
How about posting some information so we aren't just guessing into the wind?
ps, an open connection, like a cracked solder joint, is more likely than a short.
#### Hypatia's Protege
Joined Mar 1, 2015
3,226
Now, about 8-12 months ago I tore into the sub, pulled out the motherboard
Web references show/describe the "Behringer eurolive b1500d-pro sub" as a 'powered subwoofer' unit -- Hence your allusion to a 'motherboard' (i.e. the main-board/back-plane of a personal computer system) would seem a non-sequitur (of sorts) -- Please elucidate!
It is my belief that a connection was shorted somewhere and that this connection is able to work some of the time, but not very often.
I agree with the previous respondent (@#12 ) -- An 'open' is more likely -- a 'short' (anywhere outside of the small signal stages), for example in the PSU, power amp or transducer itself, would likely cause other issues which, in turn, would likely present as a persistent failure (as opposed to an intermittent one) -- Moreover, the probability of said etiology is greatly enhanced in consideration of the intense vibratory conditions inherent to such a device...
Best regards
HP
#### Ih2010
Joined Mar 19, 2016
6
Mostly because I was on my phone writing this up so far and re-learning what I learned in the past. Uploading pictures now and will edit this post with more info shortly. Apologies I didn't include it in OP, 'tis a lazy Saturday afternoon for me!
#### Ih2010
Joined Mar 19, 2016
6
It would appear that I am unable to edit my previous post. Not sure if that is a restriction because I am a new account - but in either case sorry for the double-post. The album is made and I hosted on imgur, I could directly link the pics here but since I've got 14 images on it I'll just leave this here for thread cleanliness: http://imgur.com/a/u22y6
Explanation of potentially more important images:
1-3 show about a third of the board each from a top-down view
4 highlights a little mark that a friend pointed out in that it looked different from the other places on the board where these same parts are - but it may be just that it's some sort of solder/glue
5 is my basement saying hello and showing a rough size of what I am dealing with. Yes, I have anti-static gloves and none of this ever touches the carpet you are seeing
6 and 9 are closer looks at the bottom piece below the main board. This is where the power gets plugged in and a switch is pressed in to turn the device on
7 looks at the board from what would be the top if it was in its typical standing position. Gets some view of the bottom half of the board
8 also gets a bit of insight to the bottom half of the board, but from the side rather than the top. Not sure how useful 7/8 really are.
10/11 show, with flash on, more views of the middle/bottom part of the board
12 starts focusing in on the toroid (what a cool little device!!!!)
13 - Before these pictures were taken, there was some plaster/battery leakage (my friend who spent 20 minutes looking at this with me thought it was possibly battery leakage until we learned what the purpose of the device is). Now, after removing it with some isopropyl and a Q-tip, and seeing how movable the piece sticking out from the middle is, I wonder what that material (which I believe was plaster) was for. I am concerned this will turn into a problem moving forward if I can fix the initial break.
14 - A closer look at the resistor/capacitor poking out from the middle of the toroid. It doesn't look like it belongs there at all and I am confused by its existence in that place.
I would be more than happy to take more, and possible more accurate, photos to help further the inspection of this little piece. The board does say SMPSU28 on it in 2 places, so am wondering how accurate the schematics are for the class-d circuit posted on the only reply in this thread (Looks complicated enough!). Also, this thread looks very useful in that someone has a similar model amp with some problems that also uses the infamous class-d amp. My electronics never hit a mastery level and it's rusting over with no use in 2 years so I'm concerned I will not be able to pull enough potentially useful information out of that thread. Hmm. But, I am determined and do not want to give this up!
And also! In response to the last reply, yes I believe 'motherboard' is not the right terminology, however it's the closest comparison to what I am looking at that I am familiar with. If it's an open, what are some steps to diagnose where/why/how it is occurring? I would be happy to read any documentation/procedures and/or would love to dig for this info in a different subforum if that information is around somewhere. Thank you for the quick responses!
#### #12
Joined Nov 30, 2010
18,210
A massive Bass amplifier that is almost completely surface mount.
You're going to have to start with the tap-tap wiggle-wiggle looking for an intermittent connection.
If that doesn't work, this will need a schematic and signal tracing. That is not a job for an amateur.
#### crutschow
Joined Mar 14, 2008
25,250
Web references show/describe the "Behringer eurolive b1500d-pro sub" as a 'powered subwoofer' unit -- Hence your allusion to a 'motherboard' (i.e. the main-board/back-plane of a personal computer system) would seem a non-sequitur (of sorts) -- Please elucidate!
.............
Would the word "main board" instead of mother board be more appealing to your pedantic nature?
#### Ih2010
Joined Mar 19, 2016
6
If that doesn't work, this will need a schematic and signal tracing. That is not a job for an amateur.
I'm the type of person who enjoys jumping into a volcano and trying to dig my way out rather than be feared away due to being an amateur. I'm at least not a complete novice when it comes to electronics, but the more I read and learn the more I am getting nervous this will be a complete waste of my Saturday :/ Hmm.
Other observations/notes:
I tried the wiggle-wiggle approach when this problem first started occurring, on the connections at the sub for the cables that run between it and the speakers, and I vaguely recall having some success with that initially. As time went on it became more and more difficult to reproduce and eventually it was impossible. I am unsure what I could have wiggled to make that happen. Also, and this was more common, I would leave the sub plugged into my system while it was not working, and anytime between 15-90 minutes after I started playing steady music the subwoofer would 'pop on' and then work nominally until I killed the power to the sub. I definitely noticed some odd patterns when I first began having this problem but, alas, it was quite some time ago and many of those patterns deteriorated into the device simply not working at all.
Other random tidbits that I've found:
I have 2 IRS20955S MOSFETS on the board - found this data sheet that explains a lot about them: http://www.irf.com/product-info/datasheets/data/irs20955pbf.pdf
I also found a general tutorial on class d amps from irf: http://www.irf.com/product-info/audio/classdtutorial.pdf
My problem with both of these is that I am not an electrical engineer, merely a network engineer, and I am hoping to fix this unit without becoming a full-blown electrical engineer (yet )
Edit to #12 about it being a surface mount: I have not tried this yet for fear of breaking something and no gameplan on what to do if I did it, but I do believe I can dismount both boards from the heatsink it is attached to in order to get at the back. But then my question becomes: what do I gain with access to the back?
Also, as another addendum, I do currently have one of these multimeters
#### #12
Joined Nov 30, 2010
18,210
I'm the type of person who enjoys jumping into a volcano and trying to dig my way out rather than be feared away due to being an amateur.
That's what I said when I took my fathers pocket watch apart.
I am getting nervous this will be a complete waste of my Saturday
You are thinking about doing this in one day?
You're going to need some luck.
We just had a guy spend 2 weeks on a kitchen blender that only had 22 parts.
#### Hypatia's Protege
Joined Mar 1, 2015
3,226
Would the word "main board" instead of mother board be more appealing to your pedantic nature?
Inasmuch as 'mother board' is jargon exclusive to IT hardware, 'main board' is a definite improvement all around -- Still...
I apologize that my 'call for clarity' was perceived as pedantry -- I'm certain we may agree that ambiguity is best avoided in any event but especially on international fora?
Best regards
HP
#### #12
Joined Nov 30, 2010
18,210
Anybody who works on these or looks at the photos knows that there is only one board in there. The owner can call it the elephant board for all I care, and there will be no confusion as to which of the one boards he is referring.
#### Hypatia's Protege
Joined Mar 1, 2015
3,226
@Ih2010
FWIW -- As a preliminary, please try this:
Power the unit on, apply a signal to the input, then with the 'blunt end' of an all plastic ballpoint pen (or similar non-conductive implement) carefully apply pressure to various points on the board (being very, very careful to avoid mechanical damage to the smt components)...
Best regards and good luck!
HP
#### Hypatia's Protege
Joined Mar 1, 2015
3,226
Anybody who works on these or looks at the photos knows that there is only one board in there. The owner can call it the elephant board for all I care, and there will be no confusion as to which of the one boards he is referring.
Inasmuch as both you and the TS are native English speakers (or, at very least, masters of said language) such usage indeed poses no problem! That said, these fora are frequented by many 'ESLers', etc, who may find such usage confusing --- Moreover, the original post presented no images -- It is not unreasonable to assume that a piece of audio equipment may contain a dedicated/integrated 'PC' as do certain (musical) synthesizers --- FWIW It was not my intent to 'correct' the TS -- My remark was a genuine request for clarification
Best regards
HP
#### Kermit2
Joined Feb 5, 2010
4,162
I clearly see that your post is written in english HP,
But,
What?
#### Ih2010
Joined Mar 19, 2016
6
@Ih2010
FWIW -- As a preliminary, please try this:
Power the unit on, apply a signal to the input, then with the 'blunt end' of an all plastic ballpoint pen (or similar non-conductive implement) carefully apply pressure to various points on the board (being very, very careful to avoid mechanical damage to the smt components)...
Best regards and good luck!
HP
This was an excellent place to start, although it did not solve my problems. After 10-15 minutes of probing with no luck I thought to myself "I hope I didn't grab my broken xlr-xlr for the connection between my sub and speaker"...popped it off and sure enough it was the broken one. Damn you Murphy!
Reversed the connections so that the audio 'input' from my 3.5mm to XLR was in input A of the sub. I then hooked up an xlr-xlr from both the output-A and throughput-A connections to the speaker (which serves as a signal passthrough) and I got audio out of the speaker and an occasional 'hiccup' from the sub every 15-20 seconds. This 'hiccup' is consistent and happens at a regular interval, not really sure how else to describe it though. It's almost like the sub wants to cut on, but then it gives up because it's unable to. I also find it interesting that the passthrough works in every way on both A and B - there are 4 possible connections (when signal gets sent to sub and then to speaker) to get sound based on how I am hooking it up. This makes me believe the topmost part of the board (which handles signal passthrough and sound modulation I believe) is not having troubles, but something about the actual amplifier is.
I think the hiccup is important. As consistent as it is and as often as it happens I think my answer may lie in tracking down the root cause of the 'hiccup'.
#### Hypatia's Protege
Joined Mar 1, 2015
3,226
I clearly see that your post is written in english HP,
But,
What?
Touché! Typo corrected!
#### Hypatia's Protege
Joined Mar 1, 2015
3,226
I think the hiccup is important. As consistent as it is and as often as it happens I think my answer may lie in tracking down the root cause of the 'hiccup'.
That would appear a reasonable assumption!
FWIW: The described symptom smacks of a 'decoupling issue' (perhaps indicative of a failing cap or resistor) --- Some 'voltage' readings of the PSU rails at various stages may be useful -- Even if you do not know the correct values, you may 'hunt' the difficulty merely by 'looking' for EMF variation in 'sympathy' with the period of the 'hiccough' cycle...
Best regards -- and best of luck!
HP
#### #12
Joined Nov 30, 2010
18,210
It is entirely typical for a power supply to keep checking to see if it is safe to come on. (Self-protecting power supplies is one of my specialties.)
That hiccup is probably a symptom of a shorted power stage. The power supply might not be broken at all.
#### Hypatia's Protege
Joined Mar 1, 2015
3,226
@#12 makes an excellent point! -- Please bear in mind, however, that should the 'hiccough' symptom indeed owe to PSU cycling, such represents 'advancement' of the original difficulty or a 'new' malfunction, perhaps due to an overcurrent condition inadvertently introduced during disassembly? - Please inspect leads, connectors, etc against the latter possibility... Note also that audio power amplifiers not infrequently feature similarly acting mismatch (i.e. over/under load) protection...
Best regards and again good luck!
#### Cartel
Joined Dec 1, 2016
3
Hello, I am working on this Behringer model too.
Same thing, a thumping pop every 5-10 seconds.
I saw right away some garbage capacitors Behringer used, decon and ksd... junk.
All the 470 uF caps in one section of the 15 V supply were bulged and tested at 50 pF.
I replaced them all and now I still hear nothing, no thump, but a screeching after a bit.
Voltages all seem good, and the big diodes and IRFB4227s test good.
I have a schematic of the 1800 that's pretty much the same.
#### Attachments
Eight attachments (313.7 KB – 4.8 MB).
1)
A contractor employed 30 men to do a piece of work in 38 days. After 25 days, he employed 5 more men and the work was finished one day earlier. How many days would he have been behind schedule if he had not employed the additional men?
A) $1$
B) $1\frac{1}{4}$
C) $1\frac{1}{2}$
D) $1\frac{3}{4}$
After 25 days, the 35 men (30 + 5) finished the remaining work in 12 days, since the whole job took $38 - 1 = 37$ days; the remaining work therefore equals $35 \times 12$ man-days. $\therefore$ 30 men can do it in $\frac{35\times 12}{30} = 14$ days, so the job would have finished on day $25 + 14 = 39$, which is 1 day behind schedule. Answer: A.
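A quick numeric check of this reasoning, as a small Python sketch (the figures are taken directly from the problem statement):

```python
planned_men, planned_days = 30, 38

# The job finished one day early, so the 35 men (30 + 5) worked the
# last 37 - 25 = 12 days on the remaining work.
remaining_work = (planned_men + 5) * (planned_days - 1 - 25)  # 35 * 12 = 420 man-days

# Without the extra men, 30 men would need this many days for the same work:
extra_days = remaining_work / planned_men                     # 420 / 30 = 14 days

days_behind = (25 + extra_days) - planned_days                # 39 - 38 = 1
print(extra_days, days_behind)                                # 14.0 1.0 -> option A
```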
• Influence of substrate temperature on certain physical properties and antibacterial activity of nanocrystalline Ag-doped In$_2$O$_3$ thin films
• # Fulltext
https://www.ias.ac.in/article/fulltext/pram/087/06/0100
• # Keywords
AIO thin films; spray pyrolysis; biological application; electrical properties; scanning electron microscopy;
• # Abstract
Nanocrystalline Ag-doped indium oxide (AIO) thin films were fabricated for the first time by employing a much simplified spray pyrolysis technique at different substrate temperatures (300, 350, 400 and 450$\deg$C). The deposited films were subjected to various characterization studies to explore the influence of the deposition temperature on their physical and antibacterial properties. XRD results showed that all the samples exhibited preferential orientation along the (2 2 2) plane. The variation in the crystalline size with increasing substrate temperature was explained on the basis of the Zener pinning effect. The electrical sheet resistance ($R_{sh}$) was found to decrease sharply with increasing substrate temperature, attained a minimum value (62 $\Omega$/$\square$) at 400$\deg$C and then started increasing for higher deposition temperatures. Further, PL emission spectra of the samples in the visible range ascertained their potential applicability in nanoscale optoelectronic devices. From the studies, it was found that at a deposition temperature of 400$\deg$C one could expect better antibacterial efficiency against {\it Escherichia coli}. The influence of the shape and size of the AIO nanograins on the antibacterial activity was analysed using scanning electron microscopy images.
• # Author Affiliations
1. PG and Research Department of Physics, A.V.V.M. Sri Pushpam College, Poondi 613 503, Thanjavur, India
# PSYCH 525 Week 5 Individual Assignment Personality Assessment Instrument or Inventory Critique (UOP Course)
Myers-Briggs Type Indicator
PSYCH525
Myers Briggs Type Indicator
Even on the Internet, it is common to come across opportunities to take a variety of online personality tests that promise to tell you more about yourself. Through the ages, people have been preoccupied with understanding themselves and those around them; the Internet has simply taken these efforts at understanding to a whole new level. Serious versions of online personality tests are available on sites dedicated to careers and job searching, relationships, parenting, and education (Plante, 2005).
However, personality assessment instruments have been around for much longer than Internet personality tests. Professionals have used them to understand, assist, and diagnose their clients for many years. These assessments...
PSYCH 525
Filename: psych-525-week-5-individual-assignment-personality-assessment-instrument-or-inventory-critique-uop-course-73.docx
Filesize: < 2 MB
Print Length: 10 Pages/Slides
Words: 203
## Abstract
Aims
Arrhythmogenic right ventricular cardiomyopathy (ARVC) is a rare genetic condition caused predominantly by mutations within desmosomal genes. The form ARVC-5 was recently identified on the island of Newfoundland and is caused by the fully penetrant missense mutation p.S358L in TMEM43. Although TMEM43-p.S358L mutation carriers were also found in the USA, Germany, and Denmark, the genetic relationship between North American and European patients and the disease mechanism of this mutation remained to be clarified.
Methods and results
We screened 22 unrelated ARVC patients without mutations in desmosomal genes and identified the TMEM43-p.S358L mutation in a German ARVC family. We excluded TMEM43-p.S358L in 22 unrelated patients with dilated cardiomyopathy. The German family shares a common haplotype with those from Newfoundland, USA, and Denmark, suggesting that the mutation originated from a common founder. Examination of 40 control chromosomes revealed an estimated age of 1300–1500 years for the mutation, which proves the European origin of the Newfoundland mutation. Skin fibroblasts from a female and two male mutation carriers were analysed in cell culture using atomic force microscopy and revealed that the cell nuclei exhibit an increased stiffness compared with TMEM43 wild-type controls.
Conclusion
The German family is not affected by a de novo TMEM43 mutation. It is therefore expected that an unknown number of European families may be affected by the TMEM43-p.S358L founder mutation. Due to its deleterious clinical phenotype, this mutation should be checked in any case of ARVC-related genotyping. It appears that the increased stiffness of the cell nucleus might be related to the massive loss of cardiomyocytes, which is typically found in ventricles of ARVC hearts.
## Introduction
Arrhythmogenic right ventricular cardiomyopathy (ARVC) is associated with myocardial fibrosis, fibrofatty replacement, and a progressive loss of predominantly right ventricular tissue, although biventricular disease involvement is not uncommon. Ventricular arrhythmias may cause sudden cardiac death (SCD) in ARVC patients. Disease-causing mutations in genes encoding desmosomal proteins have been reported in more than 40–50% of ARVC patients.1,2
A rare form of ARVC is caused by a missense mutation within the gene of transmembrane protein 43 (TMEM43) on chromosome 3p25 (ARVC-5). In a total of 15 Canadian families, a heterozygous amino acid substitution (p.S358L) in TMEM43 fully cosegregated with autosomal dominant ARVC and was proposed to be a founder mutation on the island of Newfoundland (Canada).3 TMEM43-p.S358L is a fully penetrant mutation. Although the mutation is not located on the heterosomes, the phenotype is strikingly gender specific.4 Males have a lower life expectancy3,4 than female carriers and carry a high risk for SCD. The frequency of ARVC-5 in central Europe is currently unknown. Recently, a first European family with this mutation was independently identified in Denmark5 and almost at the same time in Germany.6 However, the Danish family did not have male carriers of the mutation.5 A de novo mutation was also recently found in a Canadian cohort of apparently non-Newfoundland individuals.7
The gene TMEM43 codes for the nuclear protein LUMA, which appears to form a complex with the nuclear lamina proteins lamin A/C and emerin,8 whose respective genes are known to underlie Emery Dreifuss Muscular Dystrophy (EDMD). Mutations in LMNA coding for lamin A/C were also identified in patients with arrhythmogenic forms of dilated cardiomyopathy (DCM) and ARVC.9 LUMA is a ubiquitously expressed nuclear protein and was predicted to carry four transmembrane domains, which localize the protein to the inner nuclear membrane. Consequently, other TMEM43 variants were recently found in patients with EDMD.10
In this study, we describe the screening of the entire gene TMEM43 for ARVC-associated mutations in 22 previously genetically characterized, unrelated ARVC index patients1 and the analysis of the TMEM43-p.S358L mutation in an additional 22 DCM patients. We describe the clinical course and history of a German family who tested positive for the TMEM43 c.1073C>T, p.S358L mutation (Table 1). Furthermore, we provide evidence that the Newfoundland TMEM43-p.S358L mutation was imported by immigrants from continental Europe. Finally, we found that in skin fibroblasts of three p.S358L mutation carriers, the nuclear nanomechanics is substantially affected.
Table 1
Clinical classification 21 of TMEM43-p.S358L mutation carriers
Clinical parameters TF criterion Pat. III/01 Pat. III/03 Pat. IV/01 Pat. IV/03 Pat. IV/05 Pat. IV/6 Pat. IV/8 Pat. IV/10 Pat. V/01 Pat. V/02 Pat. V/03
Gender
Age (years) 74 68 37* 32* 46 28* 35 33 15 12
Family history
Autopsy – – – – – – – –
SCD – – – – – – – –
Pathogenic mutation (TMEM43-p.S358L) Major ± ± ± ± ± ± ± ± ± ± ±
Depolarization abnormalities
TAD ≥55 ms Minor – n.a. – – – n.a. n.a. n.a. – – –
Epsilon waves Major – n.a. – – – n.a. n.a. n.a. – – –
Late potentials Minor – n.a. – – – n.a. n.a. n.a. – – –
Repolarization abnormalities
Inverted T in V1-3 Major – n.a. – – – n.a. n.a. n.a. – – –
Inverted T in V4-6 Minor – n.a. – – – n.a. n.a. n.a. – – –
Arrhythmias
VES >500/24 h Minor + (ICD) n.a. + (ICD) n.a. + (ICD) n.a. – – –
LBBB VT Minor – n.a. n.a. n.a. n.a. n.a. n.a. n.a. – – –
LBBB VT sup. Major – n.a. n.a. n.a. n.a. n.a. n.a. n.a. – – –
RBBB VT – n.a. n.a. n.a. n.a. n.a. n.a. n.a. – – –
Structural characteristics
Structural major Major n.a. – n.a. – – –
RV WMA n.a. n.a. n.a. – n.a. n.a. n.a. – – –
RV DE n.a. n.a. n.a. – n.a. n.a. n.a. – – –
LV WMA – n.a. n.a. n.a. n.a. n.a. n.a. – – –
LV DE – n.a. n.a. n.a. – n.a. n.a. n.a. – – –
Functional characteristics
RVEF ≤ 45% – n.a. n.a. n.a. – n.a. n.a. – – –
LVEF ≤ 50% – n.a. n.a. n.a. – n.a. n.a. n.a. – – –
Ventricular involvement
RV involvement n.a. – n.a. n.a. – – –
LV involvement n.a. – – n.a. n.a. – – –
No. of major TF criteria
No. minor TF criteria
Concluding ARVC diagnosis21 Definite $Borderline Definite Definite $Borderline $Definite $Definite $Possible Possible Possible Possible
F, female; ICD, implanted cardioverter defibrillator; LBBB VT, ventricular tachycardia with left bundle branch block morphology; LV DE, LV delayed enhancement; LV WMA, left ventricular wall motion abnormalities (a/dyskinaesia); M, male; n.a., not available; RBBB VT, ventricular tachycardia with right bundle branch block morphology; Structural major, structural RV abnormalities accounting for a major criterion; RV WMA, right ventricular wall motion abnormalities (a/dyskinaesia); RV DE, RV delayed enhancement; SCD, sudden cardiac death (<35 years); sup. , superior axis; TAD, terminal activation duration; TF, Task force; VES, ventricular extra systoles; *, at death; \$, classification biased by incomplete clinical data; −, absence or +, presence of clinical parameter; ±, heterozygous genotype. Tissue characterization was not available, since cardiac tissue was not preserved during autopsy for histomorphometric analyses.
## Methods
### Patient cohort
We studied 22 unrelated ARVC index patients already characterized for the lack of mutations in the genes DSC2, DSG2, PKP2, JUP, DSP, and DES.1 About 80% were inpatients from the Heart and Diabetes Center North Rhine Westfalia, Bad Oeynhausen, Germany (HDZ-NRW; www.hdz-nrw.de), whereas the remaining patients were ambulatory patients referred to for molecular genetic analysis to our centre. The clinical diagnosis was classified according to the revised task force criteria for ARVC.11 All coding exonic and adjacent intronic sequences of the TMEM43 gene were screened.
In addition, we screened 22 patients with DCM recruited from the Heart- & Diabetes Center NRW, Bad Oeynhausen, Germany, for the presence of c.1073C>T, p.S358L in TMEM43 using a sequence-specific TaqMan SNP genotyping assay. Dilated cardiomyopathy patients were included in the study, since desmosomal gene variants associated with ARVC were recently also found in 5% of DCM patients,12 indicating that the phenotype in cardiomyopathies is not strictly associated with a particular gene.9
### Genetic analysis
Genomic DNA was isolated from peripheral blood samples according to the standard protocols. DNA of blood samples was extracted using standard techniques (Illustra™ blood genomic Prep Mini Spin Kit, GE Healthcare, Buckinghamshire, UK). The genomic sequence used to design polymerase chain reaction (PCR) primers was obtained from GenBank (www.ncbi.nih.gov/projects/genome/guide/human; accession-No.: NM_024334). Polymerase chain reaction amplification of genomic DNA was carried out following standard protocols. Mutation screening was done by denaturing high-performance liquid chromatography using a DNASep column with a WAVE DNA Fragment Analysis System (Transgenomic Inc., San Jose, CA, USA) as described previously.13 The analytical temperature(s) and primers for each exon are available from the authors upon request. Exons with aberrant temperature-modulated heteroduplex profiles were sequenced in both directions on an ABI 310 genetic analyzer using the BigDye Terminator v3.1 cycle sequencing kit (Applied Biosystems, Foster City, CA, USA). Sequencing electropherograms were inspected manually and analysed with the software Variant Reporter (Applied Biosystems).
To evaluate a potential founder effect, 16 microsatellite markers spanning TMEM43 on chromosome 3p25 were selected for haplotype analysis. Primers and conditions are available upon request. In addition to the family described here, haplotype analysis was also performed in three additional families from Denmark, three index patients from the original Newfoundland ARVC-5 founder family, and a patient from the USA. The different families were not known to be related. In families DK1, DK2, DK3, and GER1, DNA of relatives of the index patients facilitated the verification of the phase and the reconstruction of haplotypes.
To estimate the age of the mutation, the linkage disequilibrium between the p.S358L mutation and each of the closest recombinant microsatellite markers was calculated and the recombination fraction from the distances between the mutation and these microsatellite markers was determined. This enabled the calculation of the number of generations since the mutation had occurred.14 The genetic distances (cM) were inferred from the deCODE genetic map.15
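The text cites the approach of ref. 14 without spelling out the estimator. One common moment-based way to date a founder mutation assumes that the fraction of mutation-bearing chromosomes still carrying the ancestral allele at a flanking marker decays as $(1-\theta)^g$ over $g$ generations, where $\theta$ is the recombination fraction derived from the genetic distance. The sketch below only illustrates that relation; the input numbers are hypothetical placeholders, not values from this study:

```python
import math

def generations_since_founder(p_ancestral: float, theta: float) -> float:
    """Moment estimate of the number of generations g since a founder mutation
    arose, assuming the share of disease chromosomes that still carry the
    ancestral marker allele decays as (1 - theta)**g per meiosis."""
    return math.log(p_ancestral) / math.log(1.0 - theta)

# Hypothetical example: 70% of mutation chromosomes retain the ancestral allele
# at a marker ~0.65 cM away (theta ~ 0.0065 per meiosis).
g = generations_since_founder(p_ancestral=0.70, theta=0.0065)
print(f"~{g:.0f} generations, i.e. ~{25 * g:.0f} years at 25 years per generation")
```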
### Cell cultures of dermal fibroblasts
Dermal fibroblasts from individual IV/5 (Figure 1), from 2 Danish male TMEM43-p.S358L carriers, and from TMEM43 wild-type controls were isolated from skin biopsy as previously reported.16 Skin biopsies of control individuals were isolated from two unrelated healthy volunteers, three subjects with inherited DCM due to the mutations LMNA p.K219T, PKP2 p.H679Y, and PLN p.R14del, and two patients with the non-cardiac diseases osteogenesis imperfecta and chronic granulomatosis (for clinical data see Supplementary material online, Table S1). All control cell cultures were wild type for the TMEM43 mutation.
Figure 1
(A) Family tree of the German family with the mutation TMEM43-p.S358L. All male patients of generation IV died due to sudden cardiac death (SCD) despite being under medical control without knowledge of the genetic background. Hatched symbols mark individuals suspected to be diseased. Filled symbols represent the cardiac phenotype. The heterozygous genotype TMEM43-p.S358L is given as (+/−). (B) Sanger sequencing chromatogram of index patient III/01 showing the heterozygous mutation TMEM43 c.1073C>T, p.S358L.
The culture medium was DMEM (Life Technologies, Darmstadt, Germany) supplemented with 10% foetal bovine serum (PAA Laboratories, Cölbe, Germany), 4 mmol/L glutamine, 4.5 g/L glucose, 50 µM 2-mercaptoethanol (Life Technologies, Darmstadt, Germany), 1% (w/v) non-essential amino acids (Life Technologies), 1% (w/v) penicillin (100 U/mL) and streptomycin (100 µg/mL), and 25 mM HEPES.
### Analysis of nuclear nanomechanics
Local force indentation curves of skin fibroblasts were acquired with an MFP3D (Asylum Research, Goleta, CA, USA) atomic force microscope (AFM), which was mounted on an inverted optical microscope (Olympus IX 71, ×40 phase contrast objective) and allowed precise alignment of the cantilever tip relative to the probed cell nucleus of interest (Figure 3A) as well as observation of cell motility (for details of the methods, see Supplementary material online).
We observed good agreement between predicted and experimental data in the analysis region, which validates the applicability of the Hertz elasticity model (Supplementary material online, Figure S5). All indentation curves were recorded and analysed in a blinded fashion with respect to the genotype of the cell cultures.
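Figure 3C refers to Sneddon's equation for a conical indenter, $F = \frac{2}{\pi}\,\frac{E}{1-\nu^{2}}\tan(\alpha)\,\delta^{2}$, to extract the elastic modulus $E$ from a force-indentation curve. Below is a minimal fitting sketch under assumed parameters; the tip half-opening angle, Poisson ratio, and the synthetic data are illustrative choices, not the instrument settings used in the study:

```python
import numpy as np
from scipy.optimize import curve_fit

ALPHA = np.deg2rad(35.0)   # assumed half-opening angle of the conical AFM tip
NU = 0.5                   # assumed Poisson ratio of the cell

def sneddon_cone(delta_m, E_pa):
    """Sneddon relation for a conical tip: F = (2/pi) * E/(1-nu^2) * tan(alpha) * delta^2."""
    return (2.0 / np.pi) * (E_pa / (1.0 - NU**2)) * np.tan(ALPHA) * delta_m**2

# Synthetic "measured" curve: a 2 kPa nucleus indented up to 1 um, with noise.
rng = np.random.default_rng(1)
delta = np.linspace(0.0, 1e-6, 200)                      # indentation depth in metres
force = sneddon_cone(delta, 2000.0) + rng.normal(0.0, 2e-11, delta.size)

(E_fit,), _ = curve_fit(sneddon_cone, delta, force, p0=[1000.0])
print(f"fitted elastic modulus: {E_fit / 1000:.2f} kPa")
```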
## Results
### Genetic analysis
In our cohort, we identified the mutation TMEM43-p.S358L, published before in Newfoundland,3 in a single German family with 20 members including 3 male SCD victims (Figure 1). After genetic counselling, the family decided to test all available members. Predictive testing was done in accordance with the recommendations of the ESC working group and German law.17 The heterozygous under-age patients V/01-03 without a cardiac phenotype are being followed on a regular basis using ECG, Holter ECG, echocardiography including tissue Doppler and speckle tracking, as well as magnetic resonance imaging. Until now, no abnormalities have been detected in children and adolescents below 18 years in this family. In these individuals, disease classification is based on the presence of the mutation only (Table 1). DNA sequencing of the coding region including the exon–intron boundaries of the genes DSP, PKP2, DES, DSG2, DSC2, and JUP revealed the heterozygous variant DSP c.8531G>T, p.G2844V in patients III/01, III/03, IV/05, IV/08, and IV/09, but not in IV/11 and especially not in the SCD victims IV/01 and IV/06.
Thus, this variant did not cosegregate with disease in this family and was therefore regarded as not disease causing. The index patient III/01 was additionally screened for variants in the candidate genes PLN,18 LMNA,9 CTNNA3,19 CDH2, and CTNNB1,20 but no relevant pathologic sequence variants related to the familial cardiomyopathy were found (Supplementary material online, Table S2). The genetic screening for the known desmosomal ARVC mutations and further candidate genes therefore remained negative.
Arrhythmogenic right ventricular cardiomyopathy was confirmed in all three male SCD victims at autopsy. Two of them died during physical exercise. The female family members IV/05, IV/08, and IV/09 presented with ventricular tachyarrhythmias leading to ICD implantations in IV/05 and IV/08. The family reported that II/02 had frequent episodes of arrhythmias and died from SCD aged 78 years. The grandfather (I/01) of the index patient (III/01) died as well from SCD (see arrow in Figure 1). In addition, patient III/03 received an ICD as a consequence of a near-fatal syncope. The TMEM43 mutation cosegregated with the disease in the family and was confirmed in paraffin-embedded tissue samples of two male relatives (IV/01 and IV/06), who died by SCD (aged 37 and 32 years, respectively). No samples for genotyping were available from autopsy for individual IV/03. However, a transthoracic echocardiogram performed 9 months before the death of IV/03 revealed a mildly enlarged right ventricle in the absence of detectable aneurysms (right ventricular end-diastolic diameter, RVEDD = 33 mm), while LV dimensions were normal (left ventricular end-diastolic diameter, LVEDD = 55 mm). During examination, the patient showed frequent ventricular ectopic beats. During autopsy, fibrofatty infiltrations in the right ventricular free wall were detected. His brother (IV/01) died suddenly without antecedent cardiac symptoms, while his sister (IV/5) had frequent episodes of tachyarrhythmias and finally received an ICD. However, on MRI examination she presented with a right ventricle showing dyskinaesia of the posterior wall and mild systolic dysfunction. In general, the female mutation carriers of the family had no signs of heart failure and expressed a less severe phenotype than male carriers. ECGs of the family members III/01, IV/01, IV/03, and IV/05 are available as Supplementary material online. We found that the QRS times of individuals IV/03 and IV/05 were in the upper normal range (∼100 ms), whereas the QT interval was normal in all individuals tested (compare also Table 1 for clinical ARVC classification21 of the mutation carriers).
Of note, the phenotype of the family perfectly recapitulated the gender-specific severity of the disease with an increased risk for SCD in male mutation carriers, as reported before.3,4 Interestingly, mutation carriers of the German TMEM43 family revealed, in addition to the cardiac disease, a phenotype of idiopathic lipodystrophy. None of the patients in the German family had a dermatological phenotype. The allele frequency of the p.S358L mutation in our patient cohort was calculated to be 1.7%. In addition, we did not find the mutation in any DCM patient. Due to the lack of cosegregation, the DSP variant was not considered as pathogenic. We also did not find any evidence of digenic inheritance or a modifier effect of this variant.
### Haplotype analysis
We compared the haplotypes of the patients III/01 and IV/05 (Figure 1; family GER1) to those of the index patients from Denmark (DK1-3), the USA, and Canada (Newfoundland; NFL1-3). The haplotype analysis in eight different TMEM43-p.S358L mutation carriers revealed that all index patients shared a 1 Mb region around TMEM43, with larger shared haplotypes among specific populations, e.g. in patients from Denmark/Germany or in the different Newfoundland index patients. These data suggest a shared common haplotype indicative of an ancient founder mutation (Figure 2). Interestingly, a member of the Danish family (DK1) left Denmark for Canada in the early nineteenth century.
Figure 2
(A) Shared haplotype surrounding the gene TMEM43 in p.S358L mutation carrying patients. The table shows constructed shared haplotypes among different families. In families DK1, DK2, DK3, and GER1 (indicated with *), additional affected family members were available to reconstruct the phase of the haplotype. In gray are depicted the shared markers between different families. In light blue are marked specific markers present in the Danish population, in yellow specific markers in the German and US patients, and in green the markers specific for the Newfoundland population. In patient DK2, a cross-over may have occurred. (B) Schematic map of known TMEM43 variants associated with Emery Dreifuss Muscular Dystrophy (EDMD) or arrhythmogenic right ventricular cardiomyopathy (ARVC). Variants with unknown significance33 were given in brackets. The domain structure was derived from ref. 8.
### Age of the mutation
A shared haplotype for four markers in a 0.7 Mb (1.3 cM) region surrounding TMEM43 was found in all p.S358L mutation carriers. The linkage disequilibrium between the p.S358L mutation and recombinant markers D3S2385 and D3S3613 was calculated. This revealed that the p.S358L mutation occurred between 52 and 64 generations ago. Allowing 25 years per generation, the age of the haplotype containing the p.S358L mutation is therefore estimated to be between 1300 and 1600 years (corresponding to the years 400–700 AD).
### Cell nucleus nanomechanics of dermal fibroblasts
LUMA was found to be a nuclear envelope protein8,22 (see also Supplementary material online, Figures S3 and S4), which appears to be expressed ubiquitously. For that reason, we investigated fibroblasts from a skin biopsy of patient IV/05 and two Danish male TMEM43-p.S358L carriers using AFM. We analysed the expression of the mRNA and the protein coded by TMEM43 by reverse transcription polymerase chain reaction (RT-PCR), immunoblotting and -staining, respectively (Supplementary material online, Figures S2 and S3). To compare the relative expression of LUMA in dermal fibroblasts and the myocardium, we analysed by real-time RT-PCR the mRNA expression levels (Supplementary material online, Figure S1A and B). In addition, we stained the fibroblast cell cultures by two different commercially available antibodies against LUMA (Supplementary material online, Figure S3). In summary, there is evidence that LUMA is expressed in human skin fibroblasts on a level comparable with that of the human myocardium.
We isolated dermal fibroblasts and characterized the nuclear nanomechanical resistance and stiffness by AFM force indentation experiments in cell culture. We compared the cell culture data from three heterozygous TMEM43-p.S358L carriers to fibroblasts from healthy controls or from patients with different non-cardiac diseases. As a further control, we also tested fibroblasts from patients with DCM (due to LMNA-p.K219T, due to PLN-p.R14del, and related to the novel variant PKP2-p.H679Y). All control cells were confirmed to be wild type for TMEM43.
In order to determine the stiffness of the cell nuclei, we acquired more than 12 000 force indentation curves (Figure 3B and C) for each patient (30 different cells with 400 indentation curves each) and calculated the elastic moduli thereof. These elastic data for each cell were plotted in a histogram (Figure 3D). We found that the elasticity data from single control cells showed a characteristic Gaussian shape distribution with a peak value of 1–3 kPa (elastic modulus) and a comparably small standard deviation of σ < 1 kPa (Figure 3D).
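To make the per-cell summary concrete (400 elastic moduli per cell, binned into a probability-normalized histogram and characterized by its peak and spread), here is a small sketch with simulated values; the 2 kPa mean and 0.5 kPa standard deviation merely stand in for the reported control range and are not measured data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated elastic moduli (kPa) for one control-like cell: 400 indentation points.
E_cell = rng.normal(loc=2.0, scale=0.5, size=400)

# Histogram normalized to an overall probability of 1, as in Figure 3D.
counts, edges = np.histogram(E_cell, bins=60, range=(0.0, 15.0))
prob = counts / counts.sum()

peak = 0.5 * (edges[np.argmax(prob)] + edges[np.argmax(prob) + 1])
print(f"peak ~{peak:.1f} kPa, sigma ~{E_cell.std(ddof=1):.2f} kPa")
```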
Figure 3
Measurement of nuclear stiffness by atomic force microscopy (AFM). (A) Schematic view of a cell indentation experiment using AFM. The cell is indented by the AFM cantilever and is deflected due to the elastic mechanic response of the cell. The cantilever movement z is equal to the sum of indentation δ and deflection d (z = δ + d), whereas on an inflexible surface (not shown here) the cantilever deflection d corresponds to the cantilever movement z (z = d). Inset: Light microscopy of a single skin fibroblast and a triangular AFM cantilever. (B) Force distance curves (F(z), dashed) acquired on substrates with different stiffness: control cell nucleus (blue), TMEM43 p.S358L nucleus (red), and bottom of cell culture petri dish (black). The cantilever is approached (from left) and touches the sample surface at zero. Further approaching of the cantilever leads to both a deflection of the cantilever and an indentation of the sample. The sample indentation can be estimated by subtracting the deflection from the cantilever travel (F(z)→F(δ)). The resulting force indentation curves (F(δ), solid lines) allow the direct read out of the sample deformation. (C) Experimental cell indentation data (open circles) in a log–log plot is approximated by Sneddon's equation (solid lines) to estimate the elastic modulus E. (D) Representative elasticity histograms derived from an indentation scan of a single cell. Each histogram is normalized to an overall probability of 1. The elastic properties of the skin fibroblast of TMEM43-p.S358L patient IV/5 (red) differs remarkably from those of a healthy control (blue) or from dilated cardiomyopathy due to the mutation LMNA-p.K219T (green). (E) Colour-coded histograms of the elastic modulus from different patients. Each distribution is a cumulative of at least 30 cell nuclei containing 400 indentation curves each. The maximum of every histogram is set to 1 for better visibility. Non-cardiomyopathy controls: (1) healthy control A, (2) healthy control B, (3) chronic granulomatosis, and (4) osteogenesis imperfecta. Skin fibroblasts from cardiomyopathy patients: (5) DCM (mutation PLN-p.R14del), (6) DCM (mutation LMNA-p.K219T) (7) DCM (homozygous mutation PKP2- p.H679Y), (8) ARVC-5 (mutation TMEM43-p.S358L, female), and (9–10) ARVC-5 (mutation TMEM43-p.S358L, males).
In full contrast, the nuclear stiffness and elasticity of cell nuclei derived from patients with TMEM43-p.S358L differed significantly from those of the controls (Figure 3A–E). Our data show that the nuclei in TMEM43-p.S358L cells exhibit a considerably higher average stiffness with an increased elastic modulus (E ≈ 10 kPa). Moreover, the scattering of the stiffness data in TMEM43-p.S358L cells is much broader than that of the wild-type controls. This difference in nanomechanical properties was observed throughout the whole analysis, independent of the gender of the mutation carrier (Figure 3E): all wild-type control nuclei were significantly softer, exhibiting a considerably narrower distribution of the elastic moduli (Figure 3D and E).
As a consequence, the mutation TMEM43-p.S358L heavily affects the nanomechanical properties of the cell nucleus. Thus, nuclei bearing the mutant LUMA protein are substantially stiffer than nuclei from the control group. Of note, in very rare cases (<5%), significantly stiffer cell nuclei were also observed in control cells (data not shown). However, the shape and the standard deviation of these distributions remained unchanged and are attributed to mechanical phenomena related to cell division and proliferation.23
Furthermore, and as an additional control, we imaged the orientation of the nucleosomes before and after an indentation scan. In this way, we could verify that there were no signs of nucleosomic displacement during the experiments (see Supplementary material online, Figure S6).
## Discussion
Arrhythmogenic right ventricular cardiomyopathy is a rare cardiomyopathy24 affecting preferentially the right or both ventricles and is regarded to be a genetic trait in the majority of cases. Variants in genes coding for desmosomal proteins are frequently found to be associated with this cardiomyopathy.24–27 However, for most variants the molecular disease mechanism is not yet known,27 and the pathogenic role of some missense variants is still a matter of debate.28,29 Moreover, variants in genes coding for proteins of the cardiac desmosome are also found in patients with different forms of cardiomyopathy such as DCM.12,30 It is to be expected that novel genes will be identified in ARVC patients, since the hit rate in genetic screening studies of the known disease-causing genes is ∼40–50%.1,2
Recently, the gene responsible for ARVC-5 (ref. 31) was identified as TMEM43.3 This gene does not code for a desmosomal protein but for the inner nuclear envelope protein LUMA, which was initially identified by a proteomics analysis of nuclear lamina proteins.22 The missense mutation TMEM43-p.S358L was shown in a cohort from the island of Newfoundland to be a fully penetrant mutation leading to ARVC, heart failure, and SCD in affected individuals.3 The authors also found evidence that this mutation is a founder mutation.
Settlement of Newfoundland started in the middle of the eighteenth century. By 2001, ∼98% of the population in Newfoundland was of English or Irish descent.32 Thus, it was suggested that the mutation was imported from the British Isles at the beginning of the 18th century.4 However, Haywood et al. failed to identify TMEM43-p.S358L mutation carriers in the British ARVC cohort they analysed,4 although they found the novel variants p.R28W, p.E142K, and p.R312W, which are, however, of unknown significance [variants of unknown significance (VUS); Figure 2B]. Interestingly, they also found some haplotypes in common with the Newfoundland patients.4
We recently identified the novel mutation p.N116S in the gene DES, which caused ARVC in a young female patient with terminal heart failure.1 In that cohort, we had a hit rate of ∼40% for variants related to the disease in the genes DSG2, DSC2, DES, JUP, PKP2 and DSP2. As a consequence of the work of Merner et al.,3 we decided to screen this cohort again for variants in TMEM43 and identified the Newfoundland mutation TMEM43-p.S358L in a German family.6 Strikingly, the phenotype of the disease was similar to the disease course of the Newfoundland families: the males died by SCD aged between 28 and 37 years, while females were affected by arrhythmias (Figure 1A). Another European family was published almost at the same time by Christensen et al.5 in Denmark. Thus, apparently unrelated families carrying the Newfoundland mutation were identified in different countries of continental Europe.
A relationship with the Newfoundland families could not be assessed at first glance. According to reports from the family, it was assumed that the grandfather I/01 of the index patient (Figure 1A), who died from SCD, was the first mutation carrier within this family. However, the data of our haplotype analysis revealed that the three European families and the mutation carriers from Newfoundland and the USA share a common genetic haplotype, suggesting that the German mutation is not a spontaneous de novo mutation but was inherited from a common European ancestor. Therefore, it is likely that additional mutation carriers will be identified on the European continent.
This conclusion is supported by the finding that the mutation appears to be an old continental variant dating back to the early medieval age (years 400–700 AD), long before today's European nations were formed. Surprisingly, the estimated frequency of this mutation in Europe is comparably low (TMEM43-p.S358L in the general population ranging from 1:460 000 to 1:1 250 000, or 1:230 among European ARVC cases) when other European TMEM43 screening studies5,6,33 or data on the global prevalence of ARVC24 are considered. The comparably high incidence of ARVC-5 on the island of Newfoundland might be related to the genetic drift of this unfavourable variant in larger populations. However, at present we can only speculate why the prevalence of TMEM43-p.S358L on the island of Newfoundland appears to be considerably higher compared with the European continent.
TMEM43 encodes the protein LUMA, which is a transmembrane protein of the inner membrane of the nuclear envelope. In a series of experiments, Bengtsson and Otto8 provided evidence that LUMA is in a molecular complex with the nuclear envelope proteins lamin and emerin. Both proteins carry mutations in patients with EDMD. Thus, LUMA is part of the 'linkers of nucleoskeleton and cytoskeleton complex' (LINC), which connects the nuclear lamina with the cytoskeleton.34 Consequently, other mutations in TMEM43 were also recently found in patients with EDMD,10 who are frequently affected by mutations in LMNA and EMD. However, the EDMD-related TMEM43 mutations p.E85K and p.I91V were found in the N-terminal portion of the protein, which resides in the lumen of the endoplasmic reticulum (Figure 2). When these mutants were transfected into HeLa cells, the localization of lamin B was not affected, but a considerable number of cells had an irregularly shaped nucleus.10 The VUS recently identified in the British ARVC cohort33 were also found at the N-terminus (p.R28W), located in the nucleoplasm, or within transmembrane domain 2 of LUMA (p.R312W) (Figure 2). Of note, the Newfoundland mutation p.S358L is located within the third transmembrane domain of LUMA.
Recently, Rajkumar et al.35 demonstrated that transfection of cells with the cDNA of TMEM43 did not affect the expression of lamin B and emerin transcripts or structural properties of the cell nucleus. Thus, currently the precise pathomechanism of the Newfoundland mutation is unknown.
Mutations in genes coding for nuclear envelope proteins, and especially variants in LMNA, are associated with EDMD, DCM, pathologies of the nervous system and adipose tissue, or the Hutchinson–Gilford progeria syndrome (for a review see ref. 36). Of note, lamin, encoded by LMNA, is a ubiquitously expressed protein and its myopathy-related mutations were analysed in fibroblast model systems previously.37,38 Since LUMA was found to be a binding partner of lamin,8 we speculated that the mutation TMEM43-p.S358L might also affect the nuclear nanomechanics of cells. Therefore, we recorded the nanomechanical properties of the cell nuclei in skin fibroblast cultures derived from patient IV/5 and two Danish male mutation carriers. We found that, in contrast to different LMNA mutations,38 the nuclei of the TMEM43-p.S358L cells show stiffer nanomechanical characteristics compared with controls derived from skin fibroblasts of two volunteer control individuals, two patients with non-cardiac disease (osteogenesis imperfecta and chronic granulomatosis), and three patients with DCM due to (i) the homozygous mutation PKP2-p.H679Y, (ii) PLN-p.R14del, and (iii) inherited DCM due to the mutation LMNA-p.K219T. We found that the data of the nanomechanical analysis could be fitted in the control cells by a Gaussian distribution, but not in the mutant TMEM43-p.S358L cells, revealing that the mutant LUMA leads to aberrant and increased mechanical stiffness of the nuclei. In contrast to these findings, Zwerger et al.38 recently published a comparative analysis of LMNA mutants, which are associated with DCM, EDMD, partial lipodystrophy, and Charcot–Marie–Tooth disease. They found that skin fibroblasts derived from myopathic patients with the LMNA mutations p.ΔK32 and p.E358K were associated with aberrant nuclear stability leading to softer cell nuclei. However, some of the myopathic mutants, such as LMNA-p.E203G in the recombinant model, were not associated with changed nuclear mechanics,38 which is in line with our observation on the LMNA mutant p.K219T (Figure 3E).
Our observation is also in agreement with previous findings in recombinant LUMA-transfected cells. Liang et al.10 observed an abnormal nuclear shape in HeLa cells transfected with TMEM43 cDNAs coding for the EDMD mutations p.E85K and p.I91V. Thus, in contrast to myopathic LMNA mutations,38 TMEM43-p.S358L leads to increased stiffness of the cell nucleus, which might lead to stochastic death of cardiomyocytes. It appears that the increased stiffness as well as the elevated elasticity of the cell nucleus might not be tolerated in mechanically active cells like cardiomyocytes.
We did not find nanomechanical differences in vitro between cell cultures of male and female carriers. This might be explained by a systemic effect in vivo, hormonal effects, or the time scale of the cell culture experiments in comparison with the chronic effects in mutation carriers. Of note, the increased incidence of SCDs among male mutation carriers parallels the natural course of the testosterone plasma concentrations in young men. In this hormonal context, a mechanistic gender effect was also suggested by a recent paper by Arimura et al.,39 who identified a testosterone effect in recombinant mice carrying the homozygous Lmna mutation p.H222P, which codes for the aberrant version of the nuclear protein lamin. It is currently not known whether hormonal effects are also responsible for the gender phenotype of TMEM43-p.S358L carriers. Gender effects related to exercise were also found by a recent study by James et al.40 They clearly demonstrated the clinical effects of endurance and frequent exercise in desmosomal gene mutation carriers. This might potentially also explain gender differences in the clinical phenotype of TMEM43-p.S358L carriers. However, the deceased male mutation carriers of the German family only did recreational sports for about 2–4 h per week on average. Thus, the exercise effects do not provide valid evidence for the clinical gender differences in this family. Therefore, the reasons for the clinical gender differences in ARVC-5 remain currently unclear.
In summary, we conclude and predict from our data that
1. The mutation TMEM43-p.S358L is more widely distributed worldwide than anticipated before and therefore it is expected that further mutation carriers will be identified in Europe. Therefore, due to full penetrance and malignancy, all ARVC patients should be tested for the presence of this mutation.
2. The history of the German family reveals that without the knowledge of the mutation, the individual risk for SCD in ARVC patients might be clinically underestimated.
3. The mutation affects the stability of the nucleus due to increased stiffness, which might be associated with cellular death.
## Funding
This study was supported by the Erich & Hanna Klessmann-Foundation, Gütersloh, Germany to H.M., the Bundesministerium für Bildung und Forschung (BMBF) to T.S. (grant 01GN0824), and the Stem Cell Network Northrhine Westphalia to T.S. and H.M.
Conflict of interest: none declared.
## Acknowledgements
First of all, we would like to thank the families for their support. We are grateful especially to the German family for their patience and commitment despite the loss of beloved family members. We thank Désirée Gerdes, Ruhr University Bochum, Heart and Diabetes Center, Bad Oeynhausen, Jan Jongloed, and Ludolf Boven, University of Groningen, and Nadin Piekarek, University Cologne, for their excellent technical assistance. We also thank Dr Jan Kramer, University Hospital Lübeck, Germany, and Dr Marie José Stasia-Pauger, Centre Diagnostic et Recherche Granomatose Septique, University Grenoble, France, for providing control skin biopsies of non-cardiac patients.
## References
1. Klauke B, Kossmann S, Gaertner A, Brand K, Stork I, Brodehl A, Dieding M, Walhorn V, Anselmetti D, Gerdes D, Bohms B, Schulz U, Zu Knyphausen E, Vorgerd M, Gummert J, Milting H. De novo desmin-mutation N116S is associated with arrhythmogenic right ventricular cardiomyopathy. Hum Mol Genet 2010;19:4595–4607.
2. Fressart V, Duthoit G, Donal E, Probst V, Deharo JC, Chevalier P, Klug D, Dubourg O, Delacretaz E, Cosnay P, Scanu P, Extramiana F, Keller D, Hidden-Lucet F, Simon F, Bessirard V, Roux-Buisson N, Hebert JL, Azarine A, Casset-Senon D, Rouzet F, Lecarpentier Y, Fontaine G, Coirault C, Frank R, Hainque B, Charron P. Desmosomal gene analysis in arrhythmogenic right ventricular dysplasia/cardiomyopathy: spectrum of mutations and clinical impact in practice. Europace 2010;12:861–868.
3. Merner ND, Hodgkinson KA, Haywood AF, Connors S, French VM, Drenckhahn JD, Kupprion C, K, Thierfelder L, McKenna W, Gallagher B, Morris-Larkin L, Bassett AS, Parfrey PS, Young TL. Arrhythmogenic right ventricular cardiomyopathy type 5 is a fully penetrant, lethal arrhythmic disorder caused by a missense mutation in the TMEM43 gene. Am J Hum Genet 2008;82:809–821.
4. Hodgkinson K, Connors S, Merner N, Haywood A, Young TL, McKenna W, Gallagher B, Curtis F, Bassett A, Parfrey P. The natural history of a genetic subtype of arrhythmogenic right ventricular cardiomyopathy caused by a p.S358L mutation in TMEM43. Clin Genet 2013;83:321–331.
5. Christensen AH, Andersen CB, Tybjaerg-Hansen A, Haunso S, Svendsen JH. Mutation analysis and evaluation of the cardiac localization of TMEM43 in arrhythmogenic right ventricular cardiomyopathy. Clin Genet 2011;80:256–264.
6. Klauke B, Baecker C, Muesebeck J, Schulze-Bahr E, Gerdes D, Gaertner A, Milting H. Deleterious effects of the TMEM43 mutation p.S358L found in a German family with arrhythmogenic right ventricular cardiomyopathy and sudden cardiac death. Cell Tissue Res 2012;348:368.
7. B, Skinner JR, Sanatani S, Terespolsky D, Krahn, Ray PN, Scherer SW, Hamilton RM. TMEM43 mutations associated with arrhythmogenic right ventricular cardiomyopathy in non-Newfoundland populations. Hum Genet 2013;132:1245–1252.
8. Bengtsson L, Otto H. LUMA interacts with emerin and influences its distribution at the inner nuclear membrane. J Cell Sci 2008;121(Pt 4):536–548.
9. Quarta G, Syrris P, Ashworth M, Jenkins S, Zuborne Alapi K, Morgan J, Muir A, Pantazis A, McKenna WJ, Elliott PM. Mutations in the Lamin A/C gene mimic arrhythmogenic right ventricular cardiomyopathy. Eur Heart J 2012;33:1128–1136.
10. Liang WC, Mitsuhashi H, Keduka E, Nonaka I, Noguchi S, Nishino I, Hayashi YK. TMEM43 mutations in Emery-Dreifuss muscular dystrophy-related myopathy. Ann Neurol 2011;69:1005–1013.
11. Marcus FI, McKenna WJ, Sherrill D, Basso C, Bauce B, Bluemke DA, Calkins H, D, Cox MG, Daubert JP, Fontaine G, Gear K, Hauer R, Nava A, Picard MH, Protonotarios N, Saffitz JE, Sanborn DM, Steinberg JS, Tandri H, Thiene G, Towbin JA, Tsatsopoulou A, Wichter T, Zareba W. Diagnosis of arrhythmogenic right ventricular cardiomyopathy/dysplasia: proposed modification of the Task Force Criteria. Eur Heart J 2010;31:806–814.
12. Elliott P, O'Mahony C, Syrris P, Evans A, Rivera Sorensen C, Sheppard MN, Carr-White G, Pantazis A, McKenna WJ. Prevalence of desmosomal protein gene mutations in patients with dilated cardiomyopathy. Circ Cardiovasc Genet 2010;3:314–322.
13. Milting H, Lukas N, Klauke B, Korfer R, Perrot A, Osterziel KJ, Vogt J, Peters S, Thieleczek R, Varsanyi M. Composite polymorphisms in the ryanodine receptor 2 gene associated with arrhythmogenic right ventricular cardiomyopathy. Cardiovasc Res 2006;71:496–505.
14. PM, Brandao RD, Cavaco BM, Eugenio J, Bento S, Nave M, Rodrigues P, Fernandes A, Vaz F. Screening for a BRCA2 rearrangement in high-risk breast/ovarian cancer families: evidence for a founder effect and analysis of the associated phenotypes. J Clin Oncol 2007;25:2027–2034.
15. Kong A, Gudbjartsson DF, Sainz J, Jonsdottir GM, Gudjonsson SA, Richardsson B, Sigurdardottir S, Barnard J, Hallbeck B, Masson G, Shlien A, Palsson ST, Frigge ML, TE, Gulcher JR, Stefansson K. A high-resolution recombination map of the human genome. Nat Genet 2002;31:241–247.
16. Fatima A, Xu G, Shao K, S, Lehmann M, Arnaiz-Cot JJ, Rosa AO, Nguemo F, Matzkies M, Dittmann S, Stone SL, M, Zechner U, Beyer V, Hennies HC, Rosenkranz S, Klauke B, Parwani AS, Haverkamp W, Pfitzer G, Farr M, Cleemann L, M, Milting H, Hescheler J, Saric T. In vitro modeling of ryanodine receptor 2 dysfunction using human induced pluripotent stem cells. Cell Physiol Biochem 2011;28:579–592.
17. Charron P, M, Arbustini E, Basso C, Bilinska Z, Elliott P, Helio T, Keren A, McKenna WJ, Monserrat L, Pankuweit S, Perrot A, Rapezzi C, Ristic A, Seggewiss H, van Langen I, Tavazzi L; European Society of Cardiology Working Group on Myocardial and Pericardial Diseases. Genetic counselling and testing in cardiomyopathies: a position statement of the European Society of Cardiology Working Group on Myocardial and Pericardial Diseases. Eur Heart J 2010;31:2715–2726.
18. van der Zwaag PA, van Rijsingen IA, Asimaki A, Jongbloed JD, van Veldhuisen DJ, Wiesfeld AC, Cox MG, van Lochem LT, de Boer RA, Hofstra RM, Christiaans I, van Spaendonck-Zwarts KY, Lekanne Dit Deprez RH, Judge DP, Calkins H, Suurmeijer AJ, Hauer RN, Saffitz JE, Wilde AA, van den Berg MP, van Tintelen JP. Phospholamban R14del mutation in patients diagnosed with dilated cardiomyopathy or arrhythmogenic right ventricular cardiomyopathy: evidence supporting the concept of arrhythmogenic cardiomyopathy. Eur J Heart Fail 2012;14:1199–1207.
19. Christensen AH, Benn M, Tybjaerg-Hansen A, Haunso S, Svendsen JH. Screening of three novel candidate genes in arrhythmogenic right ventricular cardiomyopathy. Genet Test Mol Biomarkers 2011;15:267–271.
20. Swope D, Cheng L, Gao E, Li J, GL. Loss of cadherin-binding proteins beta-catenin and plakoglobin in the heart leads to gap junction remodeling and arrhythmogenesis. Mol Cell Biol 2012;32:1056–1067.
21. Sen-Chowdhry S, Morgan RD, Chambers JC, McKenna WJ. Arrhythmogenic cardiomyopathy: etiology, diagnosis, and treatment. Annu Rev Med 2010;61:233–253.
22. Dreger M, Bengtsson L, Schoneberg T, Otto H, Hucho F. Nuclear envelope proteomics: novel integral membrane proteins of the inner nuclear membrane. 2001;98:11943–11948.
23. Dvorak JA, Nagao E. Kinetic analysis of the mitotic cycle of living vertebrate cells by atomic force microscopy. Exp Cell Res 1998;242:69–74.
24. Basso C, Bauce B, D, Thiene G. Pathophysiology of arrhythmogenic cardiomyopathy. Nat Rev Cardiol 2012;9:223–233.
25. Delmar M. Desmosome-ion channel interactions and their possible role in arrhythmogenic cardiomyopathy. Pediatr Cardiol 2012;33:975–979.
26. Kirchner F, Schuetz A, Boldt LH, Martens K, Dittmar G, Haverkamp W, Thierfelder L, Heinemann U, Gerull B. Molecular insights into arrhythmogenic right ventricular cardiomyopathy caused by plakophilin-2 missense mutations. Circ Cardiovasc Genet 2012;5:400–411.
27. Gaertner A, Klauke B, Stork I, Niehaus K, Niemann G, Gummert J, Milting H. In vitro functional analyses of arrhythmogenic right ventricular cardiomyopathy-associated desmoglein-2-missense variations. PLoS ONE 2012;7:e47097.
28. Milting H, Klauke B. Molecular genetics of arrhythmogenic right ventricular dysplasia/cardiomyopathy. Nat Clin Pract Cardiovasc Med 2008;5:E1.
29. Posch MG, Posch MJ, Perrot A, Dietz R, Ozcelik C. Variations in DSG2: V56M, V158G and V920G are not pathogenic for arrhythmogenic right ventricular dysplasia/cardiomyopathy. Nat Clin Pract Cardiovasc Med 2008;5:E1.
30. Posch MG, Posch MJ, Geier C, Erdmann B, Mueller W, Richter A, Ruppert V, Pankuweit S, Maisch B, Perrot A, Buttgereit J, Dietz R, Haverkamp W, Ozcelik C. A missense variant in desmoglein-2 predisposes to dilated cardiomyopathy. Mol Genet Metab 2008;95:74–80.
31. F, Li D, Karibe A, Gonzalez O, Tapscott T, Hill R, Weilbaecher D, Blackie P, Furey M, Gardner M, Bachinski LL, Roberts R. Localization of a gene responsible for arrhythmogenic right ventricular dysplasia to chromosome 3p23. Circulation 1998;98:2791–2795.
32. Rahman P, Jones A, Curtis J, Bartlett S, Peddle L, Fernandez BA, Freimer NB. The Newfoundland population: a unique resource for genetic investigation of complex diseases. Hum Mol Genet 2003;12(Spec No. 2):R167–R172.
33. Haywood AF, Merner ND, Hodgkinson KA, Houston J, Syrris P, Booth V, Connors S, Pantazis A, Quarta G, Elliott P, McKenna W, Young TL. Recurrent missense mutations in TMEM43 (ARVD5) due to founder effects cause arrhythmogenic cardiomyopathies in the UK and Canada. Eur Heart J 2013;34:1002–1011.
34. Mejat A, Misteli T. LINC complexes in health and disease. Nucleus 2010;1:40–52.
35. Rajkumar R, Sembrat JC, McDonough B, Seidman CE, F. Functional effects of the TMEM43 Ser358Leu mutation in the pathogenesis of arrhythmogenic right ventricular cardiomyopathy. BMC Med Genet 2012;13:21.
36. Worman HJ, Ostlund C, Wang Y. Diseases of the nuclear envelope. Cold Spring Harb Perspect Biol 2010;2:a000760.
37. Lammerding J, Schulze PC, Takahashi T, Kozlov S, Sullivan T, Kamm RD, Stewart CL, Lee RT. Lamin A/C deficiency causes defective nuclear mechanics and mechanotransduction. J Clin Invest 2004;113:370–378.
38. Zwerger M, Jaalouk DE, Lombardi ML, Isermann P, Mauermann M, Dialynas G, Herrmann H, Wallrath LL, Lammerding J. Myopathic lamin mutations impair nuclear stability in cells and tissue and disrupt nucleo-cytoskeletal coupling. Hum Mol Genet 2013;22:2335–2349.
.
39
Arimura
T
Onoue
K
Takahashi-Tanaka
Y
Ishikawa
T
Kuwahara
M
Setou
M
Shigenobu
S
Yamaguchi
K
Bertrand
AT
Machida
N
Takayama
K
Fukusato
M
Tanaka
R
Somekawa
S
Nakano
T
Yamane
Y
Kuba
K
Imai
Y
Saito
Y
Bonne
G
Kimura
A
.
Nuclear accumulation of androgen receptor in gender difference of dilated cardiomyopathy due to lamin A/C mutations
.
Cardiovasc Res
2013
;
99
:
382
394
.
40
James
CA
Bhonsale
A
Tichnell
C
Murray
B
Russell
SD
Tandri
H
Tedford
RJ
Judge
DP
Calkins
H
.
Exercise increases age-related penetrance and arrhythmic risk in arrhythmogenic right ventricular dysplasia/cardiomyopathy-associated desmosomal mutation carriers
.
J Am Coll Cardiol
2013
;
62
:
1290
1297
. |
Chapter 9.9, Problem 13ES
### Discrete Mathematics with Applications, 5th Edition
Epp et al., ISBN: 9781337694193
# One urn contains 10 red balls and 25 green balls, and a second urn contains 22 red balls and 15 green balls. A ball is chosen as follows: First an urn is selected by tossing a loaded coin with probability 0.4 of landing heads up and probability 0.6 of landing tails up. If the coin lands heads up, the first urn is chosen; otherwise, the second urn is chosen. Then a ball is picked at random from the chosen urn. a. What is the probability that the chosen ball is green? b. What is the probability that the chosen ball is green and came from the first urn?
To determine
(a) The probability that the chosen ball is green.
Explanation
Given information:
One urn contains 10 red balls and 25 green balls, and the second urn contains 22 red balls and 15 green balls. The probability of choosing the first urn is 0.4 and the probability of choosing the second urn is 0.6.
Calculation:
Let $A$ be the event that the chosen ball is green, $E_1$ the event that the first urn is chosen, and $E_2$ the event that the second urn is chosen. By the law of total probability (each term using the multiplication rule),
$$P(A) = \sum_{i=1}^{n} P(E_i)\, P(A \mid E_i) = P(E_1)\,P(A \mid E_1) + P(E_2)\,P(A \mid E_2) = 0.4\cdot\frac{25}{35} + 0.6\cdot\frac{15}{37} = \frac{2}{7} + \frac{9}{37} = \frac{137}{259} \approx 0.529 .$$
To determine
(b) The probability that the chosen ball is green and came from the first urn. This is $P(A \cap E_1) = P(E_1)\,P(A \mid E_1) = 0.4\cdot\frac{25}{35} = \frac{2}{7} \approx 0.286$.
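As a quick numerical cross-check, here is a minimal Wolfram Language sketch of the same total-probability sum, in the style of the code elsewhere in this document (the variable names are ours, not from the textbook):

pUrn1 = 4/10; pUrn2 = 6/10;          (* loaded-coin probabilities *)
pGreenGivenUrn1 = 25/35;             (* urn 1: 10 red, 25 green *)
pGreenGivenUrn2 = 15/37;             (* urn 2: 22 red, 15 green *)
pGreen = pUrn1*pGreenGivenUrn1 + pUrn2*pGreenGivenUrn2

137/259

N[pGreen]

0.528958

The exact value $137/259$ agrees with the hand calculation above, and part (b) follows from the single term pUrn1*pGreenGivenUrn1, which evaluates to $2/7$.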