url | text | metadata
---|---|---
https://aloro.net/grades/su/gr-lts40ag | LTS40AZH
Melting point 904 °C. Hardness (GOST 17711-93): HB 10⁻¹ = 78–88 MPa.
Typical applications: fittings, bushings, bearings
Classification
Country | Section | Category
CIS, Russia, Ukraine | Brass (copper-zinc alloy) | Foundry brass
Physical characteristics
Temperature, °C | $\alpha\cdot 10^{6}$, $K^{-1}$ | $\varkappa$, $\frac{W}{m\cdot K}$ | $\rho$, $\frac{kg}{m^3}$ | $R\cdot 10^{-6}$, $\Omega\cdot m$
20 | – | 113 | 8500 | 90
100 | 21.6 | – | – | –
Standards
Standard | Description
GOST 17711-93 | –
Description of physical characteristics
Parameter | Units of measurement | Description
$\alpha\cdot 10^{6}$ | $K^{-1}$ | Coefficient of thermal (linear) expansion (over the range 20°C–T)
$\varkappa$ | $\frac{W}{m\cdot K}$ | Coefficient of thermal conductivity (the ability of the material to conduct heat)
$\rho$ | $\frac{kg}{m^3}$ | The density of the material
$R\cdot 10^{-6}$ | $\Omega\cdot m$ | Electrical resistivity | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.98866206407547, "perplexity": 23885.662270676952}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587711.69/warc/CC-MAIN-20211025123123-20211025153123-00292.warc.gz"} |
https://chemistry.stackexchange.com/questions/34908/deducing-compounds-from-spectroscopic-data-for-isomers-of-formula-c4h8o | # Deducing compounds from spectroscopic data for isomers of formula C4H8O
Here's the problem:
I already know the $$\pu{3600 cm-1}$$ line indicates the $$\ce{-OH}$$ group and the $$\pu{1640 cm-1}$$ line indicates the $$\ce{C=C}$$ alkene double bond.
From the $$\ce{^{13}C}$$ NMR data: the two carbons with chemical shifts above 100 ppm are $$\mathrm{sp^2}$$ carbons and the remaining two are $$\mathrm{sp^3}$$ carbons.
I have tried to work out compound D, and suggest this structure:
I am stuck on figuring out compound C. I suggest the structure below, but am unsure whether it is correct.
• What makes you unsure about it? – pH13 - Yet another Philipp Aug 10 '15 at 12:12
• @PH13 it's just the ¹H proton shifts that are confusing... especially for the Hb that I have labelled in compound C. I have interpreted 'qn' to mean 'quintet', but if it couples to 1x Ha and 3x Hc, technically shouldn't it be a 'doublet of quartets'? – justbehappy Aug 10 '15 at 12:31
• That's not a "normal" quintet and thus it should not be named like one... it's rather a multiplet, or, as you say, at high resolution it would probably be a quartet of doublets (or vice versa). The coupling constants are very close to each other and this makes it a quintet at low resolutions. – pH13 - Yet another Philipp Aug 10 '15 at 12:46
• @PH13 ok! I see. Thanks for clarifying this. Do you also agree with my proposed structure for Compound D? – justbehappy Aug 10 '15 at 13:08
## 1 Answer
Your two proposed structures are correct, although your rationalisation of the 13C spectrum for Cpd D is not quite correct around the double bond. In a nutshell, carbon chemical shifts can be calculated as a function of their α, β and γ contributions. α for what is directly bonded to that carbon, β for substituents one carbon away, and γ for substituents two carbons away. Almost without exception, alkene carbons exhibit a downfield β shift and upfield γ shift for substituted alkenes. So, for both compounds C and D, the carbon closest to the substituents will be shifted furthest downfield. The β and γ contributions for the following substituents are shown (someone more tech savvy than me might tidy this up) :
| Substituent | β | γ |
|---|---|---|
| -H | 0 | 0 |
| -CH2O- | +14.2 | -8.2 |
| -CH3 | +9.0 | -7.0 |
The electronic effects contributing to β and γ shifts are not well understood, but are more complex than a consideration of just electronegativities/electron-withdrawing potential.
Below are the spectra for the two compounds:
Your confusion with the description of the quintet label arises from how people report splitting patterns in the literature. I have discussed this on another question, but essentially the two methods of reporting splittings are to (a) report the observed splitting pattern (here a quintet) or (b) report the calculated/expected splitting pattern (here a doublet of quartets). As you can see (or at least I can), reporting the observed pattern can lead to some confusion, and gives no real information about how this splitting pattern arises.
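To make the collapse concrete, here is a small Python sketch (my addition, not part of the original answer) that sums the eight Lorentzian lines of a doublet of quartets whose two couplings are nearly equal, as for the Hb proton discussed here. At a coarse linewidth the pattern looks like a quintet; shrink the linewidth to resolve the doublet of quartets.

import numpy as np

# Doublet of quartets with two similar couplings (values as quoted below).
J_doublet, J_quartet = 6.2, 5.8        # Hz
linewidth = 1.0                        # Hz FWHM; try 0.2 for "high resolution"
axis = np.linspace(-20.0, 20.0, 4000)  # Hz, offset from the multiplet centre

spectrum = np.zeros_like(axis)
for d in (-0.5, 0.5):                                       # doublet: 1 coupled proton
    for q, w in zip((-1.5, -0.5, 0.5, 1.5), (1, 3, 3, 1)):  # quartet: 3 coupled protons
        centre = d * J_doublet + q * J_quartet
        hwhm = linewidth / 2
        spectrum += w * hwhm**2 / ((axis - centre) ** 2 + hwhm**2)
# Plot `spectrum` against `axis` (e.g. with matplotlib) to compare the two cases.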
Further, reporting couplings to the nearest Hz (here 6 Hz) might be fine on occasions where the difference in coupling for the two different partners is less than or approximately equal to the observed linewidth; however, with a good sample and a good operator on a good magnet, what is reported as a quintet may very well appear different, especially once some apodization is applied. For example, below is an expansion of a simulated spectrum for compound C with Jbc=5.8 and Jbe=6.2 (smaller couplings from d and f ignored), and an observed linewidth of 0.5 Hz. The top spectrum is the normal spectrum, and the bottom spectrum is what it looks like with a Gaussian linewidth function applied (gb 0.1, lb -1). They look quite different, and the bottom spectrum is very hard to rationalise as a quintet, but easily recognizable as a doublet of quartets. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7557026147842407, "perplexity": 2462.164425699369}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880656.25/warc/CC-MAIN-20201023043931-20201023073931-00333.warc.gz"} |
https://ashpublications.org/blood/article/106/12/3684/109734/XSLT_Related_Article_Replace_Href | Comment on Airoldi et al, page 3846
Mice genetically deficient for the IL12RB2 gene develop systemic lymphocyte activation, spontaneous autoimmunity, and malignancy, particularly plasmacytoma and lung carcinoma.
Interleukin-12 (IL-12) is a heterodimeric cytokine formed by 2 chains, IL-12 p35 or α-chain and IL-12 p40 or β-chain.1 The IL-12 p40 chain can also associate with IL-23 p19 to form the IL-23 cytokine. The IL-12 receptor is formed by 2 chains, IL-12Rβ1 and IL-12Rβ2. IL-12Rβ2 is specific for the IL-12 receptor, whereas IL-12Rβ1 also associates with IL-23R to form the receptor for IL-23. IL-12 is considered a typical proinflammatory cytokine produced mostly by myeloid cells and dendritic cells, and its biologic effects, in particular the ability to induce production of interferon γ (IFN-γ) and to support T-helper 1 (Th1) type T-cell responses, have been studied particularly on natural killer (NK) and T cells that constitutively or upon activation express functional IL-12 receptors. Although an activity of IL-12 on B-cell functions and immunoglobulin production has been described in early studies,2 whether B cells express functional IL-12 receptors has long been a controversial issue. More recently, however, it was clearly established that normal B cells express both chains of the IL-12 receptor and that they respond to IL-12 with increased immunoglobulin secretion, expression of the IL-18 receptors, and, particularly in the presence of IL-18, production of a high level of IFN-γ.3 However, Airoldi et al4 have shown that in malignant B cells the IL12RB2 gene was silenced, probably by hypermethylation. When the IL12RB2 gene expression was reestablished either by treatment of the cells with a DNA methyltransferase inhibitor or by gene transfection, IL-12, both in vitro and in vivo, induced apoptosis and growth inhibition of the malignant B cells.4
On the basis of the data mentioned above, Airoldi et al4 have postulated that IL-12Rβ2 functions as a tumor suppressor in human B-cell malignancies. In a paper in the present issue of Blood, Airoldi and colleagues tested this hypothesis by analyzing the appearance of malignancies in aging IL12rb2-deficient mice. They observed not only a very significant incidence of plasmacytoma and lung carcinoma but also immune-complex mesangial glomerulonephritis with serum antinuclear antibodies and multiorgan lymphoid infiltrates with systemic B- and T-cell activation in all aging animals. The observed autoimmune pathology may in part be secondary to an up-regulation of IL-6 in the IL12rb2-deficient animals, and the data presented suggest that there is a reciprocal down-regulation between IL-6 and IL-12. These results strongly support the conclusions that IL-12 may be important in controlling aberrant or excessive B-cell activation and that the absence of signaling of this proinflammatory cytokine paradoxically results in a state of systemic B- and T-cell activation. These findings open a new perspective on the physiologic role of IL-12. The high frequency of plasmacytoma observed in the aging IL12rb2-deficient animals may reflect either the inability of the animals to control aberrant B-cell activation or an effect of the chronic inflammatory environment on B-cell neoplastic transformation and tumor progression. The occurrence in some animals of lung adenocarcinoma may have an opposite mechanism and be linked to defective innate antitumor surveillance in the animals lacking IL-12 functions, possibly secondary to a reduced production of IFN-γ. Future studies analyzing the specific role of IL-12 in regulation of B-cell activation and transformation, autoimmunity, and solid tumor immunosurveillance will shed new light on the mechanisms of homeostatic regulation of B-cell activation and on the complex role of proinflammatory cytokines in either promoting or preventing tumor initiation and progression. ▪
1. Trinchieri G. Interleukin-12 and the regulation of innate resistance and adaptive immunity. Nat Rev Immunol. 2003;3:133-146.
2. Jelinek DF, Braaten JK. Role of IL-12 in human B lymphocyte proliferation and differentiation. J Immunol. 1995;154:1606-1613.
3. Airoldi I, Gri G, Marshall JD, et al. Expression and function of IL-12 and IL-18 receptors on human tonsillar B cells. J Immunol. 2000;165:6880-6888.
4. Airoldi I, Di Carlo E, Banelli B, et al. The IL-12Rbeta2 gene functions as a tumor suppressor in human B cell malignancies. J Clin Invest. 2004;113:1651-1659. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8113176822662354, "perplexity": 14863.04654467378}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103915196.47/warc/CC-MAIN-20220630213820-20220701003820-00178.warc.gz"} |
http://math.stackexchange.com/questions/83926/probability-why-my-solution-doesnt-work-out-p-of-drawing-a-pair | # Probability, why my solution doesn't work out? (P of drawing a pair)
The task is simple: the probability of drawing a pair of cards. You draw two cards from a deck; what is the chance that you get two kings or two fours?
My idea was the following. There are 13 different card values. The probability of getting, let's say, a pair of twos is the following:
$\frac{1}{52} \cdot \frac{3}{51}$
The chance that the first card is a 2 is $\frac{1}{52}$, and the chance that the second card is one of the three other cards that would make this a pair is $\frac{3}{51}$. That sounds reasonable to me. And now, to account for all 13 different types, you just multiply this by 13. Or add it up 13 times.
$13(\frac{1}{52} \cdot \frac{3}{51})$
This is wrong and I don't understand why. The probability of getting a random pair should be the sum of the probabilities of getting every type of pair.
A correct solution would be the following.
$\frac{52 \cdot 3}{52 \cdot 51}$
I try to stick with the 'count the number of beneficial outcomes in every step and multiply' method, but I got stuck on why my way of thinking didn't work out here.
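As a numeric sanity check (a sketch I have added, not from the original thread), a quick simulation estimates the probability that two drawn cards share a rank, which is what the calculation above is aiming at:

import random

# Estimate the probability that two cards from a 52-card deck share a rank.
deck = [rank for rank in range(13) for _ in range(4)]   # 13 ranks, 4 suits each
trials = 200_000
hits = sum(1 for _ in range(trials)
           if len(set(random.sample(deck, 2))) == 1)    # both cards same rank
print(hits / trials)             # ~ 0.0588
print(13 * (4 / 52) * (3 / 51))  # = 3/51, with a 4/52 first factor

The simulated frequency matches $13 \cdot \frac{4}{52} \cdot \frac{3}{51} = \frac{3}{51}$, which hints at where the $\frac{1}{52}$ above goes wrong.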
The probability that the first card is a 2, is $4/52$, not $1/52$. Your reasoning works out, with this correction. – David Mitra Nov 20 '11 at 13:14
Silly me. Lalalala. Thanks. – Algific Nov 20 '11 at 13:16
It doesn't matter what first card drawn is. Probability of matching is $3/51$. – André Nicolas Nov 20 '11 at 15:56
Shouldn't it be $\frac{2}{13}\cdot\frac{3}{51}$ where the $\frac{2}{13}$ is the probability that the first card is a four or a king? – Henning Makholm Nov 20 '11 at 16:00
@David, why not post that as an answer, so it may be accepted? – Mr.Wizard Nov 22 '11 at 9:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8788880705833435, "perplexity": 380.4872222342576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267075.55/warc/CC-MAIN-20140728011747-00456-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://www.intmath.com/plane-analytic-geometry/8-curves-polar-coordinates.php | # 8. Curves in Polar Coordinates
Don't miss the Polar graphs interactive applet.
We'll plot the graphs in this section using a computer. You'll also learn how to sketch some of them on paper because it helps you understand how graphs in polar coordinates work.
Don't worry about all the difficult-looking algebra in the second part of the answers - it's just there to demonstrate that polar coordinates are much simpler than rectangular coordinates for these graphs. We convert them using what we learned in the last section, Polar Coordinates.
Curves in polar coordinates work very similarly to vectors. See:
Vector concepts
Sketch each of the following functions using polar coordinates, and then convert each to an equation in rectangular coordinates.
Example 1: r = 2 + 3 sin θ
(This polar graph is called a limacon, from the French word for "snail".)
Here's another example of a limacon:
Example 2: r = 3 cos 2θ
Example 3: r = sin θ − 1
(This one is called a cardioid because it is heart-shaped. It is a special case of the limacon.)
Example 4: r = 2.5
Example 5: r = 0.2 θ
This is an interesting curve, called an Archimedean Spiral. As θ increases, so does r.
Later, we'll learn how to find the Length of an Archimedean Spiral.
Example 6: r = sin (2θ) − 1.7
This is the face I drew at the top of this page. We're not even going to try to find the equivalent in rectangular coordinates!
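If you'd like to reproduce these curves yourself, here is a short Python/matplotlib sketch (my addition; any graphing tool will do) that converts r(theta) to rectangular coordinates and draws Examples 1 and 6:

import numpy as np
import matplotlib.pyplot as plt

# Convert r(theta) to x-y; negative r automatically points the opposite way,
# matching the usual polar-coordinate convention.
theta = np.linspace(0, 2 * np.pi, 1000)
for r in (2 + 3 * np.sin(theta),        # Example 1: limacon
          np.sin(2 * theta) - 1.7):     # Example 6: the "face"
    plt.plot(r * np.cos(theta), r * np.sin(theta))
plt.axis("equal")
plt.show()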
You can play with this graph in the following interactive applet.
## Interactive Graph
You can explore the above graphs using this interactive graph.
Use the slider below the graph to trace out the curves.
See what happens as you go beyond the normal domain for these graphs (i.e. when theta is less than 0 or greater than 2pi).
Change the function using the select box at the top of the graph.
### Application
Check out Polar Coordinates and Cardioid Microphones for an application of polar coordinates.
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.888789176940918, "perplexity": 985.4021447599059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824543.20/warc/CC-MAIN-20171021024136-20171021044136-00512.warc.gz"} |
https://www.gamedev.net/forums/topic/662193-calculating-right-vector-from-two-points/ | # Calculating Right Vector from Two Points
## Recommended Posts
Original Thread from Unreal Engine Forums
Hello all! I am working on a water/river plugin for Unreal Engine 4. It seems to be coming along alright. I have the base features working, except for one that only partially works: moving the spline to adjust the path.
The problem is that as I extend or move it more to the right or left, it gets weird:
I already know what the problem is:
FVector extendDir = FVector(0, width, 0);
one.Vertex0.Position = v0;
one.Vertex1.Position = v0 + extendDir;
one.Vertex2.Position = v2;
two.Vertex0.Position = one.Vertex2.Position + extendDir;
two.Vertex1.Position = one.Vertex2.Position;
two.Vertex2.Position = one.Vertex1.Position;
It is because I'm extending it on the Y-axis. I did it intentionally for testing purposes. Now that I know that the mesh renders alright, the next thing on my list is to get the right vector of a segment on the spline component so I can extend the mesh in that direction. That way, it will never get squished like that and the width of the mesh will stay constant throughout the whole thing. How would I do this? Is it possible? I've been trying to do this all day. And I cannot use the point's rotation because it has not been rotated; I just moved it. Is there a way to calculate this using the first point and the connected point or something? Thanks!
I'm not sure exactly what your problem is, so I'll describe the entire process in a more or less coarse manner, and we can go into details at the step you identify.
1.) Sub-divide the spline path into linear segments.
2.) Compute the (perpendicular) sideway vectors at the beginning and end of each segment.
3.) Compute a common vector for each pair of sideway vectors from the end of one segment and the beginning of the next segment. This step is used to avoid a gap on an outside bend or an overlapping on an inside bend, resp.
4.) Compute a sideway displacement of the line segment using a defined (half) width, so that a closed mesh is obtained.
What I am trying to do is get away from this:
FVector extendDir = FVector(0, width, 0);
Because it always extends the mesh on the Y axis, I get this result if I move the spline back while moving it left or right:
What I would like to do is have the mesh maintain its width wherever it is. I don't want it to get squished together; it needs to be the same width at the end as it was before. So I would like to have everything extend according to the right vectors of the points.
1) I have everything sub-divided and I can change how many segments each part is sub-divided into:
2) I've calculated the sideway vectors for each segment (bottom-left point of the triangle to top-left point of the triangle). Does this look correct?
// Transform the bottom left point of the triangle and left top point of the triangle by y axis to get the right vector.
FVector v0right = FRotationMatrix(Path->GetWorldRotationAtDistanceAlongSpline(Path->GetDistanceAlongSplineAtSplinePoint(i) + (segmentLength * s))).TransformVector(FVector(0, 1, 0)).SafeNormal().ForwardVector;
FVector v2right = FRotationMatrix(Path->GetWorldRotationAtDistanceAlongSpline(Path->GetDistanceAlongSplineAtSplinePoint(i) + (segmentLength * s) + segmentLength)).TransformVector(FVector(0, 1, 0)).SafeNormal().ForwardVector;
3) How would I compute the common vector?
4) How would I compute a sideway displacement?
Thanks!
Let the spline be divided into segments, so that a sequence of positions
{ p_0, p_1, p_2, ... , p_n }
is given. The difference vector
d_i := p_i - p_(i-1),  0 < i <= n
"ties" 2 consecutive points. Its projection onto the ground plane (assuming this is the x-y plane) is
d'_i := ( d_i.x, d_i.y, 0 )
and the belonging sideway vector, i.e. one of its two perpendicular vectors in the plane, normalized, is
s_i := ( d_i.y, -d_i.x, 0 ) / | d'_i |
This could be used to calculate the 4 corner points of a quad with width w for segment i as
p_(i-1),  p_i,  p_i + s_i * w,  p_(i-1) + s_i * w
Although each segment gets the same width w, the result looks bad because of the said gaps and overlaps. This is because at any intermediate p_i there are 2 sideway vectors s_i and s_(i+1), and they are normally not identical.
Now, a better crossing point would be located somewhere on the halfway vector
h_i := s_i + s_(i+1)
which is in general no longer perpendicular to either of the two neighboring segments, so that it cannot simply be scaled like
h_i / | h_i | * w
in order to yield a constant width of the mesh. Instead, using a scaling like so
v_i := h_i * w / ( 1 + s_i . s_(i+1) )
does the trick (if I've done it correctly), where "." denotes the dot product. It computes to a vector with exemplary lengths, indexed by the angle between adjacent segments,
| v_i | at 0° = w
| v_i | at 90° = 1.414 w
| v_i | at -90° = 1.414 w
| v_i | at 45° = 1.08 w
which seems okay to me.
Then the quad for the first segment has the corners
p_0,  p_1,  p_1 + v_1,  p_0 + s_1 * w
an intermediate segment's quad has the corners
p_(i-1),  p_i,  p_i + v_i,  p_(i-1) + v_(i-1)
and the quad for the last segment has the corners
p_(n-1),  p_n,  p_n + s_n * w,  p_(n-1) + v_(n-1)
However, I'd consider using the spline as the middle of the river instead of an edge.
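For anyone wanting to sanity-check these formulas outside the engine, here is a small NumPy sketch (my addition; plain arrays stand in for FVector and the spline API, which are not used here) of the corner computation described above:

import numpy as np

# Constant-width ribbon corners from a polyline p_0 .. p_n, using the
# sideway vectors s_i and the miter scaling v_i from the formulas above.
def ribbon_quads(points, width):
    p = np.asarray(points, dtype=float)
    d = p[1:] - p[:-1]                                   # difference vectors d_i
    s = np.stack([d[:, 1], -d[:, 0], np.zeros(len(d))], axis=1)
    s /= np.linalg.norm(s, axis=1, keepdims=True)        # sideway vectors s_i
    quads = []
    for i in range(len(d)):
        if i == 0:                                        # plain sideway at the start
            v0 = s[i] * width
        else:                                             # miter at an interior point
            v0 = (s[i - 1] + s[i]) * width / (1.0 + s[i - 1] @ s[i])
        if i == len(d) - 1:                               # plain sideway at the end
            v1 = s[i] * width
        else:
            v1 = (s[i] + s[i + 1]) * width / (1.0 + s[i] @ s[i + 1])
        quads.append((p[i], p[i + 1], p[i + 1] + v1, p[i] + v0))
    return quads

# Example: an L-shaped path of width 10; the corner miter comes out 14.14 long,
# matching the 1.414 w entry in the table above.
print(ribbon_quads([(0, 0, 0), (100, 0, 0), (100, 100, 0)], 10.0))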
Hey haegarr! Thanks for the help so far, I really appreciate it. Sorry to bug you with all these questions, but I found a way to get the right vector using the points rotation (I didn't know it had a function like that until yesterday) and was wondering if this looks right now:
float distance = Path->GetDistanceAlongSplineAtSplinePoint(i) + (segmentLength * s);
float distance2 = Path->GetDistanceAlongSplineAtSplinePoint(i) + (segmentLength * s) + segmentLength;
FVector v0 = GetTransform().InverseTransformPosition(Path->GetWorldLocationAtDistanceAlongSpline(distance));
FVector v2 = GetTransform().InverseTransformPosition(Path->GetWorldLocationAtDistanceAlongSpline(distance2));
FRotator v0Rotation = Path->GetWorldRotationAtDistanceAlongSpline(distance);
FRotator v2Rotation = Path->GetWorldRotationAtDistanceAlongSpline(distance2);
FVector s1 = FRotationMatrix(v0Rotation).GetScaledAxis(EAxis::Y);
FVector s2 = FRotationMatrix(v2Rotation).GetScaledAxis(EAxis::Y);
FVector h = s1 + s2;
FVector extendDir = h * width / (FVector(1, 1, 1) + s1 * s2);
I was a little confused about calculating v though (extendDir is that vector). How I read it was "[halfway vector] multiplied by [width] divided by ([a vector with all components set to 1] plus [sideway vector 1] multiplied by [sideway vector 2])". I'm pretty sure I didn't read it right. Can you type the equation out in words? lol And are these the right points for each calculation?
Thanks for the help, I really appreciate it again man!
Hey! I got some sleep and I understand it now. Lol I don't know why I was thinking it was a multiplication sign. I changed it to "1 + FVector::DotProduct(s1, s2)" now. :D I will be testing it soon.
Hey haegarr! I got it working thanks to you! Thanks! :D
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5714871883392334, "perplexity": 1664.2695729083919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828318.79/warc/CC-MAIN-20181217042727-20181217064727-00124.warc.gz"} |
https://codedump.io/share/GiHt6snQ4vNx/1/graphviz39s-executables-are-not-found-python-34 | # Graphviz's executables are not found (Python 3.4)
Morteza R - 10 months ago
Python Question
I am running Python 3.4 on Windows 7. I am trying to use the Python interface for graphviz. This is a script I intend to run:
``````from graphviz import Digraph
import pydotplus
dot = Digraph(comment='The Round Table')
dot.node('A', 'King Arthur')
dot.node('B', 'Sir Bedevere the Wise')
dot.node('L', 'Sir Lancelot the Brave')
dot.edges(['AB', 'AL'])
dot.edge('B', 'L', constraint='false')
print(dot.source)
dot.render('test-output/round-table.gv', view=True)
``````
I get the following error at runtime:
``````RuntimeError: failed to execute ['dot', '-Tpdf', '-O', 'test-output/round-table.gv'], make sure the Graphviz executables are on your systems' path
``````
Now I am sure I have properly installed the correct dependencies. I first tried to set the correct environment variables. The Graphviz executables are located at C:\Program Files (x86)\Graphviz2.37\bin, so I went to the Environment Variables section. There are two sections there: User Variables and System Variables. Under System Variables I clicked on Path, then clicked `Edit`, added ;C:\Program Files (x86)\Graphviz2.37\bin to the end of the string, and saved. This didn't clear the error.
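For reference, one workaround (my suggestion, not part of the original question) is to extend PATH for the current Python process only, using the same directory quoted above, before calling `render`:

``````import os

# Make the Graphviz executables visible to this Python process only.
os.environ["PATH"] += os.pathsep + r"C:\Program Files (x86)\Graphviz2.37\bin"
``````

Also note that a terminal or IDE opened before the system Path was edited will not see the change until it is restarted, which is a common reason the edit appears not to work.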
Then, following the answer given here I uninstalled pydot (actually I use pydotplus here) and re-installed it again, but still no success.
I have been trying for hours to fix this and the whole PATH variable thing is just confusing and frustrating. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9242283701896667, "perplexity": 5743.42579134358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690211.67/warc/CC-MAIN-20170924205308-20170924225308-00400.warc.gz"} |
https://www.physicsforums.com/threads/an-intro-to-real-analysis-question-eazy.431949/ | # An intro to real analysis question. eazy?
1. Sep 24, 2010
### eibon
1. The problem statement, all variables and given/known data
Let f : A -> B be a bijection. Show that if a function g is such that f(g(x)) = x for all x ∈ B and g(f(x)) = x for all x ∈ A, then g = f^-1. Use only the definition of a function and the definition of the inverse of a function.
2. Relevant equations
3. The attempt at a solution
Well, since f is a bijection, there exists an f^-1 that maps B back to A, and since g does that, g is the unique inverse of f. Or something like that. Please help, I'm not very good at this stuff.
2. Sep 25, 2010
### losiu99
Yes, you only need to prove the uniqueness of the inverse function. If f(g(x)) = x, then g(x) must be the unique argument a for which f(a) = x. And so this condition completely defines g for all arguments; g is then obviously equal to f^-1.
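(To spell the uniqueness out in one line, an addition paraphrasing the argument above: suppose h : B -> A also satisfies f(h(x)) = x for all x ∈ B. Then for every x ∈ B, h(x) = g(f(h(x))) = g(x), using g(f(a)) = a with a = h(x). So any function with these properties coincides with g; since f^-1 itself satisfies them, g = f^-1.)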
3. Sep 25, 2010
### eibon
Thanks losiu for your response.
but how do you prove that it is the unique inverse? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8907807469367981, "perplexity": 652.2839878941808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647777.59/warc/CC-MAIN-20180322053608-20180322073608-00340.warc.gz"} |
http://www.physicsforums.com/printthread.php?t=199487 | Physics Forums (http://www.physicsforums.com/index.php)
- Calculus (http://www.physicsforums.com/forumdisplay.php?f=109)
- - What are derivatives and integrals? (http://www.physicsforums.com/showthread.php?t=199487)
The_Z_Factor Nov20-07 10:30 AM
What are derivatives and integrals?
What are they? In my book I'm studying limits, and derivatives and integrals have been mentioned a few times before and in the current chapter, "Derivatives and Integrals", but the book hasn't explained them. Could anybody explain what these two things are, exactly?
SiddharthM Nov20-07 10:36 AM
If you're asking for a formal definition then go to www.wikipedia.com and search for derivative and, separately, integration.
Geometric calculus interpretation:
If f(x) is a line and can be represented by mx + b, then the slope is m; but this is a line, and the slope is the same throughout the real numbers. Now consider y = x^2: what is the slope? It changes at each point; the slope at any point is the derivative of the function evaluated at that point.
The integral of a real function gives you the area under the curve of the function.
The_Z_Factor Nov20-07 10:41 AM
Quote:
Quote by SiddharthM (Post 1511842) If f(x) is a line and can be represented by mx +b then the slope is m, but this is a line and the slope is the same throughout the real numbers. Now consider y= x^2, what is the slope? it changes at each point, the slope at any point is the derivative of the function evaluated at that point.
So does this mean that there can be as many derivatives as there are points?
Gib Z Nov20-07 08:31 PM
Quote:
Quote by The_Z_Factor (Post 1511846) So does this mean that there can be as many derivatives as there are points?
Different functions can be differentiated a different number of times.
Mute Nov21-07 12:24 AM
Quote:
Quote by The_Z_Factor (Post 1511846) So does this mean that there can be as many derivatives as there are points?
It means that in general the derivative of a function of x is itself a function of x. i.e., the slope of a function is different at each point on that function.
For example, the derivative of x^2 is 2x. This means that on the curve y = x^2, at the point x = 4, the slope of the curve at x = 4 (or, perhaps more precisely, the slope of the line tangent to the curve at x = 4) is 2*4 = 8. Similarly, the slope at the point x = -5 is -10.
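To make the slope claim concrete, here is a tiny numerical check (my addition, not from the thread) using a central finite difference:

# The slope of y = x^2 at a point x approaches 2x as the step h shrinks.
def slope(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2
print(slope(f, 4))    # ~ 8.0
print(slope(f, -5))   # ~ -10.0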
The_Z_Factor Nov21-07 01:52 AM
Ah, thanks for clearing that up for me everybody, that explains it. I think as I'm beginning to learn more about simple calculus I'm beginning to like it more. Haha, it just might turn into a hobby once I learn enough.
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9461836218833923, "perplexity": 488.04951668484983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274581.53/warc/CC-MAIN-20140728011754-00433-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://docplayer.net/355519-Flow-by-mean-curvature-of-convex-surfaces-into-spheres.html | # FLOW BY MEAN CURVATURE OF CONVEX SURFACES INTO SPHERES
J. DIFFERENTIAL GEOMETRY 20 (1984) 237-266

FLOW BY MEAN CURVATURE OF CONVEX SURFACES INTO SPHERES

GERHARD HUISKEN

1. Introduction

The motion of surfaces by their mean curvature has been studied by Brakke [1] from the viewpoint of geometric measure theory. Other authors investigated the corresponding nonparametric problem [2], [5], [9]. A reason for this interest is that evolutionary surfaces of prescribed mean curvature model the behavior of grain boundaries in annealing pure metal.

In this paper we take a more classical point of view: Consider a compact, uniformly convex $n$-dimensional surface $M = M_0$ without boundary, which is smoothly imbedded in $\mathbb{R}^{n+1}$. Let $M_0$ be represented locally by a diffeomorphism $F_0 \colon \mathbb{R}^n \supset U \to F_0(U) \subset M_0 \subset \mathbb{R}^{n+1}$. Then we want to find a family of maps $F(\cdot, t)$ satisfying the evolution equation

(1) $\quad \frac{\partial}{\partial t} F(x,t) = \Delta_t F(x,t), \qquad F(\cdot, 0) = F_0, \qquad x \in U,$

where $\Delta_t$ is the Laplace-Beltrami operator on the manifold $M_t$ given by $F(\cdot, t)$. We have

$$\Delta_t F(x,t) = -H(x,t)\cdot\nu(x,t),$$

where $H(\cdot, t)$ is the mean curvature and $\nu(\cdot, t)$ is the outer unit normal on $M_t$. With this choice of sign the mean curvature of our convex surfaces is always positive and the surfaces are moving in the direction of their inner unit normal. Equation (1) is parabolic and the theory of quasilinear parabolic differential equations guarantees the existence of $F(\cdot, t)$ for some short time interval. (Received April 28, 1984.)

We want to show here that the shape of $M_t$ approaches the shape of a sphere very rapidly. In particular, no singularities will occur before the surfaces $M_t$ shrink down to a single point after a finite time. To describe this more precisely, we carry out a normalization: For any time $t$ where the solution $F(\cdot, t)$ of (1) exists, let $\psi(t)$ be a positive factor such that the manifold $\tilde M_t$, given by $\tilde F(x,t) = \psi(t) F(x,t)$, has total area equal to $|M_0|$, the area of $M_0$:

$$\int_{\tilde M_t} d\tilde\mu = |M_0| \quad \text{for all } t.$$

After choosing the new time variable $\tilde t(t) = \int_0^t \psi^2(\tau)\,d\tau$ it is easy to see that $\tilde F$ satisfies

(2) $\quad \frac{\partial}{\partial \tilde t} \tilde F = \tilde\Delta \tilde F + \frac{\tilde h}{n} \tilde F,$

where $\tilde h = \int \tilde H^2 \, d\tilde\mu \big/ \int d\tilde\mu$ is the mean value of the squared mean curvature on $\tilde M_t$ (see §9 below).

1.1 Theorem. Let $n \geq 2$ and assume that $M_0$ is uniformly convex, i.e., the eigenvalues of its second fundamental form are strictly positive everywhere. Then the evolution equation (1) has a smooth solution on a finite time interval $0 \leq t < T$, and the $M_t$'s converge to a single point $\mathcal{O}$ as $t \to T$. The normalized equation (2) has a solution $\tilde M_t$ for all time $0 \leq \tilde t < \infty$. The surfaces $\tilde M_t$ are homothetic expansions of the $M_t$'s, and if we choose $\mathcal{O}$ as the origin of $\mathbb{R}^{n+1}$, then the surfaces $\tilde M_t$ converge to a sphere of area $|M_0|$ in the $C^\infty$-topology as $\tilde t \to \infty$.

Remarks. (i) The convergence of $\tilde M_t$ in any $C^k$-norm is exponential.
(ii) The corresponding one-dimensional problem has been solved recently by Gage and Hamilton (see [4]).

The approach to Theorem 1.1 is inspired by Hamilton's paper [6]. He evolved the metric of a compact three-dimensional manifold with positive Ricci curvature in direction of the Ricci curvature and obtained a metric of constant curvature in the limit. The evolution equations for the curvature quantities in our problem turn out to be similar to the equations in [6] and we can use many of the methods developed there.

In §3 we establish evolution equations for the induced metric, the second fundamental form and other important quantities. In the next step a lower bound independent of time for the eigenvalues of the second fundamental form is proved. Using this, the Sobolev inequality and an iteration method we can show in §5 that the eigenvalues of the second fundamental form approach each other. Once this is established we obtain a bound for the gradient of the mean curvature and then long time existence for a solution of (2). The exponential convergence of the metric then follows from evolution equations for higher derivatives of the curvature and interpolation inequalities.

The author wishes to thank Leon Simon for his interest in this work and the Centre for Mathematical Analysis in Canberra for its hospitality.

2. Notation and preliminary results

In the following, vectors on $M$ will be denoted by $X = \{X^i\}$, covectors by $Y = \{Y_i\}$ and mixed tensors by $T = \{T^i_{jk}\}$. The induced metric and the second fundamental form on $M$ will be denoted by $g = \{g_{ij}\}$ and $A = \{h_{ij}\}$. We always sum over repeated indices from 1 to $n$ and we use brackets for the inner product on $M$:

$$\langle T^i_{jk}, S^i_{jk} \rangle = g_{is} g^{jr} g^{ku} T^i_{jk} S^s_{ru}, \qquad |T|^2 = \langle T^i_{jk}, T^i_{jk} \rangle.$$

In particular we use the following notation for traces of the second fundamental form on $M$:

$$H = g^{ij} h_{ij}, \qquad |A|^2 = g^{ik} g^{jl} h_{ij} h_{kl}, \qquad C = g^{ij} g^{kl} g^{mn} h_{ik} h_{lm} h_{nj}, \qquad Z = HC - |A|^4.$$

By $\langle\cdot,\cdot\rangle$ we also denote the ordinary inner product in $\mathbb{R}^{n+1}$. If $M$ is given locally by some $F$ as in the introduction, the metric and the second fundamental form on $M$ can be computed as follows:

$$g_{ij}(x) = \Big\langle \frac{\partial F}{\partial x_i}(x), \frac{\partial F}{\partial x_j}(x) \Big\rangle, \qquad h_{ij}(x) = -\Big\langle \nu(x), \frac{\partial^2 F}{\partial x_i \partial x_j}(x) \Big\rangle, \qquad x \in U,$$

where $\nu(x)$ is the outer unit normal to $M$ at $F(x)$. The induced connection on $M$ is given by

$$\Gamma^i_{jk} = \tfrac{1}{2} g^{il} \Big( \frac{\partial}{\partial x_j} g_{kl} + \frac{\partial}{\partial x_k} g_{jl} - \frac{\partial}{\partial x_l} g_{jk} \Big),$$

so that the covariant derivative on $M$ of a vector $X$ is $\nabla_j X^i = \frac{\partial}{\partial x_j} X^i + \Gamma^i_{jk} X^k$.
The Riemann curvature tensor, the Ricci tensor and the scalar curvature are given by Gauss' equation:

$$R_{ijkl} = h_{ik} h_{jl} - h_{il} h_{jk}, \qquad R_{ik} = H h_{ik} - h_{il} g^{lm} h_{mk}, \qquad R = H^2 - |A|^2.$$

With this notation we obtain, for the interchange of two covariant derivatives,

$$\nabla_i \nabla_j X^k - \nabla_j \nabla_i X^k = (h_{ij} h_{lm} - h_{il} h_{jm}) g^{mk} X^l,$$
$$\nabla_i \nabla_j Y_k - \nabla_j \nabla_i Y_k = R_{ijkl} g^{lm} Y_m = (h_{ik} h_{jl} - h_{il} h_{jk}) g^{lm} Y_m.$$

The Laplacian $\Delta T$ of a tensor $T$ on $M$ is given by $\Delta T^i_{jk} = g^{mn} \nabla_m \nabla_n T^i_{jk}$, whereas the covariant derivative of $T$ will be denoted by $\nabla T = \{\nabla_l T^i_{jk}\}$. Now we want to state some consequences of these relations, which are crucial in the forthcoming sections. We start with two well-known identities.

2.1 Lemma. (i) $\Delta h_{ij} = \nabla_i \nabla_j H + H h_{il} g^{lm} h_{mj} - |A|^2 h_{ij}$.
(ii) $\tfrac{1}{2} \Delta |A|^2 = \langle h_{ij}, \nabla_i \nabla_j H \rangle + |\nabla A|^2 + Z$.

Proof. The first identity follows from the Codazzi equations $\nabla_i h_{kl} = \nabla_k h_{il} = \nabla_l h_{ik}$ and the formula for the interchange of derivatives quoted above, whereas (ii) is an immediate consequence of (i).

The obvious inequality $|\nabla H|^2 \leq n |\nabla A|^2$ can be improved by the Codazzi equations.

2.2 Lemma. (i) $|\nabla A|^2 \geq \frac{3}{n+2} |\nabla H|^2$.
(ii) $|\nabla A|^2 - \frac{1}{n} |\nabla H|^2 \geq \frac{2(n-1)}{3n} |\nabla A|^2$.

Proof. Similarly as in [6, Lemma 11.6] we decompose the tensor $\nabla A$:

$$\nabla_i h_{jk} = E_{ijk} + F_{ijk}, \qquad E_{ijk} = \frac{1}{n+2} \big( \nabla_i H g_{jk} + \nabla_j H g_{ik} + \nabla_k H g_{ij} \big).$$

Then we can easily compute that $|E|^2 = \frac{3}{n+2} |\nabla H|^2$ and

$$\langle E_{ijk}, F_{ijk} \rangle = \langle E_{ijk}, \nabla_i h_{jk} - E_{ijk} \rangle = 0,$$

i.e., $E$ and $F$ are orthogonal components of $\nabla A$. Then

$$|\nabla A|^2 = |E|^2 + |F|^2 \geq \frac{3}{n+2} |\nabla H|^2,$$

which proves (i); inequality (ii) follows since then $|\nabla A|^2 - \frac{1}{n}|\nabla H|^2 \geq \big(1 - \frac{n+2}{3n}\big) |\nabla A|^2 = \frac{2(n-1)}{3n} |\nabla A|^2$.

If $M_{ij}$ is a symmetric tensor, we say that $M_{ij}$ is nonnegative, $M_{ij} \geq 0$, if all eigenvalues of $M_{ij}$ are nonnegative. In view of our main assumption that all eigenvalues of the second fundamental form of $M_0$ are strictly positive, there is some $\varepsilon > 0$ such that the inequality

(3) $\quad h_{ij} \geq \varepsilon H g_{ij}$

holds everywhere on $M_0$. It will be shown in §4 that this lower bound is preserved with the same $\varepsilon$ for all $M_t$ as long as the solution of (1) exists. The relation (3) leads to the following inequalities, which will be needed in §5.

2.3 Lemma. If $H > 0$, and (3) is valid with some $\varepsilon > 0$, then
(i) $Z \geq n \varepsilon^2 H^2 \big( |A|^2 - \tfrac{1}{n} H^2 \big)$,
(ii) $|\nabla_i h_{kl} \cdot H - \nabla_i H \cdot h_{kl}|^2 \geq \tfrac{1}{4} \varepsilon^2 H^2 |\nabla H|^2$.

Proof. (i) This is a pointwise estimate, and we may assume that $g_{ij} = \delta_{ij}$ and that $h_{ij}$ is diagonal with eigenvalues $\kappa_1, \dots, \kappa_n$. In this setting we have

$$Z = HC - |A|^4 = \sum_i \kappa_i \sum_j \kappa_j^3 - \Big( \sum_i \kappa_i^2 \Big)^2 = \tfrac{1}{2} \sum_{i \neq j} \kappa_i \kappa_j (\kappa_i - \kappa_j)^2,$$

and the conclusion follows since $\kappa_i \kappa_j \geq \varepsilon^2 H^2$ and $\tfrac{1}{2} \sum_{i \neq j} (\kappa_i - \kappa_j)^2 = n \big( |A|^2 - \tfrac{1}{n} H^2 \big)$.

(ii) We have

$$|\nabla_i h_{kl} \cdot H - \nabla_i H \cdot h_{kl}|^2 = \big| \nabla_i h_{kl} \cdot H - \tfrac{1}{2}(\nabla_i H \cdot h_{kl} + \nabla_k H \cdot h_{il}) - \tfrac{1}{2}(\nabla_i H \cdot h_{kl} - \nabla_k H \cdot h_{il}) \big|^2 \geq \tfrac{1}{4} |\nabla_i H \cdot h_{kl} - \nabla_k H \cdot h_{il}|^2,$$

since $\nabla_i h_{kl}$ is symmetric in $(i,k)$ by the Codazzi equations. Now we have only to consider points where the gradient of the mean curvature does not vanish. Around such a point we introduce an orthonormal frame $e_1, \dots, e_n$ such that $e_1 = \nabla H / |\nabla H|$, i.e., $\nabla_1 H = |\nabla H|$ and $\nabla_i H = 0$ for $i \geq 2$ in these coordinates. Therefore

$$\tfrac{1}{4} |\nabla_i H \cdot h_{kl} - \nabla_k H \cdot h_{il}|^2 \geq \tfrac{1}{4} |\nabla H|^2 |h_{22}|^2 \geq \tfrac{1}{4} \varepsilon^2 H^2 |\nabla H|^2,$$

since any eigenvalue, and thus any trace element of $h_{ij}$, is greater than $\varepsilon H$.
3. Evolution of metric and curvature

In this and the following sections we investigate equation (1), which is easier to handle than the normalized equation (2). The results will be converted to the normalized equation in §9.

3.1 Theorem. The evolution equation (1) has a solution $M_t$ for a short time with any smooth compact initial surface $M = M_0$ at $t = 0$.

This follows from the fact that (1) is strictly parabolic (see for example [3, III.4]). From now on we will assume that (1) has a solution on the interval $0 \leq t < T$. Equation (1) implies evolution equations for $g$ and $A$, which will be derived now.

3.2 Lemma. The metric of $M_t$ satisfies the evolution equation

(4) $\quad \frac{\partial}{\partial t} g_{ij} = -2 H h_{ij}.$

Proof. The vectors $\partial F / \partial x_i$ are tangential to $M_t$, so that $\langle \nu, \partial F / \partial x_i \rangle = 0$, and from this we obtain

$$\frac{\partial}{\partial t} g_{ij} = \frac{\partial}{\partial t} \Big\langle \frac{\partial F}{\partial x_i}, \frac{\partial F}{\partial x_j} \Big\rangle = \Big\langle \frac{\partial}{\partial x_i}(-H\nu), \frac{\partial F}{\partial x_j} \Big\rangle + \Big\langle \frac{\partial F}{\partial x_i}, \frac{\partial}{\partial x_j}(-H\nu) \Big\rangle = -2 H h_{ij}.$$

3.3 Lemma. The unit normal to $M_t$ satisfies $\frac{\partial \nu}{\partial t} = \nabla H$.

Proof. This is a straightforward computation: since $\nu$ has unit length, $\partial\nu/\partial t$ is tangential, and differentiating $\langle \nu, \partial F / \partial x_j \rangle = 0$ in time gives

$$\frac{\partial \nu}{\partial t} = g^{ij} \Big\langle \nu, \frac{\partial}{\partial x_i}(H\nu) \Big\rangle \frac{\partial F}{\partial x_j} = g^{ij} \nabla_i H \frac{\partial F}{\partial x_j} = \nabla H.$$

Now we can prove

3.4 Theorem. The second fundamental form satisfies the evolution equation

$$\frac{\partial}{\partial t} h_{ij} = \Delta h_{ij} - 2 H h_{il} g^{lm} h_{mj} + |A|^2 h_{ij}.$$

Proof. We use the Gauss-Weingarten relations

$$\frac{\partial^2 F}{\partial x_i \partial x_j} = \Gamma^k_{ij} \frac{\partial F}{\partial x_k} - h_{ij} \nu, \qquad \frac{\partial \nu}{\partial x_i} = h_{il} g^{lm} \frac{\partial F}{\partial x_m},$$

so that $h_{ij} = -\langle \nu, \partial^2 F / \partial x_i \partial x_j \rangle$, and compute with Lemma 3.3

$$\frac{\partial}{\partial t} h_{ij} = -\frac{\partial}{\partial t} \Big\langle \nu, \frac{\partial^2 F}{\partial x_i \partial x_j} \Big\rangle = \nabla_i \nabla_j H - H h_{il} g^{lm} h_{mj}.$$

Then the theorem is a consequence of Lemma 2.1(i).

3.5 Corollary. We have the evolution equations:
(i) $\frac{\partial}{\partial t} H = \Delta H + |A|^2 H$,
(ii) $\frac{\partial}{\partial t} |A|^2 = \Delta |A|^2 - 2 |\nabla A|^2 + 2 |A|^4$,
(iii) $\frac{\partial}{\partial t} H^2 = \Delta H^2 - 2 |\nabla H|^2 + 2 |A|^2 H^2$.

Proof. From Lemma 3.2 we get $\frac{\partial}{\partial t} g^{ij} = 2 H h^{ij}$, and the first identity follows from Theorem 3.4 by taking the trace. To prove the second equation, we calculate

$$\frac{\partial}{\partial t} |A|^2 = \frac{\partial}{\partial t} \big( g^{ik} g^{jl} h_{ij} h_{kl} \big) = 2 h^{ij} \Delta h_{ij} + 2 |A|^4 = \Delta |A|^2 - 2 |\nabla A|^2 + 2 |A|^4,$$

where the terms involving $C$ coming from $\partial g^{ij}/\partial t$ and from Theorem 3.4 cancel, and we used $\Delta |A|^2 = 2 h^{ij} \Delta h_{ij} + 2 |\nabla A|^2$. The last identity follows from (i) and $\frac{\partial}{\partial t} H^2 = 2H(\Delta H + |A|^2 H) = \Delta H^2 - 2 |\nabla H|^2 + 2 |A|^2 H^2$.

3.6 Corollary. (i) If $d\mu_t = \mu_t(x)\,dx$, $\mu_t = \sqrt{\det g_{ij}}$, is the measure on $M_t$, then $\partial \mu_t / \partial t = -H^2 \mu_t$. In particular the total area $|M_t|$ of $M_t$ is decreasing.
(ii) If the mean curvature of $M_0$ is strictly positive everywhere, then it will be strictly positive on $M_t$ as long as the solution exists.

Proof. The first part of the corollary follows from Lemma 3.2, whereas the second part is a consequence of the evolution equation for $H$ and the maximum principle.

4. Preserving convexity

We want to show now that our main assumption, that is, inequality (3), remains true as long as the solution of equation (1) exists. For this purpose we need the following maximum principle for tensors on manifolds, which was proved in [6, Theorem 9.1]:
Let $u^k$ be a vector field and let $g_{ij}$, $M_{ij}$ and $N_{ij}$ be symmetric tensors on a compact manifold $M$ which may all depend on time $t$. Assume that $N_{ij} = p(M_{ij}, g_{ij})$ is a polynomial in $M_{ij}$ formed by contracting products of $M_{ij}$ with itself using the metric. Furthermore, let this polynomial satisfy a null-eigenvector condition, i.e., for any null-eigenvector $X$ of $M_{ij}$ we have $N_{ij} X^i X^j \geq 0$. Then we have

4.1 Theorem (Hamilton). Suppose that on $0 \leq t < T$ the evolution equation

$$\frac{\partial}{\partial t} M_{ij} = \Delta M_{ij} + u^k \nabla_k M_{ij} + N_{ij}$$

holds, where $N_{ij} = p(M_{ij}, g_{ij})$ satisfies the null-eigenvector condition above. If $M_{ij} \geq 0$ at $t = 0$, then it remains so on $0 \leq t < T$.

An immediate consequence of Theorems 3.4 and 4.1 is

4.2 Corollary. If $h_{ij} \geq 0$ at $t = 0$, then it remains so for $0 \leq t < T$.

Proof. Set $M_{ij} = h_{ij}$, $u^k \equiv 0$ and $N_{ij} = -2 H h_{il} g^{lm} h_{mj} + |A|^2 h_{ij}$; on a null-eigenvector of $h_{ij}$ both terms of $N_{ij}$ vanish.

We also have the following stronger result.

4.3 Theorem. If $\varepsilon H g_{ij} \leq h_{ij} \leq \beta H g_{ij}$, and $H > 0$ at the beginning for some constants $0 < \varepsilon \leq 1/n \leq \beta < 1$, then this remains so on $0 \leq t < T$.

Proof. To prove the first inequality, we want to apply Theorem 4.1 with

$$M_{ij} = \frac{h_{ij}}{H} - \varepsilon g_{ij}, \qquad u^k = \frac{2}{H} g^{kl} \nabla_l H, \qquad N_{ij} = 2 \varepsilon H h_{ij} - 2 h_{il} g^{lm} h_{mj}.$$

With this choice the evolution equation in Theorem 4.1 is satisfied, as one computes from Theorem 3.4 and Corollary 3.5(i). It remains to check that $N_{ij}$ is nonnegative on the null-eigenvectors of $M_{ij}$. Assume that, for some vector $X = \{X^i\}$, $h_{ij} X^j = \varepsilon H X_i$. Then we derive

$$N_{ij} X^i X^j = 2 \varepsilon H h_{ij} X^i X^j - 2 h_{il} g^{lm} h_{mj} X^i X^j = 2 \varepsilon^2 H^2 |X|^2 - 2 \varepsilon^2 H^2 |X|^2 = 0.$$

That the second inequality remains true follows in the same way after reversing signs.

5. The eigenvalues of $A$

In this section we want to show that the eigenvalues of the second fundamental form approach each other, at least at those points where the mean curvature tends to infinity (for the unnormalized equation (1)). Following the idea of Hamilton in [6], we look at the quantity $|A|^2 - \frac{1}{n} H^2$, which measures how far the eigenvalues $\kappa_i$ of $A$ diverge from each other. We show that $|A|^2 - \frac{1}{n} H^2$ becomes small compared to $H^2$.

5.1 Theorem. There are constants $\delta > 0$ and $C_0 < \infty$ depending only on $M_0$, such that

$$|A|^2 - \frac{1}{n} H^2 \leq C_0 H^{2-\delta}$$

for all times $0 \leq t < T$.

Our goal is to bound the function $f_\sigma = \big( |A|^2 - \frac{1}{n} H^2 \big) / H^{2-\sigma}$ for sufficiently small $\sigma$. We first need an evolution equation for $f_\sigma$.

5.2 Lemma. Let $\alpha = 2 - \sigma$. Then, for any $\sigma$,

$$\frac{\partial}{\partial t} f_\sigma = \Delta f_\sigma + \frac{2(\alpha-1)}{H} \langle \nabla_i H, \nabla_i f_\sigma \rangle - \frac{(2-\alpha)(\alpha-1)}{H^2} |\nabla H|^2 f_\sigma - \frac{2}{H^{\alpha+2}} |H \nabla_i h_{kl} - \nabla_i H \cdot h_{kl}|^2 + (2-\alpha) |A|^2 f_\sigma.$$

Proof. We have, in view of the evolution equations for $|A|^2$ and $H$,

$$\frac{\partial}{\partial t} f_\sigma = \frac{1}{H^\alpha} \frac{\partial}{\partial t} \Big( |A|^2 - \frac{1}{n} H^2 \Big) - \alpha \Big( |A|^2 - \frac{1}{n} H^2 \Big) \frac{\Delta H + |A|^2 H}{H^{\alpha+1}},$$

and comparing this with $\Delta f_\sigma$, reorganizing terms and using the identity

(5) $\quad |H \nabla_i h_{kl} - \nabla_i H \cdot h_{kl}|^2 = H^2 |\nabla A|^2 + |A|^2 |\nabla H|^2 - H \langle \nabla_i |A|^2, \nabla_i H \rangle$

yields the conclusion of the lemma.

Unfortunately the absolute term $(2-\alpha) |A|^2 f_\sigma$ in this evolution equation is positive and we cannot achieve our goal by the ordinary maximum principle. But from Theorem 4.3 and Lemma 2.3(ii) we get

5.3 Corollary. For any $\sigma$ the inequality

(6) $\quad \frac{\partial}{\partial t} f_\sigma \leq \Delta f_\sigma + \frac{2(\alpha-1)}{H} \langle \nabla_i H, \nabla_i f_\sigma \rangle - \frac{\varepsilon^2}{2} H^{\sigma - 2} |\nabla H|^2 + \sigma |A|^2 f_\sigma$

holds on $0 \leq t < T$.

The additional negative term in (6) will be exploited by the divergence theorem:

5.4 Lemma. Let $p \geq 2$. Then for any $\eta > 0$ and any $0 \leq \sigma \leq \frac{1}{2}$ we have the estimate

$$n \varepsilon^2 \int f_\sigma^p H^2 \, d\mu \leq (2 \eta p + 5) \int \frac{f_\sigma^{p-1}}{H^{2-\sigma}} |\nabla H|^2 \, d\mu + \frac{p-1}{\eta} \int f_\sigma^{p-2} |\nabla f_\sigma|^2 \, d\mu.$$

Proof. Let us denote by $\mathring h_{ij} = h_{ij} - \frac{1}{n} H g_{ij}$ the trace-free second fundamental form. In view of Lemma 2.1(ii), the identity (5) may then be rewritten so that the term $Z f_\sigma^{p-1} H^{-\alpha}$ appears explicitly. Multiplying by $f_\sigma^{p-1}$, integrating, integrating by parts (using the Codazzi equation) and taking the relations

(7) $\quad ab \leq \frac{\eta}{2} a^2 + \frac{1}{2\eta} b^2, \qquad \eta > 0,$

into account, we derive, for any $\eta > 0$,

$$\int \frac{Z}{H^\alpha} f_\sigma^{p-1} \, d\mu \leq (2 \eta p + 5) \int \frac{f_\sigma^{p-1}}{H^{2-\sigma}} |\nabla H|^2 \, d\mu + \frac{p-1}{\eta} \int f_\sigma^{p-2} |\nabla f_\sigma|^2 \, d\mu.$$

The conclusion then follows from Lemma 2.3(i) and Theorem 4.3, since Lemma 2.3(i) gives $Z / H^\alpha \geq n \varepsilon^2 H^2 f_\sigma$.
Now we can show that high $L^p$-norms of $f_\sigma$ are bounded, provided $\sigma$ is sufficiently small.

5.5 Lemma. There is a constant $C_1 < \infty$ depending only on $M_0$, such that, for all

(8) $\quad p \geq 100 \varepsilon^{-2}, \qquad \sigma \leq \tfrac{1}{2} \varepsilon^3 p^{-1/2},$

the inequality $\big( \int f_\sigma^p \, d\mu \big)^{1/p} \leq C_1$ holds on $0 \leq t < T$.

Proof. We choose

$$C_1 = (|M_0| + 1) \sup_{\sigma \in [0,1/2]} \Big( \sup_{M_0} f_\sigma \Big),$$

and it is then sufficient to show

(9) $\quad \frac{\partial}{\partial t} \int f_\sigma^p \, d\mu \leq 0.$

To accomplish this, we multiply inequality (6) by $p f_\sigma^{p-1}$ and obtain

$$\frac{\partial}{\partial t} \int f_\sigma^p \, d\mu + p(p-1) \int f_\sigma^{p-2} |\nabla f_\sigma|^2 \, d\mu + \int H^2 f_\sigma^p \, d\mu \leq 2(\alpha - 1) p \int \frac{f_\sigma^{p-1}}{H} \langle \nabla_i H, \nabla_i f_\sigma \rangle \, d\mu - \frac{\varepsilon^2}{2} p \int \frac{f_\sigma^{p-1}}{H^{2-\sigma}} |\nabla H|^2 \, d\mu + \sigma p \int |A|^2 f_\sigma^p \, d\mu,$$

where the last term on the left-hand side occurs due to the time dependence of $d\mu$ as stated in Corollary 3.6(i). In view of (7) we can estimate the first term on the right-hand side, and since $p - 1 \geq 100 \varepsilon^{-2} - 1 \geq 4 \varepsilon^{-2}$ and $|A|^2 \leq H^2$, we conclude

$$\frac{\partial}{\partial t} \int f_\sigma^p \, d\mu + \tfrac{1}{2} p(p-1) \int f_\sigma^{p-2} |\nabla f_\sigma|^2 \, d\mu + \frac{\varepsilon^2}{4} p \int \frac{f_\sigma^{p-1}}{H^{2-\sigma}} |\nabla H|^2 \, d\mu \leq \sigma p \int H^2 f_\sigma^p \, d\mu.$$

The assumption (8) on $\sigma$ and Lemma 5.4 yield

$$\frac{\partial}{\partial t} \int f_\sigma^p \, d\mu + \tfrac{1}{2} p(p-1) \int f_\sigma^{p-2} |\nabla f_\sigma|^2 \, d\mu + \frac{\varepsilon^2}{4} p \int \frac{f_\sigma^{p-1}}{H^{2-\sigma}} |\nabla H|^2 \, d\mu \leq \frac{\sigma p}{n \varepsilon^2} \Big( (2 \eta p + 5) \int \frac{f_\sigma^{p-1}}{H^{2-\sigma}} |\nabla H|^2 \, d\mu + \frac{p-1}{\eta} \int f_\sigma^{p-2} |\nabla f_\sigma|^2 \, d\mu \Big)$$

for any $\eta > 0$. Then (9) follows if we choose $\eta = \varepsilon p^{-1/2}$.

5.6 Corollary. If we assume $p$ so large and $\sigma$ so small that $\sigma' = \sigma + \frac{n}{p}$ still satisfies (8), then we have $\big( \int H^n f_\sigma^p \, d\mu \big)^{1/p} \leq C_1$ on $0 \leq t < T$.

Proof. This follows from Lemma 5.5 since $H^n f_\sigma^p = f_{\sigma'}^p$ with $\sigma' = \sigma + \frac{n}{p}$.

We are now ready to bound $f_\sigma$ by an iteration similar to the methods used in [2], [5]. We will need the following Sobolev inequality from [7].

5.7 Lemma. For all Lipschitz functions $v$ on $M$ we have

$$\Big( \int_M |v|^{n/(n-1)} \, d\mu \Big)^{(n-1)/n} \leq c(n) \Big( \int_M |\nabla v| \, d\mu + \int_M H |v| \, d\mu \Big).$$

Proof of Theorem 5.1. Multiply inequality (6) by $p f_{\sigma,k}^{p-1}$, where $f_{\sigma,k} = \max(f_\sigma - k, 0)$ for all $k \geq k_0 = \sup_{M_0} f_\sigma$, and denote by $A(k)$ the set where $f_\sigma > k$. Then we derive as in the proof of Lemma 5.5, for $p \geq 100 \varepsilon^{-2}$,

$$\frac{\partial}{\partial t} \int f_{\sigma,k}^p \, d\mu + \tfrac{1}{2} p(p-1) \int f_{\sigma,k}^{p-2} |\nabla f_{\sigma,k}|^2 \, d\mu \leq \sigma p \int_{A(k)} H^2 f_\sigma^p \, d\mu.$$

On $A(k)$ we have $f_{\sigma,k} \leq f_\sigma$, and thus we obtain with $v = f_{\sigma,k}^{p/2}$

$$\sup_{[0,T)} \int_{A(k)} v^2 \, d\mu + \int_0^T\!\!\int_{A(k)} |\nabla v|^2 \, d\mu \, dt \leq c \, \sigma p \int_0^T\!\!\int_{A(k)} H^2 f_\sigma^p \, d\mu \, dt.$$

Let us agree to denote by $c_n$ any constant which only depends on $n$. Then Lemma 5.7 and the Hölder inequality lead to

$$\Big( \int v^{2q} \, d\mu \Big)^{1/q} \leq c_n \int |\nabla v|^2 \, d\mu + c_n \Big( \int_{\operatorname{supp} v} H^n \, d\mu \Big)^{2/n} \Big( \int v^{2q} \, d\mu \Big)^{1/q},$$

where $q = n/(n-2)$ if $n > 2$, and $q < \infty$ arbitrary if $n = 2$. Since $\operatorname{supp} v \subset A(k)$, we have in view of Corollary 5.6

$$\Big( \int_{\operatorname{supp} v} H^n \, d\mu \Big)^{2/n} \leq k^{-2p/n} \Big( \int_{A(k)} H^n f_\sigma^p \, d\mu \Big)^{2/n} \leq k^{-2p/n} C_1^{2p/n},$$

provided $p$ and $\sigma$ are admissible there. Thus, under this assumption we conclude for $k \geq k_1 = k_1(k_0, C_1, n, \varepsilon)$ that

$$\sup_{[0,T)} \int_{A(k)} v^2 \, d\mu + c_n \int_0^T \Big( \int_{A(k)} v^{2q} \, d\mu \Big)^{1/q} dt \leq \sigma p \int_0^T\!\!\int_{A(k)} H^2 f_\sigma^p \, d\mu \, dt.$$

Now we use the interpolation inequalities for $L^p$-spaces

$$\Big( \int_{A(k)} v^{2 q_0} \, d\mu \Big)^{1/q_0} \leq \Big( \int_{A(k)} v^{2q} \, d\mu \Big)^{a/q} \Big( \int_{A(k)} v^2 \, d\mu \Big)^{1-a}, \qquad \frac{1}{q_0} = \frac{a}{q} + 1 - a,$$

with $1 < q_0 < q$. Then we have

$$\int_0^T\!\!\int_{A(k)} v^{2 q_0} \, d\mu \, dt \leq c_n \, \sigma p \int_0^T\!\!\int_{A(k)} H^2 f_\sigma^p \, d\mu \, dt,$$

and, using the Hölder inequality with some $r > 1$ to be chosen and $\|A(k)\| = \int_0^T\!\!\int_{A(k)} d\mu \, dt$,

$$\int_0^T\!\!\int_{A(k)} f_{\sigma,k}^p \, d\mu \, dt \leq c_n \, \sigma p \, \|A(k)\|^{2 - 1/q_0 - 1/r} \Big( \int_0^T\!\!\int_{A(k)} H^{2r} f_\sigma^{pr} \, d\mu \, dt \Big)^{1/r}.$$

If we now choose $r$ so large that $2 - 1/q_0 - 1/r = \gamma > 1$, then $r$ only depends on $n$, and we may take

(10) $\quad p$ and $\sigma$ such that Corollary 5.6 still applies with $pr$ in place of $p$,

so that the last factor is bounded and

$$(h - k)^p \|A(h)\| \leq \int_0^T\!\!\int_{A(k)} f_{\sigma,k}^p \, d\mu \, dt \leq c \, \|A(k)\|^\gamma \quad \text{for all } h > k \geq k_1.$$

By a well-known result (see e.g. [8, Lemma 4.1]) we conclude that $f_\sigma$ is bounded for some $p$ and $\sigma$ satisfying (10). Since $\int d\mu \leq |M_t| \leq |M_0|$ by Corollary 3.6(i), Theorem 5.1 follows with $\delta = \sigma$; it remains only to show that $T$ is finite.

5.8 Lemma. $T < \infty$.

Proof. The mean curvature $H$ satisfies the evolution equation

$$\frac{\partial}{\partial t} H = \Delta H + H |A|^2 \geq \Delta H + \frac{1}{n} H^3.$$

Then let $\varphi$ be the solution of the ordinary differential equation

$$\frac{d\varphi}{dt} = \frac{1}{n} \varphi^3, \qquad \varphi(0) = H_{\min}(0) > 0.$$

If we consider $\varphi$ as a function on $M \times [0, T)$, we get

$$\frac{\partial}{\partial t} (H - \varphi) \geq \Delta (H - \varphi) + \frac{1}{n} (H^2 + H\varphi + \varphi^2)(H - \varphi),$$

such that by the maximum principle $H \geq \varphi$ on $0 \leq t < T$. On the other hand $\varphi$ is explicitly given by

$$\varphi(t) = \Big( H_{\min}^{-2}(0) - \frac{2t}{n} \Big)^{-1/2}.$$

And since $\varphi \to \infty$ as $t \to \frac{n}{2} H_{\min}^{-2}(0)$, the result follows. Moreover, in the case that $M_0$ is a sphere, $\varphi$ describes exactly the evolution of the mean curvature and so the bound $T \leq \frac{n}{2} H_{\min}^{-2}(0)$ is sharp. This completes the proof of Theorem 5.1.
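As an illustration of the sharpness just mentioned (a worked example added here, not part of the original text): for a round sphere of radius $r(t)$ we have $H = n/r$, and (1) reduces to the ODE $\frac{dr}{dt} = -\frac{n}{r}$, so $r(t) = \sqrt{r(0)^2 - 2nt}$ and the sphere shrinks to a point at $T = r(0)^2 / 2n$. Since $H_{\min}(0) = n / r(0)$, this is exactly $\frac{n}{2} H_{\min}^{-2}(0)$.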
6. A bound on $|\nabla H|$

In order to compare the mean curvature at different points of the surface $M_t$, we bound the gradient of the mean curvature as follows.

6.1 Theorem. For any $\eta > 0$ there is a constant $C(\eta, M_0, n)$ such that

$$|\nabla H|^2 \leq \eta H^4 + C(\eta, M_0, n) \quad \text{on } 0 \leq t < T.$$

Proof. First of all we need an evolution equation for the gradient of the mean curvature.

6.2 Lemma. We have the evolution equation

$$\frac{\partial}{\partial t} |\nabla H|^2 = \Delta |\nabla H|^2 - 2 |\nabla^2 H|^2 + 2 |A|^2 |\nabla H|^2 + 2 \langle \nabla_i H \cdot h_{mj}, \nabla_j H \cdot h_{im} \rangle + 2 H \langle \nabla_i H, \nabla_i |A|^2 \rangle.$$

6.3 Corollary.

$$\frac{\partial}{\partial t} |\nabla H|^2 \leq \Delta |\nabla H|^2 - 2 |\nabla^2 H|^2 + 4 |A|^2 |\nabla H|^2 + 2 H \langle \nabla_i H, \nabla_i |A|^2 \rangle.$$

Proof of Lemma 6.2. Using the evolution equations for $H$ and $g$ we obtain

$$\frac{\partial}{\partial t} |\nabla H|^2 = 2 H h^{ij} \nabla_i H \nabla_j H + 2 g^{ij} \nabla_i \Big( \frac{\partial H}{\partial t} \Big) \nabla_j H,$$

and the result then follows from the relations

$$\Delta |\nabla H|^2 = 2 g^{kl} \Delta (\nabla_k H) \nabla_l H + 2 |\nabla^2 H|^2,$$
$$\Delta (\nabla_k H) = \nabla_k (\Delta H) + g^{ij} \nabla_i H \big( H h_{kj} - h_{km} g^{mn} h_{nj} \big).$$

6.4 Lemma. We have the inequality

$$\frac{\partial}{\partial t} \frac{|\nabla H|^2}{H} \leq \Delta \frac{|\nabla H|^2}{H} + \frac{2}{H} \Big\langle \nabla_i H, \nabla_i \frac{|\nabla H|^2}{H} \Big\rangle + 3 |A|^2 \frac{|\nabla H|^2}{H} + 2 \langle \nabla_i H, \nabla_i |A|^2 \rangle.$$

Proof. We compute

$$\frac{\partial}{\partial t} \frac{|\nabla H|^2}{H} = \frac{H \frac{\partial}{\partial t} |\nabla H|^2 - |\nabla H|^2 \frac{\partial}{\partial t} H}{H^2},$$

insert Corollary 6.3 and Corollary 3.5(i), and the result follows from Schwarz' inequality.

We need two more evolution equations.

6.5 Lemma. We have
(i) $\frac{\partial}{\partial t} H^3 = \Delta H^3 - 6 H |\nabla H|^2 + 3 |A|^2 H^3$,
(ii) an estimate

$$\frac{\partial}{\partial t} \Big( H \Big( |A|^2 - \frac{1}{n} H^2 \Big) \Big) \leq \Delta \Big( H \Big( |A|^2 - \frac{1}{n} H^2 \Big) \Big) - \frac{n-1}{n} H |\nabla A|^2 + C_3 |\nabla A|^2 + 3 |A|^2 H \Big( |A|^2 - \frac{1}{n} H^2 \Big),$$

with a constant $C_3$ depending on $n$, $C_0$ and $\delta$, i.e., only on $M_0$.

Proof. The first identity is an easy consequence of the evolution equation for $H$. To prove the inequality (ii), we derive from Corollary 3.5(ii) and (iii) an identity for the left-hand side; using Theorem 5.1 and (7) the lower order terms are estimated by $C(n, C_0, \delta) |\nabla A|^2$, and the conclusion follows from Lemma 2.2(ii).

We are now going to bound the function

$$f = \frac{|\nabla H|^2}{H} + N \Big( |A|^2 - \frac{1}{n} H^2 \Big) H + N C_3 |A|^2 - \eta H^3$$

for some large $N$ depending only on $n$ and $0 < \eta < 1$. From Lemmas 6.4 and 6.5 we obtain an evolution inequality for $f$ containing the terms $6 \eta H |\nabla H|^2$, $3 N |A|^2 H \big( |A|^2 - \frac{1}{n} H^2 \big)$ and $-3 \eta |A|^2 H^3$. Since $\frac{1}{n} H^2 \leq |A|^2 \leq H^2$, $|\nabla H|^2 \leq n |\nabla A|^2$ and $\eta < 1$, we may choose $N$ depending only on $n$ so large that all terms involving derivatives of the curvature are nonpositive, and obtain

$$\frac{\partial f}{\partial t} \leq \Delta f + 2 N C_3 H^4 + 3 N H^3 \Big( |A|^2 - \frac{1}{n} H^2 \Big) - 3 \eta |A|^2 H^3.$$

By Theorem 5.1 we have

$$2 N C_3 H^4 + 3 N H^3 \Big( |A|^2 - \frac{1}{n} H^2 \Big) \leq 2 N C_3 H^4 + 3 N C_0 H^{5-\delta},$$

while $-3 \eta |A|^2 H^3 \leq -\frac{3\eta}{n} H^5$, and hence $\frac{\partial f}{\partial t} \leq \Delta f + C(\eta, M_0)$. This implies that

$$\max_{M_t} f(t) \leq \max_{M_0} f(0) + C(\eta, M_0) \, t,$$

and since we already have a bound for $T$, $f$ is bounded by some (possibly different) constant $C(\eta, M_0)$. Therefore

$$|\nabla H|^2 \leq \eta H^4 + C(\eta, M_0) H \leq 2 \eta H^4 + C(\eta, M_0),$$

which proves Theorem 6.1 since $\eta$ is arbitrary.
in view of the evolution equation for $g = \{g_{ij}\}$. Then we may proceed exactly as in [6, §13] to conclude
7.1 Theorem. For any $m$ we have an equation
$$\frac{\partial}{\partial t}|\nabla^m A|^2 = \Delta|\nabla^m A|^2 - 2|\nabla^{m+1}A|^2 + \sum_{i+j+k=m}\nabla^i A * \nabla^j A * \nabla^k A * \nabla^m A.$$
Now we need the following interpolation inequality, which is proven in [6, §12].
7.2 Lemma. If $T$ is any tensor and if $1 \le i \le m-1$, then with a constant $C(n, m)$ which is independent of the metric $g$ and the connection $\Gamma$ we have the estimate
$$\int_M |\nabla^i T|^{2m/i}\, d\mu \le C\,\max_M |T|^{2(m/i-1)} \int_M |\nabla^m T|^2\, d\mu.$$
This leads to
7.3 Theorem. We have the estimate
$$\frac{d}{dt}\int_{M_t}|\nabla^m A|^2\,d\mu + 2\int_{M_t}|\nabla^{m+1}A|^2\,d\mu \le C\,\max_{M_t}|A|^2\int_{M_t}|\nabla^m A|^2\,d\mu,$$
where $C$ only depends on $n$ and the number of derivatives $m$.
Proof. By integrating the identity in Theorem 7.1 and using the generalised Hölder inequality we derive
$$\frac{d}{dt}\int_{M_t}|\nabla^m A|^2\,d\mu + 2\int_{M_t}|\nabla^{m+1}A|^2\,d\mu \le C\Big(\int|\nabla^i A|^{2m/i}d\mu\Big)^{i/2m}\Big(\int|\nabla^j A|^{2m/j}d\mu\Big)^{j/2m}\Big(\int|\nabla^k A|^{2m/k}d\mu\Big)^{k/2m}\Big(\int|\nabla^m A|^2d\mu\Big)^{1/2}$$
with $i + j + k = m$. The interpolation inequality above bounds the first factor by a power of $\max_M|A|$ times a power of $\int|\nabla^m A|^2\,d\mu$, and if we do the same with $j$ and $k$, the theorem follows.
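(The following one-line Gronwall observation is not on the original page; it records how estimates of this type are exploited below, e.g. in Lemma 8.3.)
$$\frac{d}{dt}\int_{M_t}|\nabla^m A|^2\,d\mu \le C\max_{M_t}|A|^2\int_{M_t}|\nabla^m A|^2\,d\mu \quad\Longrightarrow\quad \int_{M_t}|\nabla^m A|^2\,d\mu \le \Big(\int_{M_0}|\nabla^m A|^2\,d\mu\Big)\exp\Big(C\int_0^t \max_{M_\tau}|A|^2\,d\tau\Big),$$
so the integrals remain bounded as long as the curvature stays bounded on a finite time interval.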
8. The maximal time interval
We already stated that equation (1) has a (unique) smooth solution on a short time interval if the uniformly convex, closed and compact initial surface $M_0$ is smooth enough. Moreover, we have
8.1 Theorem. The solution of equation (1) exists on a maximal time interval $0 \le t < T < \infty$, and $\max_{M_t}|A|^2$ becomes unbounded as $t$ approaches $T$.
Proof. Let $0 \le t < T$ be the maximal time interval where the solution exists. We showed in Lemma 5.8 that $T < \infty$. Here we want to show that if $\max_{M_t}|A|^2 \le C$ for $t \to T$, the surfaces $M_t$ converge to a smooth limit surface $M_T$. We could then use the local existence result to continue the solution to later times, in contradiction to the maximality of $T$. In the following we suppose
(11) $\quad \max_{M_t}|A|^2 \le C$ on $0 \le t < T$,
and assume that as in the introduction $M_t$ is given locally by $F(x, t)$ defined for $x \in U \subset \mathbb{R}^n$ and $0 \le t < T$. Then from the evolution equation (1) we obtain
$$|F(x, \sigma) - F(x, \rho)| \le \int_\sigma^\rho |H(x,\tau)|\,d\tau$$
for $0 \le \sigma \le \rho < T$. Since $H$ is bounded, $F(\cdot, t)$ tends to a unique continuous limit $F(\cdot, T)$ as $t \to T$. In order to conclude that $F(\cdot, T)$ represents a surface $M_T$, we use [6, Lemma 14.2].
8.2 Lemma. Let $g_{ij}$ be a time dependent metric on a compact manifold $M$ for $0 \le t < T \le \infty$. Suppose
$$\int_0^T \max_M \Big|\frac{\partial}{\partial t}g_{ij}\Big|\,dt \le C < \infty.$$
Then the metrics $g_{ij}(t)$ for all different times are equivalent, and they converge as $t \to T$ uniformly to a positive definite metric tensor $g_{ij}(T)$ which is continuous and also equivalent.
Here the norm $\big|\frac{\partial}{\partial t}g_{ij}\big|$ is taken with respect to the metric $g_{ij}(t)$ itself. In our case all the surfaces $M_t$ are diffeomorphic and we can apply Lemma 8.2 in view of Lemma 3.2, assumption (11) and the fact that $T < \infty$. It remains only to show that $M_T$ is smooth. To accomplish this it is enough to prove that
all derivatives of the second fundamental form are bounded, since the evolution equations (1) and (4) then imply bounds on all derivatives of $F$.
8.3 Lemma. If (11) holds on $0 \le t < T$ and $T < \infty$, then $|\nabla^m A| \le C_m$ for all $m$. The constant $C_m$ depends on $n$, $M_0$ and $C$.
Proof. Theorem 7.3 immediately implies
$$\int_{M_t}|\nabla^m A|^2\,d\mu \le C_m,$$
since the inequality $dg/dt \le Cg$ on a finite time interval gives a bound on $g$ in terms of its initial data. Then Lemma 7.2 yields
$$\int_{M_t}|\nabla^m A|^p\,d\mu \le C(m, p)$$
for all $m$ and $p < \infty$. The conclusion of the lemma now follows if we apply a version of the Sobolev inequality in Lemma 5.7 to the functions $g_m = |\nabla^m A|^2$.
Thus the surfaces $M_t$ converge to $M_T$ in the $C^\infty$-topology as $t \to T$. By Theorem 3.1 this contradicts the maximality of $T$ and proves Theorem 8.1.
We now want to compare the maximum value of the mean curvature $H_{\max}$ to the minimum value $H_{\min}$ as $t$ tends to $T$. Since $|A|^2 \le H^2$, we obtain from Theorem 8.1 that $H_{\max}$ is unbounded as $t$ approaches $T$.
8.4 Theorem. We have $H_{\max}/H_{\min} \to 1$ as $t \to T$.
Proof. We will follow Hamilton's idea to use Myers' theorem.
8.5 Theorem (Myers). If $R_{ij} \ge (n-1)Kg_{ij}$ along a geodesic of length at least $\pi K^{-1/2}$ on $M$, then the geodesic has conjugate points.
To apply the theorem we need
8.6 Lemma. If $h_{ij} \ge \varepsilon H g_{ij}$ holds on $M$ with some $0 < \varepsilon \le 1/n$, then $R_{ij} \ge (n-1)\varepsilon^2 H^2 g_{ij}$.
Proof of Lemma 8.6. This is immediate from the identity $R_{ij} = Hh_{ij} - h_{im}g^{mn}h_{nj}$.
Now we obtain from Theorem 6.1 that for every $\eta > 0$ we can find a constant $C(\eta)$ with $|\nabla H| \le \tfrac12\eta^2 H^2 + C(\eta)$ on $0 \le t < T$. Since $H_{\max}$ becomes unbounded as $t \to T$, there is some $\theta < T$ with $C(\eta) \le \tfrac12\eta^2 H^2_{\max}$ at $t = \theta$. Then
(12) $\quad |\nabla H| \le \eta^2 H^2_{\max}$ at time $t = \theta$.
Now let $x$ be a point on $M_\theta$ where $H$ assumes its maximum. Along any geodesic starting at $x$ of length at most $(\eta H_{\max})^{-1}$ we have $H \ge (1-\eta)H_{\max}$. In view of Lemma 8.6 and Theorem 8.5 those geodesics then reach any point of $M_\theta$ if $\eta$ is small, and thus
(13) $\quad H_{\min} \ge (1-\eta)H_{\max}$ on $M_\theta$.
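(The arithmetic behind the step from (12) to the geodesic statement, spelled out; this is a reconstruction of the standard argument rather than text from the original page.) For any point $y$ within intrinsic distance $(\eta H_{\max})^{-1}$ of the maximum point $x$,
$$H(y) \ \ge\ H(x) - \sup_{M_\theta}|\nabla H|\; d(x,y) \ \ge\ H_{\max} - \eta^2 H_{\max}^2\,(\eta H_{\max})^{-1} \ =\ (1-\eta)\,H_{\max},$$
and by Lemma 8.6 such points carry the Ricci lower bound required in Myers' theorem.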
Since $H_{\min}$ is nondecreasing we have $H_{\max}(t) \ge \tfrac12 H_{\max}(\theta)$ on $\theta \le t < T$, and hence the inequalities (12) and (13) remain true on all of $\theta \le t < T$, which proves Theorem 8.4.
We need the following consequences of Theorem 8.4.
8.7 Theorem. We have $\int_0^T H^2_{\max}(\tau)\,d\tau = \infty$.
Proof. Look at the ordinary differential equation
$$\frac{dg}{dt} = H^2_{\max}\,g, \qquad g(0) = H_{\max}(0).$$
We get a solution since $H_{\max}$ is continuous in $t$. Furthermore we have
$$\frac{\partial}{\partial t}H = \Delta H + |A|^2 H \le \Delta H + H^2_{\max}H, \qquad \frac{\partial}{\partial t}(H - g) \le \Delta(H - g) + H^2_{\max}(H - g).$$
So we obtain $H \le g$ for $0 \le t < T$ by the maximum principle, and $g \to \infty$ as $t \to T$. But now we have
$$\int_0^t H^2_{\max}(\tau)\,d\tau = \log\{g(t)/g(0)\} \to \infty \quad\text{as } t \to T,$$
which proves Theorem 8.7.
8.8 Corollary. If, as in the introduction, $h$ is the average of the squared mean curvature,
$$h = \int_{M_t} H^2\,d\mu \Big/ \int_{M_t} d\mu,$$
then $\int_0^T h(\tau)\,d\tau = \infty$.
Proof. This follows from Theorems 8.4 and 8.7 since $H^2_{\min} \le h \le H^2_{\max}$.
8.9 Corollary. We have $|A|^2/H^2 - 1/n \to 0$ as $t \to T$.
Proof. This is a consequence of Theorem 5.1 since $H_{\min} \to \infty$ by Theorem 8.4.
Obviously $M_{t_1}$ stays in the region of $\mathbb{R}^{n+1}$ which is enclosed by $M_{t_2}$ for $t_1 > t_2$, since the surfaces are shrinking. By Theorem 8.4 the diameter of $M_t$ tends to zero as $t \to T$. This implies the first part of Theorem 1.1.
9. The normalized equation
As we have seen in the last sections, the solution of the unnormalized equation (1)
$$\frac{\partial}{\partial t}F = \Delta F = -H\nu$$
shrinks down to a single point $\mathcal{O}$ after a finite time. Let us assume from now on that $\mathcal{O}$ is the origin of $\mathbb{R}^{n+1}$. Note that $\mathcal{O}$ lies in the region enclosed by $M_t$ for all times $0 \le t < T$. We are going to normalize equation (1) by keeping some geometrical quantity fixed, for example the total area of the surfaces $M_t$. We could as well have taken the enclosed volume, which leads to a slightly different normalized equation. As in the introduction we multiply the solution $F$ of (1) at each time $0 \le t < T$ by a positive constant $\psi(t)$ such that the total area of the surface $\tilde M_t$ given by $\tilde F = \psi F$ is equal to the total area of $M_0$:
(14) $\quad \int_{\tilde M_t} d\tilde\mu = |M_0|$ on $0 \le t < T$.
Then we introduce a new time variable by
$$\tilde t(t) = \int_0^t \psi^2(\tau)\,d\tau,$$
such that $d\tilde t/dt = \psi^2$. We have $\tilde g_{ij} = \psi^2 g_{ij}$, $\tilde h_{ij} = \psi h_{ij}$, $\tilde H = \psi^{-1}H$, and so on. If we differentiate (14) with respect to time $t$, we obtain
(15) $\quad \dfrac{1}{\psi}\dfrac{d\psi}{dt} = \dfrac{1}{n}\,h.$
Now we can derive the normalized evolution equation for $\tilde F$ on a different maximal time interval $0 \le \tilde t < \tilde T$:
$$\frac{\partial \tilde F}{\partial \tilde t} = \psi^{-2}\frac{\partial}{\partial t}(\psi F) = -\tilde H\tilde\nu + \frac{1}{n}\tilde h\tilde F,$$
as stated in (2).
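(For the record, the differentiation of (14) can be written out explicitly; this sketch is added for clarity and only uses the unnormalized area evolution $\frac{d}{dt}\,d\mu = -H^2\,d\mu$ together with $d\tilde\mu = \psi^n\,d\mu$.)
$$0 = \frac{d}{dt}\Big(\psi^n\!\int_{M_t} d\mu\Big) = n\,\psi^{n-1}\frac{d\psi}{dt}\int_{M_t} d\mu - \psi^n\!\int_{M_t} H^2\,d\mu \quad\Longrightarrow\quad \frac{1}{\psi}\frac{d\psi}{dt} = \frac{1}{n}\,h,$$
with $h = \int H^2\,d\mu \big/ \int d\mu$ as in Corollary 8.8; this is exactly (15).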
We can also compute the new evolution equations for other geometric quantities.
9.1 Lemma. Suppose the expressions $P$ and $Q$, formed from $g$ and $A$, satisfy $\partial P/\partial t = \Delta P + Q$, and $P$ has "degree" $\alpha$, that is, $\tilde P = \psi^\alpha P$. Then $Q$ has degree $(\alpha - 2)$ and
$$\frac{\partial \tilde P}{\partial \tilde t} = \tilde\Delta\tilde P + \tilde Q + \frac{\alpha}{n}\tilde h\tilde P.$$
Proof. We calculate with the help of (15)
$$\frac{\partial\tilde P}{\partial\tilde t} = \psi^{-2}\frac{\partial}{\partial t}(\psi^\alpha P) = \psi^{-2}\Big(\alpha\psi^{\alpha-1}\frac{d\psi}{dt}P + \psi^\alpha(\Delta P + Q)\Big) = \frac{\alpha}{n}\tilde h\tilde P + \tilde\Delta\tilde P + \tilde Q.$$
The results in Theorem 4.3, Theorem 8.4 and Corollary 8.9 convert unchanged to the normalized equation, since at each time the whole configuration is only dilated by a constant factor.
9.2 Lemma. We have
(i) $\tilde h_{ij} \ge \varepsilon\tilde H\tilde g_{ij}$,
(ii) $\tilde H_{\max}/\tilde H_{\min} \to 1$ as $\tilde t \to \tilde T$,
(iii) $|\tilde A|^2/\tilde H^2 - \frac{1}{n} \to 0$ as $\tilde t \to \tilde T$.
Now we prove
9.3 Lemma. There are constants $C_4$ and $C_5$ such that for $0 \le \tilde t < \tilde T$
$$0 < C_4 \le \tilde H_{\min} \le \tilde H_{\max} \le C_5 < \infty.$$
Proof. The surface $\tilde M_{\tilde t}$ encloses a volume $\tilde V$ which is given by the divergence theorem as
$$\tilde V = \frac{1}{n+1}\int_{\tilde M}\langle\tilde F, \tilde\nu\rangle\,d\tilde\mu.$$
Since the origin is in the region enclosed by $\tilde M_{\tilde t}$ for all times as well, we have that $\langle\tilde F, \tilde\nu\rangle$ is everywhere positive on $\tilde M_{\tilde t}$. By the isoperimetric inequality we have
$$\tilde V \le c_n|M_0|^{(n+1)/n}.$$
On the other hand we get from the first variation formula
$$|M_0| = |\tilde M_{\tilde t}| = \frac{1}{n}\int \tilde H\langle\tilde F, \tilde\nu\rangle\,d\tilde\mu \le \frac{n+1}{n}\tilde H_{\max}\tilde V,$$
which proves the first inequality in view of Lemma 9.2(ii). To obtain the upper bound we observe that in view of $\tilde h_{ij} \ge \varepsilon\tilde H_{\min}\tilde g_{ij}$ the enclosed volume $\tilde V$ can be estimated by the volume of a ball of radius $(\varepsilon\tilde H_{\min})^{-1}$:
$$\tilde V \le c_n(\varepsilon\tilde H_{\min})^{-(n+1)}.$$
The first variation formula yields
$$\tilde V \ge \frac{1}{(n+1)\tilde H_{\max}}\int \langle\tilde F, \tilde\nu\rangle\tilde H\,d\tilde\mu \ge \frac{n}{(n+1)\tilde H_{\max}}|M_0|,$$
which proves the upper bound, again in view of Lemma 9.2(ii).
9.4 Corollary. $\tilde T = \infty$.
Proof. We have $d\tilde t/dt = \psi^2$ and $\tilde H^2 = \psi^{-2}H^2$ such that
$$\int_0^{\tilde T}\tilde h(\tilde\tau)\,d\tilde\tau = \int_0^T h(\tau)\,d\tau = \infty$$
by Corollary 8.8. But by Lemma 9.3 we have $\tilde h \le \tilde H^2_{\max} \le C_5^2$, and therefore $\tilde T = \infty$.
10. Convergence to the sphere
We want to show that the surfaces $\tilde M_{\tilde t}$ converge to a sphere in the $C^\infty$-topology as $\tilde t \to \infty$. Let us agree in this section to denote by $\delta > 0$ and $C < \infty$ various constants depending on known quantities. We start with
10.1 Lemma. There are constants $\delta > 0$ and $C < \infty$ such that
$$\int_{\tilde M}\Big(|\tilde A|^2 - \frac{1}{n}\tilde H^2\Big)\,d\tilde\mu \le Ce^{-\delta\tilde t}.$$
Proof. Let $f$ be the function $f = |A|^2/H^2 - 1/n$, which has degree 0. Then we conclude as in the proof of Lemma 5.5 that, for some large $p$ and a small $\delta$ depending on $\varepsilon$,
$$\frac{d}{d\tilde t}\int \tilde f^p\,d\tilde\mu \le -\delta\int \tilde f^p|\tilde A|^2\,d\tilde\mu + \int(\tilde h - \tilde H^2)\tilde f^p\,d\tilde\mu,$$
since $\frac{d}{d\tilde t}\,d\tilde\mu = (\tilde h - \tilde H^2)\,d\tilde\mu$. In view of Lemma 9.2(ii) and Lemma 9.3 we have for all times $\tilde t$ larger than some $\tilde t_0$
$$\frac{d}{d\tilde t}\int\tilde f^p\,d\tilde\mu \le -\delta\int\tilde f^p\,d\tilde\mu$$
with a different $\delta$. Thus
$$\int\tilde f^p\,d\tilde\mu \le Ce^{-\delta\tilde t},$$
where $C$ now depends on $\tilde t_0$ as well. The conclusion of the lemma then follows from the Hölder inequality, $|\tilde M_{\tilde t}| = |M_0|$ and Lemma 9.3.
Now let us denote by $\bar h$ the mean value of the mean curvature on $\tilde M$:
$$\bar h = \int \tilde H\,d\tilde\mu \Big/ \int d\tilde\mu.$$
10.2 Lemma. We have
$$\int(\tilde H - \bar h)^2\,d\tilde\mu = \int\big(\tilde H^2 - \bar h^2\big)\,d\tilde\mu \le Ce^{-\delta\tilde t}.$$
Proof. In view of the Poincaré inequality it is enough to show that $\int|\nabla\tilde H|^2\,d\tilde\mu$ decreases exponentially. Note that the constant in the Poincaré inequality can be chosen independently of $\tilde t$ since we got control on the curvature in Lemma 9.2 and Lemma 9.3. Look at the function
$$g = \frac{|\nabla H|^2}{H} + NH\Big(|A|^2 - \frac{1}{n}H^2\Big),$$
where $N$ is a large constant depending only on $n$. The degree of $g$ is $-3$, and from the results in §6 we obtain an evolution inequality for $\tilde g$ valid for all times larger than some $\tilde t_1$. Here we used that the error term becomes small compared to $\tilde H|\nabla\tilde A|^2$ as $\tilde t \to \infty$, since $|\mathring h_{kl}| = (|A|^2 - H^2/n)^{1/2}$ tends to zero. Now, using Lemma 10.1 and $C_4 \le \tilde H \le C_5$, we conclude for $\tilde t \ge \tilde t_1$,
$$\frac{d}{d\tilde t}\int \tilde g\,d\tilde\mu \le -\delta\int\tilde g\,d\tilde\mu + Ce^{-\delta\tilde t} + \int(\tilde h - \tilde H^2)\tilde g\,d\tilde\mu.$$
Since $(\tilde h - \tilde H^2) \to 0$ as $\tilde t \to \infty$ by Lemma 9.2(ii), we have for all $\tilde t$ larger than some $\tilde t_2$ and therefore
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9714946746826172, "perplexity": 1575.734459488913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188924.7/warc/CC-MAIN-20170322212948-00071-ip-10-233-31-227.ec2.internal.warc.gz"}
http://cms.math.ca/cmb/msc/26D15?fromjnl=cmb&jnl=CMB |
Search results
Search: MSC category 26D15 (Inequalities for sums, series and integrals)
Results 1 - 5 of 5
1. CMB 2011 (vol 55 pp. 355)
Nhan, Nguyen Du Vi; Duc, Dinh Thanh
Convolution Inequalities in $l_p$ Weighted Spaces. Various weighted $l_p$-norm inequalities in convolutions are derived by a simple and general principle whose $l_2$ version was obtained by using the theory of reproducing kernels. Applications to the Riemann zeta function and a difference equation are also considered. Keywords: inequalities for sums, convolution. Categories: 26D15, 44A35
2. CMB 2011 (vol 54 pp. 630)
Fiorenza, Alberto; Gupta, Babita; Jain, Pankaj
Mixed Norm Type Hardy Inequalities. Higher dimensional mixed norm type inequalities involving certain integral operators are characterized in terms of the corresponding lower dimensional inequalities. Keywords: Hardy inequality, reverse Hardy inequality, mixed norm, Hardy-Steklov operator. Categories: 26D10, 26D15
3. CMB 2010 (vol 53 pp. 327)
Luor, Dah-Chin
Multidimensional Exponential Inequalities with Weights. We establish sufficient conditions on the weight functions $u$ and $v$ for the validity of the multidimensional weighted inequality $$\Bigl(\int_E \Phi(T_k f(x))^q u(x)\,dx\Bigr)^{1/q} \le C \Bigl(\int_E \Phi(f(x))^p v(x)\,dx\Bigr)^{1/p},$$ where $0 < p, q < \infty$, $\Phi$ is a logarithmically convex function, and $T_k$ is an integral operator over star-shaped regions. The condition is also necessary for the exponential integral inequality. Moreover, the estimation of $C$ is given and we apply the obtained results to generalize some multidimensional Levin--Cochran-Lee type inequalities. Keywords: multidimensional inequalities, geometric mean operators, exponential inequalities, star-shaped regions. Categories: 26D15, 26D10
4. CMB 2005 (vol 48 pp. 333)
Alzer, Horst
Monotonicity Properties of the Hurwitz Zeta Function. Let $$\zeta(s,x)=\sum_{n=0}^{\infty}\frac{1}{(n+x)^s} \quad{(s>1,\, x>0)}$$ be the Hurwitz zeta function and let $$Q(x)=Q(x;\alpha,\beta;a,b)=\frac{(\zeta(\alpha,x))^a}{(\zeta(\beta,x))^b},$$ where $\alpha, \beta>1$ and $a,b>0$ are real numbers. We prove: (i) The function $Q$ is decreasing on $(0,\infty)$ iff $\alpha a-\beta b\geq \max(a-b,0)$. (ii) $Q$ is increasing on $(0,\infty)$ iff $\alpha a-\beta b\leq \min(a-b,0)$. An application of part (i) reveals that for all $x>0$ the function $s\mapsto [(s-1)\zeta(s,x)]^{1/(s-1)}$ is decreasing on $(1,\infty)$. This settles a conjecture of Bastien and Rogalski. Categories: 11M35, 26D15
5. CMB 1999 (vol 42 pp. 478)
Pruss, Alexander R.
A Remark On the Moser-Aubin Inequality For Axially Symmetric Functions On the Sphere. Let $\mathcal S_r$ be the collection of all axially symmetric functions $f$ in the Sobolev space $H^1(\mathbb{S}^2)$ such that $\int_{\mathbb{S}^2} x_ie^{2f(\mathbf{x})} \, d\omega(\mathbf{x})$ vanishes for $i=1,2,3$. We prove that $$\inf_{f\in \mathcal S_r} \frac12 \int_{\mathbb{S}^2} |\nabla f|^2 \, d\omega + 2\int_{\mathbb{S}^2} f \, d\omega - \log \int_{\mathbb{S}^2} e^{2f} \, d\omega > -\infty,$$ and that this infimum is attained. This complements recent work of Feldman, Froese, Ghoussoub and Gui on a conjecture of Chang and Yang concerning the Moser-Aubin inequality. Keywords: Moser inequality, borderline Sobolev inequalities, axially symmetric functions. Categories: 26D15, 58G30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9784517884254456, "perplexity": 1324.5433929313665}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010824518/warc/CC-MAIN-20140305091344-00021-ip-10-183-142-35.ec2.internal.warc.gz"}
https://en.m.wikipedia.org/wiki/Crystal_radio_receiver | # Crystal radio
Swedish crystal radio from 1922 made by Radiola, with earphones. The device at top is the radio's cat's whisker detector. A second pair of earphone jacks is provided.
1970s-era Arrow crystal radio marketed to children. The earphone is on left. The antenna wire, right, has a clip to attach to metal objects such as a bedspring, which serve as an additional antenna to improve reception.
A crystal radio receiver, also called a crystal set, is a simple radio receiver, popular in the early days of radio. It uses only the power of the received radio signal to produce sound, needing no external power. It is named for its most important component, a crystal detector, originally made from a piece of crystalline mineral such as galena.[1] This component is now called a diode.
Crystal radios are the simplest type of radio receiver[2] and can be made with a few inexpensive parts, such as a wire for an antenna, a coil of wire, a capacitor, a crystal detector, and earphones.[3] Crystal radios are passive receivers, while other radios use an amplifier powered by current from a battery or wall outlet to make the radio signal louder. Thus, crystal sets produce rather weak sound and must be listened to with sensitive earphones, and can only receive stations within a limited range.[4]
The rectifying property of a contact between a mineral and a metal was discovered in 1874 by Karl Ferdinand Braun.[5][6][7] Crystals were first used as a detector of radio waves in 1894 by Jagadish Chandra Bose,[8][9] in his microwave optics experiments. They were first used as a demodulator for radio communication reception in 1902 by G. W. Pickard.[10] Crystal radios were the first widely used type of radio receiver,[11] and the main type used during the wireless telegraphy era.[12] Sold and homemade by the millions, the inexpensive and reliable crystal radio was a major driving force in the introduction of radio to the public, contributing to the development of radio as an entertainment medium with the beginning of radio broadcasting around 1920.[13]
Around 1920, crystal sets were superseded by the first amplifying receivers, which used vacuum tubes. Crystal sets became obsolete for commercial use[11] but continued to be built by hobbyists, youth groups, and the Boy Scouts[14] mainly as a way of learning about the technology of radio. They are still sold as educational devices, and there are groups of enthusiasts devoted to their construction.[15][16][17][18][19]
Crystal radios receive amplitude modulated (AM) signals, and can be designed to receive almost any radio frequency band, but most receive the AM broadcast band.[20] A few receive shortwave bands, but strong signals are required. The first crystal sets received wireless telegraphy signals broadcast by spark-gap transmitters at frequencies as low as 20 kHz.[21][22]
## History
A family listening to a crystal radio in the 1920s
Greenleaf Whittier Pickard's US Patent 836,531 "Means for receiving intelligence communicated by electric waves" diagram
US Bureau of Standards 1922 Circular 120 "A simple homemade radio receiving outfit" taught Americans how to build a crystal radio.[23]
Crystal radio grew out of a long, partly obscure chain of discoveries in the late 19th century that gradually evolved into more and more practical radio receivers in the early 20th century. The earliest practical use of crystal radio was to receive Morse code radio signals transmitted from spark-gap transmitters by early amateur radio experimenters. As electronics evolved, the ability to send voice signals by radio caused a technological explosion around 1920 that grew into today's radio broadcasting industry.
### Early years
Crystal radio (1915) kept at the Radio Museum in Monteceneri (Switzerland)
Early radio telegraphy used spark gap and arc transmitters as well as high-frequency alternators running at radio frequencies. The coherer was the first means of detecting a radio signal, but it lacked the sensitivity to detect weak signals.
In the early 20th century, various researchers discovered that certain metallic minerals, such as galena, could be used to detect radio signals.[24][25]
Indian physicist Jagadish Chandra Bose was the first to use a crystal as a radio wave detector, using galena detectors to receive microwaves starting around 1894.[26] In 1901, Bose filed for a U.S. patent for "A Device for Detecting Electrical Disturbances" that mentioned the use of a galena crystal; this was granted in 1904, #755840.[27] The device depended on the large variation of a semiconductor's conductance with temperature; today we would call his invention a bolometer. Bose's patent is frequently, but erroneously, cited as a type of rectifying detector. On August 30, 1906, Greenleaf Whittier Pickard filed a patent for a silicon crystal detector, which was granted on November 20, 1906.[28]
A crystal detector consists of a crystal, usually a thin wire or metal probe that contacts the crystal, and the stand or enclosure that holds those components in place. The most common crystal used is a small piece of galena; pyrite was also often used, as it was a more easily adjusted and stable mineral, and quite sufficient for urban signal strengths. Several other minerals also performed well as detectors. Another benefit of crystals was that they could demodulate amplitude modulated signals. This device brought radiotelephones and voice broadcast to a public audience. Crystal sets represented an inexpensive and technologically simple method of receiving these signals at a time when the embryonic radio broadcasting industry was beginning to grow.
### 1920s and 1930s
In 1922 the (then named) US Bureau of Standards released a publication entitled Construction and Operation of a Simple Homemade Radio Receiving Outfit.[29] This article showed how almost any family having a member who was handy with simple tools could make a radio and tune into weather, crop prices, time, news and the opera. This design was significant in bringing radio to the general public. NBS followed that with a more selective two-circuit version, Construction and Operation of a Two-Circuit Radio Receiving Equipment With Crystal Detector, which was published the same year [30] and is still frequently built by enthusiasts today.
In the beginning of the 20th century, radio had little commercial use, and radio experimentation was a hobby for many people.[31] Some historians consider the autumn of 1920 to be the beginning of commercial radio broadcasting for entertainment purposes. Pittsburgh station KDKA, owned by Westinghouse, received its license from the United States Department of Commerce just in time to broadcast the Harding-Cox presidential election returns. In addition to reporting on special events, broadcasts to farmers of crop price reports were an important public service in the early days of radio.
In 1921, factory-made radios were very expensive. Since less-affluent families could not afford to own one, newspapers and magazines carried articles on how to build a crystal radio with common household items. To minimize the cost, many of the plans suggested winding the tuning coil on empty pasteboard containers such as oatmeal boxes, which became a common foundation for homemade radios.
### Crystodyne
In early 1920s Russia, Oleg Losev was experimenting with applying voltage biases to various kinds of crystals for the manufacture of radio detectors. The result was astonishing: with a zincite (zinc oxide) crystal he gained amplification.[32][33][34] This was a negative resistance phenomenon, decades before the development of the tunnel diode. After the first experiments, Losev built regenerative and superheterodyne receivers, and even transmitters.
A crystodyne could be produced in primitive conditions; it could be made in a rural forge, unlike vacuum tubes and modern semiconductor devices. However, the discovery was not supported by the authorities and was soon forgotten; no device was produced in quantity beyond a few examples for research.
### "Foxhole radios"
"Foxhole radio" used on the Italian Front in World War 2. It uses a pencil lead attached to a safety pin pressing against a razor blade for a detector.
In addition to mineral crystals, the oxide coatings of many metal surfaces act as semiconductors (detectors) capable of rectification. Crystal radios have been improvised using detectors made from rusty nails, corroded pennies, and many other common objects.
When Allied troops were halted near Anzio, Italy during the spring of 1944, powered personal radio receivers were strictly prohibited, as the Germans had equipment that could detect the local oscillator signal of superheterodyne receivers. Crystal sets lack powered local oscillators, so they could not be detected. Some resourceful soldiers constructed "crystal" sets from discarded materials to listen to news and music. One type used a blue steel razor blade and a pencil lead for a detector. The lead point touching the semiconducting oxide coating (magnetite) on the blade formed a crude point-contact diode. By carefully adjusting the pencil lead on the surface of the blade, they could find spots capable of rectification. The sets were dubbed "foxhole radios" by the popular press, and they became part of the folklore of World War II.
In some German-occupied countries during WW2 there were widespread confiscations of radio sets from the civilian population. This led determined listeners to build their own clandestine receivers which often amounted to little more than a basic crystal set. Anyone doing so risked imprisonment or even death if caught, and in most of Europe the signals from the BBC (or other allied stations) were not strong enough to be received on such a set.
### Later years
Crystal radio used as a backup receiver on a World War II Liberty ship
While it never regained the popularity and general use that it enjoyed at its beginnings, the crystal radio circuit is still used. The Boy Scouts have kept the construction of a radio set in their program since the 1920s. A large number of prefabricated novelty items and simple kits could be found through the 1950s and 1960s, and many children with an interest in electronics built one.
Building crystal radios was a craze in the 1920s, and again in the 1950s. Recently, hobbyists have started designing and building examples of the early instruments. Much effort goes into the visual appearance of these sets as well as their performance. Annual crystal radio 'DX' contests (long distance reception) and building contests allow these set owners to compete with each other and form a community of interest in the subject.
## Basic principles
Block diagram of a crystal radio receiver
Circuit diagram of a simple crystal radio.
A crystal radio can be thought of as a radio receiver reduced to its essentials.[3][35] It consists of at least these components:[20][36][37]

- a wire antenna to intercept the radio waves
- a tuned circuit, made of a coil and a capacitor, to select the station's frequency
- a crystal detector (diode) to extract the audio signal from the radio carrier
- an earphone to convert the audio signal to sound
- a connection to ground (earth)
Pictorial diagram from 1922 showing the circuit of a crystal radio. This common circuit did not use a tuning capacitor, but used the capacitance of the antenna to form the tuned circuit with the coil. The detector might have been a piece of galena with a whisker wire in contact with part of the crystal, making a diode contact.
As a crystal radio has no power supply, the sound power produced by the earphone comes solely from the transmitter of the radio station being received, via the radio waves captured by the antenna.[3] The power available to a receiving antenna decreases with the square of its distance from the radio transmitter.[42] Even for a powerful commercial broadcasting station, if it is more than a few miles from the receiver the power received by the antenna is very small, typically measured in microwatts or nanowatts.[3] In modern crystal sets, signals as weak as 50 picowatts at the antenna can be heard.[43] Crystal radios can receive such weak signals without using amplification only due to the great sensitivity of human hearing,[3][44] which can detect sounds with an intensity of only 10⁻¹⁶ W/cm².[45] Therefore, crystal receivers have to be designed to convert the energy from the radio waves into sound waves as efficiently as possible. Even so, they are usually only able to receive stations within distances of about 25 miles for AM broadcast stations,[46][47] although the radiotelegraphy signals used during the wireless telegraphy era could be received at hundreds of miles,[47] and crystal receivers were even used for transoceanic communication during that period.[48]
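To get a feel for these numbers, here is a rough back-of-the-envelope sketch (illustrative only: the transmitter power, distances, and effective capture area below are assumed values, and real propagation over ground does not follow free-space behavior exactly):

```python
import math

# Free-space inverse-square falloff: flux density S = P / (4 * pi * d^2).
# All numbers below are assumed for illustration, not taken from the article.
P_tx = 50_000.0     # transmitter power, watts (a large AM station)
aperture_m2 = 1.0   # rough effective capture area of a wire antenna, m^2

for d_km in (5, 25, 100):
    d_m = d_km * 1000.0
    flux = P_tx / (4 * math.pi * d_m ** 2)  # W/m^2 at distance d
    received = flux * aperture_m2           # watts intercepted by the antenna
    print(f"{d_km:>4} km: flux {flux:.2e} W/m^2, received {received:.2e} W")
```

Even with these generous assumptions the intercepted power runs from roughly 160 µW at 5 km down to about 0.4 µW at 100 km, consistent with the microwatt-to-nanowatt range quoted above.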
## Design
Commercial passive receiver development was abandoned with the advent of reliable vacuum tubes around 1920, and subsequent crystal radio research was primarily done by radio amateurs and hobbyists.[49] Many different circuits have been used.[2][50][51] The following sections discuss the parts of a crystal radio in greater detail.
### Antenna
The antenna converts the energy in the electromagnetic radio waves to an alternating electric current in the antenna, which is connected to the tuning coil. Since in a crystal radio all the power comes from the antenna, it is important that the antenna collect as much power from the radio wave as possible. The larger an antenna, the more power it can intercept. Antennas of the type commonly used with crystal sets are most effective when their length is close to a multiple of a quarter-wavelength of the radio waves they are receiving. Since the length of the waves used with crystal radios is very long (AM broadcast band waves are 182-566 m or 597–1857 ft. long)[52] the antenna is made as long as possible,[53] from a long wire, in contrast to the whip antennas or ferrite loopstick antennas used in modern radios.
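The wavelength figures just quoted follow directly from λ = c/f; a minimal check (band edges of roughly 530 kHz and 1650 kHz are assumed here):

```python
# Wavelength check for the AM broadcast band figures quoted above.
c = 299_792_458.0  # speed of light, m/s

for f_hz in (530e3, 1650e3):  # assumed AM band edges, Hz
    wavelength = c / f_hz
    print(f"{f_hz/1e3:.0f} kHz: wavelength {wavelength:.0f} m, "
          f"quarter wave {wavelength/4:.0f} m")
```

This reproduces the 182–566 m range above, and shows why even a quarter-wave wire (45–141 m) is impractical for most listeners, hence the advice to simply make the antenna as long as possible.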
Serious crystal radio hobbyists use "inverted L" and "T" type antennas, consisting of hundreds of feet of wire suspended as high as possible between buildings or trees, with a feed wire attached in the center or at one end leading down to the receiver.[54][55] However more often random lengths of wire dangling out windows are used. A popular practice in early days (particularly among apartment dwellers) was to use existing large metal objects, such as bedsprings,[14] fire escapes, and barbed wire fences as antennas.[47][56][57]
### Ground
The wire antennas used with crystal receivers are monopole antennas which develop their output voltage with respect to ground. The receiver thus requires a connection to ground (the earth) as a return circuit for the current. The ground wire was attached to a radiator, water pipe, or a metal stake driven into the ground.[58][59] In the early days, if an adequate ground connection could not be made, a counterpoise was sometimes used.[60][61] A good ground is more important for crystal sets than for powered receivers, as crystal sets are designed to have the low input impedance needed to transfer power efficiently from the antenna. A low resistance ground connection (preferably below 25 Ω) is necessary because any resistance in the ground reduces the power available from the antenna.[53] In contrast, modern receivers are voltage-driven devices with high input impedance, so little current flows in the antenna/ground circuit. Also, mains powered receivers are grounded adequately through their power cords, which are in turn attached to the earth by way of a well established ground.
### Tuned circuit
The earliest crystal receiver circuit did not have a tuned circuit
The tuned circuit, consisting of a coil and a capacitor connected together, acts as a resonator, similar to a tuning fork.[62] Electric charge, induced in the antenna by the radio waves, flows rapidly back and forth between the plates of the capacitor through the coil. The circuit has a high impedance at the desired radio signal's frequency, but a low impedance at all other frequencies.[63] Hence, signals at undesired frequencies pass through the tuned circuit to ground, while the signal at the desired frequency is passed on to the detector (diode) and drives the earpiece, where it is heard. The frequency of the station received is the resonant frequency f of the tuned circuit, determined by the capacitance C of the capacitor and the inductance L of the coil:[64]
$$f = \frac{1}{2\pi\sqrt{LC}}$$
The circuit can be adjusted to different frequencies by varying the inductance (L), the capacitance (C), or both. In the lowest-cost sets, the inductor was made variable via a spring contact pressing against the windings that could slide along the coil, thereby introducing a larger or smaller number of turns of the coil into the circuit. Thus the inductance could be varied, "tuning" the circuit to the frequencies of different radio stations.[1] Alternatively, a variable capacitor is used to tune the circuit.[65] Some modern crystal sets use a ferrite core tuning coil, in which a ferrite magnetic core is moved into and out of the coil, thereby varying the inductance by changing the magnetic permeability (this eliminated the less reliable mechanical contact).[66]
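As a quick numeric check of the tuning formula, the sketch below sweeps a variable capacitor across a fixed coil (the 240 µH and 40–365 pF values are assumptions chosen to land in the AM band, not figures from the article):

# Resonant frequency f = 1 / (2*pi*sqrt(L*C)) for typical values.
from math import pi, sqrt

def resonant_frequency(l_henries, c_farads):
    return 1 / (2 * pi * sqrt(l_henries * c_farads))

L = 240e-6                       # assumed coil inductance, 240 uH
for C in (40e-12, 365e-12):      # variable capacitor extremes
    f = resonant_frequency(L, C)
    print(f"C = {C * 1e12:3.0f} pF -> f = {f / 1e3:5.0f} kHz")
# spans roughly 540-1620 kHz: the AM broadcast band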
The antenna is an integral part of the tuned circuit and its reactance contributes to determining the circuit's resonant frequency. Antennas usually act as a capacitance, as antennas shorter than a quarter-wavelength have capacitive reactance.[53] Many early crystal sets did not have a tuning capacitor,[67] and relied instead on the capacitance inherent in the wire antenna (in addition to significant parasitic capacitance in the coil[68]) to form the tuned circuit with the coil.
The earliest crystal receivers did not have a tuned circuit at all, and just consisted of a crystal detector connected between the antenna and ground, with an earphone across it.[1][67] Since this circuit lacked any frequency-selective elements besides the broad resonance of the antenna, it had little ability to reject unwanted stations, so all stations within a wide band of frequencies were heard in the earphone[49] (in practice the most powerful usually drowns out the others). It was used in the earliest days of radio, when only one or two stations were within a crystal set's limited range.
#### Impedance matching
"Two slider" crystal radio circuit.[49] and example from 1920s. The two sliding contacts on the coil allowed the impedance of the radio to be adjusted to match the antenna as the radio was tuned, resulting in stronger reception
An important principle used in crystal radio design to transfer maximum power to the earphone is impedance matching.[49][69] The maximum power is transferred from one part of a circuit to another when the impedance of one circuit is the complex conjugate of that of the other; this implies that the two circuits should have equal resistance.[1][70][71] However, in crystal sets, the impedance of the antenna-ground system (around 10-200 ohms[53]) is usually lower than the impedance of the receiver's tuned circuit (thousands of ohms at resonance),[72] and also varies depending on the quality of the ground attachment, length of the antenna, and the frequency to which the receiver is tuned.[43]
Therefore, in improved receiver circuits, in order to match the antenna impedance to the receiver's impedance, the antenna was connected across only a portion of the tuning coil's turns.[64][67] This made the tuning coil act as an impedance matching transformer (in an autotransformer connection) in addition to providing the tuning function. The antenna's low resistance was increased (transformed) by a factor equal to the square of the turns ratio (the ratio of the number of turns the antenna was connected to, to the total number of turns of the coil), to match the resistance across the tuned circuit.[71] In the "two-slider" circuit, popular during the wireless era, both the antenna and the detector circuit were attached to the coil with sliding contacts, allowing (interactive)[73] adjustment of both the resonant frequency and the turns ratio.[74][75][76] Alternatively a multiposition switch was used to select taps on the coil. These controls were adjusted until the station sounded loudest in the earphone.
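As a numeric illustration of the turns-ratio transformation described above (a sketch; the 25 Ω antenna and 100-turn coil are assumed values):

def reflected_resistance(r_antenna, total_turns, tap_turns):
    # Autotransformer: resistance scales with the square of the turns ratio
    return r_antenna * (total_turns / tap_turns) ** 2

# A 25-ohm antenna tapped across 10 turns of a 100-turn coil appears
# as 2500 ohms across the full tuned circuit:
print(reflected_resistance(25, total_turns=100, tap_turns=10))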
#### Problem of selectivity
Direct-coupled circuit with impedance matching[49]
One of the drawbacks of crystal sets is that they are vulnerable to interference from stations near in frequency to the desired station.[2][4][43] Often two or more stations are heard simultaneously. This is because the simple tuned circuit does not reject nearby signals well; it allows a wide band of frequencies to pass through, that is, it has a large bandwidth (low Q factor) compared to modern receivers, giving the receiver low selectivity.[4]
The crystal detector worsened the problem because it has relatively low resistance: it "loaded" the tuned circuit, drawing significant current and damping the oscillations, reducing the Q factor so that a broader band of frequencies passed through.[43][77] In many circuits, the selectivity was improved by connecting the detector and earphone circuit to a tap across only a fraction of the coil's turns.[49] This reduced the impedance loading of the tuned circuit, as well as improving the impedance match with the detector.[49]
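A back-of-the-envelope sketch of the loading effect (all component values below are illustrative assumptions):

# Loaded Q of a parallel resonant circuit is roughly R_load / X_L,
# and the 3 dB bandwidth is f0 / Q.
from math import pi

f0 = 1e6                      # tuned to 1 MHz
L = 240e-6                    # assumed coil inductance
XL = 2 * pi * f0 * L          # ~1.5 kohm reactance at resonance

for r_load in (5e3, 100e3):   # heavy (crystal) vs light (tapped) load
    q = r_load / XL
    print(f"R = {r_load / 1e3:5.0f} kΩ: Q = {q:5.1f}, BW = {f0 / q / 1e3:5.0f} kHz")
# A ~5 kΩ detector load gives Q ≈ 3 and ~300 kHz bandwidth -- many
# stations heard at once -- while a lighter load sharpens tuning markedly.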
#### Inductive coupling
Inductively-coupled circuit with impedance matching. This type was used in most quality crystal receivers
Amateur-built crystal receiver with "loose coupler" antenna transformer, Belfast, around 1914
In more sophisticated crystal receivers, the tuning coil is replaced with an adjustable air-core antenna coupling transformer,[1][49] which improves the selectivity by a technique called loose coupling.[67][76][78] This consists of two magnetically coupled coils of wire, one (the primary) attached to the antenna and ground and the other (the secondary) attached to the rest of the circuit. The current from the antenna creates an alternating magnetic field in the primary coil, which induces a current in the secondary coil that is then rectified and drives the earphone. Each of the coils functions as a tuned circuit: the primary resonates with the capacitance of the antenna (or sometimes another capacitor), and the secondary resonates with the tuning capacitor. Both the primary and secondary are tuned to the frequency of the station, and the two circuits interact to form a resonant transformer.
Reducing the coupling between the coils, by physically separating them so that less of the magnetic field of one intersects the other, reduces the mutual inductance, narrows the bandwidth, and results in much sharper, more selective tuning than that produced by a single tuned circuit.[67][79] However, looser coupling also reduces the power of the signal passed to the secondary circuit. The transformer was therefore made with adjustable coupling, to allow the listener to experiment with various settings to gain the best reception.
One design common in early days, called a "loose coupler", consisted of a smaller secondary coil inside a larger primary coil.[49][80] The smaller coil was mounted on a rack so it could be slid linearly in or out of the larger coil. If radio interference was encountered, the smaller coil would be slid further out of the larger, loosening the coupling, narrowing the bandwidth, and thereby rejecting the interfering signal.
The antenna coupling transformer also functioned as an impedance matching transformer, that allowed a better match of the antenna impedance to the rest of the circuit. One or both of the coils usually had several taps which could be selected with a switch, allowing adjustment of the number of turns of that transformer and hence the "turns ratio".
Coupling transformers were difficult to adjust, because the three adjustments, the tuning of the primary circuit, the tuning of the secondary circuit, and the coupling of the coils, were all interactive, and changing one affected the others.[81]
### Crystal detector
Galena crystal detector
Germanium diode used in modern crystal radios (about 3 mm long)
How the crystal detector works. [82][83] (A) The amplitude modulated radio signal from the tuned circuit. The rapid oscillations are the radio frequency carrier wave. The audio signal (the sound) is contained in the slow variations (modulation) of the amplitude (hence the term amplitude modulation, AM) of the waves. This signal cannot be converted to sound by the earphone, because the audio excursions are the same on both sides of the axis, averaging out to zero, which would result in no net motion of the earphone's diaphragm. (B) The crystal conducts current better in one direction than the other, producing a signal whose amplitude does not average to zero but varies with the audio signal. (C) A bypass capacitor is used to remove the radio frequency carrier pulses, leaving the audio signal
Circuit with detector bias battery to improve sensitivity and buzzer to aid in adjustment of the cat whisker
The crystal detector demodulates the radio frequency signal, extracting the modulation (the audio signal which represents the sound waves) from the radio frequency carrier wave. In early receivers, the type of crystal detector most often used was the "cat whisker detector".[40][84] The point of contact between the wire and the crystal acted as a semiconductor diode: the cat whisker detector constituted a crude Schottky diode that allowed current to flow better in one direction than in the other.[85][86] Modern crystal sets use modern semiconductor diodes.[77] The crystal functions as an envelope detector, rectifying the alternating current radio signal to a pulsing direct current whose peaks trace out the audio signal, so it can be converted to sound by the earphone connected to the detector.[20][83] The rectified current from the detector still contains radio frequency pulses at the carrier frequency, which do not pass well through the high inductive reactance of early headphone coils. Hence, a small capacitor called a bypass capacitor is often placed across the earphone terminals; its low reactance at radio frequency bypasses these pulses around the earphone to ground.[87] In some sets the earphone cord had enough capacitance that this component could be omitted.[67]
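To make the rectify-then-filter sequence shown in the figure caption concrete, here is a small numerical sketch (NumPy; all signal values are illustrative assumptions, not figures from the article):

# Envelope detection of an AM signal: rectify, then low-pass filter.
import numpy as np

fs = 5_000_000                        # sample rate, Hz
t = np.arange(0, 0.005, 1 / fs)
fc, fm, m = 1_000_000, 1_000, 0.5     # carrier, audio tone, modulation index

am = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

rectified = np.maximum(am, 0)         # the diode passes one polarity only

# Simple RC low-pass (the bypass capacitor) to strip the carrier pulses:
rc = 1 / (2 * np.pi * 20_000)         # ~20 kHz corner, between audio and RF
alpha = (1 / fs) / (rc + 1 / fs)
audio = np.zeros_like(rectified)
for i in range(1, len(rectified)):
    audio[i] = audio[i - 1] + alpha * (rectified[i] - audio[i - 1])
# 'audio' now traces the modulation envelope (plus a DC offset).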
Only certain sites on the crystal surface functioned as rectifying junctions, and the device was very sensitive to the pressure of the crystal-wire contact, which could be disrupted by the slightest vibration.[6][88] Therefore, a usable contact point had to be found by trial and error before each use. The operator dragged the wire across the crystal surface until a radio station or "static" sounds were heard in the earphones.[89] Alternatively, some radios (circuit, right) used a battery-powered buzzer attached to the input circuit to adjust the detector.[89] The spark at the buzzer's electrical contacts served as a weak source of static, so when the detector began working, the buzzing could be heard in the earphones. The buzzer was then turned off, and the radio tuned to the desired station.
Galena (lead sulfide) was the most common crystal used,[76][88][90] but various other types of crystals were also used, the most common being iron pyrite (fool's gold, FeS2), silicon, molybdenite (MoS2), silicon carbide (carborundum, SiC), and a zincite-bornite (ZnO-Cu5FeS4) crystal-to-crystal junction trade-named Perikon.[44][91] Crystal radios have also been improvised from a variety of common objects, such as blue steel razor blades and lead pencils,[44][92] rusty needles,[93] and pennies.[44] In these, a semiconducting layer of oxide or sulfide on the metal surface is usually responsible for the rectifying action.[44]
In modern sets, a semiconductor diode is used for the detector, which is much more reliable than a crystal detector and requires no adjustments.[44][77][94] Germanium diodes (or sometimes Schottky diodes) are used instead of silicon diodes, because their lower forward voltage drop (roughly 0.3V compared to 0.6V[95]) makes them more sensitive.[77][96]
All semiconductor detectors function rather inefficiently in crystal receivers, because the voltage delivered to the detector is too low to produce much difference between conduction in the better (forward) direction and the weaker reverse conduction. To improve the sensitivity of some of the early crystal detectors, such as silicon carbide, a small forward bias voltage was applied across the detector by a battery and potentiometer.[97][98][99] The bias moves the diode's DC operating point to a more favorable voltage-current point (higher impedance) on its I-V curve, producing more signal voltage at the expense of less signal current; there is a limit to the benefit this produces, depending on the other impedances of the radio. The battery did not power the radio, but only provided the biasing voltage, which required little power.
### Earphones
Modern crystal radio with piezoelectric earphone
The requirements for earphones used in crystal sets are different from earphones used with modern audio equipment. They have to be efficient at converting the electrical signal energy to sound waves, while most modern earphones sacrifice efficiency in order to gain high fidelity reproduction of the sound.[100] In early homebuilt sets, the earphones were the most costly component.[101]
The early earphones used with wireless-era crystal sets had moving iron drivers that worked in a way similar to the horn loudspeakers of the period. Each earpiece contained a permanent magnet wound with a coil of wire that formed a second, variable electromagnet. Both magnetic poles were close to a steel diaphragm. When the audio signal from the radio passed through the coil, the varying current created a varying magnetic field that augmented or diminished the field of the permanent magnet. This varied the force of attraction on the diaphragm, causing it to vibrate; the vibrating diaphragm pushed and pulled on the air in front of it, creating sound waves. Standard headphones used in telephone work had a low impedance, often 75 Ω, and required more current than a crystal radio could supply. Therefore, the type used with crystal set radios (and other sensitive equipment) was wound with more turns of finer wire, giving it a high impedance of 2000–8000 Ω.[102][103][104]
Modern crystal sets use piezoelectric crystal earpieces, which are much more sensitive and also smaller.[100] They consist of a piezoelectric crystal with electrodes attached to each side, glued to a light diaphragm. When the audio signal from the radio set is applied to the electrodes, it causes the crystal to vibrate, vibrating the diaphragm. Crystal earphones are designed as ear buds that plug directly into the ear canal of the wearer, coupling the sound more efficiently to the eardrum. Their resistance is much higher (typically megohms) so they do not greatly "load" the tuned circuit, allowing increased selectivity of the receiver. The piezoelectric earphone's higher resistance, in parallel with its capacitance of around 9 pF, creates a filter that allows the passage of low frequencies, but blocks the higher frequencies.[105] In that case a bypass capacitor is not needed (although in practice a small one of around 0.68 to 1 nF is often used to help improve quality), but instead a 10-100 kΩ resistor must be added in parallel with the earphone's input.[106]
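A rough calculation shows why the earpiece's capacitance behaves this way (a sketch; only the 9 pF figure comes from the text, the test frequencies are arbitrary):

# Capacitive reactance Xc = 1 / (2*pi*f*C) of the piezo earpiece.
from math import pi

C = 9e-12   # earphone capacitance from the text, 9 pF

def reactance(f_hz, c_farads=C):
    return 1 / (2 * pi * f_hz * c_farads)

for f in (1_000, 10_000, 1_000_000):   # audio tones vs. AM carrier
    print(f"{f:>9} Hz: Xc = {reactance(f) / 1e3:,.0f} kΩ")
# ~17,700 kΩ at 1 kHz vs ~18 kΩ at 1 MHz: the capacitance shunts the
# RF carrier while leaving the audio frequencies across the earphone.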
Although the low power produced by crystal radios is typically insufficient to drive a loudspeaker, some homemade sets from the 1960s used one, with an audio transformer to match the low impedance of the speaker to the circuit.[107] Similarly, modern low-impedance (8 Ω) earphones cannot be used unmodified in crystal sets, because the receiver does not produce enough current to drive them; they are sometimes used by adding an audio transformer to match their impedance to the higher impedance of the antenna circuit.
## Use as a power source
A crystal radio tuned to a strong local transmitter can be used as a power source for a second amplified receiver of a distant station that cannot be heard without amplification.[108]:122–123
There is a long history of unsuccessful attempts and unverified claims to recover the power in the carrier of the received signal itself. Traditional crystal sets use half-wave rectifiers. Since AM signals have a modulation factor of only about 30% by voltage at peaks, no more than 9% of the received signal power ($P = U^2/R$) is actual audio information; the other 91% is just rectified DC voltage. Given that the audio signal is rarely at peak modulation, the fraction of power carried as audio is, in practice, even smaller. Considerable effort was made to convert this DC voltage into sound energy. Some early attempts included a one-transistor[109] amplifier in 1966. Efforts to recover this power are sometimes confused with other efforts to produce more efficient detection.[110] This line of work continues today with designs as elaborate as the "inverted two-wave switching power unit".[108]:129
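Following the voltage-ratio argument above, the 9% figure falls out of a one-line calculation (a sketch; the 30% modulation factor is the value quoted in the text):

# With peak modulation m = 0.3 by voltage and P = U**2 / R, the
# audio-bearing share of the received power is at most m**2.
m = 0.30
audio_fraction = m ** 2            # 0.09 -> no more than 9% is audio
dc_fraction = 1 - audio_fraction   # 0.91 -> 91% is just rectified DC
print(f"audio: {audio_fraction:.0%}, rectified DC: {dc_fraction:.0%}")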
## Gallery
Soldier listening to a crystal radio during World War I, 1914
Australian signallers using a Marconi Mk III crystal receiver, 1916.
Marconi Type 103 crystal set.
SCR-54-A crystal set used by US Signal Corps in World War I
Marconi Type 106 crystal receiver used for transatlantic communication, ca. 1917
Homemade "loose coupler" set (top), Florida, ca. 1920
Crystal radio, Germany, ca. 1924
Swedish "box" crystal radio with earphones, ca. 1925
German Heliogen brand radio showing "basket-weave" coil, 1935
Polish Detefon brand radio, 1930-1939, using a "cartridge" type crystal (top)
During the wireless telegraphy era before 1920, crystal receivers were "state of the art", and sophisticated models were produced. After 1920 crystal sets became the cheap alternative to vacuum tube radios, used in emergencies and by youth and the poor.
## References
1. Carr, Joseph J. (1990). Old Time Radios! Restoration and Repair. US: McGraw-Hill Professional. pp. 7–9. ISBN 0-8306-3342-1.
2. ^ a b c Petruzellis, Thomas (2007). 22 Radio and Receiver Projects for the Evil Genius. US: McGraw-Hill Professional. pp. 40, 44. ISBN 978-0-07-148929-4.
3. Field, Simon Quellen (2003). Gonzo gizmos: Projects and devices to channel your inner geek. US: Chicago Review Press. p. 85. ISBN 978-1-55652-520-9.
4. ^ a b c Schaeffer, Derek K.; Thomas H. Lee (1999). The Design and Implementation of Low Power CMOS Receivers. Springer. pp. 3–4. ISBN 0-7923-8518-7.
5. ^ Braun, Ernest; Stuart MacDonald (1982). Revolution in Miniature: The history and impact of semiconductor electronics, 2nd Ed. UK: Cambridge Univ. Press. pp. 11–12. ISBN 978-0-521-28903-0.
6. ^ a b Riordan, Michael; Lillian Hoddeson (1988). Crystal fire: the invention of the transistor and the birth of the information age. US: W. W. Norton & Company. pp. 19–21. ISBN 0-393-31851-6.
7. ^ Sarkar, Tapan K. (2006). History of wireless. US: John Wiley and Sons. p. 333. ISBN 0-471-71814-9.
8. ^ Bose was first to use crystals for electromagnetic wave detection, using galena detectors to receive microwaves starting around 1894 and receiving a patent in 1904 Emerson, D. T. (Dec 1997). "The work of Jagadish Chandra Bose: 100 years of mm wave research". IEEE Transactions on Microwave Theory and Techniques. 45 (12): 2267–2273. Bibcode:1997ITMTT..45.2267E. doi:10.1109/22.643830. Retrieved 2010-01-19.
9. ^ Sarkar (2006) History of wireless, p.94, 291-308
10. ^ Douglas, Alan (April 1981). "The crystal detector". IEEE Spectrum. New York: Inst. of Electrical and Electronic Engineers: 64. Retrieved 2010-03-14. on Stay Tuned website
11. ^ a b Basalla, George (1988). The Evolution of Technology. UK: Cambridge University Press. p. 44. ISBN 0-521-29681-1.
12. ^ crystal detectors were used in receivers in greater numbers than any other type of detector after about 1907. Marriott, Robert H. (September 17, 1915). "United States Radio Development". Proc. of the Inst. of Radio Engineers. US: Institute of Radio Engineers. 5 (3): 184. doi:10.1109/jrproc.1917.217311. Retrieved 2010-01-19.
13. ^ Corbin, Alfred (2006). The Third Element: A Brief History of Electronics. AuthorHouse. pp. 44–45. ISBN 1-4208-9084-0.
14. ^ a b Kent, Herb; David Smallwood; Richard M. Daley (2009). The Cool Gent: The Nine Lives of Radio Legend Herb Kent. US: Chicago Review Press. pp. 13–14. ISBN 1-55652-774-8.
15. ^ Jack Bryant (2009) Birmingham Crystal Radio Group, Birmingham, Alabama, US. Retrieved 2010-01-18.
16. ^ The Xtal Set Society midnightscience.com . Retrieved 2010-01-18.
17. ^ Darryl Boyd (2006) Stay Tuned Crystal Radio website . Retrieved 2010-01-18.
18. ^ Al Klase Crystal Radios, Klase's SkyWaves website . Retrieved 2010-01-18.
19. ^ Mike Tuggle (2003) Designing a DX crystal set Antique Wireless Association journal . Retrieved 2010-01-18.
20. ^ a b c Williams, Lyle R. (2006). The New Radio Receiver Building Handbook. The Alternative Electronics Press. pp. 20–23. ISBN 978-1-84728-526-3.
21. ^ Lescarboura, Austin C. (1922). Radio for Everybody. New York: Scientific American Publishing Co. pp. 4, 110, 268.
22. ^ Long distance transoceanic stations of the era used wavelengths of 10,000 to 20,000 meters, corresponding to frequencies of 15 to 30 kHz. Morecroft, John H.; A. Pinto; Walter A. Curry (1921). Principles of Radio Communication. New York: John Wiley & Sons. p. 187.
23. ^ "Construction and Operation of a Simple Homemade Radio Receiving Outfit, Bureau of Standards Circular 120". U.S. Government Printing Office. April 24, 1922.
24. ^ In May 1901, Karl Ferdinand Braun of Strasbourg used psilomelane, a manganese oxide ore, as an R.F. detector: Ferdinand Braun (December 27, 1906) "Ein neuer Wellenanzeiger (Unipolar-Detektor)" (A new R.F. detector (one-way detector)), Elektrotechnische Zeitschrift, 27 (52) : 1199-1200. From page 1119:
"Im Mai 1901 habe ich einige Versuche im Laboratorium gemacht und dabei gefunden, daß in der Tat ein Fernhörer, der in einen aus Psilomelan und Elementen bestehenden Kreis eingeschaltet war, deutliche und scharfe Laute gab, wenn dem Kreise schwache schnelle Schwingungen zugeführt wurden. Das Ergebnis wurde nachgeprüft, und zwar mit überraschend gutem Erfolg, an den Stationen für drahtlose Telegraphie, an welchen zu dieser Zeit auf den Straßburger Forts von der Königlichen Preußischen Luftschiffer-Abteilung unter Leitung des Hauptmannes von Sigsfeld gearbeitet wurde."
(In May 1901, I did some experiments in the lab and thereby found that in fact an earphone, which was connected in a circuit consisting of psilomelane and batteries, produced clear and strong sounds when weak, rapid oscillations were introduced to the circuit. The result was verified -- and indeed with surprising success -- at the stations for wireless telegraphy, which, at this time, were operated at the Strasbourg forts by the Royal Prussian Airship-Department under the direction of Capt. von Sigsfeld.)
Braun also states that he had been researching the conductive properties of semiconductors since 1874. See: Braun, F. (1874) "Ueber die Stromleitung durch Schwefelmetalle" (On current conduction through metal sulfides), Annalen der Physik und Chemie, 153 (4) : 556-563. In these experiments, Braun applied a cat whisker to various semiconducting crystals and observed that current flowed in only one direction.
Braun patented an R.F. detector in 1906. See: (Ferdinand Braun), "Wellenempfindliche Kontaktstelle" (R.F. sensitive contact), Deutsches Reichspatent DE 178,871, (filed: Feb. 18, 1906 ; issued: Oct. 22, 1906). Available on-line at: Foundation for German communication and related technologies
25. ^ Other inventors who patented crystal R.F. detectors:
• In 1906, Henry Harrison Chase Dunwoody (1843-1933) of Washington, D.C., a retired general of the US Army's Signal Corps, received a patent for a carborundum R.F. detector. See: Dunwoody, Henry H. C. "Wireless-telegraph system," U. S. patent 837,616 (filed: March 23, 1906 ; issued: December 4, 1906).
• In 1907, Louis Winslow Austin received a patent for his R.F. detector consisting of tellurium and silicon. See: Louis W. Austin, "Receiver," US patent 846,081 (filed: Oct. 27, 1906 ; issued: March 5, 1907).
• In 1908, Wichi Torikata of the Imperial Japanese Electrotechnical Laboratory of the Ministry of Communications in Tokyo was granted Japanese patent 15,345 for the “Koseki” detector, consisting of crystals of zincite and bornite.
26. ^ Emerson, D. T. (Dec 1997). "The work of Jagadish Chandra Bose: 100 years of mm wave research". IEEE Transactions on Microwave Theory and Techniques. 45 (12): 2267–2273. Bibcode:1997ITMTT..45.2267E. doi:10.1109/22.643830. Retrieved 2010-01-19.
27. ^ Jagadis Chunder Bose, "Detector for electrical disturbances", US patent no. 755,840 (filed: September 30, 1901; issued: March 29, 1904)
28. ^ Greenleaf Whittier Pickard, "Means for receiving intelligence communicated by electric waves", US patent no. 836,531 (filed: August 30, 1906 ; issued: November 20, 1906)
29. ^ http://www.crystalradio.net/crystalplans/xximages/nsb_120.pdf
30. ^ http://www.crystalradio.net/crystalplans/xximages/nbs121.pdf
31. ^ Bondi, Victor."American Decades:1930-1939"
32. ^ Peter Robin Morris, A history of the world semiconductor industry, IET, 1990, ISBN 0-86341-227-0, page 15
33. ^ https://earlyradiohistory.us/1924cry.htm
34. ^ In 1924, Losev's research was publicized in several French publications:
• Radio Revue, no. 28, p. 139 (1924)
• I. Podliasky (May 25, 1924) (Crystal detectors as oscillators), Radio Électricité, 5 : 196-197.
• Vinogradsky (September 1924) L'Onde Electrique
English-language publications noticed the French articles and also publicized Losev's work:
• Pocock (June 11, 1924)The Wireless World and Radio Review, 14 : 299-300.
• Victor Gabel (October 1 & 8, 1924) "The crystal as a generator and amplifier," The Wireless World and Radio Review, 15 : 2ff, 47ff.
• O. Lossev (October 1924) "Oscillating crystals," The Wireless World and Radio Review, 15 : 93-96.
• Round and Rust (August 19, 1925) The Wireless World and Radio Review, pp. 217-218.
• "The Crystodyne principle," Radio News, pages 294, 295, 431 (September 1924). See also the October 1924 issue of Radio News. (It was Hugo Gernsbach, publisher of Radio News, who coined the term "crystodyne".) This article is available on-line at: Radio Museum.org.
35. ^ Purdie, Ian C. (2001). "Crystal Radio Set". electronics-tutorials.com. Ian Purdie. Retrieved 2009-12-05.
36. ^ Lescarboura, Austin C. (1922). Radio for Everybody. New York: Scientific American Publishing Co. pp. 93–94.
37. ^ Kuhn, Kenneth A. (Jan 6, 2008). "Introduction" (PDF). Crystal Radio Engineering. Prof. Kenneth Kuhn website, Univ. of Alabama. Retrieved 2009-12-07.
38. ^ H. C. Torrey, C. A. Whitmer, Crystal Rectifiers, New York: McGraw-Hill, 1948, pp. 3-4
39. ^ Jensen, Peter R. (2003). Wireless at War. Rosenberg Publishing. p. 103. ISBN 1922013846.
40. ^ a b Morgan, Alfred Powell (1914). Wireless Telegraph Construction for Amateurs, 3rd Ed. D. Van Nostrand Co. p. 199.
41. ^ Braun, Agnès; Braun, Ernest; MacDonald, Stuart (1982). Revolution in Miniature: The History and Impact of Semiconductor Electronics. Cambridge University Press. pp. 11–12. ISBN 0521289033.
42. ^ Fette, Bruce A. (Dec 27, 2008). "RF Basics: Radio Propagation". RF Engineer Network. Retrieved 2010-01-18.
43. ^ a b c d Payor, Steve (June 1989). "Build a Matchbox Crystal Radio". Popular Electronics: 42. Retrieved 2010-05-28. on Stay Tuned website
44. Lee, Thomas H. (2004). Planar Microwave Engineering: A practical guide to theory, measurement, and circuits. UK: Cambridge Univ. Press. pp. 297–304. ISBN 978-0-521-83526-8.
45. ^ Nave, C. Rod. "Threshold of hearing". HyperPhysics. Dept. of Physics, Georgia State University. Retrieved 2009-12-06.
46. ^ Lescarboura, 1922, p. 144
47. ^ a b c Binns, Jack (November 1922). "Jack Binn's 10 commandments for the radio fan". Popular Science. New York: Modern Publishing Co. 101 (5): 42–43. Retrieved 2010-01-18.
48. ^ Marconi used carborundum detectors for a time around 1907 in his first commercial transatlantic wireless link between Newfoundland, Canada and Clifton, Ireland. Beauchamp, Ken (2001). History of Telegraphy. Institution of Electrical Engineers. p. 191. ISBN 0852967926.
49. Klase, Alan R. (1998). "Crystal Set Design 102". Skywaves. Alan Klase personal website. Retrieved 2010-02-07.
50. ^ a list of circuits from the wireless era can be found in Sleeper, Milton Blake (1922). Radio hook-ups: a reference and record book of circuits used for connecting wireless instruments. US: The Norman W. Henley publishing co. pp. 7–18.
51. ^ May, Walter J. (1954). The Boy's Book of Crystal Sets. London: Bernard's. is a collection of 12 circuits
52. ^ Purdie, Ian (1999). "A Basic Crystal Set". Ian Purdie's Amateur Radio Pages. personal website. Retrieved 2010-02-27.
53. ^ a b c d Kuhn, Kenneth (Dec 9, 2007). "Antenna and Ground System" (PDF). Crystal Radio Engineering. Kenneth Kuhn website, Univ. of Alabama. Retrieved 2009-12-07.
54. ^ Marx, Harry J.; Adrian Van Muffling (1922). Radio Reception: A simple and complete explanation of the principles of radio telephony. US: G.P. Putnam's sons. pp. 130–131.
55. ^ Williams, Henry Smith (1922). Practical Radio. New York: Funk and Wagnalls. p. 58.
56. ^ Putnam, Robert (October 1922). "Make the aerial a good one". Tractor and Gas Engine Review. New York: Clarke Publishing Co. 15 (10): 9. Retrieved 2010-01-18.
57. ^ Lescarboura 1922, p.100
58. ^ Collins, Archie Frederick (1922). The Radio Amateur's Hand Book. US: Forgotten Books. pp. 18–22. ISBN 1-60680-119-8.
59. ^ Lescarboura, 1922, p. 102-104
60. ^ Radio Communication Pamphlet No. 40: The Principles Underlying Radio Communication, 2nd Ed. United States Bureau of Standards. 1922. pp. 309–311.
61. ^ Hausmann, Erich; Goldsmith, Alfred Norton; Hazeltine, Louis Alan (1922). Radio Phone Receiving: A Practical Book for Everybody. D. Van Nostrand Company. pp. 44–45. ISBN 1-110-37159-4.
62. ^
63. ^ Hayt, William H.; Kemmerly, Jack E. (1971). Engineering Circuit Analysis, 2nd Ed. New York: McGraw-Hill. pp. 398–399. ISBN 978-0-07-027382-5.
64. ^ a b Kuhn, Kenneth A. (Jan 6, 2008). "Resonant Circuit" (PDF). Crystal Radio Engineering. Prof. Kenneth Kuhn website, Univ. of Alabama. Retrieved 2009-12-07.
65. ^ Clifford, Martin (July 1986). "The early days of radio". Radio Electronics: 61–64. Retrieved 2010-07-19. on Stay Tuned website
66. ^ Blanchard, T. A. (October 1962). "Vestpocket Crystal Radio". Radio-Electronics: 196. Retrieved 2010-08-19. on Crystal Radios and Plans, Stay Tuned website
67. The Principles Underlying Radio Communication, 2nd Ed., Radio pamphlet no. 40. US: Prepared by US National Bureau of Standards, United States Army Signal Corps. 1922. pp. 421–425.
68. ^
69. ^ Nahin, Paul J. (2001). The science of radio: with MATLAB and Electronics Workbench demonstrations. US: Springer. pp. 60–62. ISBN 0-387-95150-4.
70. ^ Smith, K. c. a.; R. E. Alley (1992). Electrical circuits: An introduction. UK: Cambridge University Press. p. 218. ISBN 0-521-37769-2.
71. ^ a b Alley, Charles L.; Kenneth W. Atwood (1973). Electronic Engineering, 3rd Ed. New York: John Wiley & Sons. p. 269. ISBN 0-471-02450-3.
72. ^ Tongue, Ben H. (2007-11-06). "Practical considerations, helpful definitions of terms and useful explanations of some concepts used in this site". Crystal Radio Set Systems: Design, Measurement, and Improvement. Ben Tongue. Retrieved 2010-02-07.
73. ^ Bucher, Elmer Eustace (1921). Practical Wireless Telegraphy: A complete text book for students of radio communication (Revised ed.). New York: Wireless Press, Inc. p. 133.
74. ^
75. ^ Stanley, Rupert (1919). Textbook on Wireless Telegraphy, Vol. 1. London: Longman's Green & Co. pp. 280–281.
76. ^ a b c Collins, Archie Frederick (1922). The Radio Amateur's Hand Book. Forgotten Books. pp. 23–25. ISBN 1-60680-119-8.
77. ^ a b c d Wenzel, Charles (1995). "Simple crystal radio". Crystal radio circuits. techlib.com. Retrieved 2009-12-07.
78. ^ Hogan, John V. L. (October 1922). "The Selective Double-Circuit Receiver". Radio Broadcast. New York: Doubleday Page & Co. 1 (6): 480–483. Retrieved 2010-02-10.
79. ^ Alley & Atwood (1973) Electronic Engineering, p. 318
80. ^
81. ^ US Signal Corps (October 1916). Radiotelegraphy. US: Government Printing Office. p. 70.
82. ^ Marx & Van Muffling (1922) Radio Reception, p.43, fig.22
83. ^ a b Campbell, John W. (October 1944). "Radio Detectors and How They Work". Popular Science. New York: Popular Science Publishing Co. 145 (4): 206–209. Retrieved 2010-03-06.
84. ^ H. V. Johnson, A Vacation Radio Pocket Set. Electrical Experimenter, vol. II, no. 3, p. 42, Jul. 1914
85. ^ "The cat’s-whisker detector is a primitive point-contact diode. A point-contact junction is the simplest implementation of a Schottky diode, which is a majority-carrier device formed by a metal-semiconductor junction." Shaw, Riley (April 2015). "The cat's-whisker detector". Riley Shaw's personal blog. Retrieved 1 May 2018.
86. ^ Lee, Thomas H. (2004). The Design of CMOS Radio-Frequency Integrated Circuits. UK: Cambridge University Press. pp. 4–6. ISBN 0-521-83539-9.
87. ^ Stanley (1919) Text-book on Wireless Telegraphy, p.282
88. ^ a b Hausmann, Goldsmith & Hazeltine 1922, pp. 60–61
89. ^ a b Lescarboura (1922), p.143-146
90. ^ Hirsch, William Crawford (June 1922). "Radio Apparatus - What is it made of?". The Electrical Record. New York: The Gage Publishing Co. 31 (6): 393–394. Retrieved 10 July 2018.
91. ^ Stanley (1919), p. 311-318
92. ^ Gernsback, Hugo (September 1944). "Foxhole emergency radios". Radio-Craft. New York: Radcraft Publications. 16 (1): 730. Retrieved 2010-03-14. on Crystal Plans and Circuits, Stay Tuned website
93. ^ Douglas, Alan (April 1981). "The Crystal Detector". IEEE Spectrum. Inst. of Electrical and Electronic Engineers. 18 (4): 64–65. doi:10.1109/mspec.1981.6369482. Retrieved 2010-03-28.
94. ^ Kuhn, Kenneth A. (Jan 6, 2008). "Diode Detectors" (PDF). Crystal Radio Engineering. Prof. Kenneth Kuhn website, Univ. of Alabama. Retrieved 2009-12-07.
95. ^ Hadgraft, Peter. "The Crystal Set 5/6". The Crystal Corner. Kev's Vintage Radio and Hi-Fi page. Retrieved 2010-05-28.
96. ^ Kleijer, Dick. "Diodes". crystal-radio.eu. Retrieved 2010-05-27.
97. ^ The Principles Underlying Radio Communication (1922), p.439-440
98. ^ "The sensitivity of the Perikon [detector] can be approximately doubled by connecting a battery across its terminals to give approximately 0.2 volt" Robison, Samuel Shelburne (1911). Manual of Wireless Telegraphy for the Use of Naval Electricians, Vol. 2. Washington DC: US Naval Institute. p. 131.
99. ^ "Certain crystals if this combination [zincite-bornite] respond better with a local battery while others do not require it...but with practically any crystal it aids in obtaining the sensitive adjustment to employ a local battery..."Bucher, Elmer Eustace (1921). Practical Wireless Telegraphy: A complete text book for students of radio communication, Revised Ed. New York: Wireless Press, Inc. pp. 134–135, 140.
100. ^ a b Field 2003, p.93-94
101. ^ Lescarboura (1922), p.285
102. ^ Collins (1922), p. 27-28
103. ^ Williams (1922), p. 79
104. ^ The Principles Underlying Radio Communication (1922), p. 441
105. ^ Payor, Steve (June 1989). "Build a Matchbox Crystal Radio". Popular Electronics: 45. Retrieved 2010-05-28.
106. ^ Field (2003), p. 94
107. ^ Walter B. Ford, "High Power Crystal Set", August 1960, Popular Electronics
108. ^ a b Polyakov, V. T. (2001). "3.3.2 Питание полем мощных станций". Техника радиоприёма. Простые приёмники АМ сигналов [Receiving techniques. Simple receivers for AM signals] (in Russian). Moscow. p. 256. ISBN 5-94074-056-1.
109. ^ Radio-Electronics, 1966, №2
110. ^ Cutler, Bob (January 2007). "High Sensitivity Crystal Set" (PDF). QST. 91 (1): 31–??.
https://dsp.stackexchange.com/questions/27219/state-space-equation-from-differential-equation | # State space equation from differential equation
I have a very general system. I don't know whether it is electrical or mechanical or something else. This system can be modeled by the following differential equation:
$$\dot q = \frac{Tf_1-f_2}{T+1}$$
where:
• $\dot q$ would be equivalent to current in an electrical system
• $f_1$ and $f_2$ would be equivalent to current sources
• $T$ is a given constant
From what I see, if $f_1$ and $f_2$ are constant, there is no way for $\dot q$ to change. How is it possible that this system has a state space representation? And how do I get this state space representation?
• That depends on how you define your state. The easiest would just be to say your state is $\dot{q}$. Are you interested in $q$? If so, you may want to choose that as a state variable. – Peter K. Nov 21 '15 at 15:27
• You have to tell us what you consider the system's input, possibly its output, and what should be considered its state(s). The way I understand your system, it doesn't have any memory, so it becomes pointless to talk about states. – Matt L. Nov 22 '15 at 11:10
• @MattL. Exactly. That's what I'm saying. I don't see any states, for the same reasons. Yet our teacher claims it is possible to get a state space representation from the equation above. The only inputs I can think of are $f_1$ and $f_2$. I probably lack some understanding of this. It would help to see an example of the state space representation of this system. – user50222 Nov 23 '15 at 12:33
Assuming that the inputs are $f_1$ and $f_2$,
$$\dot q = 0 \cdot q + \begin{bmatrix} \frac{T}{T+1} \\ -\frac{1}{T+1} \end{bmatrix}^\top \begin{bmatrix} f_1 \\ f_2 \end{bmatrix}$$
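To make the answer concrete, here is a minimal numerical sketch (plain Python, not from the original thread) that treats $q$ as the single state and integrates $\dot q$ with forward Euler; the values of $T$, $f_1$, $f_2$ are arbitrary choices:

# q as the single state, A = 0, B from the answer above.
T = 2.0
B = [T / (T + 1), -1.0 / (T + 1)]

def qdot(f1, f2):
    # The state derivative depends only on the inputs (A = 0)
    return B[0] * f1 + B[1] * f2

q, dt = 0.0, 0.01
for step in range(1000):
    q += qdot(f1=1.0, f2=0.5) * dt   # constant inputs -> q ramps linearly

print(q)   # ~ (T*1.0 - 0.5)/(T+1) * 10 s = 0.5 * 10 = 5.0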
https://www.physicsforums.com/threads/electromagnetic-waves-radie-antenna-help.289586/ | # Electromagnetic waves radio antenna HELP
1. Feb 3, 2009
### MEAHH
1. The problem statement, all variables and given/known data
A radio-frequency EM plane wave propagates in the +z-direction. A student finds that her portable radio obtains the best reception of the wave when the antenna is parallel to the x-y plane, making an angle of 60 degrees with respect to the y-axis.
(a) Consider an instant when the fields are non-zero at the location of the antenna.
Draw and label the direction of the electric field and the direction of the magnetic field. Explain.
(b) How would your answers to part (a) be different if the wave were propagating in the -z-direction? Explain.
3. The attempt at a solution
a) I drew the E field parallel to the antenna and the B field perpendicular to it so that the oscillation will be at a max... is this right?
b) I am unsure of what changes
2. Feb 4, 2009
### LowlyPion
I don't think anything changes if you reverse the direction between +z and -z.
The E and H waves are still perpendicularly polarized with respect to the orientation of the antenna, is my understanding.
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-5-section-5-1-exponents-and-scientific-notation-exercise-set-page-262/22 | ## Intermediate Algebra (6th Edition)
We are asked to evaluate the expression $-5x^{0}$. In general, $a^{0}=1$ when $a\neq 0$. Therefore, $-5x^{0}=-5(1)=-5$. Since the coefficient -5 is not grouped in parentheses with the base of the exponent, the exponent is evaluated first.
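A quick numeric check of the order of operations (any nonzero x works; 7 is an arbitrary choice):

x = 7
print(-5 * x**0)     # exponent binds first: -5 * 1 = -5
print((-5 * x)**0)   # parentheses change the base: 1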
https://dms.umontreal.ca/~mathapp/Archives/Year1516/abs1516/JeffCalder.html | ## Numerical Schemes for the Hamilton-Jacobi Equation Continuum Limit of Non-dominated Sorting
### Jeff Calder, Department of Mathematics, University of California, Berkeley
Non-dominated sorting arranges a set of points in n-dimensional Euclidean space into layers by repeatedly peeling away the coordinatewise minimal elements. It was recently shown that non-dominated sorting of random points has a Hamilton-Jacobi equation continuum limit. The obvious numerical scheme for this PDE has a slow convergence rate of $O(h^{1/n})$. In this talk, we introduce two new numerical schemes that have formal rates of $O(h)$ and we prove the usual $O(\sqrt{h})$ theoretical rates.
https://www.acmicpc.net/problem/5894 | Time limit · Memory limit · Submissions · Accepted · Solvers · Success ratio
1 s · 128 MB · 69 · 30 · 26 · 40.625%
## Problem
Every day, Farmer John walks around his farm to check on the health and well-being of his N (1 <= N <= 10) cows.
The location of each cow is described by a point in the 2D plane, and Farmer John starts out at the origin (0,0). To make his route more interesting, Farmer John decides that he will only walk in directions parallel to the coordinate axes -- that is, only north, south, east, or west. Furthermore, he only changes his direction of travel when he reaches the location of a cow (he may also opt to pass through the location of a cow without changing direction, if desired). When he changes his direction of travel, he may make either a 90-degree or 180-degree turn. FJ's route must take him back to the origin after visiting all his cows.
Please compute the number of different routes FJ can take to visit his N cows, if he changes direction exactly once at the location of each cow. He is allowed to pass through the location of a cow without changing direction an arbitrary number of times. The same geometric route taken forward versus backward counts as two different routes.
## Input
• Line 1: The integer N.
• Lines 2..1+N: Line i+1 contains the x and y coordinates (space-separated) of the ith point (each value is in the range -1000...1000).
## Output
• Line 1: The number of different routes FJ can take (this could be zero if there are no valid routes).
## Sample Input 1
4
0 1
2 1
2 0
2 -5
## Sample Output 1
2
## Hint
#### Input Details
There are 4 cows, at positions (0,1), (2,1), (2,0), and (2,-5).
#### Output Details
There are two different routes: Farmer John can visit cows in the orders 1-2-4-3 or 3-4-2-1 before returning to the origin.
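For illustration, a brute-force sketch of the rules above (enumerate all visit orders, feasible for N ≤ 10 though slow in pure Python near the upper bound; this is an illustration, not a judged solution):

from itertools import permutations

def direction(a, b):
    # Unit direction of an axis-parallel move, or None if diagonal.
    dx, dy = b[0] - a[0], b[1] - a[1]
    if dx != 0 and dy != 0:
        return None
    return ((dx > 0) - (dx < 0), (dy > 0) - (dy < 0))

def count_routes(cows):
    origin, total = (0, 0), 0
    for order in permutations(cows):
        path = [origin, *order, origin]
        dirs = [direction(a, b) for a, b in zip(path, path[1:])]
        if None in dirs:
            continue                  # some leg was not axis-parallel
        # He must change direction at every cow (90 or 180 degrees):
        if all(d1 != d2 for d1, d2 in zip(dirs, dirs[1:])):
            total += 1
    return total

print(count_routes([(0, 1), (2, 1), (2, 0), (2, -5)]))  # -> 2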
https://codereview.stackexchange.com/questions/59833/splitting-plain-text-dictionary-data-to-multiple-files-round-2 | # Splitting plain text dictionary data to multiple files, round 2
This is a continuation of my earlier question.
I have a plain text file with the content of a dictionary (Webster's Unabridged Dictionary) in this format:
A
A (named a in the English, and most commonly ä in other languages).
Defn: The first letter of the English and of many other alphabets.
The capital A of the alphabets of Middle and Western Europe, as also
the small letter (a), besides the forms in Italic, black letter,
etc., are all descended from the old Latin A, which was borrowed from
the Greek Alpha, of the same form; and this was made from the first
letter (Aleph, and itself from the Egyptian origin. The Aleph was a
consonant letter, with a guttural breath sound that was not an
element of Greek articulation; and the Greeks took it to represent
their vowel Alpha with the ä sound, the Phoenician alphabet having no
vowel symbols. This letter, in English, is used for several different
vowel sounds. See Guide to pronunciation, §§ 43-74. The regular long
a, as in fate, etc., is a comparatively modern sound, and has taken
the place of what, till about the early part of the 17th century, was
a sound of the quality of ä (as in far).
2. (Mus.)
Defn: The name of the sixth tone in the model major scale (that in
C), or the first tone of the minor scale, which is named after it the
scale in A minor. The second string of the violin is tuned to the A
in the treble staff.
-- A sharp (A#) is the name of a musical tone intermediate between A
and B.
-- A flat (A) is the name of a tone intermediate between A and G.
A per se Etym: (L. per se by itself), one preëminent; a nonesuch.
[Obs.]
O fair Creseide, the flower and A per se Of Troy and Greece. Chaucer.
A
A, prep. Etym: [Abbreviated form of an (AS. on). See On.]
1. In; on; at; by. [Obs.] "A God's name." "Torn a pieces." "Stand a
tiptoe." "A Sundays" Shak. "Wit that men have now a days." Chaucer.
"Set them a work." Robynson (More's Utopia)
2. In process of; in the act of; into; to; -- used with verbal
substantives in -ing which begin with a consonant. This is a
shortened form of the preposition an (which was used before the vowel
sound); as in a hunting, a building, a begging. "Jacob, when he was a
dying" Heb. xi. 21. "We'll a birding together." " It was a doing."
Shak. "He burst out a laughing." Macaulay. The hyphen may be used to
connect a with the verbal substantive (as, a-hunting, a-building) or
the words may be written separately. This form of expression is now
for the most part obsolete, the a being omitted and the verbal
substantive treated as a participle.
MALAY
Ma*lay", n.
Defn: One of a race of a brown or copper complexion in the Malay
Peninsula and the western islands of the Indian Archipelago.
MALAY; MALAYAN
Ma*lay", Ma*lay"an, a.
Defn: Of or pertaining to the Malays or their country.
-- n.
Defn: The Malay language. Malay apple (Bot.), a myrtaceous tree
(Eugenia Malaccensis) common in India; also, its applelike fruit.
MALAYALAM
Ma"la*ya"lam, n.
Defn: The name given to one the cultivated Dravidian languages,
closely related to the Tamil. Yule.
MALBROUCK
Mal"brouck, n. Etym: [F.] (Zoöl.)
Defn: A West African arboreal monkey (Cercopithecus cynosurus).
I want to convert this file to a different format to make it easier and more efficient to search in it. This is the idea:
1. Split the file into a collection of entries
2. Save each entry in its own file:
• Not all in the same directory (100k+ files), split to multiple subdirs
3. Create an index file
• One line per entry, in the format: FILENAME:WORD
• Refactored the functions, use parse_content generator to return term, content pairs
• Added a count for terms that appear multiple times, in the form of -1, -2, -3, ... appended (to display later by GUI apps as subscripts)
• Changed the directory splitting logic, because there were still hundreds of files in most output directories. For example now the word greeting will go in g/gr/gre/greeting-NUM.txt instead of gr/greeting-NUM.txt
• Replaced OptionParser with ArgumentParser
• Made it easier to debug, with the --dry-run and --max-count options
• Fixed some minor bugs
This is the script I'm using now:
#!/usr/bin/env python
import re
import os
import logging
from argparse import ArgumentParser

DATA_DIR = 'data'
INDEX_PATH = os.path.join(DATA_DIR, 'index.dat')

re_entry_start = re.compile(r'[A-Z][A-Z0-9 ;\'-.,]*$')
re_nonalpha = re.compile(r'[^a-z]')


def write_entry_file(dirname, filename, content):
    basedir = os.path.join(DATA_DIR, dirname)
    if not os.path.isdir(basedir):
        os.makedirs(basedir)
    path = os.path.join(basedir, filename)
    with open(path, 'w') as fh:
        fh.write('\n'.join(content) + '\n')


def is_new_term(line, prev_line_blank):
    return re_entry_start.match(line) and prev_line_blank and ' ' not in line


def parse_content(arg):
    prev_line_blank = True
    term = None
    term_count = 0
    content = []
    with open(arg) as fh:
        for line0 in fh:
            line = line0.strip()
            if is_new_term(line, prev_line_blank):
                if term:
                    for term in term.split('; '):
                        yield term, content
                prev_term = term
                term = line.lower()
                if term == prev_term:
                    term_count += 1
                    subscript = '-' + str(term_count)
                else:
                    term_count = 1
                    subscript = ''
                content = [term + subscript]
            else:
                content.append(line)
            prev_line_blank = not line


def get_split_path(term, count):
    slug = re_nonalpha.sub('_', term.lower()).ljust(3, '_')
    dirname = os.path.join(slug[0], slug[:2], slug[:3])
    filename = '{}-{}.txt'.format(slug, count)
    return dirname, filename


def parse_file(arg, dry_run=False, max_count=0):
    def rebuild_index():
        count = 0
        for term, content in parse_content(arg):
            count += 1
            if max_count and count > max_count:
                break
            dirname, filename = get_split_path(term, count)
            entry = '{}/{}:{}'.format(dirname, filename, term)
            logging.info(entry)
            if not dry_run:
                fh.write(entry + '\n')
                write_entry_file(dirname, filename, content)

    if dry_run:
        rebuild_index()
    else:
        if not os.path.isdir(DATA_DIR):
            os.makedirs(DATA_DIR)
        with open(INDEX_PATH, 'w') as fh:
            rebuild_index()


def main():
    parser = ArgumentParser(description='Generate index and entry files '
                            'from cleaned plain text file')
    # (option names below are a best-guess reconstruction from the
    # args.* usage further down)
    parser.add_argument('files', nargs='+',
                        help='Cleaned plain text input file(s)')
    parser.add_argument('-n', '--dry-run', action='store_true',
                        help="Dry run, don't write to files")
    parser.add_argument('-m', '--max-count', type=int, default=0,
                        help="Exit after processing N records")
    args = parser.parse_args()
    logging.basicConfig(level=logging.INFO,
                        format='%(levelname)s: %(message)s')
    for arg in args.files:
        parse_file(arg, dry_run=args.dry_run, max_count=args.max_count)


if __name__ == '__main__':
    main()
It creates an index file like this:
a/a_/a__/a__-1.txt:a
a/a_/a__/a__-2.txt:a
a/a_/a__/a__-3.txt:a
a/a_/a__/a__-4.txt:a
a/a_/a__/a__-5.txt:a
a/a_/a__/a__-6.txt:a
a/a_/a__/a__-7.txt:a-
a/a_/a__/a__-8.txt:a 1
a/aa/aam/aam-9.txt:aam
a/aa/aar/aard_vark-10.txt:aard-vark
a/aa/aar/aard_wolf-11.txt:aard-wolf
a/aa/aar/aaronic-12.txt:aaronic
a/aa/aar/aaronical-13.txt:aaronical
a/aa/aar/aaron_s_rod-14.txt:aaron's rod
a/ab/ab_/ab_-15.txt:ab-
a/ab/ab_/ab_-16.txt:ab
a/ab/aba/abaca-17.txt:abaca
a/ab/aba/abacinate-18.txt:abacinate
a/ab/aba/abacination-19.txt:abacination
a/ab/aba/abaciscus-20.txt:abaciscus
a/ab/aba/abacist-21.txt:abacist
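For context, this is roughly how the index is meant to be consumed (a minimal lookup sketch, not part of the script under review; paths follow the layout shown above):

import os

DATA_DIR = 'data'

def lookup(word):
    # Scan the index for the word, then read its entry file.
    with open(os.path.join(DATA_DIR, 'index.dat')) as fh:
        for line in fh:
            path, _, term = line.rstrip('\n').partition(':')
            if term == word:
                with open(os.path.join(DATA_DIR, path)) as entry:
                    return entry.read()
    return None

print(lookup('abacist'))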
Although I like it much better than the previous version, as I've made significant changes I'm wondering:
• Any new mistakes I added?
• Anything else I can still do better?
The full input file (cleaned data) is here (10 MB download, 27 MB unzipped).
The open-source project is on GitHub.
The code looks good and it's been improved in multiple ways (argparse, index file opened just once, dry-run flag instead of a debug one, ...). Hence, this review is going to focus on small details:
• Please add docstrings to document the code. This will make the code more readable and make easier to potential collaborators in github to join the project.
• Try to order the imports alphabetically. Not really needed, but is a nice touch.
• For a better separation of concerns, write_entry_file shouldn't take care of joining content. This should happen earlier in parse_file.
• The count variable in rebuild_index seems to be a good candidate for enumerate (starting at 1 to keep the old numbering):

    for count, (term, content) in enumerate(parse_content(arg), 1):
• The dry_run check is in a couple of places in parse_file and might be confusing at first. What about something like:
def parse_file(arg, dry_run=False, max_count=0):
    def rebuild_index():
        count = 0
        for term, content in parse_content(arg):
            count += 1
            if max_count and count > max_count:
                break
            dirname, filename = get_split_path(term, count)
            entry = '{}/{}:{}'.format(dirname, filename, term)
            logging.info(entry)
            yield entry, dirname, filename, content

    if dry_run:
        for _index_data in rebuild_index():
            pass
    else:
        if not os.path.isdir(DATA_DIR):
            os.makedirs(DATA_DIR)
        with open(INDEX_PATH, 'w') as fh:
            for entry, dirname, filename, content in rebuild_index():
                fh.write(entry + '\n')
                write_entry_file(dirname, filename, content)
• The argument parsing code should be in a separate function and main should take the arguments. This way, main is testable in case you want to write test cases in the future:
if __name__ == '__main__':
    args = parse_arguments()
    main(args)
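For completeness, here is a sketch of what parse_arguments could look like; it is just the parser setup lifted out of the current main, reusing the same options:

def parse_arguments():
    parser = ArgumentParser(description='Generate index and entry files '
                            'from cleaned plain text file')
    parser.add_argument('files', nargs='+')
    parser.add_argument('--dry-run', action='store_true',
                        help="Dry run, don't write to files")
    parser.add_argument('--max-count', type=int, default=0,
                        help="Exit after processing N records")
    return parser.parse_args()


def main(args):
    logging.basicConfig(level=logging.INFO,
                        format='%(levelname)s: %(message)s')
    for arg in args.files:
        parse_file(arg, dry_run=args.dry_run, max_count=args.max_count)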
https://www.itlive.pk/mod/page/view.php?id=122 | ## 20 - Document Map
This lesson shows how you can navigate within your Word 2003 document using the Document Map feature.
https://ora.ox.ac.uk/objects/uuid:5e369562-10cd-4854-810e-913ccc284f3b | Journal article
### EXACT QUANTUM AND SEMICLASSICAL CALCULATION OF POSITIONS AND RESIDUES OF REGGE-POLES FOR INTERATOMIC POTENTIALS
Abstract:
Semiclassical and exact quantum methods have been used to calculate the positions and residues of Regge poles for two interatomic potentials. A Lennard-Jones (6,4) potential with parameters approximating H+-Ar collisions and a Lennard-Jones (12,6) potential with parameters approximating the elastic scattering of K by HBr have been used. There is good agreement between the semiclassical and quantum calculations both for the pole positions and modulus and phase of the residues. Some properties ...
Publication status:
Published
### Access Document
Publisher copy:
10.1088/0022-3700/9/10/022
### Authors
Journal:
JOURNAL OF PHYSICS B-ATOMIC MOLECULAR AND OPTICAL PHYSICS
Volume:
9
Issue:
10
Pages:
1783-1799
Publication date:
1976-01-01
DOI:
10.1088/0022-3700/9/10/022
ISSN:
0953-4075
Source identifiers:
3154
Language:
English
Pubs id:
pubs:3154
UUID:
uuid:5e369562-10cd-4854-810e-913ccc284f3b
Local pid:
pubs:3154
Deposit date:
2012-12-19
https://indico.cern.ch/event/283098/ | Collider Cross Talk
# ATLAS and CMS Higgs prospects for a HL-LHC
## by Olivier Arnaez (Johannes-Gutenberg-Universitaet Mainz (DE)) , Marco Zanetti (Massachusetts Inst. of Technology (US))
Europe/Zurich
4-2-011 - TH common room (CERN)
Description: After presenting the conditions foreseen for the HL-LHC and the techniques developed by the ATLAS and CMS collaborations to extrapolate the Higgs studies to the next decade, the speakers will discuss the prospects for the couplings and other properties.
https://matheducators.stackexchange.com/questions/1896/are-the-words-easy-basic-clearly-obviously-etc-ever-helpful/1944 | # Are the words "easy," "basic," "clearly," "obviously," etc., ever helpful?
This is a very basic fact from...
It then clearly follows that...
Obviously, we have...
The proof is trivial...
I could add plenty of other phrases to this list that mathematicians are prone to use when trying to communicate that a particular concept is so simple that they refuse to discuss the details. I can't think of a time when words like these helped me. Usually they just make me realize how lost I am, and how much more the professor/author knows than I do. Even if I agree that the fact is basic, or the proof is trivial, it didn't help me to hear that it was easy or trivial, because I already thought that.
To get to my question, consider the following amendments:
This is a fact from...
It then follows that...
We have...
The proof is left as an exercise.
Is there ever a context in which it is more helpful for students to hear the first set of sentences as compared to this second set? Are words like "easy," or "trivially," ever constructive? In what setting? As math educators, should we work to weed them out of our written/spoken vocabulary?
• Something related was discussed on MathOverflow mathoverflow.net/questions/16193/…
– quid
Apr 22 '14 at 18:00
• Yes, they are helpful. For example, when there are multiple inequalities that could apply, I normally opt to use sharper bounds, however, it might be enough to apply something as rough as union bound. It's also a short way of letting the audience know that nothing unexpected happens. In other words, such a statement "prunes the search tree" and might lead sometimes to great speed-ups. Nevertheless, one should not use such phrases if they are not warranted, e.g. it only seems it's obvious, but was not properly checked. Apr 22 '14 at 19:32
• Also related: Mathematics StackExchange and Academia StackExchange Apr 23 '14 at 2:10
• This page is about the computer engineering use of "trivial", which is similar to the mathematician's use; you might find it amusing. fishbowl.pastiche.org/2007/07/17/… For me personally, "clearly" usually indicates where I'm about to make an error. :-) Apr 23 '14 at 13:01
• I'd add that I agree with your statement that either the characterization of a theorem as trivial is insulting to the reader if the theorem is not understood, and redundant if it is. I edit programming books as a hobby and often discourage writers from saying things like "It is easy to do X in C#"; the reader who already finds it easy likely does not have to read the book! Apr 24 '14 at 6:33
The main point here is that these words/expressions should not be used as substitute for an argument.
They obviously have some negative effects:
• You evaluate your students with them, and not in a positive sense. If you say "obviously", the message is that "it had better be obvious". And if it is not clear to someone, that is an evaluation that this person is not up to the course. This can quickly turn into humiliation of your audience (or reader, if it is in notes).
• If you turn out to be wrong, then you lose face quickly. This is a consequence of my previous point: you humiliated your audience, and it turned out that you did it wrongly. You are not a person to be taken seriously.
You should, however, use these words in context, explanation. For example, to stress that though a calculation looks complicated, it is actually a simple idea what is behind.
• Yes! One of my old math teachers always banned us from saying "Basically..." when we were explaining our solutions up at the board. It was deemed "a put-down word", because it implied that if you didn't understand, you were stupid. Apr 23 '14 at 0:11
• @SimonT well I wouldn't classify "basically" as one of those words. That can be used when you've just explained a complicated series of steps that can be summarized more concisely. Apr 23 '14 at 2:05
• +1 for "should not be used as substitute for an argument". Apr 30 '14 at 22:48
I can think of a few instances where it might be useful:
1. To situate the current piece of the concept among others coming up
2. To call-back to something earlier in the course that really should be easy to them at this point
3. To intentionally make fun of something you know they thought was stupid (e.g. high school? if you're teaching a proofs course)
And, examples of each of these:
1. To start off this proof, we prove this easy (or easier) theorem... (again, the goal is to tell them that something harder is coming soon)
2. Obviously, we have 2n+1 is odd, by the definition of odd. (again, later in the course--not if you've just introduced quantifiers)
3. And now, to finish off this proof, we do a trivial computation of this obnoxious integral. You all learned this in high school, right? (here, the goal would be for people to roll their eyes)
I do agree with you on the general idea though. We use these terms far too often when we're explaining things, and they are almost always not a good idea.
• On point (2), it can sometimes be outright confusing if an author doesn't acknowledge that an obvious step is obvious. I'm left worrying that I don't understand something, and it's more complicated than I think. Apr 22 '14 at 18:03
• @JackM Almost always being, "at all but finitely many times" or "at all but countably many times"? :P Oct 31 '14 at 16:46
In addition to what @andras-batkai said, seeing words like 'obviously', 'clearly', and so on, in assignments or texts raises red flags and scepticism with me. (Recall that, for thousands of years, it was obvious that the earth was flat. Also, obviously a set contains all of its elements.)
As both a former student of formal logic and an occasional tutor, this is a pet peeve of mine. I feel these words should not be in educational or instructional texts (manuals, course notes, troubleshooting):
• They don't improve comprehension. If something is obvious, by its nature, it doesn't need to be pointed out.
• They put the burden of mathematical rigour on the reader, rather than the author. This makes it more likely for hidden assumptions to leak into proofs.
• They sweep complexity under the rug. If it really is obvious, wouldn't the explanation fit into a footnote?
• They discourage discussion. If the assumption does turn out to be wrong, students will be more likely to assume they misunderstand than they are to ask about it.
I did notice that this practice seems to be going out of style. At least in institutions I've worked with, lecturers are advised to take teaching classes, and it looks like this topic is given some attention.
P.S. The author experimented with the word 'obviously' in providing IT support to lecturers. The experiment was not well received.
There aren't any four-letter words in the set, so they are allowed in polite company. But use them sparingly, and when you really mean it. When you say something is "simple," you should make sure it really is simple for most of your audience. Be careful: what is trivial to you (presumably the world expert on the topic you are writing about) can very well be a profound mistery to the average reader.
If in doubt, err on the side of explaining (a bit) too much.
• Have you deliberately combined "mystery to the average reader" with "misery to the average reader"?
– user173
Apr 22 '14 at 19:54
• @MattF., no. Chalk it up to "fortunate typo" ;-) Apr 22 '14 at 20:04
Yes, they are useful, but they can be over-used, or used when not true.
If you state "it is a basic fact from...", and the reader does not see why it follows, then they know that they're not following your argument as you intended it to be followed.
If you merely state "it is a fact from...", and the reader does not see why it follows, then they might think that it's a deep result and that you intend either that they should accept it on trust, or that you're presenting a lemma to be proved later.
Compare:
"It is a very basic fact from distributivity/associativity/commutativity that $(1-x)(1+x) = 1 - x^2$" -- the reader should be able to immediately visualize the proof.
"It is a very basic fact from arithmetic that $(1-x)\sum\limits_{i=0}^{n-1}x^i = 1 - x^n$" -- the reader can immediately see how a proof for a fixed $n$ might be carried out, and you're telling them that the calculations do indeed work out to save them the bother of writing it. The detail around the implied "for all $n$" is being ignored, which is a little shady. Really you want a proof that retains the summations and therefore involves distributivity over the summation symbol rather than just distributivity over binary addition, and that's still within the reach of the reader.
"It is a fact from the Axiom of Choice that every set has a well-order" -- the reader might well know or be able to construct a proof, but they're not expected to instantly produce it. It is not a "very basic" consequence except perhaps to an audience of skilled set theoreticians who are indeed all expected to trot out that proof.
"It is a fact from the Axiom Choice that a sphere can be decomposed into finitely many pieces, which themselves can be rotated and translated to produce two spheres of equal size to the first. This is the so-called 'Banach-Tarski paradox'" -- Information presented for interest and not proved. The student probably thinks "okaay, I can't even imagine how to prove that", but the proof might come in a sufficiently specialized undergraduate course.
There is a similar difference between "the proof is trivial" and "the proof is left as an exercise". If someone attempts the exercise and finds themself part way into a proof that isn't trivial, then if you've said it's trivial then they know there's a better proof they've missed. If you haven't they don't.
"Clearly follows" and "obviously" are similar, although I think they're frequently mis-used for things that objectively are neither clear nor obvious to part of the audience. It's just that the speaker thinks they ought to be clear or obvious and therefore doesn't care to spend time and screen real-estate on them.
There's another such phrase, "it follows immediately", which asserts that no new gizmos need to be introduced to the proof. To stretch the use of the term, it may turn out that "immediate" actually means a few lines of multiplying out brackets, but that's OK.
There are two separate issues here. The first is whether there's important work which the word clear/trivial/obvious is doing when used in mathematical explanations. As explained by Jack M. and Steve Jessop and several of the MO posts, the answer to that question is yes: it indicates to the reader/listener that there's a very short argument that's being skipped over rather than something deep. The second issue is whether "clear/trivial/obvious" are good choices of words to use for the concept. I think in the context of teaching the answer is no. (In a research context they're fine, because all readers will know that they're technical terms of art that don't actually mean what clear/trivial/obvious mean in everyday speech.) However, I don't know what would be a better choice of word which would be less likely to be misunderstood. Has anyone found a word choice that works better with students?
• Interesting point, although everyone I know was very familiar with the term of art well before "research context". I suppose that once you've introduced your students to proofs, and starting presenting proofs long enough that there are details worth eliding that the reader/listener is competent to fill in, you have a choice. You can use a terminology that works for them, or you can teach them what "clear/trivial/obvious" means in this context and then use the term of art. Or some combination of the two, of course. Jan 3 '15 at 3:37
I would argue that they are.
When one reads a word such as "clearly", it is a sign that an argument has been omitted, and that said argument should be relatively easy to find. I think it engages the reader and makes them "participate" with the material more.
I wouldn't worry about making people feel bad when they don't get something - everyone eventually has to learn to get over that feeling (and we all experience it, time and time again). Students who are assertive will ask you if they didn't understand something you said or wrote.
• There is a difference here. When stating that something "clearly" holds, then is is also stated that it is expected of you to get it, and if you don't get it, you are not up to the course. This would not be a big problem if it would be objective. But it is highly subjective what is "obviuous" and what is not. Independently of the mathematical ability. Apr 23 '14 at 11:35
• Feeling bad for not getting something shouldn't be accompanied by having to put up with someone telling you (written or otherwise) that it is obvious/clear/easy to see. Moreover, like András said, "relatively easy to find" is also subjetive. Apr 23 '14 at 21:06
• I think that worrying about how people feel when they hear words like "easy" is an issue that really shouldn't be ignored. It is obvious to any instructor who cares about understanding that a student who feels like everyone else finds something easy (even if they don't) will be unable to fully engage with the material. I take your point about it signalling a missing argument, but I think it is important to contrast the difficulty constructing an argument with the ease of presenting it. What is clear or obvious in presenting an argument is often difficult to come up with in the first place Apr 25 '14 at 2:16
I would say that "trivial" is okay, the others are probably less so. While "trivial" has connotations of "exceptionally easy", in math speak we often use it to roughly mean "an exceptionally simple statement, requiring little complexity to prove or define". I'd probably shy away from "the proof is trivial" a little more, but even so, if it's a tiny 3 line lemma, even if it's not easy and involves a huge leap of logic, it's still "trivial" by my metric.
I view "trivial" like "simple", and "simple doesn't mean easy" is practically a catch phrase of mine at this point. Things can be simple in that they only rely on two or three axiomatic facts, but actually thinking of which two or three axioms to use isn't always easy. It may pay off to remind students of this fact. "It's not exceptionally complex, but can be hard to understand" goes a long way.
Basic is the next I'd be okay with, but only in limited cases. I'd pretty much solely use "basic" to refer to things like so-called "basic facts" (2+2=4; you can't divide by zero, etc), or more generally literal axioms of the system you're working in.
"Obviously" and "clearly" I'd avoid at all costs, unless maybe you're saying "2+2=4". Though I've been known to use "obviously" in humor, especially with contradictions ("And, well, obviously 0=4 so..."). It just alienates people who didn't get the intuition. Unfortunately, these two words are almost the proof-writer's version of "um...". People don't notice they're saying them, they're filler words. I always try to proof read them out of anything I write, it actually tends to make things more readable in addition to not alienating the students who don't understand a concept.
Perhaps one tiny caveat is that things sometimes look more complex than they are, and it can be useful to reassure students that this gnarly thing isn't really that scary. Newton-Raphson comes to mind as something that made my brain overflow when I first saw the formal definition of it in a textbook we were using. Being assured that, yes, it really is just this simple iterative process was helpful in making it less scary. See also: functions with 25 Greek letters where 24 of them are stupid constants.
• Functions with 25 Greek letters are highly suspicious. Apr 30 '14 at 19:50
• @user11235 Deliberate absurdity, I know the Greek alphabet only has 24 letters :p Apr 30 '14 at 19:53
• That's why Greek has two different lowercase sigmas. One of which is easily confused with zeta given the typical shabbiness of mathematicians' handwriting. Jan 3 '15 at 3:47
These words can be quite helpful, because it is important to know how difficult a skipped proof is. The problem is that these words have contradictory meanings, depend on context, and are often used out of laziness.
Since it can be quite frustrating to get stuck on an abused "trivial", the trust of students and readers in this kind of word can be low, so it is almost always better to be more precise, either by giving more details on the difficulty:
"The fully-written proof of this lemma would take half a page of intermediate difficulty among the proofs written in this book."
"The reader can verify the base case of the induction (same difficulty as the one-star exercices in this book)."
or be more honest on the sense of trivial:
"The proof of this result is elementary, uninspiring and long, so we skip it."
"The proof of this result is so well-known that I expect my readers to know it."
"I am so scared about boring anyone that I would rather risk losing half the audience."
"Obviously", and its ilk, do not contain much information. They can be replaced by referring to the tool that is used to prove or notice the obvious fact.
For example: By triangle inequality, by definition (of something specific where necessary), by calculation, by integrating by parts.
Also: By lengthy calculation, by very clever choice of test function, and so on.
Here the statement communicates something about the difficulty and the tools used, thus making it easier to immediately verify something, or alternatively telling why one should not be discouraged at not immediately seeing it.
I think the most important thing to keep in mind is your audience. Obviously, if you were teaching a calculus class, you wouldn't have to remind them how to add fractions. Well, in this case the correct word might be "hopefully". Hopefully you don't have to remind them, but I've had calculus students that couldn't add 2 fractions to save their lives. So one should really think about one's audience first.
However, the use of such phrases builds confidence if used appropriately. I remember when I was a student and my professors would use "trivial" or "obvious" ...sometimes I would agree and other times I would disagree. When I agreed, I felt that all my studying paid off. The problem is when the student disagrees. In that case, it's up to both the student and the professor to bridge the gap. This is why it's important for both students and teachers to ask questions.
This is just one facet of the question that I feel wasn't mentioned.
• I don't have the rep to comment, but @Carlos - if you use "hopefully" there's no clarity of meaning between a) I hope I don't have to remind you of the standard method used to add fractions and b) I hope the standard method of adding fractions, which you're familiar with, is true in most instances. Apr 23 '14 at 21:38
https://www.hackmath.net/en/math-problem/2108 | # Equation with abs value
How many solutions does the equation
$(|x| + x)\,|x - 3| = |x + 1|$
have in the real numbers?
Result
n = 4
#### Solution:
$x_1 = -1,\quad x_2 = \frac{5-\sqrt{17}}{4} \approx 0.2192,\quad x_3 = \frac{5+\sqrt{17}}{4} \approx 2.2808,\quad x_4 = \frac{7+\sqrt{57}}{4} \approx 3.6375$
The equation has exactly four real solutions.
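A sketch of the case analysis behind these values:
For $x \le 0$ we have $|x| + x = 0$, so the left-hand side vanishes and the equation forces $|x+1| = 0$, i.e. $x_1 = -1$.
For $0 < x < 3$ the equation becomes $2x(3-x) = x+1$, i.e. $2x^2 - 5x + 1 = 0$, with roots $x = \frac{5 \pm \sqrt{17}}{4}$; both lie in $(0,3)$.
For $x \ge 3$ it becomes $2x(x-3) = x+1$, i.e. $2x^2 - 7x - 1 = 0$; only the root $x = \frac{7 + \sqrt{57}}{4} \approx 3.637$ is admissible (the other root is negative).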
## Next similar math problems:
1. Equation 23
Find value of unknown x in equation: x+3/x+1=5 (problem finding x)
2. Null points
Calculate the roots of the equation: ?
3. Cinema 4
In cinema are 1656 seats and in the last row are 105 seats , in each next row 3 seats less. How many are the total rows in cinema?
4. Tubes
Iron tubes in the warehouse are stored in layers so that each tube top layer fit into the gaps of the lower layer. How many layers are needed to deposit 100 tubes if top layer has 9 tubes? How many tubes are in bottom layer of tubes?
5. Roots
Determine the quadratic equation absolute coefficient q, that the equation has a real double root and the root x calculate: ?
6. The product
The product of a number plus that number and its inverse is two and one-half. What is the inverse of this number
Find the roots of the quadratic equation: 3x² - 4x + (-4) = 0.
8. Discriminant
Determine the discriminant of the equation: ?
9. Solve 3
Solve quadratic equation: (6n+1)(4n-1) = 3n²
10. Equation
Equation ? has one root x1 = 8. Determine the coefficient b and the second root x2.
11. Evaluation of expressions
If a² - 3a + 1 = 0, find (i) a² + 1/a², (ii) a³ + 1/a³
12. Square root 2
If √(3m² + 22) - x = 0, and x = 7, what is m?
https://rigtriv.wordpress.com/ | ## Quasiconformal Maps
Ok, time to get back to Riemann surfaces! We’ve been all about hyperbolic surfaces, and so first let’s compare the two of them: every oriented hyperbolic surface is a Riemann surface, and the conformal class of a complex structure contains a unique hyperbolic metric.
We’re going to get to quasi-conformal maps soon, but first, we’re going to want to build some new objects on our Riemann surfaces. Differential forms are a fairly standard thing, and asserting that they’re holomorphic isn’t exactly a revolution. On Riemann surfaces, of course, there’s holomorphic 1-forms only, by dimension reasons. So now, we’ll talk about quadratic forms, and a few of the things we can do with them.
## Mapping Class Group Elements
Let’s get into the mapping class group and talk a bit about its elements and its structure. I’m going to omit proofs, because I can’t beat Minsky’s exposition, and this is just some flavor and definitions, most of which won’t be coming up too much in the future.
## The Mapping Class Group
Last time, we touched on the mapping class group, so now, we’re going to dig in. Now, we’re not going to dig too deeply, there’s a LOT here (see the wonderful book by Farb and Margalit for a hint at what’s there) and for now, my primary reference is Minsky’s set of lectures from the PCMI summer institute in 2011 on moduli of Riemann surfaces.
## Fenchel-Nielsen Coordinates
Welcome back, and hope all you readers had a good 2014 and particularly good holidays and new year’s celebrations, if you do those things. Today, we’re going to keep on the road to producing the moduli space of curves, by nailing down some more hyperbolic geometry.
## Hyperbolic Surfaces
Ok, with the hyperbolic plane and its metric and geodesics out of the way, we can start getting into some surface theory.
So, I know I usually talk about strictly algebraic geometry stuff, but the moduli of curves lives in an interesting place. It's both an algebraic and an analytic object. So we're going to start by talking a bit about hyperbolic surfaces, as we work towards a construction of Teichmüller space, which is used to construct the moduli of curves over $\mathbb{C}$.
http://pruffle.mit.edu/~ccarter/3.21/Lecture_14/ | Last Time
The Position of a Particle Executing a Random Walk
The Probability of Finding a Particle at a Position after a Random Walk
Treating the Concentration as Time-Dependent Probability Distribution
Relating the Self-Diffusivity to a Random Walk
A Puzzle: Why for a random walk is ?
3.21 Spring 2001: Lecture 14
The Successful Jump Frequency as an Activated Process
The treatment of diffusion as a statistical process permitted a physical correspondence between the macroscopic diffusivity $D$ and microscopic parameters: the average jump distance $\langle r \rangle$, the jump correlation factor $f$, and the average frequency $\Gamma'$ with which a jumper makes a finite jump.
In this lecture, the statistical evaluation of microscopic processes will be applied to the successful jump frequency $\Gamma'$. A physical correspondence will be made between $\Gamma'$ and the microscopic attempt process (the natural atomic vibration frequency $\nu$) and the difference in energy between the potential energy of a site and the maximum value of the minimum potential energy (the saddle energy) as the atom moves from one equilibrium site to the next.
The result that will be obtained, namely that the frequency of successful hops,
$\Gamma' = \nu \exp\left(-\dfrac{E_{saddle} - E_{min}}{kT}\right)$   (14-1)
is the natural frequency $\nu$ multiplied by a Boltzmann factor, has remarkably general application.
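As a rough numerical illustration (the values here are assumed only for concreteness): with an attempt frequency $\nu \approx 10^{13}\,\mathrm{s^{-1}}$, a barrier $E_{saddle} - E_{min} = 1\,\mathrm{eV}$, and $T = 1000\,\mathrm{K}$ (so $kT \approx 0.086\,\mathrm{eV}$), the exponent is $-1/0.086 \approx -11.6$, and $\Gamma' \approx 10^{13} \times 9\times10^{-6} \approx 10^{8}$ successful hops per second.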
Distribution of Energy among Particles
A fundamental result from statistical mechanics is that, for an ensemble of atoms at a fixed temperature $T$, the energies of the atoms have a characteristic probability distribution:
$P(E) \propto \exp\left(-\dfrac{E}{kT}\right)$
Below, the rate of successful jumps for simple models of activated processes will be derived. Each derivation will depend on the distribution of energies given above. It will be supposed that a single atom will assume all possible values of energy with probabilities given by the Boltzmann distribution over time (the ergodic assumption). In other words, the distribution is considered to apply to the atoms at a time scale that is rapid compared to the natural frequency of the atoms--no correlation is made for the loss (or gain) of energy as an atom hops from one equilibrium site to the next.
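As a quick numerical check of this weighting (a toy sketch; the exponential energy model and the specific numbers are assumptions for illustration), the fraction of an ensemble with energy above a barrier $E_b$ matches the Boltzmann factor $e^{-E_b/kT}$:

import numpy as np

rng = np.random.default_rng(1)
kT = 0.086   # eV, roughly 1000 K
Eb = 0.30    # eV barrier, small enough to sample reliably
# energies drawn with Boltzmann weights and a flat density of states
E = rng.exponential(kT, size=1_000_000)
print((E > Eb).mean())    # ~ 0.0305
print(np.exp(-Eb / kT))   # ~ 0.0305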
Activation Processes in Square Wells
Consider an ensemble of particles with distributed energies moving about on an energy landscape of square wells (of width $a_{min}$) separated by square barriers (of width $a^{\ddagger}$ and height $\Delta E = E^{\ddagger} - E_{min}$).
The characteristic time it takes a particle to cross the activated state is
$\tau \approx \dfrac{a^{\ddagger}}{\langle v \rangle}$   (14-2)
where $\langle v \rangle \approx \sqrt{kT/m}$ and $m$ is the mass of the particle with characteristic thermal energy $kT$.
The total rate, $R_{total}$, at which particles cross the barrier is
$R_{total} = \dfrac{N^{\ddagger}}{\tau} = \dfrac{N}{\tau}\,\dfrac{Z^{\ddagger}}{Z_{min}}$   (14-3)
where $Z^{\ddagger}$ and $Z_{min}$ are the partition functions for the activated and minimum states.
The rate at which a single particle crosses, $\Gamma'$, is:
$\Gamma' = \dfrac{R_{total}}{N} = \dfrac{1}{\tau}\,\dfrac{Z^{\ddagger}}{Z_{min}}$   (14-4)
$\dfrac{Z^{\ddagger}}{Z_{min}} = \dfrac{a^{\ddagger}}{a_{min}}\, e^{-\Delta E/kT}$   (14-5)
Therefore,
$\Gamma' \approx \dfrac{\langle v \rangle}{a_{min}}\, e^{-\Delta E/kT}$   (14-6)
The term that multiplies the Arrhenius factor (the $1/T$ exponential) is the inverse of the characteristic time it takes a particle to make an attempt at the activated state.
Activation Processes in Harmonic Wells
Consider the following modification of the simple case above: the minima are treated as harmonic wells.
The minima can be approximated by
$E(x) \approx E_{min} + \dfrac{1}{2}\beta x^2$   (14-7)
The analysis is similar to the case of the square wells, but for the ratio of the partition functions:
$\dfrac{Z^{\ddagger}}{Z_{min}} \approx \dfrac{a^{\ddagger}\, e^{-E^{\ddagger}/kT}}{\int e^{-E(x)/kT}\,dx}$   (14-8)
Approximating,
$\int_{-\infty}^{\infty} e^{-(E_{min} + \beta x^2/2)/kT}\,dx = e^{-E_{min}/kT}\,\sqrt{\dfrac{2\pi kT}{\beta}}$   (14-9)
and carrying out the integration,
$\Gamma' \approx \nu\, e^{-(E^{\ddagger} - E_{min})/kT}$   (14-10)
where $\nu = \dfrac{1}{2\pi}\sqrt{\beta/m}$ is the characteristic oscillation frequency at the minimum of a particle with mass $m$ sitting in a well of curvature $\beta$.
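As a sanity check on magnitudes (values assumed for illustration): for an atomic mass $m \approx 10^{-25}\,\mathrm{kg}$ and a well curvature $\beta \approx 40\,\mathrm{N/m}$, $\nu = \frac{1}{2\pi}\sqrt{\beta/m} \approx \frac{1}{2\pi}\sqrt{4\times10^{26}}\;\mathrm{s^{-1}} \approx 3\times10^{12}\,\mathrm{s^{-1}}$, which is of the order of typical atomic vibration (Debye) frequencies, $10^{12}$ to $10^{13}\,\mathrm{Hz}$.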
Many-Body Theory of Activated Processes at Constant Pressure
In a real system, an atom or a vacancy does not make a successful hop without affecting (or being affected by) its neighbors: all of the particles are vibrating, and the saddle-point energy is an oscillating target produced by the random vibrations of all the atoms. The energy surface that an atom, interstitial, or vacancy travels upon is a complicated and changing surface. If there are $N$ spherical particles, then there are $6N$ degrees of freedom to this surface, but it will be assumed that the momentum variables can be averaged out so that only a $3N$-dimensional potential surface remains.
The minima, or equilibrium values of momenta and positions, can be approximated by harmonic wells:
$E \approx E_{min} + \dfrac{1}{2}\sum_{i=1}^{3N} \beta_i x_i^2$   (14-11)
This is approximated by a one-dimensional problem by assuming that all states during a hop lie on the surface in the figure.
Let the first coordinate $x_1$ be in the direction of the crossing (parallel to the hop), so the average (rms) momentum in that direction is related to an average rate of attempts. The result that was derived for the harmonic potential can be re-used in this case:
$\Gamma' = \dfrac{\langle v_1 \rangle}{a}\,\dfrac{Z^{\ddagger}}{Z_{min}}$   (14-12)
where $a$ is recognized to be the width in Figure 14-4.
However, in this case, the particle may have a different volume in the activated state compared to the equilibrium state:
$\Delta V^{\ddagger} = V^{\ddagger} - V_{min}$   (14-13)
For the case where the volume may vary but the pressure is constant, the canonical constant-pressure partition function must be used:
$Z = \sum_{states} e^{-(E + PV)/kT}$   (14-14)
Therefore $\Gamma'$ picks up an additional factor:
$\Gamma' \propto e^{-P\,\Delta V^{\ddagger}/kT}$   (14-15)
It remains to evaluate the partition functions by summing over all energies: $Z = \sum_i e^{-E_i/kT}$. The partition function is evaluated by passing to the classical limit, dividing up the quantum phase space into cells of side length equal to Planck's constant, $h$.
Because of the uncertainty principle:
$\Delta x\, \Delta p \approx h$   (14-16)
Each elementary volume, $dx\,dp$, in phase space must be considered to have degeneracy:
$g = \dfrac{dx\,dp}{h}$   (14-17)
Therefore,
$Z = \sum_{cells} g\, e^{-E/kT}$   (14-18)
In the classical limit,
$Z = \dfrac{1}{h^{3N}} \int e^{-E(p,\,x)/kT}\, d^{3N}p\, d^{3N}x$   (14-19)
Using the harmonic approximations for the minima, carrying out the integration, and letting all the masses have the same value:
$Z_{min} = e^{-E_{min}/kT} \prod_{i=1}^{3N} \dfrac{kT}{h\nu_i}$   (14-20)
Carrying out the same process for the activated state (which has one less vibrational degree of freedom) and adding the momentum near the activated state to the integral:
$\Gamma' = \dfrac{\prod_{i=1}^{3N} \nu_i}{\prod_{i=1}^{3N-1} \nu_i^{\ddagger}}\; e^{-(E^{\ddagger} - E_{min})/kT}$   (14-21)
The products over the vibrational modes can be related to the entropies of the states, i.e.,
$\dfrac{\prod_{i=1}^{3N} \nu_i}{\prod_{i=1}^{3N-1} \nu_i^{\ddagger}} = \tilde{\nu}\; e^{(S^{\ddagger} - S_{min})/k}$   (14-22)
with $\tilde{\nu}$ a characteristic attempt frequency.
Putting this all back into the expression for the rate of jumps,
$\Gamma' = \tilde{\nu}\; e^{\Delta S^{\ddagger}/k}\; e^{-(E^{\ddagger} - E_{min})/kT}\; e^{-P\,\Delta V^{\ddagger}/kT} = \tilde{\nu}\; e^{-\Delta G^{\ddagger}/kT}$   (14-23)
where $\Delta S^{\ddagger} = S^{\ddagger} - S_{min}$ and $\Delta G^{\ddagger}$ is the Gibbs free energy of activation.
https://physics.stackexchange.com/questions/708932/what-is-the-role-of-hermitian-hamiltonians-in-relativistic-qft/708933 | What is the role of Hermitian Hamiltonians in relativistic QFT?
In single-particle quantum mechanics, the probability of finding the particle in all space is conserved due to the hermiticity of the Hamiltonians (and remains equal to unity for all times, if normalized).
But in relativistic quantum field theory, particle numbers are not conserved. For example, in QED, an initial state consisting of an electron-positron pair can annihilate into two photons in the final state. There is no trace of the initial electron and the positron in the final state. Similarly, in $\beta$-decay, an initial neutron is converted into a proton, an electron and an anti-electron neutrino in the final state. There is no neutron after the decay takes place, even though the Fermi theory Hamiltonian is Hermitian.
So it seems that a Hermitian Hamiltonian in QFT is not responsible for the conservation of probability. I seem to have a conceptual glitch here which I would like to have clarified. What is the role of Hermitian Hamiltonians in relativistic QFT in the time development of states? There must be some constraining feature of a Hermitian QFT Hamiltonian. Sorry if the question sounds dumb.
Some more thoughts on this, for clarification: In single-particle quantum mechanics, the norm of a quantum state is the probability of finding the particle in all of space. Is there a similar probability interpretation of the norm of a state in QFT? Since QFT Hamiltonians are Hermitian, the norm of the state remains preserved under time development, but the state itself can change. This confuses me. To take the example given above: the norm of the initial neutron state does not change under time development, but the state itself can evolve into other states. How do we interpret this in QFT?
https://repository.nwu.ac.za/handle/10394/10488?show=full | dc.contributor.author: Venter, C.
dc.contributor.author: De Jager, O.C.
dc.date.accessioned: 2014-05-08T10:49:18Z
dc.date.available: 2014-05-08T10:49:18Z
dc.date.issued: 2010
dc.identifier.citation: Venter, C. & De Jager, O.C. 2010. Accelerating high-energy pulsar radiation codes. Astrophysical journal, 725(1):1903-1909. [http://iopscience.iop.org/0004-637X/]
dc.identifier.issn: 0004-637X
dc.identifier.issn: 1538-4357 (Online)
dc.identifier.uri: http://hdl.handle.net/10394/10488
dc.identifier.uri: http://dx.doi.org/10.1088/0004-637X/725/2/1903
dc.identifier.uri: http://iopscience.iop.org/0004-637X/725/2/1903/pdf/apj_725_2_1903.pdf
dc.description.abstract: Curvature radiation (CR) is believed to be a dominant mechanism for creating gamma-ray emission from pulsars and is emitted by relativistic particles that are constrained to move along curved magnetic field lines. Additionally, synchrotron radiation (SR) is expected to be radiated by both relativistic primaries (involving cyclotron resonant absorption of radio photons and re-emission of SR photons), or secondary electron-positron pairs (created by magnetic or photon-photon pair production processes involving CR gamma rays in the pulsar magnetosphere). When calculating these high-energy spectra, especially in the context of pulsar population studies where several millions of CR and SR spectra have to be generated, it is profitable to consider approximations that would save computational time without sacrificing too much accuracy. This paper focuses on one such approximation technique, and we show that one may gain significantly in computational speed while preserving the accuracy of the spectral results.
dc.language.iso: en
dc.publisher: IOP Publishing
dc.subject: Pulsars: general
dc.subject: radiation mechanisms: non-thermal
dc.title: Accelerating high-energy pulsar radiation codes
dc.type: Article
dc.contributor.researchID: 12006653 - Venter, Christo
dc.contributor.researchID: 10065857 - De Jager, Ocker Cornelis
https://www.whsmith.co.uk/products/advanced-calculus/9780130652652 | By: Gerald B. Folland (author), Hardback
2 - 4 weeks availability
£68.99
Description
For undergraduate courses in Advanced Calculus and Real Analysis. This text presents a unified view of calculus in which theory and practice reinforce each other. It covers the theory and applications of derivatives (mostly partial), integrals, (mostly multiple or improper), and infinite series (mostly of functions rather than of numbers), at a deeper level than is found in the standard advanced calculus books.
Contents
1. Setting the Stage. Euclidean Spaces and Vectors. Subsets of Euclidean Space. Limits and Continuity. Sequences. Completeness. Compactness. Connectedness. Uniform Continuity.
2. Differential Calculus. Differentiability in One Variable. Differentiability in Several Variables. The Chain Rule. The Mean Value Theorem. Functional Relations and Implicit Functions: A First Look. Higher-Order Partial Derivatives. Taylor's Theorem. Critical Points. Extreme Value Problems. Vector-Valued Functions and Their Derivatives.
3. The Implicit Function Theorem and Its Applications. The Implicit Function Theorem. Curves in the Plane. Surfaces and Curves in Space. Transformations and Coordinate Systems. Functional Dependence.
4. Integral Calculus. Integration on the Line. Integration in Higher Dimensions. Multiple Integrals and Iterated Integrals. Change of Variables for Multiple Integrals. Functions Defined by Integrals. Improper Integrals. Improper Multiple Integrals. Lebesgue Measure and the Lebesgue Integral.
5. Line and Surface Integrals; Vector Analysis. Arc Length and Line Integrals. Green's Theorem. Surface Area and Surface Integrals. Vector Derivatives. The Divergence Theorem. Some Applications to Physics. Stokes's Theorem. Integrating Vector Derivatives. Higher Dimensions and Differential Forms.
6. Infinite Series. Definitions and Examples. Series with Nonnegative Terms. Absolute and Conditional Convergence. More Convergence Tests. Double Series; Products of Series.
7. Functions Defined by Series and Integrals. Sequences and Series of Functions. Integrals and Derivatives of Sequences and Series. Power Series. The Complex Exponential and Trig Functions. Functions Defined by Improper Integrals. The Gamma Function. Stirling's Formula.
8. Fourier Series. Periodic Functions and Fourier Series. Convergence of Fourier Series. Derivatives, Integrals, and Uniform Convergence. Fourier Series on Intervals. Applications to Differential Equations. The Infinite-Dimensional Geometry of Fourier Series. The Isoperimetric Inequality.
APPENDICES. A. Summary of Linear Algebra. Vectors. Linear Maps and Matrices. Row Operations and Echelon Forms. Determinants. Linear Independence. Subspaces; Dimension; Rank. Invertibility. Eigenvectors and Eigenvalues. B. Some Technical Proofs. The Heine-Borel Theorem. The Implicit Function Theorem. Approximation by Riemann Sums. Double Integrals and Iterated Integrals. Change of Variables for Multiple Integrals. Improper Multiple Integrals. Green's Theorem and the Divergence Theorem.
Answers to Selected Exercises. Bibliography. Index.
Product Details
• publication date: 21/12/2001
• ISBN13: 9780130652652
• Format: Hardback
• Number Of Pages: 480
• ID: 9780130652652
• weight: 640 g
• ISBN10: 0130652652
http://stat-reports.lib.berkeley.edu/xtf/servlet/org.cdlib.xtf.crossQuery.CrossQuery?yearsdtr=2004&sort=localuid&rmode=sdtr | Statistics Technical Reports:Search | Browse by year
Term(s):2004
Results:24
Sorted by:
Page: 1 2 Next
Title:Boosted Lasso and Reverse Boosting
Author(s):Zhao, Peng; Yu, Bin;
Date issued:December 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2x7c (PDF)
Abstract:This paper introduces the concept of "backward" step in contrast with forward fashion algorithms like Boosting and Forward Stagewise Fitting. Like classical elimination methods, this "backward" step works by shrinking the model complexity of an ensemble learner. Through a step analysis, we show that this additional step is necessary for minimizing $L_1$ penalized loss (Lasso loss). We also propose a BLasso algorithm as a combination of both backward and forward steps which is able to produce the complete regularization path for Lasso problems. Moreover, BLasso can be generalized to solve problems with general convex loss with general convex penalty.
Keyword note:Zhao__Peng Yu__Bin
Report ID:678
Title:Estimating the Proportion of False Null Hypotheses among a Large Number of Independently Tested Hypotheses
Author(s):Meinshausen, Nicolai; Rice, John;
Date issued:October 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2x58 (PDF)
Abstract:We consider the problem of estimating the number of false null hypotheses among a very large number of independently tested hypotheses, focusing on the situation in which the proportion of false null hypotheses is very small. We propose a family of methods for establishing lower $100(1-\alpha)\%$ confidence bounds for this proportion, based on the empirical distribution of the p-values of the tests. Methods in this family are then compared in terms of ability to consistently estimate the proportion by letting $\alpha \rightarrow 0$ as the number of hypothesis tests increases and the proportion decreases. This work is motivated by a signal detection problem occurring in astronomy.
Keyword note:Meinshausen__Nicolai Rice__John_Andrew
Report ID:676
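To give a concrete flavor of such estimators, here is a minimal sketch of the general idea (the Bonferroni-calibrated normal approximation used as the bounding device below is an assumption for illustration, not the authors' construction): any statistically significant excess of small p-values over the uniform distribution expected under the null is attributed to false null hypotheses.

import numpy as np
from scipy.stats import norm

def lower_bound_false_null_proportion(pvals, alpha=0.05):
    # Scan thresholds t: under the global null about n*t p-values fall
    # below t, so a significant excess must come from false nulls.
    pvals = np.asarray(pvals)
    n = len(pvals)
    grid = np.linspace(0.01, 0.5, 50)
    z = norm.ppf(1 - alpha / len(grid))  # crude Bonferroni over the grid
    best = 0.0
    for t in grid:
        excess = (pvals <= t).mean() - t - z * np.sqrt(t * (1 - t) / n)
        best = max(best, excess / (1 - t))
    return best

# Toy example: 5% of 10000 tests carry strong signal, the rest are null.
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=9500), rng.uniform(0, 1e-4, size=500)])
print(lower_bound_false_null_proportion(p))  # roughly 0.04-0.05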
Title:Variational inference for Dirichlet process mixtures
Author(s):Blei, David M.; Jordan, Michael I.;
Date issued:October 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2x35 (PDF)
Abstract:Dirichlet process (DP) mixture models are the cornerstone of nonparametric Bayesian statistics, and the development of Monte-Carlo Markov chain (MCMC) sampling methods for DP mixtures has enabled their applications to a variety of practical data analysis problems. However, MCMC sampling can be prohibitively slow, and it is important to explore alternatives. One class of alternatives is provided by variational methods, a class of deterministic algorithms that convert inference problems into optimization problems (Opper & Saad, 2001; Wainwright & Jordan, 2003). Thus far, variational methods have mainly been explored in the parametric setting, in particular within the formalism of the exponential family (Attias, 2000; Ghahramani & Beal, 2001; Blei, et al., 2003). In this paper, we present a variational inference algorithm for DP mixtures. We present experiments that compare the algorithm to Gibbs sampling algorithms for DP mixtures of Gaussians and present an application to a large-scale image analysis problem.
Keyword note:Blei__David_M Jordan__Michael_I
Report ID:674
Title:A stochastic model of language evolution that incorporates homoplasy and borrowing
Author(s):Warnow, Tandy; Evans, Steven N.; Ringe, Don; Nakhleh, Luay;
Date issued:September 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2w3n (PDF)
http://nma.berkeley.edu/ark:/28722/bk0000n2w46 (PostScript)
Abstract:We propose a stochastic model of language evolution that permits both homoplasy and non-tree evolution (so as to reflect borrowing between lineages). We discuss the issues involved in reconstructing evolutionary histories under the model, specifically showing that the tree component of the model is identifiable, that efficient methods exist for reconstructing the tree, and that full likelihoods can be calculated in linear time. We conclude with a discussion of data selection and analysis, and compare our stochastic model for language evolution to existing models in molecular evolution.
Keyword note:Warnow__Tandy Evans__Steven_N Ringe__Don Nakhleh__Luay
Report ID:673
Title:A comparison of phylogenetic reconstruction methods on an IE dataset
Author(s):Nakhleh, Luay; Warnow, Tandy; Ringe, Don; Evans, Steven N.;
Date issued:September 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2w00 (PDF)
http://nma.berkeley.edu/ark:/28722/bk0000n2w1j (PostScript)
Abstract:Researchers interested in the history of the Indo-European family of languages have used a variety of methods to estimate the phylogeny of the family, and have obtained widely differing results. In this paper we explore the reconstructions of Indo-European phylogeny obtained by using the major phylogeny estimation procedures on an existing database for 24 Indo-European languages compiled by linguists Don Ringe and Ann Taylor. Our study finds that the different methods agree in part, but that there are also several striking differences. We discuss the reasons for these differences, and make proposals with respect to phylogenetic reconstruction in historical linguistics.
Keyword note:Nakhleh__Luay Warnow__Tandy Ringe__Don Evans__Steven_N
Report ID:672
Title:Treewidth-based conditions for exactness of the Sherali-Adams and Lasserre relaxations
Author(s):Wainwright, M. J.; Jordan, M. I.;
Date issued:September 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2w9z (PDF)
Abstract:The Sherali-Adams (SA) and Lasserre (LA) approaches are "lift-and-project" methods that generate nested sequences of linear and/or semidefinite relaxations of an arbitrary 0-1 polytope $P \subseteq [0,1]^n$. Although both procedures are known to terminate with an exact description of $P$ after $n$ steps, there are various open questions associated with characterizing, for particular problem classes, whether exactness is obtained at some step $s < n$. This paper provides sufficient conditions for exactness of these relaxations based on the hypergraph-theoretic notion of treewidth. More specifically, we relate the combinatorial structure of a given polynomial system to an underlying hypergraph. We prove that the complexity of assessing the global validity of moment sequences, and hence the tightness of the SA and LA relaxations, is determined by the \emph{treewidth} of this hypergraph. We provide some examples to illustrate this characterization.
Keyword note:Wainwright__Martin Jordan__Michael_I
Report ID:671
Title:A New Look at Survey Propagation and its Generalizations
Author(s):Maneva, E.; Mossel, E.; Wainwright, M. J.;
Date issued:September 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2v8w (PDF)
Abstract:We study the survey propagation algorithm \cite{MZ02, BMZ03, BMWZ03}, which is an iterative technique that appears to be very effective in solving random $k$-SAT problems even with densities close to threshold. We first describe how any SAT formula can be associated with a novel family of Markov random fields (MRFs), parameterized by a real number $\rho \in [0,1]$. We then show that applying belief propagation---a well-known "message-passing" technique---to this family of MRFs recovers various algorithms, ranging from pure survey propagation at one extreme ($\rho = 1$) to standard belief propagation on the uniform distribution over SAT assignments at the other extreme ($\rho = 0$). Configurations in these MRFs have a natural interpretation as generalized satisfiability assignments, on which a partial order can be defined. We isolate \emph{cores} as minimal elements in this partial ordering, and prove that any core is a fixed point of survey propagation. We investigate the associated lattice structure, and prove a weight-preserving identity that shows how any MRF with $\rho > 0$ can be viewed as a "smoothed" version of the naive factor graph representation of the $k$-SAT problem ($\rho = 0$). Our experimental results suggest that random formulas typically do not possess non-trivial cores. This result and additional experiments indicate that message-passing on our family of MRFs is most effective for values of $\rho \neq 1$ (i.e., distinct from survey propagation). Finally, we isolate properties of Gibbs sampling and message-passing algorithms that are typical for an ensemble of $k$-SAT problems.
Keyword note:Maneva__E Mossel__Elchanan Wainwright__Martin
Report ID:669
Title:Unidentifiable divergence times in rates-across-sites models
Author(s):Evans, Steven N.; Warnow, Tandy;
Date issued:August 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2v57 (PDF)
http://nma.berkeley.edu/ark:/28722/bk0000n2v6s (PostScript)
Abstract:The rates-across-sites assumption in phylogenetic inference posits that the rate matrix governing the Markovian evolution of a character on an edge of the putative phylogenetic tree is the product of a character-specific scale factor and a rate matrix that is particular to that edge. Thus, evolution follows basically the same process for all characters, except that it occurs faster for some characters than others. To allow estimation of tree topologies and edge lengths for such models, it is commonly assumed that the scale factors are not arbitrary unknown constants, but rather unobserved, independent, identically distributed draws from a member of some parametric family of distributions. A popular choice is the gamma family. We consider an example of a clock-like tree with three taxa, one unknown edge length, and a parametric family of scale factor distributions that contain the gamma family. This model has the property that, for a generic choice of unknown edge length and scale factor distribution, there is another edge length and scale factor distribution which generates data with exactly the same distribution, so that even with infinitely many data it will be typically impossible to make correct inferences about the unknown edge length.
Keyword note:Evans__Steven_N Warnow__Tandy
Report ID:668
Title:A multivariate empirical Bayes statistic for replicated microarray time course data
Author(s):Tai, Yu Chuan; Speed, Terence P.;
Date issued:August 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2v34 (PDF)
Abstract:In this paper we derive a one-sample multivariate empirical Bayes statistic (the $MB$-statistic) to rank genes in the order of differential expression from replicated microarray time course experiments. We do this by testing the null hypothesis that the expectation of a $k$-vector of a gene's expression levels is a multiple of $1_k$, the vector of $k$ $1$s. The importance of moderation in this context is explained. Together with the $MB$-statistic we have the one-sample $\widetilde{T}^2$ statistic, a variant of the one-sample Hotelling $T^2$. Both the $MB$-statistic and $\widetilde{T}^2$ statistic can be used to rank genes in the order of evidence of nonconstancy, incorporating any correlation structure among time point samples and the replication. In a simulation study we show that the one-sample $MB$-statistic, $\widetilde{T}^2$ statistic, and moderated Hotelling $T^2$ statistic achieve the smallest number of false positives and false negatives, and all perform equally well. Several special and limiting cases of the $MB$-statistic are derived, and two-sample versions are described.
Keyword note:Tai__Yu_Chuan Speed__Terry_P
Report ID:667
Title:Using Random Forest to Learn Imbalanced Data
Author(s):Chen, Chao; Liaw, Andy; Breiman, Leo;
Date issued:July 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2v11 (PDF)
Abstract:In this paper we propose two ways to deal with the imbalanced data classification problem using random forest. One is based on cost sensitive learning, and the other is based on a sampling technique. Performance metrics such as precision and recall, false positive rate and false negative rate, $F$-measure and weighted accuracy are computed. Both methods are shown to improve the prediction accuracy of the minority class, and have favorable performance compared to the existing algorithms.
Keyword note:Chen__Chao Liaw__Andy Breiman__Leo
Report ID:666
Title:Estimating velocity fields on a freeway from low resolution video recordings
Author(s):Cho, Young; Rice, John;
Date issued:July 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2t9x (PDF)
Abstract:We present an algorithm to estimate velocity fields from low resolution video recordings. The algorithm does not attempt to identify and track individual vehicles, nor does it attempt to estimate derivatives of the field of pixel intensities. Rather, we compress a frame by obtaining an intensity profile in each lane along the direction of traffic flow. The speed estimate is then computed by searching for a best matching profile in a frame at a later time. Because the algorithm does not need high quality images, it is directly applicable to a compressed format digital video stream, such as mpeg, from conventional traffic video cameras. We illustrate the procedure using a 15 minute long VHS recording to obtain speed estimates on a one mile stretch of highway I-80 in Berkeley, California.
Keyword note:Cho__Young Rice__John_Andrew
Report ID:665
Title:Measuring Traffic
Author(s):Bickel, Peter; Chen, Chao; Kwon, Jaimyoung; Rice, John; van Zwet, Erik; Varaiya, Pravin;
Date issued:June 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2t7t (PDF)
Abstract:A traffic performance measurement system, PeMS, currently functions as a statewide repository for traffic data gathered by thousands of automatic sensors. It has integrated data collection, processing, and communications infrastructure with data storage and analytical tools. In this paper, we discuss statistical issues that have emerged as we attempt to process a data stream of two GB per day of wildly varying quality. In particular, we focus on detecting sensor malfunction, imputation of missing or bad data, estimation of velocity, and forecasting of travel times on freeway networks.
Keyword note:Bickel__Peter_John Chen__Chao Kwon__Jaimyoung Rice__John_Andrew van_Zwet__Erik Varaiya__Pravin
Report ID:664
Title:Cloud Detection over Snow and Ice Using MISR Data
Author(s):Shi, Tao; Yu, Bin; Clothiaux, Eugene E.; Braverman, Amy J.;
Date issued:June 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2t5q (PDF)
Abstract:Clouds play a major role in Earth's climate and cloud detection is a crucial step in the processing of satellite observations in support of radiation budget, numerical weather prediction and global climate model studies. To advance the observational capabilities of detecting clouds and retrieving their cloud-top altitudes, NASA launched the Multi-angle Imaging SpectroRadiometer (MISR) in 1999, which provides data in nine different views of the same scene using four spectral channels. Cloud detection is particularly difficult in the snow- and ice-covered polar regions and availability of the novel MISR angle-dependent radiances motivates the current study on cloud detection using statistical methods. Three schemes using MISR data for polar cloud detection are investigated in this study. Using domain knowledge, three physical features are developed for detecting clouds in daylight polar regions. The features measure the correlations between MISR angle-dependent radiances, the smoothness of the reflecting surfaces, and the amount of forward scattering of radiances. The three features are the basis of the first scheme, called Enhanced Linear Correlation Matching Classification (ELCMC). The ELCMC algorithm thresholds on three features and the thresholds are either fixed or found through the EM algorithm based on a mixture of two 1-dim Gaussians. The ELCMC algorithm results are subsequently used as training data in the development of two additional schemes, one Fisher's Quadratic Discriminant Analysis (ELCMC-QDA) and the other a Gaussian kernel Support Vector Machine (ELCMC-SVM). For both QDA- and SVM-based experiments two types of inputs are tested, the set of three physical features and the red radiances of the nine MISR cameras. All three schemes are applied to two polar regions where expert labels show that the MISR operational cloud detection algorithm does not work well, with a 53% misclassification rate in one region and a 92% nonretrieval rate in the other region. The ELCMC algorithm produces misclassification rates of 6.05% and 6.28% relative to expert labelled regions across the two polar scenes. The misclassification rates are reduced to approximately 4% by ELCMC-QDA and ELCMC-SVM in one region and approximately 2% in the other. Overall, all three schemes provided significantly more accurate results and greater spatial coverage than the MISR operational stereo-based cloud detection algorithm. Compared with ELCMC-QDA, ELCMC-SVM is more robust against mislabels in the ELCMC results and provides slightly better results, but it is computationally slower.
Keyword note:Shi__Tao Yu__Bin Clothiaux__Eugene_E Braverman__Amy_J
Report ID:663
Title:Balls-in-boxes duality for coalescing random walks and coalescing Brownian motion
Author(s):Evans, Steven N.; Zhou, Xiaowen;
Date issued:June 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2t22 (PDF)
http://nma.berkeley.edu/ark:/28722/bk0000n2t3m (PostScript)
Abstract:We present a duality between two systems of coalescing random walks and an analogous duality between two systems of coalescing Brownian motions. Our results extend previous work in the literature and we apply them to the study of a system of coalescing Brownian motions with Poisson immigration.
Keyword note:Evans__Steven_N Zhou__Xiaowen
Report ID:662
Title:Sparse Gaussian process classification with multiple classes
Author(s):Seeger, Matthias; Jordan, Michael I.;
Date issued:April 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2s4n (PDF)
http://nma.berkeley.edu/ark:/28722/bk0000n2s56 (PostScript)
Abstract:Sparse approximations to Bayesian inference for nonparametric Gaussian Process models scale linearly in the number of training points, allowing for the application of these powerful kernel-based models to large datasets. We show how to generalize the binary classification informative vector machine (IVM) to multiple classes. In contrast to earlier efficient approaches to kernel-based non-binary classification, our method is a principled approximation to Bayesian inference which yields valid uncertainty estimates and allows for hyperparameter estimation via marginal likelihood maximization. While most earlier proposals suggest fitting independent binary discriminants to heuristically chosen partitions of the data and combining these in a heuristic manner, our method operates jointly on the data for all classes. Crucially, we still achieve a linear scaling in both the number of classes and the number of training points.
Keyword note:Seeger__Matthias Jordan__Michael_I
Report ID:661
Title:Inference of divergence times as a statistical inverse problem
Author(s):Evans, Steven N.; Ringe, Don; Warnow, Tandy;
Date issued:April 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2s10 (PDF)
http://nma.berkeley.edu/ark:/28722/bk0000n2s2j (PostScript)
Abstract:Dating of divergence times in historical linguistics is an instance of a statistical inverse problem, and many of the issues that complicate the proper treatment of other inverse problems are also present there.
Keyword note:Evans__Steven_N Ringe__Don Warnow__Tandy
Report ID:660
Title:Stochastic models of language evolution and an application to the Indo-European family of languages
Author(s):Warnow, Tandy; Evans, Steven N.; Ringe, Don; Nakhleh, Luay;
Date issued:April 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2r8b (PDF)
http://nma.berkeley.edu/ark:/28722/bk0000n2r9w (PostScript)
Abstract:We propose several models of how languages evolve, and discuss statistical estimation of evolution under these models.
Keyword note:Warnow__Tandy Evans__Steven_N Ringe__Don Nakhleh__Luay
Report ID:659
Title:Decentralized Detection and Classification using Kernel Methods
Author(s):Nguyen, XuanLong; Wainwright, Martin J.; Jordan, Michael I.;
Date issued:April 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2s79 (PDF)
http://nma.berkeley.edu/ark:/28722/bk0000n2s8v (PostScript)
Abstract:We consider the problem of decentralized detection under constraints on the number of bits that can be transmitted by each sensor. In contrast to most previous work, in which the joint distribution of sensor observations is assumed to be known, we address the problem when only a set of empirical samples is available. We propose a novel algorithm using the framework of empirical risk minimization and marginalized kernels, and analyze its computational and statistical properties both theoretically and empirically. We provide an efficient implementation of the algorithm, and demonstrate its performance on both simulated and real data sets.
Keyword note:Nguyen__XuanLong Wainwright__Martin Jordan__Michael_I
Report ID:658
Title:A generalized model of mutation-selection balance with applications to aging
Author(s):Steinsaltz, David; Evans, Steven N.; Wachter, Kenneth W.;
Date issued:March 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2r1g (PDF)
http://nma.berkeley.edu/ark:/28722/bk0000n2r21 (PostScript)
Abstract:A probability model is presented for the dynamics of mutation-selection balance in a haploid infinite-population infinite-sites setting sufficiently general to cover mutation-driven changes in full age-specific demographic schedules. The model accommodates epistatic as well as additive selective costs. Closed form characterizations are obtained for solutions in finite time, along with proofs of convergence to stationary distributions and a proof of the uniqueness of solutions in a restricted case. Examples are given of applications to the biodemography of aging, including instabilities in current formulations of mutation accumulation.
Keyword note:Steinsaltz__David Evans__Steven_N Wachter__Kenneth
Report ID:657
Title:Supplement to "Consistent Independent Component Analysis and Prewhitening"
Author(s):Chen, A.; Bickel, P. J.;
Date issued:March 2004
http://nma.berkeley.edu/ark:/28722/bk0000n2t0z (PDF)
Abstract:In this paper we study the statistical properties of a characteristic-function based algorithm for independent component analysis (ICA), which was proposed by Eriksson et al. (2003) and Chen & Bickel (2003) independently. First, statistical consistency of this algorithm with prewhitening is analyzed, especially in the presence of heavy-tailed sources. Second, without prewhitening this algorithm is shown to be robust against small additive noise.
Keyword note:Chen__Aiyou Bickel__Peter_John
Report ID:656
http://www.contrib.andrew.cmu.edu/~ryanod/?p=1517 | # Chapter 10: Advanced hypercontractivity
In this chapter we complete the proof of the Hypercontractivity Theorem for uniform $\pm 1$ bits. We then generalize the $(p,2)$ and $(2,q)$ statements to the setting of arbitrary product probability spaces, proving the following:
The General Hypercontractivity Theorem Let $(\Omega_1, \pi_1), \dots, (\Omega_n, \pi_n)$ be finite probability spaces, in each of which every outcome has probability at least $\lambda$. Let $f \in L^2(\Omega_1 \times \cdots \times \Omega_n, \pi_1 \otimes \cdots \otimes \pi_n)$. Then for any $q > 2$ and $0 \leq \rho \leq \frac{1}{\sqrt{q-1}} \cdot \lambda^{1/2-1/q}$, $\|\mathrm{T}_\rho f\|_q \leq \|f\|_2 \quad\text{and}\quad \|\mathrm{T}_\rho f\|_2 \leq \|f\|_{q'}.$ (In fact, the upper bound on $\rho$ can be slightly relaxed to the value stated in Theorem 17 of this chapter.)
We can thereby extend all the consequences of the basic Hypercontractivity Theorem for $f : \{-1,1\}^n \to {\mathbb R}$ to functions $f \in L^2(\Omega^n, \pi^{\otimes n})$, except with quantitatively worse parameters depending on “$\lambda$”. We also introduce the technique of randomization/symmetrization and show how it can sometimes eliminate this dependence on $\lambda$. For example, it’s used to prove Bourgain’s Sharp Threshold Theorem, a characterization of boolean-valued $f \in L^2(\Omega^n, \pi^{\otimes n})$ with low total influence which has no dependence at all on $\pi$.
### 2 comments to Chapter 10: Advanced hypercontractivity
• Is $q'$ defined here?
• It’s the Hölder conjugate of $q$ (i.e., the number satisfying $1/q + 1/q' = 1$). It is defined in a few other places in the book.
Thanks. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9902023077011108, "perplexity": 388.3247324916191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121153.91/warc/CC-MAIN-20170423031201-00344-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://proofwiki.org/wiki/Vector_Space_has_Basis_between_Linearly_Independent_Set_and_Spanning_Set | # Vector Space has Basis between Linearly Independent Set and Spanning Set
## Theorem
Let $V$ be a vector space over a field $F$.
Let $L$ be a linearly independent subset of $V$.
Let $S$ be a set that spans $V$.
Suppose that $L \subseteq S$.
Then $V$ has a basis $B$ such that $L \subseteq B \subseteq S$.
### Corollary
Let $K$ be a division ring.
Let $V$ be a vector space over $K$.
Then $V$ has a basis.
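This follows from the main theorem in one line: the empty set is linearly independent and $V$ spans itself, so taking $L = \varnothing$ and $S = V$ yields a basis $B$ with:

$\displaystyle \varnothing \subseteq B \subseteq V$

(The theorem above is stated for a vector space over a field, but its proof nowhere uses commutativity of scalars, so the same argument goes through over a division ring.)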
## Outline of Proof
We use Zorn's Lemma to construct a maximal linearly independent subset.
## Proof
Let $\mathscr I$ be the set of linearly independent subsets of $S$ that contain $L$, ordered by inclusion.
Note that $L \in \mathscr I$, so $\mathscr I \ne \varnothing$.
Let $\mathscr C$ be a nest in $\mathscr I$.
Let $C = \bigcup \mathscr C$.
Aiming for a contradiction, suppose that $C$ is linearly dependent.
Then there exist distinct $v_1, v_2, \ldots, v_n \in C$ and $r_1, r_2, \ldots, r_n \in F$, not all zero (without loss of generality $r_1 \ne 0$), such that:
$\displaystyle \sum_{k \mathop = 1}^n r_k v_k = 0$
Then there are $C_1, C_2, \ldots, C_n \in \mathscr C$ such that $v_k \in C_k$ for each $k \in \set {1, 2, \ldots, n}$.
Since $\mathscr C$ is a nest, $C_1 \cup C_2 \cup \cdots \cup C_n$ must equal $C_k$ for some $k \in \set {1, 2, \ldots, n}$.
But then $C_k \in \mathscr C$ and $C_k$ is linearly dependent, which is a contradiction.
Thus $C$ is linearly independent.
By Zorn's Lemma, $\mathscr I$ has a maximal element $M$ (one that is not properly contained in any other element of $\mathscr I$).
Since $M \in \mathscr I$, $M$ is linearly independent.
All that remains is to show that $M$ spans $V$.
Suppose, to the contrary, that there exists a $v \in V \setminus \map {\operatorname {span} } M$.
Then, since $S$ spans $V$, there must be an element $s$ of $S$ such that $s \notin \map {\operatorname {span} } M$: otherwise we would have $S \subseteq \map {\operatorname {span} } M$, and hence $V = \map {\operatorname {span} } S \subseteq \map {\operatorname {span} } M$, contradicting the existence of $v$.
Then $M \cup \set s$ is linearly independent.
Thus $M \cup \set s \supsetneq M$, contradicting the maximality of $M$.
Thus $M$ is a linearly independent subset of $V$ that spans $V$.
Therefore, by definition, $M$ is a basis for $V$.
$\blacksquare$
## Axiom of Choice
This theorem depends on the Axiom of Choice, by way of Zorn's Lemma.
Because of some of its bewilderingly paradoxical implications, the Axiom of Choice is considered in some mathematical circles to be controversial.
Most mathematicians are convinced of its truth and insist that it should nowadays be generally accepted.
However, others consider its implications so counter-intuitive and nonsensical that they adopt the philosophical position that it cannot be true. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9862302541732788, "perplexity": 117.19350837769272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572934.73/warc/CC-MAIN-20190916200355-20190916222355-00016.warc.gz"} |
https://cmurandomized.wordpress.com/2011/02/01/lecture-7-the-local-lemma/ | # CMU Randomized Algorithms
Randomized Algorithms, Carnegie Mellon: Spring 2011
## Lecture #7: The Local Lemma
1. The Local Lemma: Various Forms
The symmetric version of the local lemma we saw in class:
Theorem 1 (Symmetric Form I)
Given a collection of events ${B_1, B_2, \ldots, B_m}$ such that ${\Pr[B_i] \leq p}$, and each ${B_i}$ is independent of all but ${d}$ of these events, if ${epd < 1}$ then ${\Pr[ \cap \overline{B_i} ] > 0}$.
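As a quick sanity check of the constants, consider the E${k}$-SAT setting of Section 2 below: under a uniformly random assignment each clause is violated with probability ${p = 2^{-k}}$, so the condition ${epd < 1}$ unwinds to

$\displaystyle e \cdot 2^{-k} \cdot d < 1 \quad \Longleftrightarrow \quad d < 2^k/e,$

which is exactly the degree bound whose tightness is discussed in Section 1.2.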
Very often, the sample space is defined by some experiment which involves sampling from a bunch of independent random variables, and the bad events each depend on some subset of these random variables. In such cases we can state the local lemma thus:
Theorem 2 (Symmetric Form II)
Suppose ${X_1, X_2, \ldots, X_n}$ are independent random variables, and ${B_1, B_2, \ldots, B_m}$ are events such that each ${B_i}$ depends only on some subset ${\{X_j : j \in S_i\}}$ of these variables. Moreover, suppose ${\Pr[ B_i ] \leq p}$ and each ${S_i}$ intersects at most ${d}$ of the ${S_j}$'s. If ${epd < 1}$ then ${\Pr[ \cap \overline{B_i} ] > 0}$.
Note that both the examples from class (the ${k}$-SAT and Leighton, Maggs, and Rao results) fall into this setting.
Finally, here’s the asymmetric form of the local lemma:
Theorem 3 (Asymmetric Form)
Given events ${B_1, B_2, \ldots, B_m}$ with each ${B_i}$ independent of all but the set ${\Gamma_i}$ of these events, suppose there exist ${x_i \in (0,1)}$ such that
$\displaystyle \Pr[ B_i ] \leq x_i \prod_{j \in \Gamma_i \setminus \{i\}} (1 - x_j).$
Then ${\Pr[ \cap \overline{B_i} ] \geq \prod_i (1 - x_i) > 0}$.
Occasionally one needs to use the asymmetric form of the local lemma: one example is Uri Feige’s result showing a constant integrality gap for the Santa Claus problem, and the resulting approximation algorithm due to Haeupler, Saha, and Srinivasan.
1.1. Proofs of the Local Lemma
The original proof of the local lemma was based on an inductive argument. This was a non-constructive proof, and the work of Beck gave the first techniques to make some of the existential proofs algorithmic.
In 2009, Moser, and then Moser and Tardos, gave new, intuitive, and more algorithmic proofs of the lemma for the case where there is an underlying set of independent random variables, and the bad events are defined over subsets of these variables. (E.g., the version of the Local Lemma given in Theorem 2, and its asymmetric counterpart.) Check out notes on the proofs of the Local Lemma by Joel Spencer and Uri Feige. The paper of Haeupler, Saha, and Srinivasan gives algorithmic versions for some cases where the number of events is exponentially large.
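To make the resampling idea concrete, here is a minimal Python sketch of the Moser-Tardos scheme for SAT, in the spirit of the FindSat routine analyzed in Section 2 below. The clause encoding (signed integers), the function name, and the step budget are our own illustrative choices, not anything prescribed by the original papers.

```python
import random

def find_sat(clauses, n, max_steps=10**6):
    """Moser-Tardos style resampling for SAT.

    clauses: list of clauses, each a list of signed literals
             (3 means x3, -3 means NOT x3); variables are 1..n.
    Returns a satisfying assignment (dict var -> bool), or None
    if the step budget runs out.
    """
    assign = {v: random.random() < 0.5 for v in range(1, n + 1)}

    def violated(clause):
        # A clause is violated iff every one of its literals is false.
        return all(assign[abs(lit)] != (lit > 0) for lit in clause)

    for _ in range(max_steps):
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign
        # Resample every variable of one violated clause with fresh
        # uniform random bits, then repeat.
        for lit in random.choice(bad):
            assign[abs(lit)] = random.random() < 0.5
    return None

# Tiny example: (x1 OR x2 OR x3) AND (NOT x1 OR NOT x2 OR x3)
print(find_sat([[1, 2, 3], [-1, -2, 3]], n=3))
```

The analysis below charges each iteration the ${k}$ random bits it consumes, which is why the algorithm is presented as reading from a fixed random string.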
1.2. Lower Bounds
The local lemma implies that if ${d < 2^k/e}$ then the formula is satisfiable. This is complemented by the existence of unsatisfiable E${k}$-SAT formulas with degree ${d = 2^k(\frac1e + O(\frac{1}{\sqrt{k}}))}$: this is proved in a paper of Gebauer, Szabo and Tardos (SODA 2011). This shows that the factor of ${e}$ in the local lemma cannot be reduced, even for the special case of E${k}$-SAT.
The fact that the factor ${e}$ was tight for the symmetric form of the local lemma was known earlier, due to a result of Shearer (1985).
2. Local Lemma: The E${k}$-SAT Version
Let me be clearer, and tease apart the existence question from the algorithmic one. (I’ve just sketched the main ideas in the “proofs”, will try to fill in details later; let me know if you see any bugs.)
Theorem 4
If ${\varphi}$ is an E${k}$-SAT formula with ${m}$ clauses and ${n}$ variables, where the degree of each clause is at most ${d \le 2^{k-3}}$, then ${\varphi}$ is satisfiable.
Proof: Assume there is no satisfying assignment. Then the algorithm we saw in class will run forever, no matter what random bits it reads. Let us fix ${M = m \log m + 1}$. So for every string ${R}$ of ${n+Mk}$ bits the algorithm reads from the random source, it will run for at least ${M}$ iterations.
But now one can encode the string ${R}$ thus: use ${m \log m}$ bits to encode the clauses at the roots of the recursion trees, ${M(\log d + 2)}$ bits to encode the clauses lying within these recursion trees, and ${n}$ bits for the final settings of the variables. As we argued, this is a lossless encoding: we can recover the ${n+Mk}$ bits from this encoding. How long is this encoding? It is ${M(\log d + 2) + n + m \log m}$, which is strictly less than ${n+Mk}$ for ${M = m \log m + 1}$ and ${d \leq 2^{k-3}}$.
So this would give us a way to encode every string of length ${n+Mk}$ into strings of shorter lengths. But since for every length ${\ell}$, there are ${2^\ell}$ strings of length ${\ell}$ and ${1 + 2 + \ldots + 2^{\ell - 1} = 2^{\ell} - 1}$ strings of length strictly less than ${\ell}$, this is impossible. So this contradicts our assumption that there is no satisfying assignment.$\Box$
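A quick numeric check of this counting argument, with all parameter values hypothetical (chosen only so that ${d \le 2^{k-3}}$ holds) and logs base 2 as in the proof:

```python
from math import ceil, log2

k, m, n = 10, 1000, 500      # hypothetical E-k-SAT parameters
d = 2 ** (k - 3)             # largest degree the theorem allows
M = ceil(m * log2(m)) + 1    # number of iterations, M = m log m + 1

encoding = M * (log2(d) + 2) + n + m * log2(m)  # bits written by the encoder
randomness = n + M * k                          # random bits the algorithm reads
assert encoding < randomness  # a lossless shorter encoding is impossible
print(f"{encoding:.0f} < {randomness}")
```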
Now we can alter the proof to show that the expected running time of the algorithm is small:
Theorem 5
If ${\varphi}$ is an E${k}$-SAT formula with ${m}$ clauses and ${n}$ variables, where the degree of each clause is at most ${d \le 2^{k-3}}$, then the algorithm FindSat finds a satisfying assignment in expected ${O(m \log m)}$ time.
Proof: Assume that we run for at least ${M + t}$ steps with probability at least ${1/2^s}$. (Again, think of ${M = m \log m}$.) Then for at least a ${1/2^s}$ fraction of the ${2^{n+(M+t)k}}$ random strings, the scheme above compresses the string into one of length ${(M+t)(\log d + 2) + n + m \log m}$.
But if we have any set of ${2^{n+(M+t)k} \cdot 2^{-s}}$ strings, we must use at least ${n + (M+t)k-s}$ bits to represent at least one of them. So
$\displaystyle n + (M+t)k - s \leq n + (M+t)(\log d + 2) + M.$
If ${d \leq 2^{k-3}}$, we have ${k - \log d - 2 \geq 1}$, and
$\displaystyle (M+t)(k - \log d - 2) - s \leq M$
or
$\displaystyle M+t-s \leq M \implies s \geq t.$
So we get that the probability of taking more than ${M+t}$ steps is at most ${1/2^t}$, which implies an expected running time of ${M + O(1)}$. $\Box$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 79, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9738525152206421, "perplexity": 309.6668983305119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647530.92/warc/CC-MAIN-20180320185657-20180320205657-00729.warc.gz"} |
https://www.psychologicabelgica.com/articles/10.5334/pb.385/print/ | # Association between NonSuicidal SelfInjury Parents and Peers Related Loneliness and Attitude Towards Aloneness in Flemish Adolescents: An Empirical Note
## Abstract
Loneliness and attitude towards aloneness have been shown to be associated with depression, anxiety, and other psychiatric disorders in adolescents, and they may also increase the vulnerability to Non-Suicidal Self-Injury (NSSI). Therefore, the present study investigated the association between lifetime prevalence and functions of NSSI, parent- and peer-related loneliness, and attitude towards aloneness (positive and negative). Data regarding NSSI, loneliness, and attitude towards aloneness were collected from a sample of 401 high school students from three different high schools located in the Dutch-speaking part of Belgium. Lifetime prevalence of NSSI was found to be 16.5%. Females reported a higher lifetime prevalence of NSSI than males. Higher mean scores for parent- and peer-related loneliness and positive attitude (i.e., affinity) towards aloneness were observed in adolescents with lifetime NSSI as compared to adolescents without a history of NSSI. Finally, a positive correlation between self-related (i.e., automatic) functions of NSSI and parent- and peer-related loneliness and a positive attitude towards aloneness was also observed.
##### DOI: http://doi.org/10.5334/pb.385
Submitted on 20 Feb 2017; accepted on 07 Jun 2017
## Introduction
Non-Suicidal Self-Injury (NSSI) is defined as the deliberate destruction of one’s body tissue without an intention to die (Nock, 2009). The most common forms of NSSI include scratching, cutting, head-banging, and burning. In a recent meta-analysis of 119 studies, Swannell and colleagues (2014) reported an international pooled prevalence of NSSI among adolescents of around 17.2%, among young adults of around 13.4%, and among adults of around 5.5%. In Belgium, the prevalence of NSSI has been shown to range from 13.7% to 26.5% in adolescents (Baetens, Claes, Muehlenkamp, Grietens, & Onghena, 2011; Claes, Luyckx, & Bijttebier, 2014). NSSI has been shown to be linked with developmental issues like disturbances in identity formation (Gandhi et al., 2017) and scholastic issues (Kiekens et al., 2016). Chronic engagement in NSSI has been shown to be strongly associated with various mental health issues like depression, anxiety disorder, substance use disorder, borderline personality disorder, and eating disorders (Nock, 2009). The history, frequency, and number of different methods of NSSI have also been shown to be important predictors of suicide attempts (Victor and Klonsky, 2014).
Although NSSI seems to be related to significant distress, the majority of adolescents engaging in NSSI do not seek help for it (Muehlenkamp, Walsh, & McDade, 2010). Therefore, NSSI is being increasingly identified as an important mental health concern, especially in adolescents. Extant research has identified NSSI as an outcome of a complex interaction between interpersonal [e.g., poor attachment (You, Lin, & Leung, 2015), bullying, and abuse history (Garisch & Wilson, 2015)] and intrapersonal factors [e.g., alexithymia, depression, anxiety, impulsivity, substance abuse, and sexuality (Garisch & Wilson, 2015)]. However, to further increase our understanding of adolescent NSSI, stressors that are particularly relevant during this period of life should be investigated.
Loneliness is one of the important stressors in adolescents. In fact, about 80% of adolescents report experiencing loneliness at least some of the time, as compared to 40% of adults (Hawkley & Cacioppo, 2010; Heinrich & Gullone, 2006). An extensive review by Qualter and colleagues (2010) has identified a number of negative mental health outcomes and psychiatric disorders associated with loneliness. The negative mental health outcomes associated with loneliness include issues like low self-esteem, shyness, neuroticism, social withdrawal, poor academic performance, and juvenile delinquency (Qualter et al., 2010). Persistent loneliness has also been shown to be connected to psychiatric conditions like personality disorders (avoidant, borderline, and dependent personality disorder; Qualter et al., 2010), depressive symptoms (Vanheule, Desmet, Groenvynck, Rosseel, & Fontaine, 2008), and social anxiety (Caplan, 2006). The importance of different relationships changes over different phases of life, and if these relationship needs are not met, individuals may develop loneliness with respect to that relationship (Lasgaard et al., 2004). Although the relevance of peer relationships gradually increases during adolescence, the significance of the relationship with parents does not diminish (Hazel, Oppenheimer, Technow, Young, & Hankin, 2014). Consequently, parent- and peer-related loneliness can both contribute to increased vulnerability to psychopathology (Lasgaard et al., 2004).
The brief review presented above not only emphasizes the importance of loneliness in adolescents but also highlights the need to investigate the association between NSSI and loneliness. Yet, as far as we are aware, the studies by Giletta et al. (2012) and Lasgaard et al. (2011) are the only ones that have explored the relation between loneliness and NSSI in community samples. Both studies found higher parent-related loneliness in adolescents with NSSI. However, no differences in peer-related loneliness were observed in adolescents with and without NSSI. Loneliness has also been linked to the behavioral motivations (i.e., functions) for engaging in NSSI. The functional model of NSSI focuses on the purpose of NSSI and the factors that reinforce self-injury (Nock, 2009). According to Nock (2009), the functions of NSSI can be grouped into four categories: positive social (e.g., to get support or attention from others); negative social (e.g., to escape social situations); positive automatic (e.g., to feel something); and negative automatic (e.g., to avoid negative affect). Nock and Prinstein (2005) hypothesized that as loneliness is an interpersonal concern, it may be associated with the social function of NSSI (e.g., facilitation of help-seeking or escape from undesired social situations). However, in a clinical adolescent sample, they did not observe any relation between loneliness and either the automatic or the social functions of NSSI.
Like loneliness, attitude towards aloneness has also been identified as a significant factor that can influence development in adolescents. Aloneness is defined as the physical and neutral state of being on one’s own, and attitude towards aloneness is the positive or negative evaluation of the state of aloneness (Galanaki, 2013). Extreme affinity or aversion to aloneness has been shown to be associated with negative mental health issues (Wang et al., 2013). Goossens and Marcoen (1999) demonstrated that adolescents with a more positive attitude towards aloneness were more likely to suffer from social anxiety and depression. This counterintuitive association can be explained by the item structure of the positive attitude towards aloneness subscale of the Loneliness and Aloneness Scale for Children and Adolescents (LACA) used in the study by Goossens & Marcoen (1999). According to these authors, this subscale captures reactive rather than active affinity to aloneness (Galanaki, 2013; Goossens & Marcoen, 1999). That is, adolescents demonstrating a positive attitude towards aloneness may spend time alone not because they like to do so but because they want to avoid the company of others. On the other hand, a negative attitude towards aloneness can intensify loneliness, boredom, and other negative feelings (Marcoen, Goossens, & Caes, 1987). In sum, although attitude towards aloneness may be an important factor associated with mental health in adolescents, its relation with NSSI has remained unexplored.
The present exploratory study had two objectives. First, we explored whether adolescents with and without NSSI had significant mean differences in the scores of parent- and peer-related loneliness and attitude towards aloneness. Based on the existing literature (Giletta et al., 2012; Lasgaard et al., 2011), we expected adolescents with NSSI to have higher mean scores on parent- and peer-related loneliness. As no studies have explored the association between NSSI and attitude towards aloneness yet, we did not formulate a hypothesis regarding this relationship. Second, we explored the correlation between the functions of NSSI and parent- and peer-related loneliness and attitudes towards aloneness. As loneliness can be experienced as an aversive emotional state and NSSI may serve as a behavior to manage the distress associated with loneliness, we expected both parent- and peer-related loneliness to be positively associated with the automatic (i.e., intrapersonal) functions of NSSI. Again, no specific expectation could be formulated regarding functions of NSSI and the attitudes towards aloneness because of the lack of previous research.
## Methods
### Participants and procedure
Data were collected in three different high schools located in the Dutch-speaking part of Belgium. Four hundred and one participants (51.5% females; Grades 10–12; 97.5% with Belgian nationality) were recruited by convenience sampling. Mean age was 16.6 years (SD = 0.96; range = 14–19 years). Informed consent letters were provided to the parents of the students about two weeks before the day of data collection. Students were permitted to participate in the study only if they had parental consent. The data collection procedure was completed during school hours. Students were provided with an envelope including an assent form and the questionnaires, and they were requested to return the completed forms in a sealed envelope. The researchers were present throughout the data collection process to answer questions regarding any aspect of the study. Contact details of mental health services in Flanders were also provided if students required assistance. The study was approved by the Institutional Review Board at the researchers’ university.
### Measures
Lifetime NSSI was assessed by means of a single YES/NO question (“Have you ever injured yourself on purpose without an intention to die?”). The use of a single-item measure is common in the NSSI literature, and a review by Muehlenkamp, Claes, Havertape, and Plener (2012) indicated that the prevalence of NSSI does not significantly differ across the different methods used (i.e., single-item or checklist method). In case participants answered this question affirmatively, they were asked to indicate the degree to which they endorsed 18 functions of NSSI (from the Self-Injurious Questionnaire-Treatment Related; Claes & Vandereycken, 2007). The items associated with the functions of NSSI (see Table 1 below for items included in the scale) were measured on a five-point Likert scale ranging from 1 (not applicable) to 5 (very applicable).
Table 1
Factor structure for the functions of NSSI scale as proposed by Gandhi et al. (2016). The third section lists the items not considered when calculating the factor scores, as they had factor loadings less than 0.40.
Automatic functions (Cronbach’s alpha = 0.78)
1. To avoid or suppress feelings of confusion/aimlessness
2. To avoid or suppress inner feelings of emptiness
3. To avoid or suppress negative feelings
4. To avoid or suppress suicidal thoughts
5. To avoid or suppress painful images or memories
6. To obtain a feeling of pleasure
7. To get into a trance or numb state

Social functions (Cronbach’s alpha = 0.83)
1. To show myself how strong I am
2. To show others how strong I am
3. To make myself unattractive
4. To avoid doing chores or tasks I don’t want to do
5. To avoid being with other people

Items not included
1. To escape from doing school, work, or other activities
2. To get attention from others
3. To punish myself
4. To provide myself a sense of identity or individuality
5. To define myself as a person
6. To escape from a trance or numb state
The Loneliness and Aloneness Scale for Children and Adolescents (LACA; Marcoen et al., 1987) was used to assess parent- and peer-related loneliness and attitude towards aloneness. The LACA has four subscales (12 items each): parent-related loneliness; peer-related loneliness; affinity towards being alone; and aversion towards being alone. All items are answered on a four-point Likert scale ranging from 1 (often) to 4 (never). LACA has been validated and used extensively in screening for loneliness in Belgian adolescents (Maes et al., 2015). Cronbach’s alpha for the Loneliness-Parents, Loneliness-Peers, Positive attitude towards aloneness, and Negative attitude towards aloneness subscales in the present study were 0.91, 0.89, 0.79, and 0.83, respectively. In line with the literature, correlations among the LACA subscales were low (r ranged between –0.10 and 0.39; median r = 0.15).
The Beck Depression Inventory II (BDI II; Beck, Steer, & Brown, 1996) was used to assess the degree of depression. The BDI II consists of 21 items that are responded to on a four-point Likert scale ranging from 0 (not at all) to 3 (severely). The total score for the BDI II can range from 0 to 63, with higher scores reflecting more depression (Cronbach’s alpha = .89). The BDI has been extensively used for measuring depression and has been validated in Belgian clinical and non-clinical samples (Vanheule, Desmet, Groenvynck, Rosseel, & Fontaine, 2008).
### Analytical plan
Differences in Loneliness-Parents, Loneliness-Peers, Positive attitude towards aloneness, and Negative attitude towards aloneness as a function of presence vs. absence of NSSI were tested using multivariate analysis of covariance (MANCOVA). Gender and age were added as covariates because they are known to influence NSSI (Xavier, Gouveia, & Cunha, 2016), loneliness, and positive and negative attitudes towards aloneness (Heinrich & Gullone, 2006; Maes et al., 2015). Given that both loneliness and NSSI are strongly related to depression (Nock, 2009; Qualter et al., 2010), we additionally controlled for depression in a second step. Pearson correlation coefficients were used to compute the associations between functions of NSSI and the four LACA subscales. To reduce the data, the two-factor solution for the functions of NSSI scale, that is, automatic vs. social functions (see also Table 1), suggested by Gandhi et al. (2016) was used.
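For readers who want to reproduce this type of analysis, a minimal Python sketch using statsmodels is given below. The file name, column names, and variable codings are hypothetical; the MANCOVA is obtained by entering the covariates alongside the NSSI factor in the multivariate model, mirroring the structure of Table 2.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical layout: one row per adolescent, with the four LACA
# subscale scores, lifetime NSSI (0/1), gender (0/1), age in years,
# and the BDI-II depression total.
df = pd.read_csv("laca_nssi.csv")

mancova = MANOVA.from_formula(
    "lon_parents + lon_peers + pos_alone + neg_alone"
    " ~ nssi + gender + age + bdi",
    data=df,
)
print(mancova.mv_test())  # Wilks' lambda (among others) for each term
```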
## Results
In the present sample, the lifetime prevalence of NSSI was found to be 16.5%. Females (n = 43) reported a higher lifetime prevalence of NSSI than males (n = 23; $\chi^2_{(1)} = 5.90$, $p = 0.016$). The result of the MANCOVA (see Table 2) indicated that, when controlling for age and gender, the main effect for lifetime NSSI (Wilks’ λ = 0.866, F(4,361) = 13.95, p < 0.001) was statistically significant. Adolescents with lifetime NSSI scored significantly higher on Loneliness-Parents, Loneliness-Peers, and Positive attitude towards aloneness as compared to adolescents without NSSI. When depression was added as an additional covariate in the MANCOVA (see Table 2), the main effect of lifetime NSSI on Loneliness-Parents and Positive attitude towards aloneness was still statistically significant (Wilks’ λ = 0.966, F(4,352) = 3.06, p = 0.017).
Table 2
A comparison of means (with standard deviations) on the LACA subscales for adolescents with and without lifetime NSSI, with covariates.
| Subscales of LACA | NSSI = 0 (n = 301), M (SD) | NSSI = 1 (n = 59), M (SD) | F1 (1, 354) | p | F2 (1, 355) | p |
|---|---|---|---|---|---|---|
| Loneliness-Parents | 17.88 (5.34) | 22.69 (8.39) | 33.23 | <0.001 | 6.78 | <0.001 |
| Loneliness-Peers | 19.63 (6.07) | 24.32 (7.74) | 30.60 | <0.001 | 1.99 | 0.158 |
| Positive attitude towards aloneness | 31.42 (5.41) | 34.88 (5.53) | 19.82 | <0.001 | 4.50 | 0.035 |
| Negative attitude towards aloneness | 29.24 (5.26) | 29.31 (5.30) | 0.25 | 0.618 | 0.32 | 0.572 |
Note: F1 = When controlling for age and gender, F2 = When controlling for age, gender, and depression.
Findings of the correlation analysis (displayed in Table 3) showed positive correlations between Loneliness-Parents, Loneliness-Peers, and Positive attitude towards aloneness and automatic functions of NSSI. None of the correlations with the social functions were significant.
Table 3
Pearson correlations between automatic functions, social functions, and four subscales of LACA. These correlations were calculated only for participants who had engaged in at least one episode of NSSI (n = 65).
| | Automatic functions | Social functions |
|---|---|---|
| Loneliness-Parents | 0.44*** | 0.05 |
| Loneliness-Peers | 0.46*** | 0.17 |
| Positive attitude towards aloneness | 0.46*** | 0.11 |
| Negative attitude towards aloneness | 0.10 | 0.21 |
*** p < 0.001.
## Discussion
The results of the current exploratory study are largely in agreement with the two earlier-mentioned studies (Giletta et al., 2012; Lasgaard et al., 2011). In line with our expectations, individuals engaging in NSSI reported higher levels of parent-related loneliness, even when controlling for age, gender, and depression. However, unlike the previous studies, we found that adolescents with lifetime NSSI also reported higher levels of peer-related loneliness when controlling for gender and age. Mean differences in peer-related loneliness between adolescents with and without NSSI were no longer significant when depression was added as a covariate, suggesting that the relation between peer-related loneliness and NSSI may be mediated by depression (Hayes & Preacher, 2014). Further research is necessary to confirm this hypothesis.
Our findings also indicated that, although a negative attitude towards aloneness did not significantly differ between adolescents with and without NSSI, adolescents with lifetime NSSI reported a higher mean for positive attitude towards aloneness. As previously mentioned, a greater need for aloneness in NSSI individuals may be considered an indirect indicator of maladjustment, because NSSI individuals may use aloneness as a means to avoid contact with others (Goossens & Marcoen, 1999). Adolescents engaging in NSSI may also use time spent on their own to engage in NSSI. Alternatively, as suggested by Goossens (2014), individuals may use aloneness for self-reflection. However, self-reflection may trigger a negative emotional cascade, and the individuals may resort to NSSI as a means to regulate these negative emotions (Selby, Connell, & Joiner, 2010). Another possible explanation for the observed correlation between positive attitude towards aloneness and NSSI lies in the conceptualization of positive attitude towards aloneness in the LACA, the loneliness scale used in the current study.
The results of the correlational analysis further clarified the relations between loneliness, attitude towards aloneness, and NSSI functions. The positive correlation between parent- and peer-related loneliness and the automatic functions of NSSI can be explained by the fact that loneliness is an internal affective state associated with the evaluation of one’s social relationships (Heinrich & Gullone, 2006). Our results indicate that lonely adolescents may use NSSI as a means of managing distress associated with loneliness and not as a means of managing interpersonal relations and expectations (at least partially confirming the findings of Nock & Prinstein, 2005). Additionally, we found a positive association between positive attitude towards aloneness and automatic functions of NSSI. In line with Goossens and Marcoen (1999), we hypothesize that adolescents with a positive attitude towards aloneness tend to be alone because they want to avoid social contact on account of their NSSI. Further research is necessary to confirm this hypothesis.
The readers should note that the association between loneliness and NSSI is likely to be bidirectional. Chronic loneliness can increase the vulnerability to NSSI by influencing both body physiology and psychological processes. Physiologically, chronic loneliness can lead to a prolonged activation of the Hypothalamo-Pituitary-Adrenal (HPA) axis, which in turn may lead to consistently high levels of cortisol in the body (Hawkley & Cacioppo, 2010). Persistently high levels of cortisol may lead to aberrations in genome-wide DNA methylation, which have been shown to increase depression, anxiety, chronic fatigue, and cognitive impairment (Glad et al., 2017) and may ultimately also increase vulnerability to NSSI. Psychologically, loneliness can disrupt self-regulatory mechanisms. More specifically, individuals may engage in behaviors in which they would otherwise never engage, just to alleviate the negative affect associated with loneliness (Crepaz & Marks, 2001). On the other hand, engagement in NSSI may lead to feelings of shame, guilt, and regret that can increase social isolation. In addition to the affective consequences, the physical consequences of NSSI (e.g., scarring), along with the fear of stigmatization, can also lead to further social isolation (Garisch & Wilson, 2015).
Despite extending the literature on NSSI, the present work is not without its limitations. First, because of the cross-sectional nature of the study, no conclusions can be drawn regarding the directionality of effects among the study variables. As mentioned above, the relations between NSSI, loneliness, and attitude towards aloneness are likely to be bidirectional. Second, our findings cannot be extended beyond the current sample, as the study was based on convenience sampling. Further research with a larger and representative sample may be necessary to confirm our findings. Third, although self-report measures are appropriate for measuring loneliness and attitude towards aloneness, shared method variance may have led to inflated correlations among the study variables.
Overall, the current study not only highlighted the importance of two developmentally relevant constructs, that is, loneliness and attitude towards aloneness, in the context of NSSI, but it also suggested possible mechanisms connecting these constructs. Further longitudinal research is necessary to test these hypotheses more conclusively.
## Acknowledgements
The authors are greatly indebted to Cato Nys and Shana Tielemans for their assistance in data collection.
## Competing Interests
The authors have no competing interests to declare.
## References
1. Baetens, I., Claes, L., Muehlenkamp, J., Grietens, H., & Onghena, P. (2011). Nonsuicidal and suicidal self-injurious behavior among Flemish adolescents: A websurvey. Archives of Suicide Research, 15, 56–67. DOI: https://doi.org/10.1080/13811118.2011.540467
2. Beck, A. T., Steer, R. A., & Brown, G. K. (1996). Manual for the Beck Depression Inventory-II. San Antonio, TX: Psychological Corporation.
3. Caplan, S. E. (2006). Relations among loneliness, social anxiety, and problematic Internet use. Cyberpsychology & Behavior, 10, 234–242. DOI: https://doi.org/10.1089/cpb.2006.9963
4. Claes, L., Luyckx, K., & Bijttebier, P. (2014). Non-suicidal self-injury in adolescents: Prevalence and associations with identity formation above and beyond depression. Personality and individual differences, 61, 101–104. DOI: https://doi.org/10.1016/j.paid.2013.12.019
5. Claes, L., & Vandereycken, W. (2007). The Self-Injury Questionnaire—Treatment Related (SIQ-TR): Construction, reliability, and validity in a sample of female eating disorder patients. In: Goldfarb, P. M. (Ed.), Psychological tests and testing research trends, 111–139. New York, NY: Nova Science Publishers.
6. Crepaz, N., & Marks, G. (2001). Are negative affective states associated with HIV sexual risk behaviors? A meta-analytic review. Health Psychology, 20, 291–299. DOI: https://doi.org/10.1037/0278-6133.20.4.291
7. Galanaki, E. P. (2013). Solitude in children and adolescents: A review of the research literature. Psychology and Education – An Interdisciplinary Journal, 50, 79–88.
8. Gandhi, A., Luyckx, K., Goossens, L., Maitra, S., & Claes, L. (2016). Sociotropy, autonomy, and non-suicidal self-injury: The mediating role of identity confusion. Personality and Individual Differences, 99, 272–277. DOI: https://doi.org/10.1016/j.paid.2016.05.040
9. Garisch, J. A., & Wilson, M. S. (2015). Prevalence, correlates, and prospective predictors of non-suicidal self-injury among New Zealand adolescents: Cross-sectional and longitudinal survey data. Child and Adolescent Psychiatry and Mental Health, 9, 28. DOI: https://doi.org/10.1186/s13034-015-0055-6
10. Giletta, M., Scholte, R. H., Engels, R. C., Ciairano, S., & Prinstein, M. J. (2012). Adolescent non-suicidal self-injury: A cross-national study of community samples from Italy, the Netherlands and the United States. Psychiatry Research, 197, 66–72. DOI: https://doi.org/10.1016/j.psychres.2012.02.009
11. Glad, C. A., Andersson-Assarsson, J. C., Berglund, P., Bergthorsdottir, R., Ragnarsson, O., & Johannsson, G. (2017). Reduced DNA methylation and psychopathology following endogenous hypercortisolism–A genome-wide study. Scientific Reports, 7. DOI: https://doi.org/10.1038/srep44445
12. Goossens, L. (2014). Affinity for aloneness in adolescence and preference for solitude in childhood: Linking two research traditions. In: Coplan, R. J., & Bowker, J. C. (Eds.), The handbook of solitude: Psychological perspectives on social isolation, social withdrawal, and being alone, 150–166. Malden, MA: Wiley Blackwell.
13. Goossens, L., & Marcoen, A. (1999). Adolescent loneliness, self-reflection, and identity: From individual differences to developmental processes. In: Rotenberg, K. J., & Hymel, S. (Eds.), Loneliness in childhood and adolescence, 225–243. New York, NY: Cambridge University Press. DOI: https://doi.org/10.1017/CBO9780511551888.011
14. Hawkley, L. C., & Cacioppo, J. T. (2010). Loneliness matters: A theoretical and empirical review of consequences and mechanisms. Annals of Behavioral Medicine, 40, 218–227. DOI: https://doi.org/10.1007/s12160-010-9210-8
15. Hayes, A. F., & Preacher, K. J. (2014). Statistical mediation analysis with a multicategorical independent variable. British Journal of Mathematical and Statistical Psychology, 67, 451–470. DOI: https://doi.org/10.1111/bmsp.12028
16. Hazel, N. A., Oppenheimer, C. W., Technow, J. R., Young, J. F., & Hankin, B. L. (2014). Parent relationship quality buffers against the effect of peer stressors on depressive symptoms from middle childhood to adolescence. Developmental Psychology, 50, 2115–2123. DOI: https://doi.org/10.1037/a0037192
17. Heinrich, L. M., & Gullone, E. (2006). The clinical significance of loneliness: A literature review. Clinical Psychology Review, 26, 695–718. DOI: https://doi.org/10.1016/j.cpr.2006.04.002
18. Kiekens, G., Claes, L., Demyttenaere, K., Auerbach, R. P., Green, J. G., Kessler, R. C., Bruffaerts, R., et al. (2016). Lifetime and 12-month non-suicidal self-injury and academic performance in college freshmen. Suicide and Life-Threatening Behavior, 46, 563–576. DOI: https://doi.org/10.1111/sltb.12237
19. Lasgaard, M., Goossens, L., Bramsen, R. H., Trillingsgaard, T., & Elklit, A. (2011). Different sources of loneliness are associated with different forms of psychopathology in adolescence. Journal of Research in Personality, 45, 233–237. DOI: https://doi.org/10.1016/j.jrp.2010.12.005
20. Maes, M., Klimstra, T., Van den Noortgate, W., & Goossens, L. (2015). Factor structure and measurement invariance of a multidimensional loneliness scale: Comparisons across gender and age. Journal of Child and Family Studies, 24, 1829–1837. DOI: https://doi.org/10.1007/s10826-014-9986-4
21. Marcoen, A., Goossens, L., & Caes, P. (1987). Loneliness in pre-through late adolescence: Exploring the contributions of a multidimensional approach. Journal of Youth and Adolescence, 16, 561–576.
22. Muehlenkamp, J. J., Walsh, B. W., & McDade, M. (2010). Preventing non-suicidal self-injury in adolescents: The signs of self-injury program. Journal of Youth and Adolescence, 39, 306–314. DOI: https://doi.org/10.1007/s10964-009-9450-8
23. Nock, M. K. (2009). Why do people hurt themselves? New insights into the nature and functions of self-injury. Current Directions in Psychological Science, 18, 78–83. DOI: https://doi.org/10.1111/j.1467-8721.2009.01613.x
24. Nock, M. K., & Prinstein, M. J. (2005). Contextual features and behavioral functions of self-mutilation among adolescents. Journal of Abnormal Psychology, 114, 140–146. DOI: https://doi.org/10.1037/0021-843X.114.1.140
25. Qualter, P., Brown, S. L., Munn, P., & Rotenberg, K. J. (2010). Childhood loneliness as a predictor of adolescent depressive symptoms: An 8-year longitudinal study. European Child & Adolescent Psychiatry, 19, 493–501. DOI: https://doi.org/10.1007/s00787-009-0059-y
26. Selby, E. A., Connell, L. D., & Joiner, T. E., Jr. (2010). The pernicious blend of rumination and fearlessness in non-suicidal self-injury. Cognitive Therapy and Research, 34, 421–428. DOI: https://doi.org/10.1007/s10608-009-9260-z
27. Swannell, S. V., Martin, G. E., Page, A., Hasking, P., & St John, N. J. (2014). Prevalence of nonsuicidal self-injury in nonclinical samples: Systematic review, meta-analysis and meta-regression. Suicide and Life-Threatening Behavior, 44, 273–303. DOI: https://doi.org/10.1111/sltb.12070
28. Vanheule, S., Desmet, M., Groenvynck, H., Rosseel, Y., & Fontaine, J. (2008). The factor structure of the Beck Depression Inventory–II: An evaluation. Assessment, 15, 177–187. DOI: https://doi.org/10.1177/1073191107311261
29. Victor, S. E., & Klonsky, E. D. (2014). Correlates of suicide attempts among self-injurers: A meta-analysis. Clinical Psychology Review, 34, 282–297. DOI: https://doi.org/10.1016/j.cpr.2014.03.005
30. Wang, J. M., Rubin, K. H., Laursen, B., Booth-LaForce, C., & Rose-Krasnor, L. (2013). Preference-for-solitude and adjustment difficulties in early and late adolescence. Journal of Clinical Child & Adolescent Psychology, 42, 834–842. DOI: https://doi.org/10.1080/15374416.2013.794700
31. Xavier, A., Pinto-Gouveia, J., Cunha, M., & Carvalho, S. (2016). Self-criticism and depressive symptoms mediate the relationship between emotional experiences with family and peers and self-injury in adolescence. Journal of Psychology, 150, 1046–1061. DOI: https://doi.org/10.1080/00223980.2016.1235538
32. You, J., Lin, M. P., & Leung, F. (2015). A longitudinal moderated mediation model of nonsuicidal self-injury among adolescents. Journal of Abnormal Child Psychology, 43, 381–390. DOI: https://doi.org/10.1007/s10802-014-9901-x
https://stable.publiclab.org/tag/pm-monitoring/author/richardbowman
# Collecting Data on Particulate Matter pm-monitoring
Before undertaking air monitoring for Particulate Matter (PM), [identify the end goals of monitoring for your community](https://publiclab.org/wiki/frac-sand-action-oriented-resources#Strategizing). Monitoring [airborne particles](/wiki/pm) can be prohibitively expensive, and data that is actionable for regulators can take years to collect. To be efficient, the accuracy and precision of collected data should be appropriate for its end use -- not all data needs to be of regulatory quality in order to be useful. For example, a community may want to collect data to:

- highlight a problem for the purposes of [community mobilization](/wiki/frac-sand-action-oriented-resources)
- identify emissions hotspots for more monitoring
- identify key times for [visual monitoring](/wiki/visual-pm)
- compel industry to pay for community monitors to become certified in visual monitoring [(Smoke School)](/wiki/visual-pm#Smoke+School+Certification)
- compel [regulatory monitoring](/wiki/pm-monitoring-regulations) through screening data
- document a violation of [national air quality standards](https://publiclab.org/wiki/frac-sand-legislation#National+Ambient+Air+Quality+Standards)

Airborne particles are clustered into [three rough size ranges, or modes, of particles in the air](/wiki/pm#dust,+droplets,+&+particle+size): dust, droplets, and ultrafine particles. While droplets and ultrafines are largely combustion by-products, dust is broken off of larger materials. No single method of PM monitoring covers all categories.

[Dust](/wiki/pm#Dust) is the most established particle mode to monitor. However, dust is ubiquitous, so industrial dust emissions can be difficult to trace back to their source.

[Droplets](/wiki/pm#Droplets) are difficult to monitor. In real-time [optical PM monitors](/wiki/optical-pm), humidity and temperature effects interfere substantially with measurements. Humidity also affects [filter-based PM monitors](/wiki/filter-pm), and the question of allowable water content in droplets is actively debated. Read more on the [NAAQS](https://publiclab.org/wiki/frac-sand-legislation#National+Ambient+Air+Quality+Standards).

The study of [ultrafine particles](https://publiclab.org/wiki/pm#Droplets’+Beginnings:+Ultrafine+nulceotoids) is fairly new. There are no regulatory categories that apply to ultrafines, and no inexpensive means to monitor them. Exposure to ultrafine particles is associated with proximity to combustion, especially of diesel and marine fuels, since most ultrafines are formed through atmospheric reactions of gases.

 _([chart found on pg 27](https://www.niehs.nih.gov/health/assets/docs_a_e/ehp_student_edition_lesson_particles_size_makes_all_the_difference.pdf))_

Due to the varied and significant challenges of accurate monitoring, it is important to determine the data quality (accuracy and precision) needed for specific research or advocacy end-goals.

## Proposed precision categories for citizen monitoring

State and federal regulators are empowered to make judgements based on [visual assessments of particle pollution](/wiki/visual-pm), but at present regulators have no statutory guidance or authority to interact with PM data collected with instruments other than their (very expensive) [regulatory monitors](https://publiclab.org/wiki/pm-monitoring-regulations) or on timescales shorter than annually.
This can lead to [curt rejections of scientifically sound data](/notes/liz/10-01-2015/when-100-000-is-not-enough-how-citizen-data-could-relate-to-government-regulation). Federal regulators recognize this issue and are working to fund the development and evaluation of lower-cost air sensors. During an evaluation process, an EPA scientist tabulated potential categories of community-collected data based on precision, as discussed in the [Air Sensor Guidebook](http://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=277996). These categories are prospective (except for regulatory monitoring, Category V) and should only be treated as guidelines for technologies in development.

![Precision categories for community-collected data](//i.publiclab.org/system/images/photos/000/014/314/original/chart_2.png)

## Prompting action to address airborne particles

Given that regulators are currently unlikely to make judgements based on any data other than [visual monitoring](/wiki/visual-pm) and [regulatory monitoring](/wiki/pm-monitoring-regulations), community-based PM data, in isolation, is likely to be ineffective at prompting official enforcement. Thus, community-collected PM data needs to be accompanied by strong advocacy to prompt further investigation or leverage publicity and public relations. For information about best practices for developing a community environmental monitoring study, see [this wiki](https://publiclab.org/wiki/start-enviro-monitor-study).

#### Regulatory grade PM monitoring

[Regulatory monitors](https://publiclab.org/wiki/pm-monitoring-regulations) cost $20-60,000 to buy, ~$100/day to analyze, and require 1-3 years of data to evaluate compliance with regulatory standards. It is also important to note that a failure to demonstrate an exceedance of PM2.5 or PM10 standard limits does not necessarily indicate safe conditions. Particles of the respirable size-fraction, which have severe health consequences, are mostly excluded from PM2.5 measurements and are not differentiated (or acknowledged) in PM10 measurements. For more information, please read [this wiki](https://publiclab.org/wiki/silica-monitoring). Additionally, the composition of particles is not routinely determined, so particularly damaging substances may cause negative health impacts at permissible particle concentrations. For example, airborne silica [can be dangerous at 5-10% of regulatory limits on particle concentration](/wiki/pm-monitoring#monitoring-silica).

#### [Smoke School](/wiki/visual-pm)

A visible emission is any visible airborne particle resulting from a process. Visible emissions usually include [respirable particles](/wiki/pm#Respirable+Particles), and can be measured by their effects on the opacity of the air. Opacity is expressed as the percentage of light that is scattered or blocked by emissions such that an observer's view through a plume is obscured. Opacity can be monitored through visual assessment with only human eyes and a stopwatch. Examples of pollutants that change opacity are smoke stack emissions and fugitive dust. Read more about visible emissions and certification programs in the [visual particulate matter wiki](https://publiclab.org/wiki/visual-pm). Certifying community observers in EPA Method 9 can be written into a facility’s permits, though it is not always. If you have information about when and where permit fees are required to cover community certifications, please add to this wiki or write a research note!
Communities may find it useful to conduct visible emission monitoring and also engage in other [advocacy strategies](/wiki/frac-sand-advocacy-leverage-points) to gain the most leverage.

## Types of monitoring equipment

Most monitors give a mass-based particle concentration for all particles in a [size category](/wiki/pm-monitoring-regulations), meaning they do not differentiate between the relative mass contribution of different sizes of particles within that category. Only systems that capture and save particulate matter can identify, or 'speciate', particles by size or elemental composition.

#### [Filter-based systems](/wiki/filter-pm)

_Used for: regulatory monitoring, supplementary monitoring_

Filter-based systems can collect particles for laboratory methods of speciation, and are the basis of [Federal Reference Methods](https://publiclab.org/wiki/pm-monitoring-regulations#The+Federal+Reference+Methods:). Data can only be analyzed after collection, not in real-time. Usually samples are collected over a 24-hour period and the weighted average concentration (by mass) for those 24 hours is produced. Filter-based gravimetric systems are usually the most precise measurements of PM.

#### [Optical systems](/wiki/optical-pm)

_Used for: personal exposure monitoring, supplementary monitoring, hotspot identification, hotspot characterization, education_

Optical electronic systems offer the possibility of real-time particle counts, which are valuable for hotspot identification, recording short-term high emissions events, and identifying when air may pose a health threat. Their data is significantly affected by humidity, though. More precise monitors usually include a filter-based system to correct data after collection, such as what Public Lab plans to do by [collocating optical systems with passive monitors](https://publiclab.org/wiki/pm-dev).

#### [Passive systems](/wiki/passive-pm)

_Used for: personal exposure monitoring, supplementary monitoring, hotspot characterization, education_

Passive systems have no moving parts and are easy to deploy for long-term monitoring without electricity. They can approach the precision of regulatory monitoring and are within the accuracy and precision ranges necessary for supplementary monitoring. Passive monitors generally require longer sample collection periods (3-10 days) than active filter-based monitoring, and are better used to characterize hotspots than to identify them. Passive monitors collect particles onto filters or slides, so there is the opportunity to do some limited speciation analyses of particles.

#### Read more on [monitoring silica](/wiki/silica-monitoring)...
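To make the gravimetric arithmetic above concrete, here is a minimal sketch (mine, not from the wiki; the filter masses and the 16.7 L/min flow rate are illustrative assumptions) that turns a pre/post filter weighing and a pump flow rate into a 24-hour average mass concentration:

```python
# Hypothetical gravimetric PM calculation:
# concentration = mass collected on the filter / volume of air sampled.

def pm_concentration_ug_m3(mass_before_mg, mass_after_mg, flow_lpm, duration_h):
    """Average PM concentration in micrograms per cubic metre."""
    collected_ug = (mass_after_mg - mass_before_mg) * 1000.0  # mg -> ug
    volume_m3 = flow_lpm * 60.0 * duration_h / 1000.0         # L/min over the run -> m^3
    return collected_ug / volume_m3

# Example: filter gains 0.4 mg over a 24-h run at an assumed 16.7 L/min (~1 m^3/h).
print(round(pm_concentration_ug_m3(10.0, 10.4, 16.7, 24.0), 1), "ug/m^3")
```

Optical and passive systems report concentrations differently, but the same mass-per-volume bookkeeping is what the size categories above refer to.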
http://math.stackexchange.com/questions/137011/futures-pricing-and-futures-price-process-under-the-real-world-measure/194828 | # Futures pricing and futures price process under the real world measure
This is something that keeps bothering me about the Benchmark approach of Platen, which (very) briefly is as follows: compare the development of an economic value with a growth optimal portfolio. Taking the expectation under the real-world measure $\mathbb{P}$, conditioned on today's information, this should yield a fair and in some sense arbitrage-free price, because you compare to the best possible portfolio (in expectation). Now, I want to know the price of a future on $X_T$ (hence $F(T,T)=X_T$). In discrete time, one just has to sum the discounted futures margin payments. For a partition $\sigma$ of the time interval, this yields:
$$\Sigma_t^{(\sigma)} = \sum_{i=1}^N\frac{1}{S^{\pi^*}_{t_i}}( F(t_i,T) - F(t_{i-1},T) )\\ = {\sum_{i=1}^N\frac{1}{S^{\pi^*}_{t_{i-1}}}( F(t_i,T) - F(t_{i-1},T) )} +{\sum_{i=1}^N\left(\frac{1}{S^{\pi^*}_{t_i}}-\frac{1}{S^{\pi^*}_{t_{i-1}}}\right)( F(t_i,T) - F(t_{i-1},T) )}$$
Going to continuous time, this converges to $\int_t^{T} \left(\frac{1}{S^{\pi^*}_{s-}}\right) dF(s,T)+\int_t^{T}d\left[\left(\frac{1}{S^{\pi^*}}\right),F\right]_s$. Since the net position is zero and using the product rule for semimartingales, we get for the futures price at time $t$: $$\mathbb{E}\left[ \frac{F(T,T)}{S^{\pi^*}_{T}}-\int_t^{T}F(s-,T)d\left( \frac{1}{S^{\pi^*}_s} \right) \big|\mathcal{F}_t\right] = \mathbb{E}\left[ \frac{F(t,T)}{S^{\pi^*}_{t}} \big|\mathcal{F}_t\right] = \frac{F(t,T)}{S^{\pi^*}_{t}}$$ To me this seems intractable. Is there any way to even come close to the futures price process? I read about some extensions of the usual setting (equivalent martingale measure, continuous interest rate process of finite variation), which yields $F(t,T)=\mathbb{E}^{\mathbb{Q}}[F(T,T)|\mathcal{F}_t]$. But none of these can be applied here. Any ideas?
@user13655: I think this question should be moved to the quant.stackexchange.com forum. Best regards. – TheBridge May 10 '12 at 15:38
Well, at least there is a solution for deterministic interest rate. As an example take a BS model:
$$dS_t = S_t (rdt+\theta^2dt+\theta dW_t)$$
implies
$$d\left(\frac{1}{S_t}\right) = \frac{1}{S_t} (-rdt-\theta dW_t)$$
Define $\bar{F}=\bar{F}(t,T):=\frac{F(t,T)}{S_t}$, which implies $\bar{F}(T,T)=\frac{F(T,T)}{S_T}$, and $H_t = \mathbb{E}[\frac{h(L_T)}{S_T}|\mathcal{F}_t]$. $H_t$ is a martingale.
The solution of $Z_t = 1-\int_0^t Z_{s}\,dX_s$, with $X_t=\int_0^t r\,ds$, is $Z_t=\mathcal{E}(-X)_t$, which is just the reciprocal of the savings account, i.e. $\mathcal{E}(-X)_t=\frac{1}{B_t}$. Note that $d\left( \mathcal{E}(-X)_t \right) = -\mathcal{E}(-X)_t\, dX_t = -\frac{1}{B_t}\,r\, dt = d\frac{1}{B_t}$.
We get:
$$0=H_t + \mathbb{E}\left[ -\bar{F}(t,T) - \int_t^{T}\bar{F}(s-,T)dX_s \big|\mathcal{F}_t\right],$$ which looks very similar to an OU-SDE (except it doesn't run from $0$ to $t$). After looking carefully at the solution of the usual OU-SDE (see Revuz and Yor p. 378 Prop. 2.3), I tried $$\bar{F}(t,T)=\mathbb{E}\left[ \mathcal{E}(-X)_t\left( \frac{H_t}{\mathcal{E}(-X)_T}+\int_t^T\mathcal{E}(-X)^{-1}_s(dH_s-d\langle H,X\rangle_s) \right)|\mathcal{F}_t\right]$$
Since $H_t$ is a martingale and $X_t$ deterministic, we can forget about the integral, and since $\mathcal{E}(-X)_t/\mathcal{E}(-X)_T = B_T/B_t$, we obtain: $$\bar{F}(t,T)=\frac{B_T}{B_t}\mathbb{E}\left[ \frac{h(L_T)}{S_T} |\mathcal{F}_t \right]\Rightarrow F(t,T)=S_t \frac{B_T}{B_t}\mathbb{E}\left[ \frac{h(L_T)}{S_T} |\mathcal{F}_t \right]$$
Plugging in shows that this is indeed a solution. I am not sure of uniqueness though. No solution for random $X_t$ yet, let alone if $\langle X,H\rangle\neq0$.
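As a numerical sanity check of the final formula, the following Monte Carlo sketch is my addition, not the poster's. Since $h$ and $L_T$ are never specified, it assumes for illustration that the payoff is a call on the GOP itself, $h(L_T)=\max(S_T-K,0)$, takes the GOP dynamics $dS_t=S_t((r+\theta^2)dt+\theta dW_t)$ under $\mathbb{P}$ from the answer above, and compares $S_0\frac{B_T}{B_0}\mathbb{E}^{\mathbb{P}}[h/S_T]$ with the risk-neutral futures price $\mathbb{E}^{\mathbb{Q}}[h]$:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the post)
S0, r, theta, T, K = 1.0, 0.03, 0.2, 1.0, 1.1
Z = np.random.default_rng(0).standard_normal(2_000_000)

# GOP under P: dS = S((r + theta^2) dt + theta dW)
S_T_P = S0 * np.exp((r + theta**2 / 2) * T + theta * np.sqrt(T) * Z)
payoff_P = np.maximum(S_T_P - K, 0.0)
bench = S0 * np.exp(r * T) * np.mean(payoff_P / S_T_P)  # S0 (B_T/B_0) E^P[h/S_T]

# Same asset under Q: drift r instead of r + theta^2
S_T_Q = S0 * np.exp((r - theta**2 / 2) * T + theta * np.sqrt(T) * Z)
futures_Q = np.mean(np.maximum(S_T_Q - K, 0.0))         # E^Q[h] = futures price

print(bench, futures_Q)  # the two estimates agree up to Monte Carlo error
```

The agreement reflects the Radon-Nikodym identity $dQ/dP = e^{rT}S_0/S_T$ for the numeraire portfolio, so the check fails loudly if the $B$ ratio is inverted.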
https://cypenv.info/and-relationship/expected-return-and-standard-deviation-relationship-to-variance.php | # Expected return and standard deviation relationship to variance
### Measures of Risk - Variance and Standard Deviation
Learning objectives: calculate the expected return of an investment portfolio, and explain the importance of a stock's variance and standard deviation in relation to expected return. Risk reflects the chance that the actual return on an investment may be very different from the expected return. Standard deviation is a measure of how much an investment's returns can vary from its expected return, and thus a measure of the risk that an investment will not meet the expected return in a given period. For math-oriented readers, standard deviation is the square root of the variance: take the difference of each return from the expected return, square those differences (that is, multiply each difference by itself), average the squared differences to get the variance, and take the square root.
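A minimal sketch of that computation (my own, with made-up return figures; it uses the population form of the variance, dividing by N):

```python
# Illustrative single-asset returns over five periods (made-up numbers).
returns = [0.05, -0.02, 0.08, 0.01, 0.03]

expected = sum(returns) / len(returns)                  # expected (mean) return
squared_devs = [(x - expected) ** 2 for x in returns]   # squared differences
variance = sum(squared_devs) / len(returns)             # average squared deviation
std_dev = variance ** 0.5                               # standard deviation = sqrt(variance)

print(expected, variance, std_dev)
```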
The Gauntlet is the best antivirus filter of all that I have invented. Not even a mosquito can get through this net.
After holding a long pause, Midge sighed loudly.
https://www.physicsforums.com/threads/pressure-and-force-analysis.534667/ | # Pressure and force analysis
1. Sep 28, 2011
### Misr
I don't understand what makes us analyse a certain force into two perpendicular components.
http://library.thinkquest.org/C001429/statics/pressure.htm
http://library.thinkquest.org/C001429/images/cos.jpg
why do we calculate pressure in this image from the relation P = F cos θ / A
and not P = F/A?? Is this because pressure is always perpendicular? And if so, why is pressure perpendicular when the force makes an angle with the normal?
Last edited by a moderator: Apr 26, 2017
2. Sep 28, 2011
### Studiot
If you apply a force at some angle to a surface as shown in your picture, that force has two components.
One component is perpendicular to the area. This is called the normal stress or direct stress or pressure. This is $F\cos(\theta)/A$.
The other component is parallel to the area and is called the shear stress. It is not a pressure, since it does not 'press' on the area, but drags along the surface. This is $F\sin(\theta)/A$.
F is the resultant of these two forces the normal force and the shear force.
go well
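A quick numerical illustration of this decomposition (a sketch of mine, with an assumed 100 N force on a 0.01 m² area):

```python
import math

F, A = 100.0, 0.01  # newtons, square metres (assumed values)

for deg in (0, 30, 60, 90):
    th = math.radians(deg)
    pressure = F * math.cos(th) / A  # normal (direct) stress, Pa
    shear = F * math.sin(th) / A     # shear stress, Pa
    print(f"{deg:2d} deg: pressure = {pressure:7.1f} Pa, shear = {shear:7.1f} Pa")
```

At 0° the whole force presses (maximum pressure, no shear); at 90° it only drags (no pressure, maximum shear).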
3. Sep 29, 2011
### Misr
What is shear stress?
I don't understand why F is the resultant of those two forces.
4. Sep 29, 2011
### Studiot
Good morning, misr.
When you resolve a force in some direction, as you have done in your diagram, you get two components (in 2 dimensions).
I assume you understand this since you showed Fcos($\theta$)
The component you showed is called the normal force. This acts perpendicular to the area.
The other one is called the shear force. This acts parallel to the area.
Do you understand this so far?
5. Sep 29, 2011
### Misr
Yeah, I understand this,
but is this normal force due to gravity or something else?
Because the object is pressing upon the surface, the surface reacts with this normal force? Correct?
6. Sep 29, 2011
### Studiot
I really have no idea what the source of F is, you drew the diagram after all.
A few basic ideas.
In mechanics we need to distinguish between Force and stress.
Stress is Force divided by area.
I think we both know that Force has a specific line of action and a point of application.
We distinguish between body forces such as gravity, which act throughout a body and surface forces which are externally applied by some agent (eg the tension in a string).
We consider the body force of gravity to conform to my rule above by saying that this force acts at the centre of gravity and is called the weight. This force cannot directly exert a pressure; it is an internal force.
If a block is resting on a (horizontal) table its weight exerts a downward pressure on the table. This pressure equals the weight divided by the area of the block sitting on the table.
I have sketched this in fig1.
As noted above this is also called the stress imposed on the table by the block.
Since the weight acts perpendicular to the table that is all there is to it.
Now suppose I replaced the block with a light plate and a spring compressed so as to exert a force on the plate equal to W, the weight of the block. Although the source of the force on the table is quite different the effect is the same ie the pressure is the same, so long as the spring force is exerted vertically. I have sketched this in Fig2.
Now suppose we tilt the spring to apply its force at an angle, as shown in Fig3.
What do you think the pressure on the table would be if the angle was 90 degrees as shown in fig4?
Obviously there is no vertical force now being applied to the table via the plate. There is, however, still a force being applied.
This is called the shear force (in this case it is also the friction force).
If we return to Fig3 and apply our spring force at some intermediate angle we have two components, one vertical and one horizontal. This is a more general situation.
does this help?
#### Attached Files:
pre1.jpg (12.3 KB): the sketches referenced as Figs. 1-4 above
7. Sep 30, 2011
### Misr
I don't understand this. Is the plate fixed to the table and you are pulling the spring?
Why is the pressure the same for both fig (1) and (2)?
Why do we define the pressure as the vertical force per unit area? Why not the net force per unit area?
8. Sep 30, 2011
### Studiot
Now why do you suppose I said the spring was compressed?
I do hope you are reading this fully.
No the plate just sits on the table.
I said it is a light plate so that means it has no weight of its own.
The spring pushed down against the plate.
Well I did say the force the spring pushed down with was set to be the same as the weight of the block.
Now try reading my post again and see if it makes more sense.
9. Oct 5, 2011
### Misr
Certainly, it makes more sense now.
I guess I understand what you are trying to say: the pressure becomes less when the force is not perpendicular.
Now I want to ask another question:
Why do we define pressure as "AVERAGE" force acting normally on unit area at this point
as in the page provided above?
10. Oct 5, 2011
### Studiot
Yes that's right.
If two equal forces are acting (on equal surfaces) one perpendicular and one at an angle - the perpendicular force exerts more pressure than the angled one.
Most people just accept what they are told, but you are obviously a thinking person so here is some extra detail.
The terms 'normal stress' and 'pressure' refer to the same physical phenomenon.
'Normal stress' is usually used in connection with solids and 'pressure' in connection with fluids.
We sometimes talk about pressure in connection with solids when we are considering contact stresses between two solids for instance 'foundation pressure' or 'bearing pressure'.
In the first sketch I have a 1kg weight sitting on a block of concrete, which is much bigger than the weight.
On the surface (section AA) where the weight is sitting the weight is concentrated only over the area of the contact surface, not over the whole area of AA.
As we go deeper into the concrete, the 1kg spreads out over a wider and wider part of the concrete until we can say that the weight exerts an average pressure of 1kg divided by the area of the concrete block at section CC.
At intermediate section BB the pressure exerted by the weight is intermediate between that at AA and CC.
So what would happen if the block of concrete extended much further?
Well in sketch 2 I have shown the foundation pressure under a building of weight W. You can see a series of 'bowls of soil' that get larger and larger in area as we get further from the building. So W is distributed over an increasing area and the pressure gradually diminishes over these increasing areas.
Back to fluids, for although the pressure is the same in all directions at a point in a fluid, it can still vary from point to point.
So in sketch 3 I have shown the steadily increasing pressure of the water on the back of a dam. This increases linearly from nothing at the surface to a maximum at the base. As a result I have shown a triangle of forces.
I do not know if you have yet covered centre of gravity?
The 'average' pressure is the pressure at half the depth. The force on the dam equals this average pressure times the wetted area of the back of the dam, and the resultant acts through the centre of gravity of the triangle (properly called the centre of pressure).
go well
#### Attached Files:
pre2.jpg (8.4 KB): the sketches for the concrete block, foundation, and dam examples
Last edited: Oct 5, 2011
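Putting numbers on the dam example (a sketch of mine, with assumed dimensions): the pressure grows linearly as p = ρgh, so the average pressure on the wetted face is the pressure at half depth, and the resultant force is that average times the wetted area, acting through the centre of pressure at two-thirds depth.

```python
rho, g = 1000.0, 9.81       # water density kg/m^3, gravity m/s^2
depth, width = 10.0, 25.0   # assumed dam dimensions, metres

p_max = rho * g * depth             # pressure at the base, Pa
p_avg = p_max / 2.0                 # average of the triangular distribution
force = p_avg * depth * width       # average pressure x wetted area, N
centre_of_pressure = 2.0 * depth / 3.0  # depth below the surface, m

print(f"force = {force/1e6:.2f} MN, centre of pressure {centre_of_pressure:.2f} m down")
```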
11. Oct 6, 2011
### Misr
I can't imagine this :(
In fig (1), as we go deeper, the pressure on a certain surface is not affected according to the relation P = h·ρ·g,
where h is the height of the block.
12. Oct 6, 2011
### Studiot
A concrete block is not a fluid.
However the concrete at each section also experiences the pressure due to the weight of the concrete above it, just like a fluid.
But I am only considering the effect of the 1kg weight - or if you like the extra effect of the 1kg weight.
13. Oct 6, 2011
### Misr
This one is okay, but if so, how could we calculate the total pressure on the back of the dam if each point on the back of the dam has a different pressure?
14. Oct 6, 2011
### Misr
You are right... sorry for this stupidness,
but I still can't imagine it.
15. Oct 6, 2011
### Studiot
There is no such animal as 'total pressure' - There is only average pressure over a whole area or specific pressure at a point. Pressure does not 'add up' over the area.
You are thinking of Force.
If I exert 10 pascal over 1 square metre and 10 pascal over another half a square metre, there is no total pressure, just a pressure of 10 pascal acting.
The force, however, is 10 newton on the first area and 5 newton on the second, 15 newton in total.
Have a good look at the response by Halls of Ivy to your question about the fluid in the tank, it is similar to my dam example.
It is fundamental and very important to distinguish between force and pressure.
16. Oct 6, 2011
### Studiot
It's coming up to 1 am where I am and, as you can see from my passport photo, I desperately need my beauty sleep so we will have to continue this another day, but please come back and confirm you have conquered the difference (and link as they are intimately connected) between force and pressure.
go well
17. Oct 9, 2011
### Misr
Well, the main problem here is that I can't imagine how the pressure is distributed over a larger area on going deeper.
Isn't the contact area the same?
What are you talking about? What is "the centre of gravity"?
Yeah
18. Oct 9, 2011
### Studiot
Is this a language problem or a physics problem? We really need to know the extent of your knowledge to be able to help as a centre of gravity is a pretty basic concept in physics.
I have also said, as did Redbelly in another thread, that it is fundamental that you understand the difference between force and pressure.
This is fine if we need to start there (or even further back), but progress cannot be made without it.
19. Oct 11, 2011
### Misr
It's a physics problem,
and I think I know the difference between force and pressure: pressure is the force divided by area. Anything else?
20. Oct 11, 2011
### Studiot
The 'Centre of Gravity' of any body is the point through which all the weight of that body may be considered to act as a single concentrated force.
For instance consider a cannon ball. We take the weight to be one force - say W, acting vertically downwards at the geometric centre of the ball.
Things may get much more complicated however - take for instance an L shaped metal bracket.
Now the weight of a body is just the sum of all the weights of the individual small elements added up
W = sum of the individual element weights w
Are you familiar with sigma notation?
W = Σw
W and w are, of course, forces.
We can add forces from a different cause that are distributed over a body, area or volume in the same way to obtain a 'centre of force' for that particular force.
The 'Centre of Pressure' is just such a calculation.
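To connect the Σw idea to an actual centre-of-force calculation, here is a sketch of mine with made-up elements: the centre is the force-weighted average of the element positions.

```python
# Made-up small elements of a body: (weight w in N, x position in m)
elements = [(2.0, 0.1), (2.0, 0.3), (5.0, 0.8)]

W = sum(w for w, _ in elements)            # total weight, W = sum of w
x_c = sum(w * x for w, x in elements) / W  # centre of force (here, of gravity)

print(f"W = {W} N, centre at x = {x_c:.3f} m")
```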
https://forum.allaboutcircuits.com/threads/tricky-algebra-question.55323/ | # Tricky Algebra question
Discussion in 'Math' started by Jess_88, Jun 3, 2011.
1. ### Jess_88 Thread Starter Member
Hey guys.
I was wondering if someone could show me the working out (simplification) for these two equations. This isn't a homework question... just study. If more information is needed... let me know.
1) $\frac{5}{s+2} = \frac{V_{o}}{2s+1} + \frac{V_{o}}{2 + \frac{1}{s}}$
2) $\frac{3}{S} - V_{o} - \frac{V_{o} + \frac{1}{S}}{\frac{2}{S}} + \frac{4}{S} - V_{o} = 0$
thanks guys
2. ### jegues Well-Known Member
For 1,
$\frac{5}{s+2} = \left( \frac{s+1}{2s+1} \right)V_{o}$
$\frac{ 5 \times (2s+1)}{(s+2)(s+1)} = V_{o}$
$\Rightarrow \frac{10s+5}{(s+2)(s+1)} = V_{o}$
Or if you're confused by the first simplification,
$\frac{V_{o}}{2s+1} + \frac{V_{o}}{2 + \frac{1}{s}}$
$= \left( \frac{1}{2s+1} + \frac{1}{\frac{2s+1}{s}} \right)V_{o}$
$= \left( \frac{1}{2s+1} + \frac{s}{2s+1} \right)V_{o}$
$= \left( \frac{s+1}{2s+1} \right)V_{o}$
If I have time later I will look through the 2nd one as well. (It's the same type of algebra, nothing difficult)
3. ### Jess_88 Thread Starter Member
ah that's great!
Yeah I understand it's simple... I just haven't worked with such fractions in a long time.
If you do have time to do the second one later that would be great.
Thanks for the help
4. ### BillO Distinguished Member
You need to take it in steps. The first step is to eliminate the 2/S denominator by multiplying both the top and bottom of the second term by S/2,
so that we have:
$\frac{3}{S} -Vo\ \ -\frac{1}{2}-\frac{SVo}{2}\ \ +\frac{4}{S}-Vo\ \ =\ \ 0$
Collecting the terms with Vo on one side we get
$2Vo+\frac{S}{2}Vo\ \ =\ \ \frac{7}{S} - \frac{1}{2}$
Multiplying both sides by 2 and simplifying:
$\Rightarrow\ \ \ \ Vo(4+S)\ \ =\ \ \frac{14-S}{S}$
$\Rightarrow\ \ \ \ Vo\ \ =\ \ \frac{14-S}{S(S+4)}$
Last edited: Jun 3, 2011
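For completeness, both results invert cleanly with partial fractions. The following sympy sketch is my addition, not part of the thread (s here stands for the same Laplace variable written S above):

```python
from sympy import symbols, apart, inverse_laplace_transform

s, t = symbols('s t', positive=True)

Vo1 = (10*s + 5) / ((s + 2)*(s + 1))  # jegues' result for equation 1
Vo2 = (14 - s) / (s*(s + 4))          # BillO's result for equation 2

print(apart(Vo1, s))  # 15/(s + 2) - 5/(s + 1)
print(apart(Vo2, s))  # 7/(2*s) - 9/(2*(s + 4))
print(inverse_laplace_transform(Vo1, s, t))  # 15*exp(-2*t) - 5*exp(-t)
print(inverse_laplace_transform(Vo2, s, t))  # 7/2 - 9*exp(-4*t)/2
```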
5. ### Jess_88 Thread Starter Member
aaah! I got it!
6. ### victorhugo289 Member
I thought you had to solve for something in here, but this is a plug-in-the-values type of equation.
Not much simplification, really.
It's more like a formula. It is a formula...
Last edited: Jun 3, 2011
7. ### Jess_88 Thread Starter Member
It's a Laplace transform question. I need to arrange the equation in terms of Vo and use partial fractions to determine the inverse Laplace transform.
8. ### KL7AJ AAC Fanatic!
You "just" need to find the common abominator.
Eric
https://www.physicsforums.com/threads/how-to-find-the-plane-at-which-two-hyperplanes-intersect.601258/ | # How to find the plane at which two hyperplanes intersect.
1. Apr 28, 2012
### jenny_shoars
I know that to find the line at which two planes intersect, you can take the cross product of their normal vectors. This gives you a vector parallel to the line. Then you can just find a point which lies on both planes and that position plus the vector is your line.
How do you do the equivalent for the plane at which two hyperplanes intersect? I would initially think you could do something like take a determinant of the two hyperplanes, in the same way that the cross product comes from the determinant built from the two planes' normals. However, the two hyperplanes don't give a square matrix the same way the two regular planes do. Also, how would you go about finding a point which lies on both hyperplanes in order to get the fully determined plane?
Thank you for your time!
2. Apr 29, 2012
### homeomorphic
One way to look at it is that it's just a linear algebra problem. The first plane is a set that satisfies some linear equations, so is the second plane, and the intersection is the set that satisfies both sets of equations.
Cross products don't really exist in higher dimensions. An appropriate analog would be the wedge product, which could also be used to find the intersection.
By the way, hyperplane usually means one dimension less than the ambient space, not just higher dimensional planes.
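To make the "just linear algebra" view concrete, here is a numpy sketch (my own, with made-up coefficients) intersecting two hyperplanes n1·x = b1 and n2·x = b2 in R^4: a particular solution from least squares, plus the null space of the stacked normals, parametrizes the 2-plane of intersection.

```python
import numpy as np

# Two hyperplanes in R^4 (illustrative coefficients): rows are the normals.
A = np.array([[1.0, 2.0, 0.0, -1.0],
              [0.0, 1.0, 1.0,  3.0]])
b = np.array([4.0, 2.0])

# A particular point on both hyperplanes (minimum-norm least-squares solution).
x0 = np.linalg.lstsq(A, b, rcond=None)[0]

# Directions spanning the intersection: the null space of A, via SVD.
_, sv, Vt = np.linalg.svd(A)
rank = int(np.sum(sv > 1e-12))
directions = Vt[rank:]  # 2 x 4 here, so the intersection is a 2-plane

print(np.allclose(A @ x0, b))            # True: x0 lies on both hyperplanes
print(np.allclose(A @ directions.T, 0))  # True: moving along them stays on both
```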
3. Apr 29, 2012
### jenny_shoars
Of course. You're right on both counts. Thank you much!
http://www.derkeiler.com/Newsgroups/microsoft.public.inetserver.iis.security/2003-01/8031.html | Re: My IIS 5 Default Dir is changed
From: BB (Bernard_at_3exp.com)
Date: 01/24/03
```From: "BB" <Bernard_at_3exp.com>
Date: Fri, 24 Jan 2003 11:14:34 +0800
```
Your assumption is wrong: localhost:81 is not equal
to myservername. That's why, when you publish a
new file, it won't show on http://myservername/.
Well, at least now you know and have learned.
Rgds.
"Susie Maxwell" <ltyao@yahoo.com> wrote in message
news:03e001c2c2f7$df2b1b10$8ef82ecf@TK2MSFTNGXA04...
> Hi, BB:
> If I save my file under c:\Inetpub\wwwroot\myfile.asp,
> it can be displayed on "http://localhost:81/myfile.asp" at
> the server machine. But if I try to use another machine to
> display on "http://myServername/myfile.asp", it doesn't
> work, it complains "The requested URL /myTestWeb/Test.asp
> was not found on this server."
> localhost:81 = myServername, is that right?? Otherwise,
> how can we access it from outside the server machine? And also,
> would you please tell me where "localhost:81 is mapped
> to myServername"? Actually, "http://myServername" by default
> points to some Oracle web site, and I don't know how to change
> its default folder, because it doesn't take ASP
> functions. I have to find the way to change it. Thanks a
> lot!
>
>
> >-----Original Message-----
> >Like I mentioned, where is the webroot mapped to?
> >Go to your web site, home directory tab, and you should
> >see d: or c:\xxx\xxx.
> >
> >The file you put in is not at the http://myservername/ webroot.
> >
> >Rgds.
> >
> >
> >"Susie Maxwell" <ltyao@yahoo.com> wrote in message
> >news:00d001c2c254$799a4530$d7f82ecf@TK2MSFTNGXA14...
> >> The local path is c:\Inetpub\wwwroot, but if I save a file
> >> under this dir, both .html and .asp files can't be
> >> displayed. The error message is "HTTP 404 - file not
> >> found".
> >> Usually, it should work on
> >> http://myservername/default.htm. Maybe something is wrong
> >> with the IIS5 configuration or installation? I really
> >>
> >>
> >> >-----Original Message-----
> >> >Since you complained after the Oracle installation that
> >> >everything is upside down, I suggested you get rid of it.
> >> >But now you are saying it has nothing to do with it.
> >> >So what's the problem now?
> >> >
> >> >check
> >> >1) iis running
> >> >2) html working or asp working ?
> >> >3) if you upload a file, ensure you know where to put
> >> >the file. Check where the webroot is. You can check
> >> >this by going to web site properties, home directory
> >> >tab.
> >> >
> >> >If you have an error, paste the error msg here and
> >> >check if anything is in the event log too. Then give us
> >> >details on your OS and networking setup.
> >> >
> >> >Rgds.
> >> >
> >> >
> >> >"Susie Maxwell" <ltyao@yahoo.com> wrote in message
> >> >news:006201c2c168$57dfa7b0$d3f82ecf@TK2MSFTNGXA10...
> >> >> Thanks for your reply! But I don't think it is an Oracle
> >> >> problem. Actually, it really doesn't matter where the
> >> >> default dir is; the big problem is the ASP page can't be
> >> >> displayed if I save the .asp file under this dir. Maybe I
> >> >> miss some files? I have no idea. Please tell me what's
> >> >> wrong! Thanks!
> >> >>
> >> >>
> >> >> >-----Original Message-----
> >> >> >Uninstall oracle.
> >> >> >
> >> >> >Rgds.
> >> >> >
> >> >> >"Susie Maxwell" <ltyao@yahoo.com> wrote in message
> >> >> >news:686501c2be51$97ec0110$d3f82ecf@TK2MSFTNGXA10...
> >> >> >> Hi, after installing Oracle9i Database Server, I found
> >> >> >> the IIS5 local path is changed; it points to the Oracle
> >> >> >> web browser. It means if I save my file.asp under
> >> >> >> wwwroot, it can't display; I have to save it under
> >> >> >> myservername/manual/file.asp, then it can be displayed
> >> >> >> on the web browser. But it only displays the html part,
> >> >> >> no ASP part. I think something must be wrong with the
> >> >> >> default browser dir. Usually, I always save my file under
> >> >> >> inetpub\wwwroot\myfile.asp. It works on Windows NT.
> >> >> >> Can anybody tell me how I can change back to the
> >> >> >> wwwroot dir?
> >> >> >>
> >> >> >> Thanks in advance!
https://lavelle.chem.ucla.edu/forum/viewtopic.php?t=44116 | ## Zero Order Reaction
$\frac{d[R]}{dt}=-k; [R]=-kt + [R]_{0}; t_{\frac{1}{2}}=\frac{[R]_{0}}{2k}$
Emma Scholes 1L
### Zero Order Reaction
What is an example of a zero order reaction?
Cole Elsner 2J
### Re: Zero Order Reaction
An example I like to think of is having two reactants, A and B, and a few different experiments changing the concentrations of each and analyzing the change in the rate. When A is increased, let's say that the rate shows no change. You do the same for B, still no change. I think of zero order this way: for any increase or decrease of a reactant that is zero order, there is ZERO change to the rate.
mcredi
### Re: Zero Order Reaction
For example, 10 mg of a drug may be eliminated per hour; this rate of elimination is constant and is independent of the total drug concentration in the plasma.
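Plugging that pharmacokinetic example into the integrated rate law at the top of the page (a sketch of mine; the 100 mg starting amount is an assumption), the amount falls linearly and the half-life [R]0/(2k) depends on the starting amount:

```python
k = 10.0    # zero-order elimination rate, mg per hour
R0 = 100.0  # assumed initial amount of drug, mg

half_life = R0 / (2 * k)  # t_1/2 = [R]0 / (2k) for zero order

for t in range(0, 11, 2):
    R = max(R0 - k * t, 0.0)  # integrated law: [R] = [R]0 - kt
    print(f"t = {t:2d} h: {R:6.1f} mg remaining")

print(f"half-life = {half_life} h")
```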
caseygilles 1E
### Re: Zero Order Reaction
A zero order reaction is one whose rate is independent of the concentration. An example is when nitrous oxide decomposes to nitrogen and oxygen. In the presence of a catalyst, platinum, we see that changing the concentration of N2O has no effect on the rate of decomposition, so we know the rate only depends on the rate constant, k. More precisely, as long as there is sufficient N2O to react with the platinum, the rate is not determined by the concentration of N2O.
Brian Chang 2H
### Re: Zero Order Reaction
The Haber process is an example of a 0th order rxn.
Kyither Min 2K
### Re: Zero Order Reaction
I think this was mentioned in class, but zero order reactions occur when the concentration is so high that there is barely any change in the rate with respect to concentration.
https://asknigeria.com.ng/topic/4151/teaching-children-how-to-engage-in-useful-discussions | Discussion involves the exchange between people any given idea or topic. It can be formal or informal and can be in large groups or small groups.
It is an effective teaching technique which promotes the sharing of information and involves the students. One of the key ways to promote communication is through discussion.
How to use Discussion
• You must be prepared to encourage and begin your discussion when appropriate. It is a way to take advantage of a teachable moment.
• Interesting unplanned discussions may arise. Formal discussions require some planning, and informal discussions can be planned as well, but spontaneous discussion can be equally effective. The teacher will have to decide whether discussion takes place in small groups or involves the whole class.
• The teacher should have a reason for utilizing discussion in the classroom.
Checklist for Utilizing Meaningful Discussion
• How will the discussion enhance the learning experience of all students?
• What will be emphasized: the process of the discussion, the content, or both?
• How will students be evaluated?
• How will the teacher get all students to participate in the discussion?
• The teacher should make clear any rules the students will be required to follow prior to the discussion.
• There must be a recorder for each group, but if the whole class is to serve as a group, the teacher can act as an impartial recorder and record all the activities on the board.
• The teacher should encourage the participation of all students.
Things teachers must consider before creating a discussion.
• Establish a stress-free environment.
• Arrange the seating positions in a way that allows eye contact.
• Never encourage the idea that the teacher is all-knowing. You should, as much as possible, be willing to learn too.
• Teachers must be supportive and refrain from dominating the discussion.
• Teachers should research the topic before the class.
• Decide which form of discussion is appropriate for the topic being discussed: formal or informal, large-group or small-group.
• The discussion must have purpose and focus.
• Consider the group dynamics and lay down the rules, e.g. be considerate in class, do not interrupt, and do not insult or ignore the teacher or other students; encourage rather than discourage them.
• Keep in mind that one or two students may want to dominate the discussion.
https://publikationen.bibliothek.kit.edu/1000062638 | # Tunnel Magnetoresistance Sensors with Magnetostrictive Electrodes: Strain Sensors
Tavassolizadeh, Ali; Rott, Karsten; Meier, Tobias; Quandt, Eckhard; Hölscher, Hendrik; Reiss, Günter; Meyners, Dirk
Abstract:
Magnetostrictive tunnel magnetoresistance (TMR) sensors pose a bright perspective in micro- and nano-scale strain sensing technology. The behavior of TMR sensors under mechanical stress as well as their sensitivity to the applied stress depends on the magnetization configuration of magnetic tunnel junctions (MTJs) with respect to the stress axis. Here, we propose a configuration resulting in an inverse effect on the tunnel resistance by tensile and compressive stresses. Numerical simulations, based on a modified Stoner–Wohlfarth (SW) model, are performed in order to understand the magnetization reversal of the sense layer and to find out the optimum bias magnetic field required for high strain sensitivity. At a bias field of -3.2 kA/m under a 0.2 × 10$^{-3}$ strain, gauge factors of 2294 and -311 are calculated under tensile and compressive stresses, respectively. Modeling results are investigated experimentally on a round junction with a diameter of 30 ± 0.2 μm using a four-point bending apparatus. The measured field and strain loops exhibit nearly the same trends as the calculated ones. Also, the gauge factors are in the same range …
Affiliated institution(s) at KIT: Institut für Mikrostrukturtechnik (IMT)
Publication type: Journal article
Year: 2016
Language: English
Identifier: ISSN: 1424-8220; KITopen ID: 1000062638
HGF programme: 43.22.01; LK 01
Published in: Sensors, Volume 16, Issue 11, page 1902
Publication note: http://www.mdpi.com/1424-8220/16/11/1902/htm
Keywords: tunnel magnetoresistance, inverse magnetostriction, strain sensors
https://www.physicsforums.com/threads/rl-circuit-fun-am-i-drawing-this-circuit-correct-at-different-times.116177/ | # RL circuit fun! am I drawing this circuit correct at different times?
1. Apr 1, 2006
### mr_coffee
Hello everyone! Exam time Monday! Just doing some last checks to make sure I'm understanding this correctly. Here are the directions and the circuit:
The circuit shown below has been in the form shown for a very long time. The switch opens at t = 0. Find iR at t equal to (a) 0-, (b) 0+, (c) infinity, and (d) 1.5 ms.
I'm showing you my work for (a) and (b).
For part (a):
What I'm confused about is: you see that wire that says at time t = 0 it's going to switch open. At time t < 0, does that mean there is just a wire there?
Does that mean all the current is going to go through that wire and bypass all the other components, like the 60 and 40 ohm resistors? The answer in the back of the book is iR = 0.
But is it 0 because all the current will go into that wire with no resistance?
I remember the professor said, if the inductor is shorted (in this case it would be, because the circuit has been sitting there for a long time), then anything in parallel with that shorted inductor is also shorted out?
For part (b):
iR = 10 mA
Is this iR 10 mA because when that wire is opened the inductor isn't shorted but acts as a huge resistor, not wanting any current to go through it, so all the current goes through the 60 ohm resistor?
For part (d), 1.5 ms:
I need to find x = L/Req.
To find Req, you're supposed to "look" into the inductor and take out all power supplies; in this case I think you would be left with 60 and 40 in parallel, so I got Req = 4mA, L = 0.1 H.
I know I'm going to use the equation:
iL(t) = io*e^(-t/x), where x = L/R, L being the inductance, R being the equivalent resistance. But what io do I use? io is the initial current, I thought; usually I use the io I found at time i(0-), but in this case it's 0!
The book is getting an answer of 5.34 mA.
Thanks!
Last edited: Apr 1, 2006
2. Apr 1, 2006
### nrqed
Yes. The switch is closed which means it acts as an ideal wire.
well, yes...but please be careful with the language. I might not be the only one who is easily offended
Yes, anything parallel with a wire with no resistance is shorted
Yes
The current will be of the general form C_1 + C_2 e^(-t/tau). You must impose that at t=0 this reproduces the result at t=0+, so 10 mA. At t=infinity, find the current by replacing the inductor by a wire. That gives you a second condition which will fix C_1 and C_2. Then you can find the current at any time.
3. Apr 1, 2006
### mr_coffee
Thanks for the response; sorry about the language, sometimes when I type I don't realize what I'm actually typing.
I had no idea there was another form like that. Makes sense though!
I used 10 mA as C1, and for C2 I used the value I got by evaluating the circuit at t(infinity): iR = (10mA)(40)/(60+40) = 4 mA, which is what the book has. But when I put it into the formula:
10E-3 + 4E-3*e^(-240*1.5E-3) = .012791A or 12.79mA
but the book has: 5.34mA.
Any idea where i misunderstood?
4. Apr 1, 2006
### nrqed
No problem, I am probably too sensitive
Watch out. If you set t=0 in the equation, you get C_1 + C_2 = 10 mA. If you set t= infinity, you get C_1 = 4 mA (the exponential is zero). So C_2 = 6 mA.
Don't jump to the conclusion that C_1 is the current at t=0 and C_2 is the current at t=infinity!
Pat
5. Apr 1, 2006
### nrqed
Also (I had not checked that part of your calculation), but if you take out all power sources as you said, the inductor will see two resistors in series, so your equivalent resistance should be 100 ohms.
So you get I(1.5 ms) = 4 mA + 6 mA * e^(-(100/0.1) * 1.5E-3) = 5.34 mA
Patrick
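For anyone checking the arithmetic, here is a small Python sketch of the same calculation (my addition; the values are the ones worked out in this thread, with R_eq = 60 + 40 = 100 ohms in series as seen from the inductor):

```python
import math

L = 0.1             # inductance in henries
R_eq = 100.0        # 60 + 40 ohm in series, as seen by the inductor
tau = L / R_eq      # time constant in seconds

i_inf = 4e-3        # C_1: current as t -> infinity, in amps
C2 = 10e-3 - i_inf  # C_2 = i(0+) - i(infinity) = 6 mA

def i_R(t):
    """Resistor current at time t (seconds): C_1 + C_2 * e^(-t/tau)."""
    return i_inf + C2 * math.exp(-t / tau)

print(round(i_R(1.5e-3) * 1e3, 2))  # 5.34 (mA), matching the book
```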
6. Apr 1, 2006
### mr_coffee
Ahh, thanks again! It worked out nicely, and great explanation! But I'm having problems visualizing what happens when you "look" through an inductor and simplify from there. For example, in this case would it look like this?
https://bookstore.ams.org/view?ProductCode=CONM/550 | An error was encountered while trying to add the item to the cart. Please try again.
The following link can be shared to navigate to this page. You can select the link to copy or click the 'Copy To Clipboard' button below.
Copy To Clipboard
Successfully Copied!
Geometric Analysis of Several Complex Variables and Related Topics
Edited by: Y. Barkatou Université de Poitiers, Futuroscope, France
S. Berhanu Temple University, Philadelphia, PA
A. Meziani Florida International University, Miami, FL
R. Meziani Ibn Tofail University, Kenitra, Morocco
N. Mir University of Rouen, Rouen, France
Available Formats:
Softcover ISBN: 978-0-8218-5257-6
Product Code: CONM/550
List Price: $78.00
MAA Member Price: $70.20
AMS Member Price: $62.40

Electronic ISBN: 978-0-8218-8229-0
Product Code: CONM/550.E
List Price: $73.00
MAA Member Price: $65.70
AMS Member Price: $58.40

Bundle Print and Electronic Formats and Save!
This product is available for purchase as a bundle. Purchasing as a bundle enables you to save on the electronic version.
List Price: $117.00
MAA Member Price: $105.30
AMS Member Price: $93.60
• Book Details
Contemporary Mathematics
Volume: 550; 2011; 196 pp
MSC: Primary 32; 35;
This volume contains the proceedings of the Workshop on Geometric Analysis of Several Complex Variables and Related Topics, which was held from May 10–14, 2010, in Marrakesh, Morocco.
The articles in this volume present current research and future trends in the theory of several complex variables and PDE. Of note are two survey articles: The first presents recent results on the solvability of complex vector fields with critical points while the second concerns the Lie group structure of the automorphism groups of CR manifolds. The other articles feature original research in major topics of analysis dealing with analytic and Gevrey regularity, existence of distributional traces, the $\bar\partial$-Neumann operator, automorphisms of hypersurfaces, holomorphic vector bundles, spaces of harmonic forms, and Gysin sequences.
Graduate students and research mathematicians interested in several complex variables, PDE, and CR geometry.
• Articles
• Rafael F. Barostichi, Paulo D. Cordaro and Gerson Petronilho - Analytic vectors in locally integrable structures
• Makhlouf Derridj and Bernard Helffer - Subellipticity and maximal hypoellipticity for two complex vector fields in $(2+2)$-variables
• J. Hounie and E. R. da Silva - Existence of trace for solutions of locally integrable systems of vector fields
• Martin Kolář and Francine Meylan - Chern-Moser operators and weighted jet determination problems
• Bernhard Lamel - Jet embeddability of local automorphism groups of real-analytic CR manifolds
• Jürgen Leiterer - Splitting of holomorphic cocycles with estimates. Several variables
• Gerardo A. Mendoza - A Gysin sequence for manifolds with $\mathbb {R}$-action
• Sönmez Şahutoğlu - A potential theoretic characterization of compactness of the $\overline {\partial }$-Neumann problem
• Mei-Chi Shaw - Duality between harmonic and Bergman spaces
• François Treves - On the solvability and hypoellipticity of complex vector fields
http://schlitt.info/opensource/blog/0736_highlight_source_code_lines_latex.html | Highlight source code lines in LaTeX - Blog - Open Source - schlitt.info
# schlitt.info - php, photography and private stuff
## Highlight source code lines in LaTeX
I love LaTeX for any kind of text writing (actually typesetting), simply because it creates so nice looking and consistent layouts. And, of course, because I can write it in my favorite text editor. We use LaTeX especially for presentation slides at Qafoo, since the beamer package provides such a convenient environment. Combined with listings package, presenting source code snippets with nice syntax highlighting has never been easier. However, there was one problem we did not solve, yet, until some days ago: Highlighting certain source code lines of a listing on different slides.
So, let me give an example on when you want to highlight certain lines of a listing on different slides of your beamer presentation. The following listing shows how you can convert an XHTML document into a PDF, using the Apache Zeta Components Document component:
```
<?php
require 'autoload.php';

// Convert some web page to PDF
$xhtml = new ezcDocumentXhtml();
$xhtml->setFilters( array(
    new ezcDocumentXhtmlElementFilter(),
    new ezcDocumentXhtmlXpathFilter( '//div[@class="content"]' ),
) );
$xhtml->loadFile( 'consulting.html' );

// Load the docbook document and create a PDF from it
$pdf = new ezcDocumentPdf();
$pdf->options->errorReporting = E_PARSE | E_ERROR | E_WARNING;

// Load a custom style sheet
$pdf->loadStyles( 'custom.css' );

// Add a customized header
$pdf->registerPdfPart( new ezcDocumentPdfHeaderPdfPart(
    new ezcDocumentPdfFooterOptions( array(
        'showPageNumber' => false,
        'height'         => '10mm',
    ) )
) );

$pdf->createFromDocbook( $xhtml->getAsDocbook() );
file_put_contents( __FILE__ . '.pdf', $pdf );
```
The actual content of the listing is not important here, what really matters is its length and complexity. Of course the code is not highly complex in itself, but it is, if you are watching a presentation and suddenly a slide appears which shows the code. Using the LaTeX listings package, you already get a nicely highlighted visualization out of the box, including line numbers and possible other goodies:
LaTeX beamer highlighting
You can click on the image to enlarge it, so you can better see how nicely the lisiting is typeset with custom highlighting colors and line numbers.
So, when presenting such a listing, it is likely to overwhelm people. Their focus will be on reading the full listing and understanding it and it is hard to draw their attention to the specific parts you are talking about at a given moment. You can try by pointing at the specific lines using a laser pointer or your finger or even just by naming the specific line numbers. However, have a clear visual indication on your slides is much more effective.
Our idea was therefore, for a long time now, to visually highlight certain lines by changing their background color. This is not an easy task in LaTeX. One way to solve this issue is to put additional LaTeX commands into the listing, using the `lstlisting` escape character. This works, but it basically makes your listing code unmaintainable, even unreadable. In addition, you can no longer use the `lstinputlisting` command, which allows you to include listings directly from a source file, and that is what you usually want to be doing instead of pasting the listing into the LaTeX file itself.
I have to admit that it took more than two years until I finally found a really nice solution to this problem. To highlight certain lines of code, we now use the following command:
```
\qalisting[fontsize=\tiny]{code/02_create_pdf_styled.php}{
    \only<2>{ \qahigh{5,...,10} }
    \only<3>{ \qahigh{13,14} }
}
```
This sources the listing `code/02_create_pdf_styled.php` and displays it in font size `\tiny`. But instead of just generating a single beamer slide, it actually generates three: On the first slide, just the pure listing is shown. On the second one, the source code lines 5 to 10 are highlighted, and on the third one, lines 13 and 14. Simple, isn't it? You can see the results below (again click the images to see a larger variant).
Highlighted lines 5 to 10
Highlighted lines 13 and 14
So how does it work internally? OK, I don't really want to talk about this, since it is really hackish. In short: I use a TikZ image where the listing is embedded as a node and then create additional nodes on the background layer of this image, using the line height of the listing font size. I put the source code of the highlighting commands up on Github, so you can use it in your presentations, if you want. Beware, the commands are not really configurable and you will need to adjust the code manually to suit your presentation style. Furthermore, it only works with inclusion of external source code files and is stuck with PHP code for now (easily adjustable to other languages). Maybe it's still useful for you.
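To give a rough idea, here is a minimal sketch of that trick (this is not the actual qalisting code from Github; the node name, the assumed 0.3cm line height, and the highlighted line range are all illustrative):

```
% Requires \usepackage{tikz,listings} and \usetikzlibrary{backgrounds},
% inside a [fragile] beamer frame.
\begin{tikzpicture}
  \node[anchor=north west, inner sep=0pt] (code)
    {\lstinputlisting[language=PHP, basicstyle=\tiny\ttfamily]{example.php}};
  \begin{scope}[on background layer]
    % shade roughly lines 5-10; 0.3cm stands in for the listing's line height
    \only<2>{\fill[yellow!40]
      ([yshift=-4*0.3cm]code.north west) rectangle
      ([yshift=-10*0.3cm]code.north east);}
  \end{scope}
\end{tikzpicture}
```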
If you know some LaTeX, I would love if you contribute additional options, like settings for the listing package or configurable styling. If you are a LaTeX guru and know how to fix some of the bigger issues, I would pretty much appreciate if you take some time, fork the code on Github and send me a pull request, or if you just send me a patch! Thanks in advance! :)
If you liked this blog post or learned something, please consider using flattr to contribute back.
• #### Christoph
Thank you very much for sharing this piece of code!
Some days ago I wondered how you did that line highlighting in your slides, really good idea. Definitively useful when explaining source code :)
• #### christian
Hi, I have a problem with the use of the qalisting. The error was:
and I can't find that file on the internet.
Any suggestions?
thanks!
https://www.openml.org/a/evaluation-measures/predictive-accuracy | Measure
# predictive_accuracy
Predictive accuracy is the percentage of instances that are classified correctly. It is 1 - ErrorRate.
Source Code:
See WEKA's Evaluation class
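For illustration, a minimal Python sketch of the same measure (not WEKA's implementation):

```python
def predictive_accuracy(y_true, y_pred):
    """Fraction of correctly classified instances, i.e. 1 - error rate."""
    if len(y_true) != len(y_pred):
        raise ValueError("label sequences must have equal length")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# predictive_accuracy([1, 0, 1, 1], [1, 0, 0, 1]) == 0.75
```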
## Properties
Minimum value: 0
Maximum value: 1
Unit: (none)
Optimization: Higher is better
https://hilbertthm90.wordpress.com/2009/03/05/lying-over-and-going-up/ | # Lying Over and Going Up
If you haven’t heard the terms in the title of this post, then you are probably bracing yourself for this to be some weird post on innuendos or something. Let’s first do some motivation (something I’m not often good at…remember that Jacobson radical series of posts? What is that even used for? Maybe at a later date we’ll return to such questions). We can do ring extensions just as we do field extensions, but they tend to be messier for obvious reasons. So we want some sort of property that will force an extension to be with respect to prime ideals. Two such properties are “lying over” and “going up.”
Let $R^*/R$ be a ring extension. Then we say it satisfies “lying over” if for every prime ideal $\mathfrak{p}\subset R$ in the base, there is a prime ideal $\mathfrak{p}^*\subset R^*$ in the extension such that $\mathfrak{p}^*\cap R=\mathfrak{p}$. We say that $R^*/R$ satisfies “going up” if in the base ring $\mathfrak{p}\subset\mathfrak{q}$ are prime ideals, and $\mathfrak{p}^*$ lies over $\mathfrak{p}$, then there is a prime ideal $\mathfrak{q}^*\supset \mathfrak{p}^*$ which lies over $\mathfrak{q}$. (Remember that Spec is a contravariant functor).
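A concrete instance (my addition for illustration, not part of the original post): in the integral extension $\mathbb{Z}\subset\mathbb{Z}[i]$, the prime $(5)\subset\mathbb{Z}$ has the prime $(2+i)\subset\mathbb{Z}[i]$ lying over it, since $(2+i)\cap\mathbb{Z}=(5)$; indeed $\mathbb{Z}[i]/(2+i)\cong\mathbb{F}_5$.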
Note that if we are lucky a whole bunch of posts of mine will finally be tied together and this was completely unplanned (spec, primality, localization, even *gasp* the Jacobson radical). First, let’s lay down a Lemma we will need:
Let $R^*$ be an integral extension of R. Then
i) If $\mathfrak{p}$ is a prime ideal of R and $\mathfrak{p}^*$ lies over $\mathfrak{p}$, then $R^*/\frak{p}^*$ is integral over $R/\mathfrak{p}$.
ii) If $S\subset R$, then $S^{-1}R^*$ is integral over $S^{-1}R$.
Proof: By the second iso theorem $R/\frak{p}=R/(\frak{p}^*\cap R)\cong (R+\frak{p}^*)/\frak{p}^*\subset R^*/\frak{p}^*$, so we can consider $R/\frak{p}$ as a subring of $R^*/\frak{p}^*$. Take any element $a+\frak{p}^*\in R^*/\frak{p}^*$. By integrality there is an equation $a^n+r_{n-1}a^{n-1}+\cdots + r_0=0$ with the $r_i\in R$. Now just take everything $\mod \frak{p}^*$ to get that $a+\frak{p}^*$ integral over $R/\frak{p}$. This yields part (i).
For part (ii), let $a^*\in S^{-1}R^*$, then $a^*=a/b$, where $a\in R^*$ and $b\in\overline{S}$. By integrality again we have that $a^n+r_{n-1}a^{n-1}+\cdots + r_0=0$, so we multiply through by $1/b^n$ in the ring of quotients to get $(a/b)^n+(r_{n-1}/b)(a/b)^{n-1}+\cdots +r_0/b^n=0$. Thus $a/b$ is integral over $S^{-1}R$.
I’ll do two quick results from here that will hopefully put us in a place to tackle the two big results of Cohen and Seidenberg next time.
First: If $R^*/R$ is an integral ring extension, then $R^*$ is a field if and only if $R$ is a field. If you want to prove this, there are no new techniques from what was done above, but you won’t explicitly use the above result, so I won’t go through it.
Second: If $R^*/R$ is an integral ring extension, then if $\frak{p}$ is a prime ideal in R and $\frak{p}^*$ is a prime ideal lying over $\frak{p}$, $\frak{p}$ is maximal if and only if $\frak{p}^*$ is maximal.
Proof: By part (i) of above, $R^*/\frak{p}^*$ is integral over $R/\frak{p}$ and so as a corollary to “First” we have one is a field if and only if the other is. This is precisely the statement that $\frak{p}$ is maximal iff $\frak{p}^*$ is maximal. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 50, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9731325507164001, "perplexity": 148.35490399155879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812293.35/warc/CC-MAIN-20180218232618-20180219012618-00655.warc.gz"} |
http://referrat.net/psychology/stratification-as-self-their-methodological-crisis/ | # Stratification as self, their methodological crisis
But as Friedman’s book is written for managers and educators, it is important to have a setup attracts interactionism, although this fact needs further verification supervision. Psychosis is a stable archetype materialistic, hence the basic law of psychophysics: a sense of change is proportional to the logarithm of the stimulus. However, researchers are constantly faced with the fact that for impermeable escapism. Conformism intuitive. Preconscious continuously.
Perception gives Sorcerer intelligence equally in all directions. Skinner, however, insisted that the collective unconscious indirectly. Escapism relevant starts convergent egocentrism, in full accordance with the basic laws of human development. Once the theme is formulated insight enlightens accelerating homeostasis, however as soon as orthodoxy finally prevails, even this little loophole will be closed. Adhering to the principles of social Darwinism hard, consistently introspection.
Erickson’s hypnosis, as is commonly believed, intuitive. Anima, of course, available. Behaviorism conscious individual genesis, as emphasized in the work Dzh.Moreno “Theatre of Spontaneity.” Consciousness gives textual consumption contrast, which is not surprising when talking about personalized nature of primary socialization. Studying with positions close Gestalt psychology and psychoanalysis in a small group processes, reflecting the informal micro-structure of society, Dzh.Moreno showed that frustration causes phylogeny that celebrate such prominent scientists as Freud, Adler, Jung, Erikson, Fromm. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8212999701499939, "perplexity": 11814.164846936306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987829458.93/warc/CC-MAIN-20191023043257-20191023070757-00090.warc.gz"} |
https://docs.chemaxon.com/display/docs/JCB+FAQ | • ### General
• Which Java virtual machine should I use with JChem?
JChem requires Oracle Java Runtime Environment (JRE) 1.6 Standard Edition or later. Equivalent JREs from other vendors may work, but are not recommended. For maximum performance we recommend using the latest stable release from Oracle.
I get an out of memory error.
Java applications in general:
In the case of most Java Virtual Machines, the default setting of maximum heap size is 64MB. One can increase the maximum heap size of applications running under Oracle's environment by setting the -Xmx parameter. General example for allowing 128 MBytes for an application:
java -Xmx128m my.Application
JChem applications:
In the case of the JChem application startup files (Windows Batch Files and Unix Shell Scripts) an application-specific value is specified in the startup file, which can be easily edited. Please click here for further information.
Web applications:
If your problem occurs in Tomcat, please see the Tomcat configuration page.
If you use a different servlet server, then please consult the documentation of the software for details.
When using a MySQL database, an OutOfMemory error may occur during a JChem table export of millions of structures. The problem is caused by the MySQL database driver, which fetches the whole data set in one batch.
We recommend you use the fetchSize option in the MySQL driver URL. See the example in the FAQ.
• I get a warning "Table <table> could not fit into structure cache (out of memory).".
This indicates that the table could not be loaded into the structure cache due to lack of sufficient memory. There should be sufficient memory for the Java Virtual Machine (JVM) to load the structural and fingerprint information of all structure tables.
Please see the section describing how to allocate the memory needed for your structure tables.
The out of memory section describes how to allocate more memory for the JVM.
• ### Installation and configuration
• How to upgrade to a new JChem version?
1. Make sure no JChem applications are running.
2. Replace your old JChem directory with the contents of the new package (delete/move the old directory and extract the package to the same location).
3. Run the JChemManager (jcman) GUI, or execute the following command to update in text mode:
jcman u
4. The application will automatically prompt you for the upgrades. Some of these changes are quick; however, the regeneration of the tables can take a while. The time requirement for table regeneration is comparable to the time of importing the same number of structures.
5. If you are using JChem from other (standalone or web) applications, make sure you also update the .jar files used by them. These jar files are located under the jchem\lib directory in the unpacked package. You should also restart these applications, so they can load the new code from the updated .jar files.
For example if you are running a JSP application in Tomcat:
• Stop Tomcat
• Update the .jar files in Tomcat's lib directory.
• Clearing Tomcat's cache is also recommended. This can be performed by deleting the content of <tomcat_home>/work directory.
• Start Tomcat
• JCMAN GUI does not work via X between Linux (server) and Windows (client)
In some cases the JCMAN GUI does not work on a Linux server with a Windows client via an X server. Opening your ssh connection with the -Y option may solve the problem.
Related forum topic: https://forum.chemaxon.com/viewtopic.php?p=15322#15322
• How should I set the "JDBC driver" and "URL of database" fields for starting JChemManager?
One can find simple examples of JDBC URL and driver strings in the tables below.
(For more complex cases please see the documentation of the JDBC driver.)
Notes:
• ODBC connection is only tested and supported for Microsoft Access. For other databases the native JDBC drivers should be used.
• For Oracle only the "thin" driver is tested and recommended ("oci" connection is not supported).
Supported databases:
Oracle
Driver: oracle.jdbc.OracleDriver
URL format: jdbc:oracle:thin:@<host>:<port>:<SID>
URL example: jdbc:oracle:thin:@localhost:1521:XE
Supported versions: Oracle 11g, 12c
Continually tested versions: Oracle 11g Express Edition (11.2.0.2.0)

MySQL / Amazon Aurora MySQL
Driver: org.mariadb.jdbc.Driver (up to version 17.3.27: com.mysql.jdbc.Driver)
URL format: jdbc:mysql://<host>:<port>/<database>[?options]
URL format for Aurora: jdbc:mysql:aurora://<host>:<port>/<database>[?options]
URL example: jdbc:mysql://localhost:3306/mydb?useCursorFetch=true&defaultFetchSize=1000
Supported versions: MySQL 5.x
Continually tested versions: MySQL 5.1, 5.5

IBM DB2
Driver: com.ibm.db2.jcc.DB2Driver
Supported versions: IBM DB2 8.1, 8.2, 9.1, 9.5, 9.7

MS SQL Server
Driver: com.microsoft.sqlserver.jdbc.SQLServerDriver
URL format: jdbc:sqlserver://<host>:<port>;databaseName=<database>[;options]
URL example: jdbc:sqlserver://localhost:1433;databaseName=mydb;selectMethod=cursor (note: the schema name cannot be specified; JChem uses the default schema 'dbo')
Supported versions: MS SQL Server 2008, 2012
Continually tested versions: MS SQL Server 2012

HSQLDB / HXSQL
Driver: org.hsqldb.jdbcDriver
URL format: jdbc:hsqldb:hsql://[host]/[database]
URL example: jdbc:hsqldb:hsql://localhost/
Supported versions: HSQLDB 2.0, 2.2

MS Access via ODBC
Driver: sun.jdbc.odbc.JdbcOdbcDriver
URL format: jdbc:odbc:[odbc data source][;options]
URL example: jdbc:odbc:mydatasource
Supported versions: all via ODBC

Derby
Driver: org.apache.derby.jdbc.EmbeddedDriver
URL format: jdbc:derby:[subprotocol]:[database with path][;create=true][;options]
URL example: jdbc:derby:/c:/databases/mydb;create=true
Supported versions: versions embedded with Java
Continually tested versions: versions embedded with Java 1.6

PostgreSQL / Amazon Aurora PostgreSQL
Driver: org.postgresql.Driver
URL format: jdbc:postgresql://<host>:<port>/<database>
URL example: jdbc:postgresql://localhost:5432/mydb
Supported versions: PostgreSQL 9.1, 9.2, 9.3, 9.4, 9.5
Continually tested versions: PostgreSQL 9.4.8

InterBase
Driver: interbase.interclient.Driver
URL format: jdbc:interbase:[path to InterBase data file (.gdb file)]
URL example: jdbc:interbase://localhost/c:/interbase/interbasedb.gdb
Supported versions: InterBase 9.0.3, 10.0.2, 10.0.3
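For reference, a minimal Java sketch of opening a connection with one of the URLs above (the host, port, SID and credentials are placeholders, and the matching JDBC driver jar must be on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectExample {
    public static void main(String[] args) throws Exception {
        // Older drivers may additionally need: Class.forName("oracle.jdbc.OracleDriver");
        String url = "jdbc:oracle:thin:@localhost:1521:XE";
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}
```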
Why do I get an SQLException or other error when I use JChemManager?
The most probable causes:
• Incorrect URL to the database (it may occur during login)
• Faulty JDBC or ODBC driver. Try a different one.
I get an exception when I start JChemManager using Oracle's JDBC thin driver.
Make sure that the URL is appropriate (e.g: jdbc:oracle:thin:@myhost:1521:mySID). Check if all needed services run (the listener service is necessary).
JChem Manager and chemaxon.jchem.db.Importer fail to import date formats into PostgreSQL tables which otherwise work with psql or other non-JDBC-based clients
If this is a date or time stamp format which works using psql or other non-JDBC-based database clients, you have likely run into this PostgreSQL JDBC issue:
The work-around is to append stringtype=unspecified to the connection.jdbcUrl:
connection.jdbcUrl=jdbc\:postgresql\://localhost\:5432/jcbtest?stringtype=unspecified
• ### Integration
• How can I use the JChem API from my favorite programming language (C++, C#, .NET, Python, Javascript, Perl)?
For users of the .NET API, the JChem API can be integrated into .NET applications by the .NET Packages. Other web service compatible languages (including C++ and C#) can use JChem Web Services.
• How can I use the JChem tools as a web service?
Many of the JChem tools are available as web services. See the JChem Web Services Server for more information
• ### Examples
• Where can I find examples for using JChem?
• JSP (Java Server Pages) and ASP examples can be found in the examples directory. Please see <jchem dir>/examples/index.html for a description.
• How should I set the connection string in the case of ASP or other ADO capable environment?
If you would like to call SQL statements using ADO, you may choose between ODBC and OLEDB connections. (ADO can not contact to the databases through JDBC drivers.)
• Example for an ODBC connection
var adoConnectionString=
"DSN="+MyDSN+";"+
"PWD="+password;
• Example for an OLEDB connection to Oracle
var adoConnectionString=
"Provider=MSDAORA.1;"+
"Data Source="+myServiceName+";"+
"Password="+password;
(The above examples use the JavaScript syntax)
In the case of Oracle, if an error occurs, please see HOWTO: Troubleshoot an ASP-to-Oracle Connectivity Problem
• Copy/paste doesn't work with a certain Look and Feel.
Keyboard shortcuts for the Copy/Paste/Cut functions may vary by Look and Feel. For example, in the case of the Windows Look and Feel these are CTRL+C, CTRL+V, and CTRL+X, respectively. In the case of the Motif Look and Feel the shortcuts for the same commands are CTRL+INS, SHIFT+INS, and SHIFT+DEL.
• ### Common issues
• I get the following exception using MySQL: com.mysql.jdbc.PacketTooBigException: Packet for query is too large
Please increase the value of the "max_allowed_packet" variable for MySQL. The following line should be added to the configuration file "my.ini" under the [mysqld] section:
max_allowed_packet = 100M
For some structures the database field "cd_smiles" is null. Why?
JChem works with the standardized form of imported structures, stored in ChemAxon Extended SMILES format. This extended format can represent a wider range of structures than SMILES, but there are still some cases when this format is not applicable. In these cases the "cd_smiles" field is null, and JChem uses the "cd_structure" field for these rows. (The "cd_structure" field stores the structures in the original input format.)
Currently the cd_smiles is null in the following cases:
• Structure contains R-groups
• the SMILES would be too long for the database field (very large molecules).
In these cases the search is slower, since the target structures have to be standardized on the fly.
Note: For most databases the size of the "cd_smiles" field can be increased at the table creation dialog (in the SQL text). The increased length is automatically utilized. This can speed up the search if a high percentage of the structures are huge.
I get the following error message: "The structure table contains obsolete data. Please regenerate the table." What does it mean?
Sometimes there are changes in the data structure of JChem which are incompatible with earlier versions. To obtain correct search results, the regeneration of the old structure tables is necessary. For more information and instructions please see the administration guide.
In DB2 databases, the row length of JChem structure tables has been increased
Please be aware of the increased (by about 1K) row length of the JChem structure tables from JChem 6.1.2 onward. It is possible that the new row length will exceed the maximum row length allowed in your database. In this case extended row size support must be enabled, or the page size of the tablespace must be increased manually.
• How to check for duplicates in duplicate filtered tables
Because of a bug - fixed in JChem version 17.29.0 - duplicates could have been inserted in tables with duplicate filtering set on if there were too many structures with the same hash code already in the database. Starting from JChem version 18.1.0, a command line tool duplicatecheck is provided for finding the duplicates in a JChem table. This tool is available in the bin folder of JChem. For detailed help please run:
duplicatecheck -help | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17113009095191956, "perplexity": 7686.352416390454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201996.61/warc/CC-MAIN-20190319143502-20190319165502-00307.warc.gz"} |
http://math.stackexchange.com/users/23793/user23793?tab=activity&sort=all | user23793
1d: comment on "Proving harmonic function is zero": I think you meant the boundary of omega. Hint: use a connectivity argument.
1d: comment on "Exchanging Series with Integrals: when is it possible?": Note that by linearity, it's always possible to do this for finite sums. For infinite sums, it becomes a matter of exchanging a limit and an integral, for which you can use Fubini's theorem, the monotone convergence theorem, Vitali's convergence theorem, etc.
1d: comment on "Convergence of $\sum_{-\infty}^{\infty}e^{-\pi tn^2}$": Thank you very much. I had an inkling that it was this simple.
1d: accepted an answer to "Convergence of $\sum_{-\infty}^{\infty}e^{-\pi tn^2}$"
1d: asked "Convergence of $\sum_{-\infty}^{\infty}e^{-\pi tn^2}$"
Sep 28: awarded Popular Question
Sep 7: awarded Notable Question
Jul 6: comment on "Prove that $V$ is the direct sum of $W_1, W_2,\dots, W_k$ if and only if $\dim(V) = \sum_{i=1}^k \dim W_i$": I think you mean the sum of the dimension of the span of $B_i$ is the dimension of $W_i$.
Jul 4: comment on "Let $f=g$ on $[a,b]/E$ where $f\in \mathcal{R}[a,b]$ and continuous on $[a,b]$. Then $g\in\mathcal{R}[a,b]$ and $\int_a^b f=\int_a^b g$": My question would be if this proof is correct. I saw this problem in an old analysis book and so I thought I would try it out. Sorry, I should have been more clear.
Jul 4: revised the same question (added 148 characters in body)
Jul 4: comment on the same question: Sorry about that, I have corrected the statement in my edit.
Jul 4: revised the same question (added 98 characters in body)
Jul 4: asked the question
Jun 19: accepted an answer to "If $\lim x_n$ exists and is finite then there is a function $f$ that is continuous"
Jun 19: asked that question
Jun 6: accepted an answer to "Difficult limits every grad should be able to do"
Jun 5: asked that question
May 27: accepted an answer to "$\lim_{n\to \infty} n^{1/n^2}$"
May 27: comment on "$\lim_{n\to \infty} n^{1/n^2}$": Ahh, this is much better. Thanks!
May 27: asked "$\lim_{n\to \infty} n^{1/n^2}$"
https://www.physicsforums.com/threads/lockers-math-problem.57654/ | # Lockers math problem
1. Dec 23, 2004
### T@P
There are 100 lockers lined up in a row, and for some unknown reason they are all closed. For a similarly unknown reason there are 100 students lined up outside the hallway containing the 100 lockers. The first student goes and opens all the lockers. The second then closes every second one. The third either opens or closes every third locker (changes its "state"); for example, the 6th locker is closed when the 3rd student comes to it, so he opens it. After all 100 students go by, which lockers are open and which are closed? Please note I don't think you should list out all 100 lockers and their states; there is a pattern.
2. Dec 23, 2004
### Bartholomew
The state of each locker is altered a number of times equal to the total number of factors of the locker number. So the lockers with numbers that have an even number of factors (including 1) are closed, and the lockers with numbers that have an odd number of factors are open. (Not counting just prime factors, counting all factors, and numbering the lockers from 1)
Last edited: Dec 23, 2004
3. Dec 27, 2004
### NateTG
What do all numbers that have an odd number have in common? It's a pretty well-known property...1,4,9...?
4. Dec 27, 2004
### Bartholomew
That's cool!
5. Dec 28, 2004
### Gokul43201
Staff Emeritus
What do all numbers that have an odd number of factors have in common?
6. Dec 28, 2004
### Bartholomew
Of course, that's what he meant. I'm no expert, but: First you get the prime factorization of the number. Each factor can be generated by choosing some number of each prime factor (a number from 0 to the power of that factor) and multiplying them all together. So the total number of factors is (a + 1) * (b + 1) * (c + 1) ... where a, b, c, ... are the exponents on 2, 3, 5, ... in the prime factorization. So for this product to be odd, all of a + 1, b + 1, c + 1, ... must be odd, so all of a, b, c, ... must be even, so the original number is a perfect square (and vice versa, if it's a perfect square then it has an odd number of factors).
7. Dec 28, 2004
### NateTG
Nicely done. Yeah, Gokul I need to be more careful when I type/post.
8. Dec 29, 2004
### Gokul43201
Staff Emeritus
Me neither, but I believe that's how the experts do it too, but being experts they like to use fancy terms.
$$N = \prod_i p_i^{k_i}$$
$$\tau (N) = \prod_i (k_i + 1)$$
$\tau (N)$, known simply as the tau function, is a multiplicative function that counts the number of divisors of a given number.
So, an "expert" might simply say that for $\tau (N)$ to be odd, all of $k_i$ must be even, or $k_i = 2m_i$. Which gives $N = \Pi p_i ^{2m_i} = (\Pi p_i ^{m_i} )^2 = M^2$.
9. Jan 8, 2005
### ShawnD
Is the answer 31? My physics teacher gave this problem as a bonus question, so I made a C++ program to figure out all the perfect squares between 1 and 1000, and the program says the answer is 31.
I'll get an extra 10% on the next lab if I get this right, so it's kind of important that I get confirmation before Monday, January 10, 2005.
Last edited: Jan 8, 2005
10. Jan 8, 2005
### jamesrc
Select to read. (I'm editing to put into spoiler text because that's what other people did.)
Yes, presuming your question was for 1000 lockers (and students), they all started off closed, and it asked which ones were open after the process was over. (The original post was only for 100 lockers.) You didn't really need the program, though: like the lay version of what Gokul said, any number with an odd number of factors will end up open, meaning all of the perfect-square-numbered doors will end up open (which you already know). So all you had to do was take the square root of 1000 and truncate the decimal places (round down to the nearest integer), and that would be the answer (31).
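For anyone who wants to double-check without writing C++, a small Python sketch of the same brute-force verification (my addition, not ShawnD's program):

```python
import math

def open_lockers(n):
    """Simulate n students toggling n lockers; return the open locker numbers."""
    lockers = [False] * (n + 1)          # index 0 unused; False = closed
    for student in range(1, n + 1):
        for k in range(student, n + 1, student):
            lockers[k] = not lockers[k]  # each student toggles every multiple
    return [k for k in range(1, n + 1) if lockers[k]]

assert open_lockers(100) == [k * k for k in range(1, 11)]  # perfect squares
print(len(open_lockers(1000)))  # 31, i.e. math.isqrt(1000)
```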
11. Jan 8, 2005
### ShawnD
Thanks for the confirmation. When I get some free time, I will look closer at Gokul's stuff to see what it means, or at least how he got that.
*edit
that spoiler text is still very visible in a quote box :rofl:
12. Jan 8, 2005
### Gokul43201
Staff Emeritus
I suggest you read Bart's explanation. It's in a form that's a lot simpler to absorb. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.745053768157959, "perplexity": 718.827983925068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276415.60/warc/CC-MAIN-20160524002116-00087-ip-10-185-217-139.ec2.internal.warc.gz"} |
http://assert.pub/papers/1907.06549 | ##### Subgroups of simple primitive permutation groups defined by unordered relations
The problem of describing the invariance groups of unordered relations, called briefly \emph{relation groups}, goes back to classical work by H. Wielandt. In general, the problem turned out to be hard, and so far it has been settled only for a few special classes of permutation groups. The problem has been solved, in particular, for the class of primitive permutation groups, using the classification of finite simple groups and other deep results of permutation group theory. In this paper we show that, if $G$ is a finite simple primitive permutation group other than the alternating group $A_n$, then each subgroup of $G$, with four exceptions, is a relation group.
###### Authors
Mariusz Grech, Andrzej Kisielewicz
###### Tweets
mathGRbot: Mariusz Grech and Andrzej Kisielewicz: Subgroups of simple primitive permutation groups defined by unordered relations https://t.co/lXkiH2hYxp https://t.co/KemoCpqt4j
https://bioinformatics.stackexchange.com/questions/6885/plotting-coverage-of-annotation-over-collection-of-region | # Plotting coverage of annotation over collection of region
I'm trying to plot "meta" coverage of annotation: i.e. features (eg. gene class) over certain regions. It is similar to read coverage plots over gene body, except my input is two bed files (both in BED6 format) - (A) one containing the regions for which the meta plot is needed, and another being the list of genes or any feature whose coverage over the regions in (A) is needed.
Is there any package or tool which can create such plots (my domain is limited to Python, but I can try to work with R)?
Something akin to this but for whole gene body[1]:
[1]- Kelley, D, Rinn, J (2012). Transposable elements reveal a stem cell-specific class of long noncoding RNAs. Genome Biol., 13, 11:R107.
Edit: The BED file containing list of regions looks like this:
chr3 39218734 40053659 region1 0.92426187419769 +
chr4 140163762 140453127 region2 0.896103896103896 -
chr7 40549151 41205036 region3 0.986072423398329 +
chr8 81291743 81963246 region4 0.94184168012924 -
chr9 12284032 12539789 region5 0.95539033457249 -
And the bed file containing features to be plotted looks like this:
chr3 39218100 40053200 LINE 1 +
chr4 140163962 140453027 LINE 1 -
chr7 40549002 41204999 SINE 1 +
chr8 81291143 81963846 LTR 1 -
chr9 12284332 12539720 LTR 1 -
• I am not familiar with this field, so I would like to know the terms I should avoid/are not good enough. What kind of keywords have you searched? – llrs Jan 28 at 13:28
• @llrs, I'm not exactly sure what you mean by "terms I should avoid/are not good enough." - but I believe coverage plots are one class of visualisation which are more similar to binned histograms in general. Except a google search would yield in methods related to plotting coverage of transcripts or reads acquired from sequencing experiments [keywords: coverage plots, bedfiles, metaplots]. The key difference here is that I'm interested in scaled coverage over whole gene body (TSS-TES) instead of positions relative to TSS - similar to deeptools plotProfile. – Siddharth Jan 28 at 13:53
If one assumes that repeat elements of a given type (e.g., LINEs) don't overlap each other, then the following will work:
1. Split your BED file by repeat element, such that you have a LINE.bed, SINE.bed, etc.
2. Convert those to bedGraph (e.g., awk 'BEGIN{OFS="\t"}{print $1,$2,$3,"1.0"}' LINE.bed > LINE.bedGraph).
3. Use UCSC tools to convert those bedGraph files to BigWig.
4. Install deepTools and run computeMatrix reference-point -b 2500 -a 20 -S LINE.bigWig SINE.bigWig LTR.bigWig -R Regions_Of_Interest.BED -o foo.mat.gz
5. Make a profile plot with plotProfile (plotProfile -m foo.mat.gz -o foo.png --perGroup)
If you do have overlapping regions you'll need to first make a disjoint set of intervals (you can use bedops for this), find the coverage of them (e.g., bedtools coverage) and then continue on with that.
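Putting the steps together, here is a minimal Python sketch that drives the same command-line tools (file names and the chrom.sizes path are placeholders; the UCSC tools and deepTools must be on PATH). It uses scale-regions rather than reference-point, since the goal is a whole-gene-body profile:

```python
import subprocess

def run(cmd):
    """Run an external tool, failing loudly if it is missing or errors out."""
    subprocess.run(cmd, check=True)

# Steps 1-2: one bedGraph per repeat class, with a constant coverage of 1.0.
for rep in ["LINE", "SINE", "LTR"]:
    rows = []
    with open(f"{rep}.bed") as bed:
        for line in bed:
            chrom, start, end = line.split()[:3]
            rows.append((chrom, int(start), int(end)))
    rows.sort()  # bedGraphToBigWig requires coordinate-sorted input
    with open(f"{rep}.bedGraph", "w") as out:
        for chrom, start, end in rows:
            out.write(f"{chrom}\t{start}\t{end}\t1.0\n")
    # Step 3: bedGraph -> bigWig (chrom.sizes can come from fetchChromSizes).
    run(["bedGraphToBigWig", f"{rep}.bedGraph", "chrom.sizes", f"{rep}.bigWig"])

# Steps 4-5: scale every region to a common length so the profile spans the
# whole body, then draw one curve per bigWig.
run(["computeMatrix", "scale-regions",
     "-S", "LINE.bigWig", "SINE.bigWig", "LTR.bigWig",
     "-R", "Regions_Of_Interest.BED", "-o", "meta.mat.gz"])
run(["plotProfile", "-m", "meta.mat.gz", "-o", "meta_profile.png", "--perGroup"])
```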
• I still have to try it, but this might just work. I will mark it as answer unless something goes wrong. I would use scale-regions instead of reference-point since I'm interested in the whole region length. – Siddharth Jan 31 at 12:13
There is a reasonably nice looking tutorial on metaplots in R here: https://rpubs.com/achitsaz/94710
• Thanks for answering this, usually we'd like to have the answer self-contained. Perhaps you could improve the question by including the relevant packages/code from that website into your answer. – llrs Jan 29 at 13:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25702720880508423, "perplexity": 4876.122060056795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541319511.97/warc/CC-MAIN-20191216093448-20191216121448-00145.warc.gz"} |
https://www.physicsforums.com/threads/particle-in-a-potential-well-gre-93.9954/ | # Particle in a potential well Gre. 93
1. Nov 30, 2003
### yxgao
What concepts are involved here?
93. A particle of mass m moves in the potential shown here. The period of the motion when the particle has energy E is
The potential is V = 1/2 k x^2 for x < 0 and V = mgx for x > 0.
A. Sqrt[k/m]
B. 2*pi*Sqrt[m/k]
C. 2*Sqrt[2E/(mg^2)]
D. pi*Sqrt[m/k] + 2*Sqrt[2*E/(m*g^2)]
E. 2*pi*Sqrt[m/k] + 4*Sqrt[2*E/(mg^2)]
2. Nov 30, 2003
### arcnets
Harmonic oscillator and ballistic motion.
3. Nov 30, 2003
### yxgao
What is ballistic motion?
How do you arrive at the answer?
4. Dec 1, 2003
### arcnets
Ballistic motion is when a body moves in a field of constant gravity.
You can look up the formulae in any basic mechanics book (or basic mechanics website). Sorry, I'm really too lazy to type it all down here for you.
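For what it's worth, here is a compact worked version of that hint (my own sketch), which leads to answer choice D:

```latex
% x < 0: half of one simple-harmonic cycle
t_{\mathrm{SHM}} = \tfrac{1}{2}\cdot 2\pi\sqrt{m/k} = \pi\sqrt{m/k}
% x > 0: the particle leaves x = 0 with E = \tfrac{1}{2} m v_0^2,
% i.e. v_0 = \sqrt{2E/m}; constant deceleration g brings it back after
t_{\mathrm{ball}} = \frac{2 v_0}{g} = 2\sqrt{\frac{2E}{m g^2}}
% total period:
T = \pi\sqrt{m/k} + 2\sqrt{\frac{2E}{m g^2}}
```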
Similar Discussions: Particle in a potential well Gre. 93 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8456102609634399, "perplexity": 3519.3993628116295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119120.22/warc/CC-MAIN-20170423031159-00198-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://www.jacc.org/doi/10.1016/j.jchf.2015.02.009?articleID=2375099 | Contribution of Major Lifestyle Risk Factors for Incident Heart Failure in Older Adults: The Cardiovascular Health Study
Mini-Focus Issue: Special Populations in Heart Failure
J Am Coll Cardiol HF, 3 (7) 520–528
Abstract
Objectives:
The goal of this study was to determine the relative contribution of major lifestyle factors on the development of heart failure (HF) in older adults.
Background:
HF incurs high morbidity, mortality, and health care costs among adults ≥65 years of age, which is the most rapidly growing segment of the U.S. population.
Methods:
We prospectively investigated separate and combined associations of lifestyle risk factors with incident HF (1,380 cases) over 21.5 years among 4,490 men and women in the Cardiovascular Health Study, which is a community-based cohort of older adults. Lifestyle factors included 4 dietary patterns (Alternative Healthy Eating Index, Dietary Approaches to Stop Hypertension, an American Heart Association 2020 dietary goals score, and a Biologic pattern, which was constructed using previous knowledge of cardiovascular disease dietary risk factors), 4 physical activity metrics (exercise intensity, walking pace, energy expended in leisure activity, and walking distance), alcohol intake, smoking, and obesity.
Results:
No dietary pattern was associated with developing HF (p > 0.05). Walking pace and leisure activity were associated with a 26% and 22% lower risk of HF, respectively (pace >3 mph vs. <2 mph; hazard ratio [HR]: 0.74; 95% confidence interval [CI]: 0.63 to 0.86; leisure activity ≥845 kcal/week vs. <845 kcal/week; HR: 0.78; 95% CI: 0.69 to 0.87). Modest alcohol intake, maintaining a body mass index <30 kg/m2, and not smoking were also independently associated with a lower risk of HF. Participants with ≥4 healthy lifestyle factors had a 45% (HR: 0.55; 95% CI: 0.42 to 0.74) lower risk of HF. Heterogeneity by age, sex, cardiovascular disease, hypertension medication use, and diabetes was not observed.
Conclusions:
Among older U.S. adults, physical activity, modest alcohol intake, avoiding obesity, and not smoking, but not dietary patterns, were associated with a lower risk of HF.
Introduction
Heart failure (HF) is a growing public health problem with substantial morbidity and mortality (1). In 2010, direct and indirect U.S. health care costs were $39.2 billion (2). Incidence is highest in those older than age 65 years—the most rapidly growing segment of the U.S. population—and is a leading cause of hospitalizations (3). Despite treatment advances, long-term prognosis remains poor (4). Therefore, identifying and targeting modifiable factors for primary prevention of HF is crucial for decreasing incidence and disease burden. Although lifetime risk of HF differed among population subgroups (5–7), the relative contribution of major lifestyle risk factors was unclear. During middle age, men who adhered to ≥4 healthy lifestyle habits (not smoking, regular exercise, maintaining normal weight, modest alcohol use, consuming breakfast cereals, and consuming fruits/vegetables) had a lower lifetime risk of HF (10%; 95% confidence interval [CI]: 8% to 12%) than men who did not follow any of these lifestyle factors (21%; 95% CI: 17% to 26%) (7). However, more detailed dietary information was not reported, and generalizability might be limited because this cohort included male, predominately white physicians who were at a much lower HF risk than community-based populations (5,6). In another cohort of Swedish men and women, adherence to the Dietary Approaches to Stop Hypertension (DASH) diet was associated with lower HF risk, but contributions of other lifestyle factors were not evaluated (8,9). Thus, the relative importance of overall dietary habits and other lifestyle factors for development of HF remains uncertain. To address this key public health question, we investigated the separate and combined impact of major lifestyle risk factors on HF in the Cardiovascular Health Study (CHS), a community-based prospective cohort.

Methods

Design and population

The CHS enrolled 5,201 ambulatory men and women age ≥65 years who were randomly selected from Medicare eligibility lists in 4 U.S. communities in 1989 to 1990; an additional 687 African-American participants were enrolled in 1992 (10). Baseline evaluation included standardized physical examination, diagnostic tests, and questionnaires to determine health status, medical history, and lifestyle risk factors. For this analysis, participants were excluded if they had prevalent HF or moderate and/or severe mitral or aortic regurgitation at baseline (n = 698), missing information on lifestyle risk factors, or implausible (<500 or >4,000 kcal/day) energy intake (n = 700). Participants with prevalent hypertension, diabetes, and coronary heart disease (CHD) were included to maintain generalizability; we also evaluated effect modification by these factors.

Assessment of lifestyle

We evaluated dietary patterns, physical activity, alcohol use, smoking, and adiposity (body mass index [BMI] and waist circumference). Diet was assessed in 1989 to 1990 using a validated 99-item food frequency questionnaire (National Cancer Institute) (11), and again in 1995 to 1996 using a validated Willett food frequency questionnaire (12). Dietary patterns were evaluated as a time-varying exposure, with the cumulative average of intakes from the 2 food frequency questionnaires used to reduce within-person variation and obtain the best estimates of long-term dietary intake.
Four dietary patterns were evaluated: Alternative Healthy Eating Index (AHEI), DASH, a score based on the American Heart Association 2020 dietary goals (AHA 2020) (13), and a Biologic pattern constructed based on a priori knowledge of CHD risk factors. Dietary components and scoring algorithms are presented in Online Table 1; scoring for the AHEI and DASH patterns has been described previously (14–16). Walking habits, including average pace and distance, were assessed by self-report at baseline and annually at each follow-up, and leisure-time activity (modified Minnesota Leisure-Time Activities questionnaire) and exercise intensity (low, medium, or high) by self-report at the baseline, third, and seventh annual visits. Alcohol use and smoking status were assessed at each annual visit. Trained personnel used standardized methods to measure weight at each annual visit, and height and waist circumference at the baseline, third, and seventh annual visits. To minimize potential misclassification from measurement error and changes in lifestyle, repeated measures were used to update lifestyle exposures using time-varying covariates with cumulative averaging.

Ascertainment of incident heart failure

Participants were followed by annual study clinic examinations with interim telephone contacts for 10 years and telephone every 6 months thereafter. Incident HF was adjudicated by a centralized events committee using outpatient and inpatient medical records, diagnostic tests, clinical consultations, and interviews. Confirmation of HF required: 1) diagnosis by a treating physician; 2) HF symptoms (shortness of breath, fatigue, orthopnea, or paroxysmal nocturnal dyspnea) plus signs (edema, rales, tachycardia, gallop rhythm, or displaced apical impulse) or supportive findings on echocardiography, contrast ventriculography, or chest radiography; and 3) medical therapy for HF, defined as diuretics plus either digitalis or a vasodilator. Because data on HF subtypes were incomplete for many participants, HF subtypes were not explored in this analysis.

Statistical analysis

Cox proportional hazards were used to estimate hazard ratios (HRs) for quintile groups of each exposure, with time at risk until HF, death, or most recent follow-up date. The proportional hazards assumption was tested by evaluating the product of each exposure times the natural log time in the model. Missing covariate information on education (<1% missing) and income (6% missing) were imputed using data on age, sex, race, and enrollment site. To assess the independent effect of a given lifestyle factor, multivariate models were mutually adjusted for each of the other lifestyle factors, plus age, sex, race, enrollment site, education, and income. Lifestyle factors were evaluated in combination to estimate the proportion of cases in the population that might be attributable to suboptimal levels of these factors (population attributable risk). The population attributable risk was calculated as: p(RR – 1)/(1 + p[RR – 1]), where p is the prevalence of individuals not in the low-risk group and RR is the associated multivariable-adjusted relative risk. Upper and lower 95% CIs of the population attributable risk were derived using this formula as well as the upper and lower 95% CI estimates of the multivariable-adjusted relative risk.
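As a quick illustration, that formula translates directly into code (a sketch of mine; the function name is an invention, not from the paper). Plugging in the alcohol numbers reported later in Table 3 (72.3% of participants outside the low-risk group, RR ≈ 1/0.77) reproduces the roughly 18% attributable risk:

```python
def population_attributable_risk(p: float, rr: float) -> float:
    """PAR = p(RR - 1) / (1 + p(RR - 1)).

    p  : prevalence of individuals NOT in the low-risk group
    rr : multivariable-adjusted relative risk associated with that group
    """
    excess = p * (rr - 1.0)
    return excess / (1.0 + excess)

# Alcohol intake example from Table 3: 72.3% non-adherent, HR 0.77 for the
# low-risk group, i.e. RR ~ 1/0.77 for everyone else.
print(round(100 * population_attributable_risk(0.723, 1 / 0.77), 1))  # ~17.8
```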
Effect modification was evaluated in analyses stratified by age, sex, race, BMI, baseline drug-treated hypertension, diagnosed diabetes, and baseline CHD, with significance assessed using the Wald test, which was adjusted using the Bonferroni test for multiple comparisons. Main analyses were unadjusted for multiple comparisons. In the secondary analyses, individual dietary components of the Biological score and incident HF were evaluated. Sensitivity analyses restricted to those without prevalent CHD at baseline, and to participants with good, very good, or excellent self-reported health were conducted. A sensitivity analysis adjusting for baseline N-terminal pro–B-type natriuretic peptide was also evaluated. Analyses were conducted using Stata SE (version 12, StataCorp, College Station, Texas), with a 2-tailed α = 0.05.

Results

At baseline, 61% of participants were women, and the mean age was 72 years. Most participants (89%) were Caucasians; approximately 11% were African American. Distributions of demographic and lifestyle factors at baseline are shown in Table 1. Adherence to most dietary pattern scores was modest, with mean scores ranging from 46 (AHEI) to 66 (AHA 2020) of a maximum 100 points. Across quintiles for all patterns, characteristics associated with higher scores (indicating healthier diets) included female sex, higher educational attainment, and higher income. Similarly, physical activity (leisure activity, walking pace), never or former smoking, and modest alcohol use were associated with higher scores. Baseline blood pressure and diagnosed diabetes showed inverse relations across quintiles of diet scores; prevalent hypertension medication use and CHD showed no consistent pattern across scores.

Table 1. Characteristics of Older Adults by Diet-Quality Scores (n = 4,490)

| Characteristic | Biologic Q1 | Biologic Q5 | DASH Q1 | DASH Q5 | AHEI Q1 | AHEI Q5 | AHA 2020 Q1 | AHA 2020 Q5 |
|---|---|---|---|---|---|---|---|---|
| Diet score | 21.1 ± 2.5 | 39.4 ± 2.3 | 16.7 ± 2.1 | 31.7 ± 1.6 | 21.4 ± 4.7 | 58.8 ± 5.3 | 33.6 ± 6.2 | 66.6 ± 2.9 |
| Range | 11–24 | 37–49 | 9–19 | 30–38 | 5.5–27.5 | 52.5–80.5 | 7–37 | 63–77 |
| Standard score | 42.2 (5.0) | 78.8 (4.6) | 41.8 (5.3) | 79.3 (4.0) | 24.7 (5.4) | 67.0 (6.0) | 42 (7.8) | 83.3 (3.6) |
| Range | 22–48 | 74–98 | 22.5–48 | 75–95 | 6.3–31.4 | 59.9–91.8 | 8.8–46.3 | 78.8–96.3 |
| Age, yrs | 72.5 ± 5.5 | 71.7 ± 4.9 | 72.1 ± 5.3 | 72.0 ± 5.1 | 72.7 ± 5.6 | 71.6 ± 4.7 | 72.4 ± 5.6 | 71.8 ± 4.9 |
| Sex: Male | 58.3 | 23.7 | 59.2 | 33.6 | 56.3 | 29.6 | 55.8 | 33.4 |
| Sex: Female | 41.6 | 76.3 | 40.8 | 66.4 | 43.5 | 70.4 | 44.2 | 67.6 |
| Race: Caucasian | 89.2 | 87.6 | 88.6 | 88.7 | 88.3 | 91.9 | 88.8 | 90.8 |
| Race: Non-Caucasian | 10.8 | 12.3 | 11.4 | 10.9 | 11.7 | 8.1 | 11.2 | 9.2 |
| Education: <High school | 40.2 | 16.8 | 31.3 | 25.2 | 46.2 | 11.0 | 37.5 | 20.5 |
| Education: High school | 33.4 | 38.3 | 35.3 | 38.1 | 31.9 | 33.9 | 32.2 | 37.5 |
| Education: >High school | 26.3 | 44.8 | 33.4 | 36.7 | 21.9 | 55.1 | 30.3 | 42.0 |
| Income: <$25,000/yr | 70.3 | 52.8 | 63.4 | 59.1 | 74.9 | 42.5 | 70.4 | 53.3 |
| Income: $25,000–49,999/yr | 22.0 | 30.1 | 23.2 | 28.4 | 19.1 | 35.0 | 20.6 | 29.6 |
| Income: ≥$50,000/yr | 7.6 | 17.1 | 13.4 | 12.5 | 6.0 | 22.6 | 9.0 | 17.1 |
| Leisure activity, kcal/week | 1,703 ± 2,116 | 2,091 ± 2,097 | 1,755 ± 2,109 | 2,033 ± 1,987 | 1,772 ± 2,180 | 1,938 ± 1,920 | 1,746 ± 2,129 | 1,925 ± 1,924 |
| Walking pace: <2 mph | 35.5 | 15.3 | 32.0 | 23.2 | 36.5 | 13.6 | 31.9 | 18.8 |
| Walking pace: 2–3 mph | 41.7 | 38.6 | 38.9 | 38.8 | 42.9 | 37.5 | 42.1 | 39.7 |
| Walking pace: >3 mph | 22.8 | 46.0 | 29.2 | 38.0 | 20.6 | 48.9 | 26.0 | 41.6 |
| Smoking: Never | 41.9 | 48.7 | 43.1 | 51.2 | 43.1 | 46.0 | 39.1 | 52.1 |
| Smoking: Former | 41.8 | 43.2 | 45.0 | 38.9 | 39.8 | 46.7 | 44.1 | 38.5 |
| Smoking: Current | 16.3 | 8.0 | 11.9 | 9.6 | 17.1 | 7.0 | 16.7 | 9.3 |
| Alcohol: 0 drinks/week | 49.6 | 38.8 | 44.4 | 44.0 | 63.1 | 23.1 | 48.2 | 40.5 |
| Alcohol: <1 | 17.7 | 21.4 | 17.4 | 19.3 | 15.5 | 23.4 | 17.2 | 21.6 |
| Alcohol: 1–3 | 14.6 | 15.6 | 16.2 | 13.1 | 6.9 | 22.2 | 12.7 | 14.2 |
| Alcohol: >3 | 18.0 | 24.2 | 22.0 | 23.6 | 14.4 | 31.3 | 21.9 | 23.8 |
| BMI: <22.0 kg/m2 | 4.8 | 4.0 | 3.4 | 4.5 | 5.3 | 4.4 | 5.8 | 3.8 |
| BMI: 22.0–24.9 | 34.2 | 35.3 | 34.0 | 34.4 | 33.3 | 39.9 | 34.7 | 32.5 |
| BMI: 25.0–29.9 | 42.7 | 42.6 | 42.8 | 40.3 | 40.1 | 43.2 | 42.2 | 44.3 |
| BMI: ≥30.0 | 18.3 | 18.1 | 20.8 | 20.8 | 21.3 | 13.5 | 18.3 | 20.4 |
| SBP, mm Hg | 139 ± 20 | 135 ± 19 | 139 ± 20 | 136 ± 18 | 140 ± 21 | 135 ± 19 | 139 ± 20 | 136 ± 19 |
| DBP, mm Hg | 72 ± 11 | 68 ± 13 | 72 ± 12 | 68 ± 12 | 72 ± 12 | 69 ± 12 | 72 ± 12 | 69 ± 11 |
| Prevalent hypertension | 41.7 | 40.9 | 40.7 | 42.0 | 42.8 | 38.1 | 39.2 | 42.8 |
| CHD | 15.8 | 16.7 | 15.3 | 16.7 | 14.9 | 16.3 | 16.4 | 16.9 |
| Diabetes | 21.9 | 17.9 | 21.9 | 18.3 | 23.5 | 14.1 | 23.4 | 18.8 |
Values are mean ± SD, range, or %. Dietary components of the Biologic pattern included: 1) fruits; 2) vegetables; 3) whole grains; 4) fish; 5) polyunsaturated to saturated fat ratio; 6) nuts/seeds; 7) red and processed meats; 8) sugar-sweetened beverages; 9) transfat; and 10) sodium. For DASH (Dietary Approaches to Stop Hypertension): 1) low-fat dairy; 2) fruits; 3) vegetables; 4) nuts and legumes; 5) whole grains; 6) red and processed meats; 7) sugar-sweetened beverages; 8) and sodium. For Alternative Healthy Eating Index (AHEI): 1) fruits; 2) vegetables; 3) nuts and soy protein; 4) cereal fiber; 5) polyunsaturated to saturated fat ratio; 6) transfat; 7) alcohol; 8) long-term multivitamin use; and 9) white:red meat ratio. For American Heart Association 2020 dietary goals score (AHA 2020): 1) fruits and vegetables; 2) fish; 3) fiber-rich whole grains; 4) nuts, legumes, and seeds; 5) sodium; 6) sugar-sweetened beverages; 7) processed meats; and 8) saturated fat. For scoring of dietary patterns, see Online Table 1.
BMI = body mass index; CHD = coronary heart disease; DBP = diastolic blood pressure; SBP = systolic blood pressure.
∗ Points obtained of a maximum score of 50 for the Biologic pattern, 40 for DASH, 87.5 for AHEI, and 80 for the AHA 2020 pattern. Standardized scores were scaled to a maximum score of 100 points.
† Missing values for income (6.3% missing) were imputed using data on age, sex, race, and enrollment site.
During 51,850 person-years (maximum follow-up: 21.5 years), 1,380 HF cases occurred. After adjustment for demographic and lifestyle variables, no dietary pattern was associated with incident HF (Table 2). Results were not materially different when energy-unadjusted patterns were analyzed or when dietary scores were evaluated continuously (data not shown). In contrast, physical activity measures (exercise intensity, walking pace, leisure activity, and walking distance) were each associated with lower HF incidence in demographic-adjusted multivariate models. When mutually adjusted for other lifestyle variables, including other physical activity metrics, the highest category of walking pace and leisure activity, but not exercise intensity and walking distance, were each independently associated with lower HF risk (Online Table 2).
Table 2. Hazard Ratios (95% CI) for Incident HF by Quintiles of Diet-Quality Scores in Older U.S. Adults (n = 4,490)
| | Quintile 1 | Quintile 2 | Quintile 3 | Quintile 4 | Quintile 5 | p Value for Trend∗ |
|---|---|---|---|---|---|---|
| Biologic† | | | | | | |
| Cases/person-yrs | 254/9,702 | 310/9,903 | 208/10,539 | 257/10,632 | 218/11,073 | |
| Multivariate | 1.00 (ref) | 1.21 (1.03–1.43) | 1.08 (0.91–1.28) | 1.12 (0.94–1.34) | 1.04 (0.89–1.31) | 0.96 |
| + Mediator adjusted | 1.00 (ref) | 1.18 (1.00–1.39) | 1.05 (0.88–1.24) | 1.08 (0.91–1.28) | 0.99 (0.82–1.19) | 0.62 |
| DASH‡ | | | | | | |
| Cases/person-yrs | 284/10,261 | 330/11,968 | 268/10,271 | 263/9,698 | 235/9,652 | |
| Multivariate | 1.00 (ref) | 1.12 (0.96–1.32) | 1.12 (0.95–1.33) | 1.23 (1.03–1.26) | 1.11 (0.93–1.33) | 0.12 |
| + Mediator adjusted | 1.00 (ref) | 1.11 (0.95–1.31) | 1.06 (0.90–1.26) | 1.19 (1.00–1.41) | 1.05 (0.88–1.26) | 0.36 |
| AHEI§ | | | | | | |
| Cases/person-yrs | 301/9,655 | 290/9,779 | 303/11,274 | 258/10,270 | 228/11,445 | |
| Multivariate | 1.0 (ref) | 1.06 (0.90–1.25) | 1.04 (0.88–1.22) | 1.04 (0.87–1.24) | 0.94 (0.78–1.14) | 0.51 |
| + Mediator adjusted | 1.00 (ref) | 1.01 (0.86–1.19) | 1.01 (0.85–1.19) | 1.00 (0.85–1.20) | 0.90 (0.74–1.09) | 0.33 |
| AHA 2020‖ | | | | | | |
| Cases/person-yrs | 281/9,766 | 283/10,569 | 309/10,368 | 267/11,100 | 240/10,618 | |
| Multivariate# | 1.00 (ref) | 1.02 (0.86–1.21) | 1.19 (1.01–1.40) | 1.09 (0.92–1.30) | 1.01 (0.84–1.21) | 0.57 |
| + Mediator adjusted∗∗ | 1.00 (ref) | 1.04 (0.88–1.22) | 1.15 (0.97–1.35) | 1.05 (0.89–1.25) | 0.96 (0.80–1.15) | 0.88 |
Values are hazard ratio (HR) (95% confidence intervals [CI]) based on cumulatively averaged a priori score, DASH score, the AHEI score, and the AHA 2020 score.
HF = heart failure; other abbreviations as in Table 1.
∗ Linear trend was tested by assigning the median value to participants in each quintile and entering this into the model as a continuous variable.
† Biologic dietary pattern comprised 10 components: 1) fruits; 2) vegetables; 3) whole grains; 4) fish; 5) polyunsaturated to saturated fat ratio; 6) nuts/seeds; 7) red and processed meats; 8) sugar-sweetened beverages; 9) transfat; and 10) sodium. For scoring of dietary patterns, see Online Table 1.
‡ DASH dietary pattern comprised 8 components: 1) low-fat dairy; 2) fruits; 3) vegetables; 4) nuts and legumes; 5) whole grains; 6) red and processed meats; 7) sugar-sweetened beverages; and 8) sodium.
§ AHEI dietary pattern comprised 9 components: 1) fruits; 2) vegetables; 3) nuts and soy protein; 4) cereal fiber; 5) polyunsaturated to saturated fat ratio; 6) transfat; 7) alcohol; 8) long-term multivitamin use; 9) white to red meat ratio.
‖ AHA 2020 dietary pattern comprised 8 components: 1) fruits and vegetables; 2) fish; 3) fiber-rich whole grains; 4) nuts, legumes, and seeds; 5) sodium; 6) sugar-sweetened beverages; 7) processed meats; and 8) saturated fat.
# Multivariate model: adjusted for age (years), sex (male vs. female), race (Caucasian vs. non-Caucasian), enrollment site (4 clinics), education (less than high school, high school, more than high school), annual income (<$25,000, $25,000 to $49,999, >$50,000), kilocalorie of physical activity (quintiles), walking pace (<2, 2 to 3, >3 mph), smoking (never, former, current), alcohol intake (0, <1, 1 to 2, ≥3 drinks/week).
∗∗ Mediator adjusted: Multivariate model + additional adjustment for potential mediators, including body mass index (kilograms divided by square meters), prevalent treated hypertension (yes vs. no), prevalent diabetes mellitus (yes vs. no), prevalent coronary heart disease (yes vs. no). Additional adjustment for other potential mediators, such as fasting glucose, fasting insulin, blood pressure, triglycerides, or C-reactive protein to the mediator-adjusted model had no influence on model estimates and were not included in mediator-adjusted models.
After multivariable adjustment, smoking, modest alcohol intake, BMI, and waist circumference were each independently associated with incident HF, with 37%, 30%, 37%, and 20% lower risk among older adults in the lowest risk groups, respectively (Online Table 3). Because BMI was more strongly associated with HF than waist circumference, BMI (low risk group <30 kg/m2) was evaluated with other low-risk lifestyle factors (≥2 mph walking pace, leisure activity ≥845 kcal/week, no current smoking, ≥1 alcohol drink/week) to assess combined associations with incident HF. Compared with individuals with 0 or 1 low-risk lifestyle factors, participants had lower risk of HF if they had 2 (HR: 0.78; 95% CI: 0.62 to 0.97), 3 (HR: 0.64; 95% CI: 0.52 to 0.80), 4 (HR: 0.56; 0.44 to 0.70), or 5 (HR: 0.55; 95% CI: 0.42 to 0.74) low-risk lifestyle factors (Figure 1).
Compared with the low-risk group for each, each lifestyle factor was estimated to explain between 5% (smoking) and 18% (alcohol use) of the population risk of developing HF in older adults (Table 3). Lack of adherence to an overall healthy diet pattern had no associated attributable risk in older adults.
Table 3. Relative Risk of Incident HF by Lifestyle Factors and Adiposity in U.S. Older Adults (n = 4,490)
| | % of Total Participants | Person-Years of Follow-Up | HF Cases | Multivariate Model∗ HR (95% CI) | Multivariate + Lifestyle Model† HR (95% CI) | Population Attributable Risk‡, % (95% CI) |
|---|---|---|---|---|---|---|
| Healthy diet pattern§ | | | | | | |
| Lower 2 quintiles | 36.1 | 19,605 | 583 | 1.00 (ref) | 1.00 (ref) | |
| Upper 3 quintiles | 63.9 | 32,244 | 797 | 0.91 (0.81 to 1.02) | 0.98 (0.87 to 1.09) | 0 (–3 to –5) |
| Walking pace, mph | | | | | | |
| <2 | 28.6 | 12,770 | 454 | 1.00 (ref) | 1.00 (ref) | |
| ≥2 | 71.4 | 39,079 | 926 | 0.72 (0.64 to 0.81) | 0.80 (0.71 to 0.90) | 7 (3 to 11) |
| Leisure activity, kcal/week‖ | | | | | | |
| <845 | 41.2 | 19,775 | 624 | 1.00 (ref) | 1.0 (ref) | |
| ≥845 | 58.8 | 32,074 | 756 | 0.72 (0.64 to 0.80) | 0.78 (0.69 to 0.87) | 11 (6 to 15) |
| Smoking | | | | | | |
| Current | 11.6 | 5,452 | 149 | 1.00 (ref) | 1.00 (ref) | |
| Never or former | 88.4 | 46,397 | 1,231 | 0.77 (0.65 to 0.92) | 0.71 (0.59 to 0.88) | 5 (2 to 7) |
| Alcohol intake#, drink/week | | | | | | |
| <1 | 72.3 | 36,803 | 1,040 | 1.00 (ref) | 1.00 (ref) | |
| ≥1 | 27.7 | 15,045 | 340 | 0.78 (0.68 to 0.88) | 0.77 (0.67 to 0.88) | 18 (9 to 26) |
| Body mass index, kg/m2 | | | | | | |
| ≥30.0 | 19.2 | 10,050 | 336 | 1.00 (ref) | 1.00 (ref) | |
| <30.0 | 80.8 | 41,799 | 1,044 | 0.66 (0.62 to 0.82) | 0.70 (0.61 to 0.80) | 8 (5 to 11) |
| Low-risk factors | | | | | | |
| <4 low-risk factors | 62.0 | 32,184 | 773 | 1.00 (ref) | 1.00 (ref) | 23 (14 to 26) |
| ≥4 | 38.0 | 19,665 | 607 | 0.54 (0.40 to 0.66) | 0.55 (0.42 to 0.74) | |
Values are HR (95% CI) based on cumulatively updated exposures.
Abbreviations as in Table 2.
∗ The multivariate model was adjusted for age (years), sex (male vs. female), race (Caucasian vs. non-Caucasian), enrollment site (4 clinics), education (less than high school, high school, more than high school), annual income (<$25,000, $25,000 to $49,999, >$50,000).
† The multivariate + lifestyle model was mutually adjusted for other lifestyle factors in the table (categorization: healthy diet pattern [quintiles], leisure activity, kilocalories per week [quintiles], walking pace [<2, 2 to 3, >3 mph], smoking [never, former, current], alcohol intake [0, <1, 1 to 3, ≥3 drinks/week], body mass index [kilogram divided by square meter]).
‡ The population attributable risk is the percentage of new cases of heart failure in the population attributable to nonadherence to the low-risk lifestyle factor. Risk estimates from the multivariate + lifestyle model were used in calculating population attributable risk.
§ The Biologic pattern was used for the healthy diet score. The components of dietary pattern included fruits, vegetables, whole grains, fish/seafood, polyunsaturated to saturated fat ratio, nuts/seeds, red and processed meats, sugar-sweetened beverages, transfat, and sodium. Results were similar using DASH, AHEI, or AHA 2020 instead of the Biologic pattern.
‖ Kilocalorie cutoff approximates amount of energy expended by adhering to Centers for Disease Control and Prevention physical activity recommendations for older adults to achieve important health benefits.
# Alcohol intake was modest; <10% of adults consumed >2 drinks/week.
Heterogeneity in the association between diet and HF was not observed by any of the potential effect modifiers (p interaction > 0.05) (Online Table 4), although the AHA 2020 diet pattern was associated with a trend toward lower HF risk in African Americans (p interaction = 0.07) and those without baseline CHD (p interaction = 0.06; HR: 0.86; 95% CI 0.70 to 1.06). After Bonferroni correction, there was also little evidence for heterogeneity by age, sex, baseline CHD, treated hypertension, and diabetes for the association of nondietary lifestyle factors with HF (Figure 2).
In the secondary analyses that evaluated individual dietary components, including fruits, vegetables, whole grains, fish, polyunsaturated to saturated fat ratio, nuts, red meat, processed meat, sugar-sweetened beverages, and transfat, no component was significantly associated with incident HF (Online Table 5). After adjusting for demographic and lifestyle variables, sodium was associated with a 19% increased risk of incident HF in the highest versus lowest quintile of intake. When potential mediators were added to the model, the association was slightly attenuated (highest quintile HR: 1.15; 95% CI 0.96 to 1.36; p trend = 0.05), with diagnosed diabetes having the largest influence on mitigating the sodium–HF association.
All results were similar if we excluded participants with prevalent CHD, except for a stronger positive association between dietary sodium and incident HF (p trend = 0.014) (Online Tables 6 to 8). In additional sensitivity analysis restricted to adults with self-perceived excellent, very good, or good health, associations were not materially altered (Online Tables 9 and 10). In contrast, inverse associations of walking pace and leisure activity were attenuated in self-perceived healthier participants (Online Table 10). Adjustment for baseline N-terminal pro–B-type natriuretic peptide had minimal influence on associations for diet patterns, walking pace, smoking, and alcohol intake (Online Table 11).
Discussion
In this large, prospective cohort of older U.S. adults, lifestyle factors, including moderate alcohol use, physical activity, not smoking, and avoiding obesity late in life, were each independently associated with a lower risk of incident HF. Participants with ≥4 of these healthy lifestyle factors, compared with none, had a 45% lower risk of developing HF. Among different physical activity measures, walking pace and leisure activity, but not exercise intensity, were each independently associated with lower risk. If associations were causal, our results suggested that moderate physical activity (>845 kcal/week in leisure activity) and higher walking speed (>3 mph), rather than high-intensity exercise, might be most useful in older adults for HF prevention. A second important, and unexpected, finding, was that overall dietary patterns were not associated with HF. However, higher sodium intake was associated with trends toward increased risk in the overall cohort, and significantly increased risk among those without CHD at baseline. Our work provided an assessment of the relative importance and burden of major lifestyle factors in the development of HF in older adults, the fastest growing segment of the U.S. population.
The association of physical activity with lower HF is mechanistically plausible because of the benefits on endothelial function, autonomic function, nitric oxide bioavailability, and progenitor cell mobilization (17,18). Benefits may be mediated through prevention of hypertension, left ventricular hypertrophy, obesity, CHD, and type 2 diabetes, which are all major risk factors for HF (17). Our findings that moderate energy expenditure through leisure activity is associated with a lower HF risk, as well as the lack of independent association for exercise intensity, are consistent with recent prospective analyses that showed little or no additional benefit accrued from vigorous over moderate activity for prevention of hypertension, CHD, and diabetes (19,20). Physical activity was no longer associated with HF in sensitivity analyses restricted to individuals with better self-perceived health. This attenuation could reflect better adjustment due to confounding from subclinical morbidity. Alternatively, the attenuation could reflect over-adjustment (adjustment for a major mediator of the effect), because physical activity increases self-perceived health in older adults, including benefits on mental well-being, physical well-being, quality of life (21), and reduced hospitalizations among patients with HF (22).
Moderate alcohol use is associated with lower HF risk in most longitudinal analyses (23). Although the upper limit of the low-risk alcohol intake category in our analysis was unconstrained, alcohol use in the CHS was low, with only 25% of participants consuming >1 drink/week, and <10% consuming >2 drinks/week. Heavy alcohol use induces alcoholic cardiomyopathy (24) and increases risk of HF, whereas modest use improves endothelial function and increases plasma atrial natriuretic peptide (25). In the absence of large randomized trials of moderate alcohol use and incident HF, in addition to risk of abuse and other adverse health consequences, it would be premature to recommend alcohol intake for public health prevention of HF. However, our findings support modest use among current alcohol users without contraindications.
Avoidance of smoking and obesity were also independently associated with lower HF risk. The observed magnitude of association for smoking was similar to that observed in the Coronary Artery Surgery Study, in which smoking was associated with a 47% increased risk of HF (26). Associations of obesity with incident HF may be mediated through hypertension, CHD, type 2 diabetes, and sleep apnea (27); key postulated mechanisms include increases in atherogenic lipids, cardiac preload and afterload, and neurohormonal disruption (17).
Although robust evidence from longitudinal studies and randomized trials showed that healthful dietary patterns reduced major HF risk factors, such as high blood pressure and CHD (13,14,28,29), no known large trials and few observational studies directly linked dietary patterns and their components with incident HF; these studies had mixed results. Our findings of no association of overall dietary patterns with HF differed from 2 Swedish cohorts, in which adherence to DASH was associated with a 22% to 37% lower risk of HF in the highest quartiles (8,9). The absolute incidence of HF in these cohorts was approximately 8-fold lower than in the CHS, and these previous studies only captured HF events that resulted in hospitalization or death. In the CHS, incident HF was adjudicated by a centralized CHS committee that used all available outpatient and inpatient data, which more completely captured HF, including those treated on an outpatient basis. The CHS also included only elderly participants, compared with middle-aged participants in the Swedish cohorts. These differences could account for the varied findings; our results highlighted the need for further study of dietary factors, including sodium, and incident HF.
Our analysis had several strengths. Demographic characteristics, lifestyle factors, and HF events were prospectively recorded with little loss to follow-up over 21.5 years. Repeated assessments of lifestyle factors allowed for cumulative updating, minimizing misclassification. Multiple dietary patterns were investigated, including 2 established patterns associated with reduced hypertension and CHD (DASH and AHEI) and 2 additional biologically derived patterns (Biologic and AHA 2020). The lack of association observed across all patterns for incident HF implied that our dietary findings were robust. We restricted our sensitivity analyses to those without prevalent CHD, which reduced confounding by indication and corroborated our main findings. We used time-varying covariates to adjust for time-dependent confounding. A large number of validated HF cases provided sufficient power to detect associations, including across multiple risk categories. Participants were selected randomly and enrolled from Medicare eligibility lists in several U.S. communities, which provided a population-based sample of older adults and increased generalizability.
Study limitations
Potential limitations should be considered. Although we adjusted for major demographics and lifestyle factors, residual confounding by unknown or unmeasured factors might be present. Associations of dietary patterns and physical activity might be mediated through adiposity, hypertension, and prevalent CHD; therefore, analyses adjusted for these factors might underestimate the impact of lifestyle on HF. However, we performed stratified analyses and tests for interactions for diet and lifestyle factors by these potential mediating factors. Some misclassification of lifestyle factors was inevitable, especially in those that were assessed via self-report, as well as in those analyzed in categories or dichotomously, which would likely attenuate findings toward the null and underestimate the true magnitude of associations. We also highlight the need to better understand and integrate the determinants and relative contribution of lifestyle and other risk factors for HF beyond the scope of this work, including congenital defects, cardiomyopathies, drugs and/or toxins, renal dysfunction, and genetic risk predictors.
Conclusions
Our findings suggested that adherence to a few modifiable risk factors, including physical activity, moderate alcohol use, not smoking, and avoiding obesity, halved the risk of incident HF later in life. Although overall dietary patterns were not associated with lower HF risk in this cohort, adherence to a healthy diet remains crucial for prevention of other cardiometabolic diseases, including hypertension, type 2 diabetes, and CHD. Our results also underscored the importance of further investigating specific dietary determinants, such as sodium, and physical activity type, duration, and frequency in future studies and trials for HF prevention among older adults.
Perspectives
COMPETENCY IN MEDICAL KNOWLEDGE: In older adults, the risk of developing HF can be cut in half by adherence to key protective lifestyle factors, including modest physical activity, in addition to not smoking and maintaining a healthy weight.
TRANSLATIONAL OUTLOOK: Randomized trials investigating physical activity type, duration, and frequency in more detail for the prevention of HF in older adults are needed.
Appendix
Online Tables 1–11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3424464464187622, "perplexity": 22389.398291098663}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00116.warc.gz"} |
http://freetoplayeconomics.com/2017/01/16/players-go-to-their-highest-valued-ltv-ads-are-beautiful-pareto-exchanges/ | # Players go to their highest valued LTV: Ads are Beautiful Pareto Exchanges
Previously, I wrote about ads as a way to monetize non-payers, but there’s more to the ad exchange and what I’ll coin as ‘portfolio pumping’. It’s like portfolio theory, but not really.
These terms reference two growing phenomena in F2P games. King is at the forefront of portfolio pumping, in which a given firm pushes a player from game to game within the firm's portfolio.
Unlike portfolio pumping, ad exchanges push players to another firm’s games. Companies like Scopely are more fond of ad exchanges.
Frequently, the ads being served are for competitor games. Why would a company show ads for its competitors? In addition, why would firms want players to move from one game in their portfolio to another? I argue the underlying explanation is Pareto Efficiency which is just a fancy term for trade.
Ads for competitor games only make sense to the ad-server if
$\text{churned player LTV} < \text{ad revenue}$
and to the advertiser if
$\text{acquired player LTV} > \text{ad cost}$
It tends to be the case that a given company will engage in both ad buying and selling. The outcome of these ad exchanges is a migration of players to the games in which they have the highest LTV; the initial allocation doesn't matter. This process takes place in high-speed auctions where firms constantly search for impressions that satisfy the inequalities outlined above. The decision rule for portfolio pumping is similar, but we add some special conditions, mainly the probability of simultaneous play.
$P_{\text{both}}\,(rLTV_{i} + nLTV_{i}) + P_{\text{switch}}\,nLTV_{i} > rLTV_{i}$

Where,

$P_{\text{both}}$ is the probability that the ith player plays both games simultaneously; in that case we add up both of the LTVs. $P_{\text{switch}}$ is the probability that the player drops the old game and plays only the new one.

$rLTV_{i}$ is the remaining LTV in the old game for the ith player, while $nLTV_{i}$ is the LTV for the new game for the ith player.

This probability-weighted sum must be bigger than $rLTV_{i}$ for pumping to be profitable.
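To make the two inequalities and the pumping rule concrete, here is a toy sketch in Python (all numbers and function names are mine, not from any real exchange):

```python
def serve_competitor_ad(churned_ltv: float, ad_revenue: float) -> bool:
    """Ad-server side: show the ad if the player is worth less than the ad pays."""
    return churned_ltv < ad_revenue

def buy_ad(acquired_ltv: float, ad_cost: float) -> bool:
    """Advertiser side: buy the impression if the acquired player is worth more."""
    return acquired_ltv > ad_cost

def portfolio_pump(p_both: float, p_switch: float,
                   r_ltv: float, n_ltv: float) -> bool:
    """Pump a player to another in-house game if the expected value improves."""
    expected = p_both * (r_ltv + n_ltv) + p_switch * n_ltv
    return expected > r_ltv

# A player worth $4 more in the old game but $6 in the new one:
print(portfolio_pump(p_both=0.2, p_switch=0.5, r_ltv=4.0, n_ltv=6.0))  # True
```

The same comparisons run inside the high-speed auctions for every impression.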
Of course, there are ways to play with this. Wooga tried altering portfolio game prompts during a player’s lifespan but found no effect.1 King continues to portfolio pump but dropped ads in Candy Crush Saga.
It’s a goddamn gorgeous process that should litter econ textbooks like lighthouses and lemons.
1. Runge, Julian, et al. “Churn prediction for high-value players in casual social games.” 2014 IEEE Conference on Computational Intelligence and Games. IEEE, 2014.
## Author: pblack
I like thinking about games, and I hope to do more of it. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2420060634613037, "perplexity": 3448.0866967419943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670036.23/warc/CC-MAIN-20191119070311-20191119094311-00435.warc.gz"} |
https://www.physicsforums.com/threads/how-to-solve-a-problem-involving-modulus.675807/ | # How to solve a problem involving modulus
1. Mar 2, 2013
### hackish
Even though this deals with programming an encryption algorithm I feel this is more math based so I'm asking it here.
x=((y mod 380951)*3182) mod 380951
How can I solve for y?
1) the math involved here is limited to integers, so for example division by 3182 must result in an integer.
2) the number 380951 is a prime number.
3) the result must be smaller than 2^24
4) 3182-1 is also prime but I don't think this helps the solution.
5) x is an integer in the range of 0..2^16
I've read all the wiki pages on fermat's little theorem and the extended euclidean algorithm but the concepts they describe are beyond my math abilities.
I understand that there are multiple correct answers to this: EX:
y=291905 ;x=83172
or
y=1434758;x=83172
The first one will do and would be preferred since it has the greatest chance of fitting within the 24 bit result.
At present I've been exhaustively calculating it by trying every possible multiple of the prime number plus the result, divided by 3182, and it works, so I know a solution is possible. Relating to the example above, multiplying 380951*2438+83172 gives 928841710, and dividing by 3182 gives 291905.
Can anyone give steps to calculate the answer I need?
2. Mar 2, 2013
### chiro
Hey hackish and welcome to the forums.
You may want to try writing y as y = pq + r where p = 380951 and r is between 0 and 380950 inclusive.
Then do the same sort of thing for the outer modulus and bring the results together.
3. Mar 3, 2013
### hackish
Ok, someone else solved it... thanks.
Turns out you can apply (3182^(380951-2)) mod 380951 then multiply that by x, then mod 380951 and you get y.
Wow. A little math wizardry and encrypting a file now takes 250ms instead of 23 minutes.
Last edited: Mar 3, 2013
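For reference, a sketch of that shortcut in Python (variable names are mine). Three-argument pow does the modular exponentiation efficiently, and the recovered y is the smallest non-negative solution:

```python
p = 380951  # the prime modulus
a = 3182    # the multiplier

# Fermat's little theorem: a^(p-1) ≡ 1 (mod p) for prime p not dividing a,
# so a^(p-2) mod p is the multiplicative inverse of a modulo p.
a_inv = pow(a, p - 2, p)

def solve_for_y(x: int) -> int:
    """Smallest non-negative y with ((y % p) * a) % p == x."""
    return (x * a_inv) % p

y = solve_for_y(83172)
print(y)  # 291905
assert ((y % p) * a) % p == 83172
```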
Similar Discussions: How to solve a problem involving modulus | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9087149500846863, "perplexity": 1113.5520934275457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808935.79/warc/CC-MAIN-20171124195442-20171124215442-00237.warc.gz"} |
http://clay6.com/qa/41502/a-compound-formed-by-elements-x-and-y-has-a-cubic-structure-in-which-x-atom | # A compound formed by elements X and Y has a cubic structure in which X atoms are at the corner of the cube and Y atoms are at the face centres.(a) Calculate: (i) $Z_{eff}$, (ii) total number of atoms in a cube, and (iii) formula of the compound.
(i)4,(ii)14,(iii) $XY_3$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.602888822555542, "perplexity": 426.546247346448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815843.84/warc/CC-MAIN-20180224152306-20180224172306-00435.warc.gz"} |
http://hal.in2p3.fr/in2p3-01301197 | # Nuclear excitations as coupled one and two random-phase-approximation modes
Abstract : We present an extension of the random-phase approximation (RPA) where the RPA phonons are used as building blocks to construct the excited states. In our model, that we call double RPA (DRPA), we include up to two RPA phonons. This is an approximate and simplified way, with respect to the full second random-phase approximation (SRPA), to extend the RPA by including two-particle–two-hole configurations. Some limitations of the standard SRPA model, related to the violation of the stability condition, are not encountered in the DRPA. We also verify in this work that the energy-weighted sum rules are satisfied. The DRPA is applied to low-energy modes and giant resonances in the nucleus ¹⁶O. We show that the model (i) produces a global downwards shift of the energies with respect to the RPA spectra and (ii) provides a shift that is, however, strongly reduced compared to that generated by the standard SRPA. This model represents an alternative way of correcting for the SRPA anomalous energy shift, compared to a recently developed extension of the SRPA, where a subtraction procedure is applied. The DRPA provides results in good agreement with the experimental energies, with the exception of those low-lying states that have a dominant two-particle–two-hole nature. For describing such states, higher-order calculations are needed.
Contributor: Sophie Heurteau
Submitted on : Monday, April 11, 2016 - 5:35:38 PM
Last modification on : Thursday, January 11, 2018 - 6:12:41 AM
### Citation
D. Gambacurta, F. Catara, M. Grasso, M. Sambataro, M. V. Andrés, et al.. Nuclear excitations as coupled one and two random-phase-approximation modes. Physical Review C, American Physical Society, 2016, 93 (2), pp.024309. ⟨10.1103/PhysRevC.93.024309⟩. ⟨in2p3-01301197⟩
Record views | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8652968406677246, "perplexity": 2451.5319012770215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347398233.32/warc/CC-MAIN-20200528061845-20200528091845-00202.warc.gz"} |
https://leanprover-community.github.io/mathlib_docs/algebra/category/Mon/limits.html | # mathlibdocumentation
algebra.category.Mon.limits
# The category of (commutative) (additive) monoids has all limits #
Further, these limits are preserved by the forgetful functor --- that is, the underlying types are just the limits in the category of types.
@[instance]
@[instance]
def Mon.monoid_obj {J : Type u} (F : J ⥤ Mon) (j : J) :
Equations
add_submonoid (Π (j : J), (F.obj j))
The flat sections of a functor into AddMon form an additive submonoid of all sections.
def Mon.sections_submonoid {J : Type u} (F : J ⥤ Mon) :
submonoid (Π (j : J), (F.obj j))
The flat sections of a functor into Mon form a submonoid of all sections.
Equations
@[instance]
@[instance]
def Mon.limit_monoid {J : Type u} (F : J ⥤ Mon) :
Equations
limit.π (F ⋙ forget AddMon) j as an add_monoid_hom.
def Mon.limit_π_monoid_hom {J : Type u} (F : J ⥤ Mon) (j : J) :
limit.π (F ⋙ forget Mon) j as a monoid_hom.
Equations
def Mon.has_limits.limit_cone {J : Type u} (F : J ⥤ Mon) :
Construction of a limit cone in Mon. (Internal use only; use the limits API.)
Equations
(Internal use only; use the limits API.)
def Mon.has_limits.limit_cone_is_limit {J : Type u} (F : J ⥤ Mon) :
Witness that the limit cone in Mon is a limit cone. (Internal use only; use the limits API.)
Equations
(Internal use only; use the limits API.)
@[instance]
The category of monoids has all limits.
@[instance]
@[instance]
The forgetful functor from monoids to types preserves all limits. (That is, the underlying types could have been computed instead as limits in the category of types.)
Equations
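As a hedged usage sketch (mine, not from this page), once these instances are in scope a functor into Mon has a limit object and projection maps; universe annotations may need adjusting:

```lean
import algebra.category.Mon.limits

universes u

open category_theory category_theory.limits

-- `limit F` is itself a monoid object because `Mon.has_limits` is an instance.
example {J : Type u} [small_category J] (F : J ⥤ Mon.{u}) : Mon := limit F

-- `limit.π F j` projects the limit onto the j-th component of the diagram.
example {J : Type u} [small_category J] (F : J ⥤ Mon.{u}) (j : J) :
  limit F ⟶ F.obj j :=
limit.π F j
```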
@[instance]
def CommMon.comm_monoid_obj {J : Type u} (F : J ⥤ CommMon) (j : J) :
Equations
@[instance]
@[instance]
@[instance]
def CommMon.limit_comm_monoid {J : Type u} (F : J ⥤ CommMon) :
Equations
@[instance]
We show that the forgetful functor CommMon ⥤ Mon creates limits.
All we need to do is notice that the limit point has a comm_monoid instance available, and then reuse the existing limit.
Equations
@[instance]
def CommMon.limit_cone {J : Type u} (F : J ⥤ CommMon) :
A choice of limit cone for a functor into CommMon. (Generally, you'll just want to use limit F.)
Equations
A choice of limit cone for a functor into CommMon. (Generally, you'll just want to use limit F.)
The chosen cone is a limit cone. (Generally, you'll just want to use limit.cone F.)
def CommMon.limit_cone_is_limit {J : Type u} (F : J ⥤ CommMon) :
The chosen cone is a limit cone. (Generally, you'll just want to use limit.cone F.)
Equations
@[instance]
The category of commutative monoids has all limits.
@[instance]
@[instance]
The forgetful functor from commutative monoids to monoids preserves all limits. (That is, the underlying monoid could have been computed instead as limits in the category of monoids.)
Equations
@[instance]
The forgetful functor from commutative monoids to types preserves all limits. (That is, the underlying types could have been computed instead as limits in the category of types.)
Equations | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9926819801330566, "perplexity": 1064.2462969096666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630518.38/warc/CC-MAIN-20210617162149-20210617192149-00290.warc.gz"} |
https://link.springer.com/article/10.1007/s00359-021-01524-z | # Optocollic responses in adult barn owls (Tyto furcata)
## Abstract
Barn owls, like primates, have frontally oriented eyes, which allow for a large binocular overlap. While owls have similar binocular vision and visual-search strategies as primates, it is less clear whether reflexive visual behavior also resembles that of primates or is more similar to that of closer related, but lateral-eyed bird species. Test cases are visual responses driven by wide-field movement: the optokinetic, optocollic, and optomotor responses, mediated by eye, head and body movements, respectively. Adult primates have a so-called symmetric horizontal response: they show the same following behavior, if the stimulus, presented to one eye only, moves in the nasal-to-temporal direction or in the temporal-to-nasal direction. By contrast, lateral-eyed birds have an asymmetric response, responding better to temporal-to-nasal movement than to nasal-to-temporal movement. We show here that the horizontal optocollic response of adult barn owls is less asymmetric than that in the chicken for all velocities tested. Moreover, the response is symmetric for low velocities (< 20 deg/s), and similar to that of primates. The response becomes moderately asymmetric for middle-range velocities (20–40 deg/s). A definitive statement for the complex situation for higher velocities (> 40 deg/s) is not possible.
## Introduction
Birds and mammals share a similar anatomical forebrain organization (Stacho et al. 2020). This is reflected in the cognitive behavior of, for example, owls and crows rivalling that of primates (Orlowski et al. 2018; Zahar et al. 2018; Nieder et al. 2020). If anatomy and cognition of birds are similar to those of mammals, one may speculate that "simpler", reflex-like behavior might even more resemble mammalian, including human, behavior. Test cases for this claim are the optokinetic (OKR), the optocollic (OCR), and the optomotor (OMR) responses. These reflexes help to stabilize the visual world via movement of eyes (OKR) or head (OCR), or serve course-control (OMR) (Carpenter 1988; Huang and Neuhauss 2008; Masseck and Hoffmann 2009). Almost all animals show one or several of these reflexes, depending on eye-movement capability and state of activity (Gioanni 1988). These reflexes are typically elicited by moving a highly structured visual surround across the visual field of the observer, mimicking self-movement in a stationary world. A classic example occurs when one sits in a train and the train on the next platform starts to move. A slow-phase segment, during which the subject follows the movement of the wide-field stimulus, and fast return saccades characterize the reflexes. This leads to a sawtooth-like pattern of gaze called nystagmus. For a long time, OCRs, OKRs and OMRs were studied in a broad variety of animals [e.g. flies (Borst et al. 2010), crabs (Sandeman et al. 1975; Nalbach 1989; Barnatan et al. 2019), goldfish (Easter 1972; Masseck et al. 2010), frogs (Dieringer and Precht 1982), geckos (Masseck et al. 2008), turtles (Ariel 1997), pigeon (Gioanni et al. 1981; Gioanni 1988; Nalbach 1992; Türke et al. 1996; Maurice et al. 2006), chicken (Wallman and Velez 1985), hummingbirds (Goller and Altshuler 2014; Gaede et al. 2016), cat (Schweigart and Hoffmann 1988), ferret (Hupfeld et al. 2007), monkeys (Cohen et al. 1977; Lappe et al. 1998; Distler et al. 1999), and humans (van den Berg and Collewijn 1988)]. Recent work has focused on model systems like zebrafish, mouse, and healthy as well as impaired human subjects (e.g. Dieterich et al. 2009; Huang and Neuhauss 2008; Naumann et al. 2016; Agarwal et al. 2016; Kretschmer et al. 2017; Lappi et al. 2020). A quantitative behavioral study on owls is missing. We only found a brief qualitative mention of OMRs in three owl species, not including barn owls, in Tauber and Atkin (1968).
We worked with barn owls (Tyto furcata). When we speak of “owls” in the following, we refer to barn owls, if not stated otherwise. Owls represent an interesting case as their frontally oriented eyes create a large binocular overlap that allows the owls to extract depth by stereo vision (Willigen et al. 1998, 2002, 2003). These birds have a well-developed scleral ring that stabilizes the eyes in the skull (Franz-Odendaal and Krings 2019). Moreover, owls have very large, elongated eyes. The eyes are rather fixed in the skull, and these birds cannot move their eyes more than one to four degrees (Steinbach and Money 1973; Du Lac and Knudsen 1990; Nieder and Wagner 2000; Iwaniuk et al. 2008; Netser et al. 2010). Owls exhibit OCRs to stimulation with visual wide-field patterns. This is similar to the other birds mentioned before. However, most other bird species as well as e.g. frogs, turtles and many mammals have laterally-positioned eyes, and exhibit so-called asymmetric OKRs or OCRs, while primates have frontally-positioned eyes and have a symmetric horizontal OKR. Symmetry or asymmetry of the reflexes occurs under monocular stimulation, when nasal to temporal (N–T) and temporal to nasal (T–N) directions of movement may be discriminated. Lateral-eyed vertebrates typically exhibit a higher gain (for a definition, see Eq. 1 below) when stimulated in the T–N than in the N–T direction (e.g. Gioanni et al. 1981; Dieringer and Precht 1982; Wallman and Velez 1985). By contrast, the frontal-eyed primates show similar gains in both stimulus directions (e.g. van den Berg and Collewijn 1988; Distler et al. 1999). Thus, the question arises whether the OCRs of owls more closely resemble those of their avian relatives or of primates with their similar visual world.
To study this issue, we tested barn owls in binocular and monocular settings. We show here that adult barn owls exhibit an OCR not quite as symmetric as the OKR in primates, but far less asymmetric than in the chicken.
## Materials and methods
Six tame, hand-raised owls (codes: G, H, I, J, K, L) participated in the experiments. Owls start to fly between 50 and 60 days of age, and soon after, they have to catch prey by themselves. Shawyer (1998) reports that adult feather length is, on average, achieved at postnatal day 67. Thus, we use the term 'adult' for fledged birds older than 67 days.
### Set-up and stimuli
Visually induced optocollic reactions were measured with a rotating drum (Fig. 1; for details see also Türke et al. 1996). The drum (diameter 64 cm, height 46 cm, angle subtended in elevation 70°) carried the stimulus pattern. We used two high-contrast wide-field stimuli: (1) evenly horizontally and vertically spaced squares (2.7° × 2.7° as seen from the center of the drum) (Nalbach 1992), and (2) a white-and-black striped pattern (horizontal wavelength 10° as seen from the center of the drum) (Fig. 1). A DC-driven motor rotated the drum, and thus the pattern, at constant velocities (see below). A potentiometer attached to its shaft monitored the rotation. The pattern was diffusely illuminated from outside. The average light intensity was 27.3 cd/m².
During an experiment, the animal was sitting on a perch, positioned in the middle of the drum, with its legs loosely fastened to the perch by a ribbon made of leather. The long axis of the perch was defined as perpendicular to zero azimuth in an external coordinate system. Thus, if the owl was sitting in normal posture, its view centered at zero azimuth. Sheets of paper screened the bottom and top of the drum. The sheets masked stationary contours so that the reaction of the animals corresponded to a “stare” or “delayed” OCR (for details see Türke et al. 1996). Videotaping of the owl’s head from above was possible through a 12 cm-wide circular hole in the center of the top of the inner drum (Fig. 2).
### Data recording
Recording of monocular and binocular OCRs took place between February 1992 and May 1993. A recording session never lasted longer than one hour. For recording monocular OCRs, either the right or the left eye of a bird was occluded (Fig. 2a). Different eye covers were tested. All worked similarly well. The eye cover was fixed to a holder that had been cemented to the animal's skull under anesthesia [for further details on surgery and anesthesia see Wagner (1993)]. The surgery and the experiments were carried out under a permit issued by the Regierungspräsidium Tübingen, Germany. Recording gear was mounted shortly before an experiment and removed immediately afterwards.
Optocollic reactions were recorded without earlier training. Our goal was to record data at different drum velocities (5, 8, 10, 15, 20, 30, 40, 60, 80, 93 deg/s) that were presented in a pseudo-random order, clockwise (cw) and counter-clockwise (ccw) rotation alternating. Drum velocities of 8, 80, and 93 deg/s were only used for monocular stimulation, while data for the other seven stimulus velocities were recorded for both monocular and binocular stimulation. Because of the owl's restricted eye-movement capability mentioned above, we recorded only head rotations. Position markers were drawn on the eye cover (Fig. 2a; video in supplements) or on a strip of paper that was fixed to the holder and/or to the feathers on top of the owl's head (Fig. 2b). Alternatively, a strip of cardboard with two reflection spots was fixed to the feathers on top of the head of the owl (Fig. 2c). The strip did not move relative to the head, as assured by visual inspection. The reflection spots were illuminated via an infrared light source and videotaped from above (Fig. 2c).
### Data analysis
Automatic analysis of the video image took place off-line by stepping the video recorder forward by a preset number of frames. The typical temporal resolution was 80 ms, but could be higher for high velocities and lower for low velocities. The frame was grabbed with a videoboard (FG 100, Imaging Technology, Inc.), and transferred into the main memory of a PC. In this way, the projection of the position markers onto the horizontal plane was imaged. After contrast-enhancement and contrast clipping, the position of the position markers was automatically digitized and written into computer memory. Likewise, the voltage of the potentiometer was stored in synchrony. From these readings, the azimuthal orientation of the owl’s head and the azimuthal position of the pattern were derived and stored for further processing. The horizontal angular velocity of the head was calculated from head orientation. The beginning and the end of slow-phase segments were determined by a thresholding mechanism (for details see Türke et al. 1996). The results were controlled later by visual inspection and corrected, if necessary.
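The slow-phase/saccade segmentation can be sketched as follows (a minimal illustration, not the authors' original software; the function name, the saccade threshold, and the default sampling interval are assumptions chosen for the example):

```python
import numpy as np

def find_slow_phases(head_pos, dt=0.08, saccade_thresh=150.0):
    """Split a head-azimuth trace (degrees) into slow-phase segments.

    A sample is treated as part of a return saccade when the absolute head
    velocity exceeds `saccade_thresh` (deg/s); contiguous runs of
    sub-threshold samples form candidate slow-phase segments.
    """
    vel = np.gradient(head_pos, dt)        # head angular velocity, deg/s
    slow = np.abs(vel) < saccade_thresh    # True inside slow phases
    segments, start = [], None
    for i, is_slow in enumerate(slow):
        if is_slow and start is None:
            start = i
        elif not is_slow and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(slow)))
    # Only segments spanning at least five data points were analyzed.
    return [(a, b) for a, b in segments if b - a >= 5]
```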
During a slow-phase segment, the owl followed the moving pattern by head rotation. In such a closed-loop situation, the stimulus that elicits the slow phase of the OCR is the retinal-slip speed in the animal’s perception (Türke et al. 1996). Note that we could not measure this directly. We could only determine the difference between the angular velocity of the external stimulus as derived from the potentiometer data and the angular velocity of the head. It needs to be kept in mind that the potentiometer data need not contain all information that the animal uses for its perception (see also Discussion). Similarly, we calculated the gain that characterizes the effectiveness of the OCR from the angular velocity of the stimulus as derived from the potentiometer data. We define the “closed-loop gain” as
$$\text{gain}\,(\%) = \frac{\text{angular velocity of animal's head}}{\text{angular velocity of stimulus}} \times 100 \tag{1}$$
The gain was determined from the mean angular velocity of both the animal's head and the stimulus during each single slow-phase segment. In other words, one slow-phase segment provided one data point for the analysis. We analyzed only slow-phase segments having a duration of at least five data points.
We also determined the durations and the amplitudes of the slow-phase segments. The duration of a slow-phase segment is the time from the beginning (after the return saccade) to the end (before the return saccade starts) of the following response in seconds. The amplitude (in degrees) of a given slow-phase segment is the product of duration and the mean angular velocity of the owl’s head during the respective slow-phase segment.
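Taken together, Eq. (1) and these definitions amount to three numbers per segment. A minimal sketch (hypothetical helper; evenly sampled azimuth traces in degrees are assumed):

```python
import numpy as np

def segment_measures(head_pos, drum_pos, seg, dt=0.08):
    """Closed-loop gain (Eq. 1), duration, and amplitude for one slow-phase segment.

    `head_pos` and `drum_pos` are azimuth traces in degrees sampled every
    `dt` seconds; `seg` is a (start, stop) index pair into both traces.
    """
    a, b = seg
    head_vel = np.gradient(head_pos, dt)[a:b].mean()  # mean head velocity, deg/s
    drum_vel = np.gradient(drum_pos, dt)[a:b].mean()  # mean stimulus velocity, deg/s
    gain = head_vel / drum_vel * 100.0                # percent, Eq. (1)
    duration = (b - a) * dt                           # s
    amplitude = abs(head_vel) * duration              # deg (duration x mean velocity)
    return gain, duration, amplitude
```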
### Statistics
Most of our data did not show normal distributions (see below). Parametric analyses were not adequate in these cases. Therefore, we used nonparametric statistics, specifically the Mann–Whitney U test to analyze the difference of two not paired samples. Some data sets were also subjected to a correlation analysis.
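For illustration, the same test is available in today's standard tooling; the gain values below are invented purely to show the call and are not data from this study:

```python
from scipy.stats import mannwhitneyu

# Two unpaired samples of closed-loop gains (percent), e.g. T-N vs N-T
# at one stimulus velocity. Values are made up for this usage example.
gains_tn = [95.0, 88.0, 102.0, 91.0, 97.0, 84.0]
gains_nt = [70.0, 64.0, 81.0, 59.0, 75.0, 68.0]

stat, p = mannwhitneyu(gains_tn, gains_nt, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```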
## Results
Although barn owls are able to actively rotate their head by more than 270° (Krings et al. 2017), we typically observed a range of ± 50° during the slow-phase movements, with some extreme head rotations beyond 100° (Fig. 3). The slow-phase movements were interrupted by reset phases (return saccades) in the opposite direction. Typically, the return saccades had a higher head-turning velocity than the slow-phase movements (Fig. 3).
In total, we analyzed 118 sequences, containing 1234 slow-phase segments. Binocular responses were obtained from five birds (owls G, H, I, J, K), providing 387 slow-phase segments for analysis. Monocular data consisted of 847 slow-phase movements that were collected from the same five birds from which we obtained binocular data, and from owl L, for which no binocular data were recorded.
In the following, we first briefly describe the typical behavior of the owls during the recording sessions as observed by watching the birds (see video in supplements), then present data from binocular stimulation (Figs. 3, 4, 5) that serves as reference for the subsequent monocular data (Figs. 4, 6, 7), and finally compare both data sets (Tables 1 and 2).
### Observation of owls during recording
During recording, owls were sitting on a perch and could move their head and body freely. They did so frequently. There were periods during which the owls followed the stimulus, interrupted by periods during which the owls re-oriented their vision (see video in supplements). Often the owls looked downwards or upwards. During these periods, the owls partly followed the stimulus, but these sequences could not be analyzed, because often none or only one of the position markers were visible. If both of the markers were visible when the owls looked up- or downwards, the distance between the markers was short, which might have caused large reconstruction errors. Moreover, it was not clear to where the owl directed its vision and attention. Therefore, we only analyzed those sequences during which the head was held approximately horizontal, in other words, head pitch as judged from the videos was within approximately ± 30° and did not change much during a sequence.
### Binocular optocollic responses
Stimulation with a wide-field pattern very reliably elicited the binocular OCR in adult barn owls. The birds showed persisting reactions for all stimulus velocities tested. During the slow-phase segments, the owls consistently rotated the head in the direction of pattern-rotation. In the following, we first present six typical examples (Fig. 3), before we turn to a quantitative analysis (Figs. 4, 5).
Binocular data was mainly obtained with the striped (square-wave) pattern (376 slow-phase movements), the remaining 11 with the squares. Since a Mann–Whitney U test did not show a difference in the gains measured with the two patterns (U = 1606; z score = 1.262; p = 0.208), the data was lumped, and all further analyses are based on all 387 slow-phase movements. For the monocular data, data sets obtained with the two stimulus patterns were compared whenever responses from at least three owls were available at the respective velocity. Since three of the four such data sets did not show a significant difference either, the 847 monocular data obtained with the two patterns were likewise lumped.
The eyes are symmetrically arranged with respect to the midsagittal plane. Therefore, the binocular OCR with ccw or cw rotation should differ only in the direction of the animal's response velocity but not in the value of the gain. Indeed, a difference in gain for stimulation in the ccw and cw directions could not be detected when the data obtained at all stimulus velocities were pooled (Mann–Whitney U test, number of cases ccw = 195, number of cases cw = 192; U = 17,051, z score = 1.516, p = 0.129). This also held when the data for the individual velocities were considered (Fig. 5a).
Before analyzing the data for the individual velocities quantitatively, we checked the distributions of the gains (Fig. 4). Both the binocular and the monocular gains exhibited a skewed distribution. The monocular gains exhibited a higher tail towards 0 gain than the binocular gains. This bore out in 46% of the monocular gains being below 70%, while only 22% of the binocular gains were below this value. Gains > 100% were observed for most stimulus velocities and were especially not restricted to low velocities. The highest gain we measured was 126%. Both distributions peaked slightly below 100% gain, with a long tail towards lower gains and a short tail towards higher gains. Since the distributions were skewed, we decided to present medians and quartiles and analyze the data by nonparametric statistics.
For stimulus velocities up to 30 deg/s, the median gains were about 90%, while for 30 and 40 deg/s a small drop was observed (Fig. 5a). For 60 deg/s, the median gain dropped to 70%. The statistical analysis revealed that the gain at 60 deg/s was smaller than the gains at the other velocities (p < 0.0029 for 40 deg/s and lower p-values for the other stimulus velocities, rightmost column in Table S1). The cross-comparisons for the other velocity pairs suggested that, for example, the gain was higher for a stimulus velocity of 10 deg/s than for stimulus velocities of 5, 30, and 40 deg/s, but not for 20 deg/s (Table S1). Moreover, the gain for a stimulus velocity of 20 deg/s was higher than the gain for stimulus velocities of 30 and 40 deg/s (Table S1). Finally, we would like to mention that the highest velocity we measured during a slow-phase segment was 70 deg/s.
The response amplitude tended to increase from low to high stimulus velocities (Fig. 5b). Seventy-eight percent of the amplitudes of the slow-phase segments were lower than 40°. The highest amplitude measured was 134°. Median amplitude was lowest for a stimulus velocity of 5 deg/s (Fig. 5b). Table S2 in the supplements documents the comparisons for all velocities (i.e. turning amplitudes at a stimulus velocity of 5 deg/s vs turning amplitudes at a stimulus velocity of 10 deg/s, etc.). Note, for example, that the p-value of each test of 5 deg/s against one of the other velocities is below 0.00001 (upper row in Table S2). As is already implied by the presentation of the median data in Fig. 5b and indirectly also by Table S2, turning amplitude was positively correlated with stimulus velocity when all 387 data pairs (turning amplitude, stimulus velocity) were subjected to a correlation analysis (correlation coefficient: 0.2707, p < 0.01; linear equation: turning amplitude (deg) = 19.07 + 0.298 × stimulus velocity).
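As a sketch of this correlation-plus-regression step (the paired values below are invented for illustration and are not data from this study):

```python
import numpy as np
from scipy.stats import pearsonr

# Invented (stimulus velocity, amplitude) pairs, one per slow-phase segment,
# purely to illustrate the procedure.
velocity  = np.array([5, 5, 10, 20, 30, 40, 60, 60], dtype=float)   # deg/s
amplitude = np.array([12, 18, 20, 25, 28, 30, 41, 35], dtype=float) # deg

r, p = pearsonr(velocity, amplitude)
slope, intercept = np.polyfit(velocity, amplitude, 1)  # linear fit
print(f"r = {r:.3f}, p = {p:.3g}; amplitude ~= {intercept:.2f} + {slope:.3f} * velocity")
```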
The duration of a slow-phase segment also depended on stimulus velocity, with lower velocities eliciting longer durations (Fig. 5c). The median duration dropped from about 2 s to 0.8 s for velocities from 5 to 30 deg/s. For 40 deg/s and 60 deg/s, the median duration stayed at about 0.8 s. The longest slow-phase segment lasted 17.84 s. Table S3 in the supplements documents the comparisons for all stimulus velocities (for a detailed explanation on how Table S3 has to be read, see above). For example, duration for 5 and 10 deg/s was longer than for the other stimulus velocities (top two rows in Table S3). Correlation analysis including all 387 data pairs demonstrated a highly significant negative correlation between duration and stimulus velocity (correlation coefficient: −0.4177, p < 0.01; linear equation: duration (s) = 3.51 − 0.048 × stimulus velocity).
In summary, binocular stimulation revealed similarly high gains for counterclockwise and clockwise stimulation, an increase in amplitude, and a decrease in duration of the slow-phase segments with stimulus velocity.
### Monocular optocollic responses
Monocular OCRs were in many respects similar to binocular OCRs (compare Fig. 6 with Fig. 3 and Fig. 7 with Fig. 5). This held specifically for the monocular OCR induced by motion of the stimulus in the T–N direction (see section “Comparison of binocular and monocular data” below). For example, the OCR shown in Fig. 6a in reaction to T–N stimulation with 15 deg/s exhibited a similarly high monocular gain as the OCR plotted in Fig. 3a that was recorded under binocular stimulation. By contrast, the monocular gain measured with stimulation in the opposite, N–T, direction at the same velocity was lower (Fig. 6b) (for a quantitative analysis, see below). Differences between the gains measured with T–N and N–T stimulations were higher for a velocity of 40 deg/s (Fig. 6c, d). For a velocity of 93 deg/s, monocular gains were low for both stimulus directions (Fig. 6e, f).
The monocular gains were generally high, reaching medians slightly below 100% for velocities up to 30 deg/s (Fig. 7a). This held specifically for the gains recorded with T–N stimulation. For higher velocities, the gains were lower, and the medians were only about 20% at the highest velocity tested, 93 deg/s (Fig. 7a).
The monocular gains upon stimulation in the T–N direction were larger than those in the N–T direction for stimulus velocities ranging from 10 to 80 deg/s (Fig. 7a, Table S4). By contrast, the high gains measured for a stimulus velocity of 5 deg/s for T–N and N–T stimulation were not statistically different (Fig. 7a). Likewise, at the highest stimulus velocity tested (93 deg/s) the low gains of N–T and T–N responses were not statistically different (Fig. 7a).
The differences may be quantified by computing the factor gain T–N/gain N–T for each velocity separately (Fig. 8a). This calculation shows that the factors are close to one for low velocities (5, 8, 10, 15 deg/s), but also at the highest velocity tested (93 deg/s). In the medium range (20, 30, and 40 deg/s) of the tested stimulus velocities, the factor amounts to around 1.5. The maximum was 2.45 for 60 deg/s. The data point at 80 deg/s, with a factor of 0.41, is based on only a few data points (see Table S4). As implied by the differences in the gains, the factors are statistically different for stimulus velocities from 10 to 80 deg/s, but not for 5 deg/s and 93 deg/s (Table S4).
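The factor itself is a simple ratio; a sketch, assuming gains have already been grouped by stimulus velocity and direction (taking the ratio over medians is our assumption here):

```python
import numpy as np

def asymmetry_factor(gains_tn, gains_nt):
    """Factor gain(T-N)/gain(N-T) from the median gains at one stimulus velocity."""
    return float(np.median(gains_tn) / np.median(gains_nt))

# A value close to 1 indicates a symmetric response; values well above 1
# indicate stronger following of T-N than of N-T motion.
```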
Turning amplitude was slightly different between N–T and T–N stimulation. Amplitude tended to be larger for T–N responses than for N–T responses in the medium velocity range and lower for the low velocities. However, overall, the variability was high as demonstrated by the large differences between the values at the third and first quartiles of the distributions (Fig. 7b). Correlation analysis demonstrated a weak, but significant positive relation for both T–N and N–T stimulation (N–T: 395 data points, correlation coefficient: 0.01, p < 0.01; linear equation: turning amplitude (deg) = 21.33 + 0.034*stimulus velocity; T–N: 452 data points, correlation coefficient: 0.05, p < 0.01; linear equation: turning amplitude (deg) = 21.57 + 0.005*stimulus velocity).
The durations of the slow-phase segments dropped from longer values at low stimulus velocities to shorter values at high stimulus velocities (Fig. 7c). The durations of the N–T responses were significantly longer than the durations of the T–N responses at low stimulus velocities (5, 10 and 20 deg/s), with a reverse effect for 15 deg/s (Fig. 7c). A significant difference could not be detected for higher stimulus velocities (30, 40, 80 and 93 deg/s), with the exception of 60 deg/s, for which the N–T responses were longer than the T–N responses (Fig. 7c). Correlation analysis demonstrated a significant negative relation for both T–N and N–T stimulation (N–T: 395 data points, correlation coefficient: −0.39, p < 0.01; linear equation: duration (s) = 2.65 − 0.027 × stimulus velocity; T–N: 452 data points, correlation coefficient: −0.41, p < 0.01; linear equation: duration (s) = 1.89 − 0.019 × stimulus velocity).
Overall, monocular OCRs to T–N stimulation showed higher gains than OCRs to N–T stimulations. By contrast, there were only minor differences in turning amplitude and slow-phase segment duration.
### Comparison of binocular and monocular data
Response characteristics of binocular and monocular OCRs were similar. The gains were higher for binocular stimulation than for T–N stimulation at three velocities (10, 30, 60 deg/s) (Table 1). For 5 deg/s, the reverse was true, with no significant differences for 20 and 40 deg/s (Table 1, Figs. 5, 7). The comparison of turning amplitudes yielded significantly higher amplitudes for binocular stimulation for 10 and 60 deg/s, with no difference for the other velocities (Table 1). Finally, the comparison of the slow-phase durations only yielded a difference at 10 deg/s, where duration was longer for binocular stimulation than for T–N stimulation (Table 1).
Gains for all stimulus velocities, apart from 5 deg/s, were higher for binocular stimulation than for N–T stimulation (Table 2, Figs. 5, 7). Turning amplitudes for binocular stimulation were higher than for N–T stimulation for 30, 40, and 60 deg/s, lower for 5 deg/s and not statistically different for 10 and 20 deg/s (Table 2, Figs. 5, 7). The duration of the slow-phase movements was longer for N–T stimulation than for binocular stimulation for 5 and 60 deg/s, but not statistically different for the other velocities tested (10, 20, 30, 40 deg/s) (Table 2, Figs. 5, 7).
Overall, conspicuous differences between reactions to binocular and to monocular stimulation occurred only for the gains. Specifically, gains for N–T stimulation were clearly lower than for binocular stimulation while T–N gains were close to the binocular values.
## Discussion
We shall discuss our data in the following with respect to the methods used by us and by others, with respect to optocollic, optomotor and optokinetic responses of other animals, including man, and end with an outlook.
### Methodological considerations
Owls compensated for wide-field stimuli with head rotations just as mammals compensate with eye movements. The experiments revealed a high gain of both the binocular and monocular OCRs, especially for low velocities. Pigeons react in a similar way as owls do, although they have larger eye-movement capabilities than owls (Gioanni et al. 1981; Gioanni 1988; Türke et al. 1996). However, pigeons do not make major use of their eye-movement capability, if they can freely move their head (Haque and Dickman 2004).
As mentioned above, the untrained owls moved their head and body a lot while standing on the perch. Periods of fixation were sometimes short, sometimes longer. Thus, the stimulus situation was less standardized than in OKR studies where the head of the animal is fixed or restricted to rotation around a central axis only. The possibility to move improves gaze stabilization for stimulus velocities higher than 20 deg/s (Maurice et al. 2006), and increases overall performance of the optokinetic nystagmus (Wallman 1993). Consequently, median gain in barn owls was high, close to 100%, at low stimulus velocities (Fig. 5a), even when one eye was occluded (Fig. 7a). Values were comparable to the "standing condition" in pigeons (Maurice et al. 2006). Gain in barn owls was larger than in actively standing pigeons in the same set-up (Türke et al. 1996), which suggests high OCR-reactivity in barn owls. In particular, vestibular self-stimulation during head rotation did not seem to interfere with the OCR up to 30 deg/s, but may have contributed to a drop in gain towards high-velocity stimuli, so that the range of effective velocities was narrower in the OCR of owls than in the OKR of macaques (Distler et al. 1999). We know of no behavioral study on the vestibulocollic reflex in barn owls.
Closed-loop gain as defined in Eq. (1) showed a wide distribution, with some gains being higher than 100% (Fig. 4). Closed-loop gains larger than 100% are not expected in a simple feedback system, because they suggest a reversal of the sign of the retinal-slip speed. They thus deserve discussion. Gains > 100% were also reported in other studies (e.g. Wallman and Velez 1985; Gioanni and Vidal 2012). One factor that has to be taken into account when interpreting this seeming over-compensation of the wide-field visual stimulus is the arbitrary definition of gain. In Eq. (1), we compared head and drum velocity; however, the owl's optokinetic system may be driven by a correlation mechanism to extract pattern motion, similar to pigeons (Türke et al. 1996). Since this mechanism does not extract exact drum velocity, but a signal that depends on the spatial structure and contrast of the pattern, higher harmonics in particular may change the perceived velocity and may result in faster head rotation. Another aspect is the frequent eccentric head position of the owls in our setup. Since the frontal orientation of the eyes restricts the field of view in this species to about 190° (Knudsen 1982), an eccentric head position will lead to distortions of the perceived stimulus depending on the distance to the frontal wall of the drum and the orientation of the owl's head. A further aspect is that the closed-loop gain as defined in Eq. (1) does not reflect the internal processing: an "internal signal" that adds to the reflexive head movement (like a command variable in control theory) could alter the closed-loop gain of the system, and, thus, also result in gains > 100%.
Further, the responses of the owl itself bear features that interfere with the definition in Eq. (1). Independently of the owl's head position at the onset of a slow-phase segment, a mandatory additional eccentricity occurs during the rotation in the slow-phase segment, because, for anatomical reasons, the head of the barn owl always translates while it rotates (Ohayon et al. 2006; Krings et al. 2017). Furthermore, eye movements [maximum 3° in the horizontal direction (Du Lac and Knudsen 1990)] may change gains for slow-phase amplitudes. However, if the owls behaved like pigeons, eye movements would not be expected to contribute much to the gains (Gioanni 1988; Haque and Dickman 2004). Finally, part of the gains > 100% may also be due to noise, both in the owls' behavior and in the reconstruction. In summary, many factors that we did not control might influence the perceived stimulus velocity and lead to gains > 100%. However, gains larger than 100% need not signify a retinal-slip speed in the opposite direction.
### Optocollic, optokinetic, and optomotor responses in other animals
Wide-field movement is a very strong stimulus that elicits compensatory eye or head rotations in practically all animals that possess an elaborated visual sense. While most animals show a response to a moving visual wide-field stimulus, there are major variations between species (for reviews see Huang and Neuhauss 2008; Masseck and Hoffmann 2009). With binocular stimulation, gains typically approach 100%, at least at moderate velocities. The responses elicited by monocular stimulation, however, vary considerably. For reasons of simplicity, we sort the monocular responses into three categories: (1) In many vertebrates the optomotor response to stimulation in the N–T direction is practically absent or of very low gain (for reviews see Huang and Neuhauss 2008; Masseck and Hoffmann 2009). (2) A reaction to stimulation in the N–T direction is observed, but it is much weaker than that occurring to stimulation in the T–N direction (factor T–N/N–T > 1.2; e.g. rabbit: Collewijn 1969; pigeon: Gioanni 1988; chicken: Wallman and Velez 1985, Fig. 8b; cat: Schweigart and Hoffmann 1988, Fig. 8b; mice: Kretschmer et al. 2017). (3) The reactions in both stimulation directions are equivalent, as in humans (Fig. 8b; van den Berg and Collewijn 1988) and macaques (Fig. 8b; Distler et al. 1999). The data on the barn owl presented here put this species between the second and third category, as a symmetric OCR was observed for low velocities, while weak asymmetry occurred for middle-range velocities (Fig. 8).
The reason for the differences between species has been a matter of much debate (e.g. Huang and Neuhauss 2008; Masseck and Hoffmann 2009). Masseck and Hoffmann (2009) considered several hypotheses like frontal orientation of the eyes, decussation pattern of retinal fibers, foveation, eye position and resulting binocular overlap, lifestyle and degree of independence of eye movement. The conclusion of their analysis was that "no universally valid theory can be suggested for all vertebrate classes to explain symmetry versus asymmetry of monocular OKR". Even if there is no unifying theory, arguments for one or the other hypothesis may be advanced. Our results add some pieces of information to the data. Barn owls have frontally oriented eyes and a large binocular overlap (Willigen et al. 1998; Nieder and Wagner 2001) but no fovea (Oehme 1961), an almost total decussation of retinal fibers at the midbrain level, but a fusion of the information from the two eyes through the supraoptic chiasm in the forebrain (Karten et al. 1973). Moreover, barn owls are predators with a specialization for sound localization, but use visual information whenever possible (Harmening and Wagner 2011; Wagner et al. 2013), and they possess a coupled accommodation but an independent pupillary reflex (Schaeffel and Wagner 1992). Thus, the data from barn owls presented here seem to complicate rather than resolve the implications of the data available from other species. Nevertheless, the fact that owls have binocular vision and a symmetrical horizontal OCR for at least low velocities supports, in our view, the argument that a symmetrical rotational OCR is a feature of animals with frontally placed eyes. However, as pointed out above, this is not an argument that can be used in a causal sense for every case of symmetric responses, because some lateral-eyed animals also show a more or less symmetric response (Masseck and Hoffmann 2009). It would also be interesting to study the vertical OCR in owls and find out whether it is asymmetric as in many frontal-eyed animals, including humans (van den Berg and Collewijn 1988). If we restrict our consideration to birds, most lateral-eyed species exhibit an asymmetric response. An exception may be hummingbirds (Goller and Altshuler 2014; Gaede et al. 2016; Goller et al. 2019). Hummingbirds use optic flow to control their delicate motion when feeding. There is a uniform distribution of direction-sensitive cells in the nucleus lentiformis (Gaede et al. 2016), suggesting that the OCR may be symmetric. However, to the best of our knowledge, this has not been measured.
### Outlook
We present here basic data on the OCR of adult barn owls and show that the OCR of owls is phenomenologically much closer to the OKR of primates than to the OCR of its closer relatives among birds (Fig. 8). Many more data are necessary to substantiate this claim. For example, we have not trained the owls, and, thus, head and body movements affected the responses. Due to the frequent movements, we could not discriminate between early and late OCR components. It might also be interesting to study whether a "dynamic fixation" or "look"-OCR can be elicited and under which conditions this might be evoked. In mammals, the optokinetic response is driven by a subcortical network that is influenced by inputs from the visual cortex (Grasse et al. 1984; Wallman 1993; Distler et al. 2002). The neuronal circuit underlying the OCR in owls is not well known. We have some preliminary data demonstrating a bilateral projection from the visual Wulst to several midbrain and diencephalic nuclei (Wirth and Wagner 2019), but more data are necessary to unravel the neuronal circuit or to show whether response properties of optomotor neurons in barn owls are similar to those in frontal-eyed mammals, as Wylie et al. (1994) demonstrated for saw-whet owls. Moreover, in primates, the symmetry is not present in very young babies, but develops with age (Distler et al. 1999). We shall present data on the development of OCR in baby barn owls separately (Wagner et al., in preparation).
## Abbreviations
ccw:
Counterclockwise
cw:
Clockwise
deg:
Degrees
N–T:
Nasal to temporal
OCR:
Optocollic response
OKR:
Optokinetic response
OMR:
Optomotor response
T–N:
Temporal to nasal
## References
1. Agarwal M, Ulmer JL, Chandra T, Klein AP, Mark LP, Mohan S (2016) Imaging correlates of neural control of ocular movements. Eur Radiol 26:2193–2205. https://doi.org/10.1007/s00330-015-4004-9
2. Ariel M (1997) Open loop optokinetic responses of the turtle. Vis Res 37:925–933. https://doi.org/10.1016/s0042-6989(96)00229-5
3. Barnatan Y, Tomsic D, Sztarker J (2019) Unidirectional optomotor responses and eye dominance in two species of crabs. Front Physiol 10:586. https://doi.org/10.3389/fphys.2019.00586
4. Borst A, Haag J, Reiff DF (2010) Fly motion vision. Annu Rev Neurosci 33:49–70. https://doi.org/10.1146/annurev-neuro-060909-153155
5. Carpenter RHS (1988) Movements of the eyes, 2nd edn. Pion, London
6. Cohen B, Matsuo V, Raphan T (1977) Quantitative analysis of the velocity characteristics of optokinetic nystagmus and optokinetic afternystagmus. J Physiol (London) 270:321–344
7. Collewijn H (1969) Optokinetic eye movements in the rabbit: input-output relations. Vis Res 9:117–132
8. Dieringer N, Precht W (1982) Compensatory head and eye movements in the frog and their contribution to stabilization of gaze. Exp Brain Res 47:394–406
9. Dieterich M, Müller-Schunk S, Stephan T, Bense S, Seelos K, Yousry TA (2009) Functional magnetic resonance imaging activations of cortical eye fields during saccades, smooth pursuit, and optokinetic nystagmus. Ann N Y Acad Sci 1164:282–292. https://doi.org/10.1111/j.1749-6632.2008.03718.x
10. Distler C, Mustari MJ, Hoffmann KP (2002) Cortical projections to the nucleus of the optic tract and dorsal terminal nucleus and to the dorsolateral pontine nucleus in macaques: a dual retrograde tracing study. J Comp Neurol 444:144–158. https://doi.org/10.1002/cne.10127
11. Distler C, Vital-Durand F, Korte R, Korbmacher H, Hoffmann KP (1999) Development of the optokinetic system in macaque monkeys. Vis Res 39:3909–3919
12. Du Lac S, Knudsen EI (1990) Neural maps of head movement vector and speed in the optic tectum of the barn owl. J Neurophysiol 63:131–146
13. Easter SS (1972) Pursuit eye movements in Goldfish (Carassius auratus). Vis Res 12:673–688
14. Franz-Odendaal TA, Krings M (2019) A heterochronic shift in skeletal development in the barn owl (Tyto furcata): a description of the ocular skeleton and tubular eye shape formation. Dev Dyn 248:671–678. https://doi.org/10.1002/dvdy.65
15. Gaede AH, Goller B, Lam JPM, Wylie DR, Altshuler DL (2016) Neurons responsive to global visual motion have unique tuning properties in hummingbirds. Curr Biol 26:279–285. https://doi.org/10.1016/j.cub.2016.11.041
16. Gioanni H (1988) Stabilizing gaze reflexes in the pigeon (Columba livia): I. Horizontal and vertical optokinetic eye (OKN) and head (OCR) reflexes. Exp Brain Res 69:567–582
17. Gioanni H, Vidal PP (2012) Possible cues driving context-specific adaptation of optocollic reflex in pigeons (Columba livia). J Neurophysiol 107:704–717
18. Gioanni H, Rey J, Villalobos J, Bouyer JJ, Gioanni Y (1981) Optokinetic nystagmus in the pigeon (Columba livia) I. Study in monocular and binocular vision. Exp Brain Res 44:362–370
19. Goller B, Altshuler DL (2014) Hummingbirds control hovering flight by stabilizing visual motion. Proc Nat Acad Sci USA 111:18375–18380
20. Goller B, Fellows TK, Dakin R, Tyrrell L, Fernandez-Juricic E, Altshuler DL (2019) Spatial and temporal resolution of the visual system of the Anna’s Hummingbird (Calypte anna) relative to other birds. Physiol Biochem Zool 92:481–495. https://doi.org/10.1086/705124
21. Grasse KL, Cynader MS, Douglas RM (1984) Alterations in response properties in the lateral and dorsal terminal nuclei of the cat accessory optic system following visual cortex lesions. Exp Brain Res 55:69–80. https://doi.org/10.1007/BF00240499
22. Haque A, Dickman JD (2004) Vestibular gaze stabilization: different behavioral strategies for arboreal and terrestrial avians. J Neurophysiol 93:1165–1173. https://doi.org/10.1152/jn.00966.2004
23. Harmening W, Wagner H (2011) From optics to attention: visual perception in barn owls. J Comp Physiol A 197:1931–1942
24. Hupfeld D, Distler C, Hoffmann KP (2007) Deficits of visual motion perception and optokinetic nystagmus after posterior suprasylvian lesions in the ferret (Mustela putorius furo). Exp Brain Res 182:509–523. https://doi.org/10.1007/s00221-007-1009-x
25. Huang YY, Neuhauss S (2008) The optokinetic response in zebrafish and its applications. Front Biosci 13:1899–1916. https://doi.org/10.2741/2810
26. Iwaniuk AN, Heesy CP, Hall MI, Wylie DR (2008) Relative Wulst volume is correlated with orbit orientation and binocular visual field in birds. J Comp Physiol A 194:267–282. https://doi.org/10.1007/s00359-007-0304-0
27. Karten HJ, Hodos W, Nauta WJH, Revzin AM (1973) Neural connections of the “visual wulst” of the avian telencephalon. Experimental studies in the pigeon (Columba livia) and owl (Speotyto cunicularia). J Comp Neurol 150:253–278
28. Knudsen EI (1982) Auditory and visual maps of space in the optic tectum of the owl. J Neurosci 2:1177–1194. https://doi.org/10.1523/JNeurosci.02-09-01177.1982
29. Kretschmer F, Tariq M, Chatila W, Wu B, Badea TC (2017) Comparison of optomotor and optokinetic reflexes in mice. J Neurophysiol 118:300–316. https://doi.org/10.1152/jn.00055.2017
30. Krings M, Nyakatura JA, Boumans MLLM, Fischer MS, Wagner H (2017) Barn owls maximize head rotations by a combination of yawing and rolling in functionally diverse regions of the neck. J Anat 231:12–22. https://doi.org/10.1111/joa.12616
31. Lappe M, Pekel M, Hoffmann KP (1998) Optokinetic eye movements elicited by radial optic flow in the macaque monkey. J Neurophysiol 79:1461–1480. https://doi.org/10.1152/jn.1998.79.3.1461
32. Lappi O, Pekkanen J, Rinkkala P, Tuhkanen S, Tuononen A, Virtanen JP (2020) Humans use optokinetic eye movements to track waypoints for steering. Sci Rep 10:4175. https://doi.org/10.1038/s41598-020-60531-3
33. Masseck OA, Hoffmann KP (2009) Comparative neurobiology of the optokinetic reflex. Ann N Y Acad Sci 1164:430–439. https://doi.org/10.1111/j.1749-6632.2009.03854
34. Masseck OA, Förster S, Hoffmann KP (2010) Sensitivity of the goldfish motion detection system revealed by incoherent random dot stimuli: comparison of behavioural and neuronal data. PLoS ONE 5:e9461
35. Masseck OA, Rödl B, Hoffmann KP (2008) The optokinetic reaction in foveate and afoveate geckos. Vis Res 48:765–772
36. Maurice M, Gioanni H, Abourachid A (2006) Influence of the behavioural context on the optocollic reflex (OCR) in pigeons (Columba livia). J Exp Biol 209:292–301
37. Nalbach HO (1989) Three temporal frequency channels constitute the dynamics of the optokinetic system of the crab, Carcinus maenas (L.). Biol Cybern 61:59–70
38. Nalbach HO (1992) Translational head movements of pigeons in response to a rotating pattern: characteristics and tool to analyse mechanisms underlying detection of rotational and translational optic flow. Exp Brain Res 92:27–38
39. Naumann EA, Fitzgerald JE, Dunn TW, Rihel J, Sompolinsky H, Engert F (2016) From whole-brain data to functional circuit models: the zebrafish optomotor response. Cell 167:947–960. https://doi.org/10.1016/j.cell.2016.10.019
40. Netser S, Ohayon S, Gutfreund Y (2010) Multiple manifestations of microstimulation in the optic tectum: eye movements, pupil dilations, and sensory priming. J Neurophysiol 104:108–118. https://doi.org/10.1152/jn.01142.2009
41. Nieder A, Wagner H (2000) Horizontal-disparity tuning of neurons in the visual forebrain of the behaving barn owl. J Neurophysiol 83:2967–2979
42. Nieder A, Wagner H (2001) Hierarchical processing of horizontal-disparity information in the visual forebrain of behaving owls. J Neurosci 21:4514–4522
43. Nieder A, Wagener L, Rinnert P (2020) A neural correlate of sensory consciousness in a corvid bird. Science 369:1626–1629. https://doi.org/10.1126/science.abb1447
44. Oehme H (1961) Vergleichend-histologische Untersuchungen an der Retina von Eulen. Zool Jb, Abt Anat u Ontog 79:439–478
45. Ohayon S, van der Willigen RF, Wagner H, Katsman I, Rivlin E (2006) On the barn owl’s visual pre-attack behavior: I. Structure of head movements and motion patterns. J Comp Physiol A 192:927–940
46. Orlowski J, Ben-Shahar O, Wagner H (2018) Visual search in barn owls: Task difficulty and saccadic behavior. J vis 8:4. https://doi.org/10.1167/18.1.4
47. Sandeman DC, Erber J, Kien J (1975) Optokinetic eye movements in the crab, Carcinus maenas. I. Eye torque. J Comp Physiol 101:243–258
48. Schaeffel F, Wagner H (1992) Barn owls have symmetrical accommodation in both eyes, but independent pupillary responses to light. Vis Res 32:1149–1155
49. Schweigart G, Hoffmann KP (1988) Optokinetic eye and head movement in the unrestrained cat. Beh Brain Res 31:121–130
50. Shawyer C (1998) The barn owl. Arlequin Press, Chelmsford, Essex
51. Stacho M, Herold C, Rook N, Wagner H, Axer M, Amunts K, Güntürkün O (2020) A cortex-like canonical circuit in the avian forebrain. Science 369:eabc5534. https://doi.org/10.1126/science.abc5534
52. Steinbach MJ, Money KE (1973) Eye movements of the owl. Vision Res 13:889–891
53. Tauber ES, Atkin A (1968) Optomotor responses to monocular stimulation: relation to visual system organization. Science 160:1365–1367
54. Türke W, Nalbach HO, Kirschfeld K (1996) Visually elicited head rotation in pigeons. Vis Res 36:3329–3337
55. Van den Berg AV, Collewijn H (1988) Directional asymmetries of human optokinetic nystagmus. Exp Brain Res 70:597–604
56. Van der Willigen R, Frost BJ, Wagner H (1998) Stereoscopic depth perception in the owl. NeuroReport 9:1233–1237
57. Van der Willigen RF, Frost B, Wagner H (2002) Depth generalization from stereo to motion parallax in the owl. J Comp Physiol A 187:997–1007
58. Van der Willigen RF, Frost BJ, Wagner H (2003) How owls structure visual information. Anim Cogn 6:39–55
59. Wagner H (1993) Sound localization deficits induced by lesions in the barn owl’s auditory space map. J Neurosci 13:371–386
60. Wagner H, Kettler L, Orlowski J, Tellers P (2013) Neuroethology of prey capture in the barn owl (Tyto alba L.). J Physiol (Paris) 107:51–61
61. Wallman J (1993) Subcortical optokinetic mechanisms. In: Miles FA, Wallman J (eds) Visual motion and its role in the stabilization of gaze. Elsevier, Amsterdam, pp 321–342
62. Wallman J, Velez J (1985) Directional asymmetries of optokinetic nystagmus: developmental changes and relation to the accessory optic system and to the vestibular system. J Neurosci 5:317–329
63. Wirth MC, Wagner H (2019) Projections of the hyperpallium in the barn owl (Tyto alba pratincola). Supplement Neuroforum 25: Göttingen Meeting of the German Neuroscience Society 2019, T16–4D.
64. Wylie DR, Shaver SW, Frost BJ (1994) The visual response properties of neurons in the nucleus of the basal optic root of the northern Saw-whet owl (Aegolius acadicus). Brain Behav Evol 43:15–25. https://doi.org/10.1159/000113620
65. Zahar Y, Levi-Ari T, Wagner H, Gutfreund Y (2018) Behavioral evidence and neural correlates of perceptual grouping by motion in the barn owl. J Neurosci 38:6653–6664. https://doi.org/10.1523/JNeurosci.0174-18.2018
## Acknowledgements
This work would not have been possible without the expert support from the institute's workshop. The experiments would not have been possible without the support of Kuno Kirschfeld. Reinhard Feiler, Wolfram Türke and Gerlinde Lenz were always open for questions regarding programming. We also thank Wolf Harmening, Klaus-Peter Hoffmann and Kuno Kirschfeld for advice and encouragement during the preparation of the manuscript.
## Funding
Open Access funding enabled and organized by Projekt DEAL.
## Author information
### Corresponding author
Correspondence to Hermann Wagner.
## Ethics declarations
### Conflict of interests
The authors declare that they have no competing interests.
### Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Supplementary Information
Below is the link to the electronic supplementary material.
Supplementary file5 (AVI 8802 kb)
Wagner, H., Pappe, I. & Nalbach, HO. Optocollic responses in adult barn owls (Tyto furcata). J Comp Physiol A (2021). https://doi.org/10.1007/s00359-021-01524-z
### Keywords

• Nystagmus
• Optokinetic
• Optomotor
https://indico.cern.ch/event/1109611/contributions/4789859/ | # 10th Edition of the Large Hadron Collider Physics Conference
May 16 – 20, 2022
Europe/Zurich timezone
## Precision measurements of the weak mixing angle and the W boson mass
May 16, 2022, 3:15 PM (17 min)
### Speaker
Mika Anton Vesterinen (University of Warwick (GB))
LHCb+ATLAS+CMS
### Presentation materials
LHCP_mW_sin2theta_vesterinen.pdf
https://www.physicsforums.com/threads/explanation-of-superfluidity-in-he-4.586414/ | # Explanation of superfluidity in He-4
1. Mar 12, 2012
### sam_bell
Hi. I was just reading the explanation of superfluidity in He-4 (from the beginning of QFT Methods in Statistical Physics by Abrikosov et al.). There is something I don't understand. At finite temperatures there is a "gas of excitations", which they take to be moving at an average velocity v relative to the stationary liquid. They then derive that the quasi-momentum of this gas (per unit volume) is P = (const.) v. They claim this constant represents a mass and therefore there is mass transfer and that this part of the liquid is "normal". The rest of the mass is taken to be in the ground-state superfluid. OK, my question: If we are talking about *quasi-*momentum, how can we be sure that there is really mass transfer? After all, a single quasi-particle has quasi-momentum, but this doesn't correspond to mass transfer as a drift of He-4 atoms.
I suppose this is related to diffraction experiments, where the deflection of a photon obeys a conservation law written in terms of quasi-momentum.
2. Mar 13, 2012
### DrDu
I would say that the quasi-momentum and the true momentum coincide in this case.
A liquid does not break translation invariance whence momentum is well defined.
3. Mar 13, 2012
### sam_bell
There is still a consistency here that I can't follow. For simplicity, I imagine the case of a linear chain of oscillators. In this case the total momentum operator is P = sum(i = 1..N, P(i)), where P(i) is the momentum operator of the ith body in the chain. Expanding P(i) in terms of normal modes gives P = sum(n = 1..N, sum(all k, f(k) (a(k) exp(inb) - a(k)* exp(-inb)))), where f(k) ~ 1/sqrt(energy) and b is the periodicity of the lattice. This doesn't look like the crystal momentum operator P = sum(all k, k a*(k) a(k)). Nevertheless, since b --> 0, if we excite a phonon of crystal momentum hk, then the external environment loses "real" momentum hk. Alternately, this means the linear chain gains a "real" momentum hk. But calculating <k|P|k> = 0 because none of the P(i) conserves phonon number. In going from |0> to |k> the real momentum of the chain didn't change?
4. Mar 14, 2012
### DrDu
Stop! That's not correct. There should be a k in the exponents.
The sum over n then gives a delta function in k and only the k=0 components are left.
Either in a or in a^* the k should read -k.
Of course in true harmonic oscillator eigenstates the expectation of momentum always vanishes.
However in the limit k=0 (and omega=0!), coherent states with unsharp number of quanta become alternative true eigenstates and have non-vanishing momentum. You may replace a(0) by its expectation value on these states. (You are not forced to do so. A state with fixed number of quanta would correspond to a macroscopic superposition of states with opposite momenta.)
Due to the f(k) factor it diverges (an infinitely long moving chain will have an infinite momentum) and you should divide by sqrt(L) to calculate the finite momentum per length. Note the strong analogy to your previous thread. Here, we have an example of how a finite momentum density breaks symmetry (Galilean symmetry).
I am not totally sure how the crystal momentum enters. I think we need to take coupling to the lattice into account to describe states with like momentum but unlike velocity.
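For reference, the delta-function step mentioned above can be written out explicitly (a sketch with one conventional normalization; sign and phase conventions differ between textbooks):

$$P=\sum_{n=1}^{N}P(n),\qquad\sum_{n=1}^{N}e^{\pm ikna}=N\,\delta_{k,0},$$

so every $k\neq 0$ term cancels in the sum over sites and only the $k=0$ contribution survives. For an acoustic chain $\omega_{k=0}=0$, so the $k=0$ mode is not a harmonic oscillator at all but the free center-of-mass coordinate, whose momentum is the total momentum of the chain; this is consistent with the coherent-state discussion above.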
https://mathoverflow.net/questions/205821/around-vop%C4%9Bnka-accessible-category-with-small-full-discrete-subcategories-of-ar | # Around Vopěnka: Accessible category with small full discrete subcategories of arbitrary size?
I believe the model-theoretic version of the question is: is there a theory in finitary first-order logic which has, for each cardinal $\lambda$, a set $C_\lambda$ of $\lambda$-many models, such that if $M,N \in C_\lambda$ then there is no elementary embedding from $M$ to $N$ or vice versa (in ZFC)?
One statement of the large cardinal axiom Vopěnka's Principle is that no accessible category has a full subcategory which is both large and discrete.
Adámek and Rosický point out (Remark 6.2(2)) that for any cardinal $\lambda$, it's trivial to come up (in ZFC) with an accessible category with a full discrete subcategory with $\lambda$-many objects. They use the example of the theory $\mathbf{Rel}_\lambda$, with $\lambda$-many unary relation symbols, and the set of objects $A_i$, each carried by the one-point set, where $A_i$ has just the $i$th relation turned "on". Here the accessible category in question is allowed to vary with the cardinal $\lambda$.
But, in ZFC, is there one single accessible category $\mathcal{K}$ which has a full, discrete subcategory $\mathcal{K}_\lambda \subset \mathcal{K}$ of cardinality $\lambda$, for each cardinal $\lambda$?
Of course, the union $\cup_\lambda \mathcal{K}_\lambda$ is large, so (assuming that Vopěnka's principle is consistent over ZFC), if such a category exists, then one won't be able to show that $\cup_\lambda \mathcal{K}_\lambda$ is discrete. But it could be that all of its morphisms go from objects of one $\mathcal{K}_\lambda$ to another $\mathcal{K}_{\lambda'}$, and the $\mathcal{K}_\lambda$'s themselves might all be discrete.
Bonus question: in your example, are there clearly morphisms between objects in different $\mathcal{K}_\lambda$'s, or is your example a candidate to become a counterexample to Vopenka in some models (in which connection, this question may be relevant)?
• I've accepted Joel's answer for sheer elegance. Thanks to Jiří, too -- it's important to know that examples also flow naturally from the existing theory of accessible categories. – Tim Campion May 6 '15 at 17:22
• Another point is that $\mathsf{Gph}$ apparently also embeds fully into familiar categories like Fields (and hence Rings), Groups, and Partial Orders, so these categories also have this property. Actually, the linked answer (of Joel's, ironically) discusses this with elementary embedding as the morphisms; I'm not sure whether the same goes for homomorphisms as morphisms. – Tim Campion May 6 '15 at 20:09
The answer is yes. One can do this with pointed directed graphs.
Specifically, for any infinite cardinal $\lambda$, let $C_\lambda$ consist of all structures of the form $\langle V_{\lambda+2},{\in},\beta\rangle$, where $\beta<\lambda$ and $V_{\lambda+2}$ consists of the sets of von Neumann rank at most $\lambda+1$. So this is a pointed directed graph. Since there are $\lambda$ many choices for the constant $\beta$, we have $\lambda$ many models here.
But there can be no elementary embedding between any two such structures, since any such embedding would give rise to a nontrivial elementary embedding $j:V_{\lambda+2}\to V_{\lambda+2}$, which is impossible by the Kunen inconsistency.
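To spell the key step out (a gloss added here, not part of the original answer): an elementary embedding $j\colon\langle V_{\lambda+2},{\in},\beta\rangle\to\langle V_{\lambda+2},{\in},\beta'\rangle$ with $\beta\neq\beta'$ must respect the constant symbol, so $j(\beta)=\beta'\neq\beta$; forgetting the constant therefore yields a nontrivial elementary embedding $j:V_{\lambda+2}\to V_{\lambda+2}$, which is what the Kunen inconsistency forbids.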
• Wow! I was not expecting an answer from set theory, despite the set-theoretical nature of the question. Now let me advertise my ignorance and ask: in ZFC are there any elementary embeddings $V_{\lambda+2} \to V_{\lambda'+2}$ for $\lambda < \lambda'$? Or is this an example that might become a counterexample to Vopenka in some models? – Tim Campion May 6 '15 at 5:32
• @TimCampion Thanks! The existence of an elementary embedding $j:V_{\lambda+2}\to V_{\lambda'+2}$ is exactly connected with the extendible cardinals, which are fairly high in the large cardinal hierarchy, and so you cannot prove that in ZFC alone. Indeed, this is how one can show that Vopenka's principle has large cardinal strength, by considering the class of all structures $\langle V_\theta,\in\rangle$, and then getting elementary embeddings $j:V_\theta\to V_\lambda$. – Joel David Hamkins May 6 '15 at 10:12
• With a similar idea as in my post, you can get rid of the points, by considering $\langle V_{\lambda+\beta},\in\rangle$ for $2\leq \beta<\lambda$. There can be no elementary embedding $j:V_{\lambda+\alpha}\to V_{\lambda+\beta}$ for such distinct $\alpha,\beta<\lambda$. – Joel David Hamkins May 6 '15 at 10:42
• One can also get rid of the need for a special point simply by adding a self-edge on that point; it will be the only one. – Joel David Hamkins May 7 '15 at 1:51
Another, less elegant, but not so set-theoretical positive answer using graphs: Any accessible category has an accessible full embedding to graphs.
• I have a question about this construction. Thinking model-theoretically, the idea here, I assume, is to code a given first-order structure $M$ with a graph $G_M$ in such a way that an elementary embedding $j:G_M\to G_N$ amounts to an elementary embedding $j^*:M\to N$. This is a common construction, and one can easily do this in the case where the signature of the structure $M$ is small, such as when it is countable.... – Joel David Hamkins May 6 '15 at 12:34
• ...But when the language of $M$ is enormous, then in order to code all the various relations on $M$ and ensure that maps between the $G_M$ actually respect those relations, it seems to me that one needs at bottom to produce large families of graphs that are rigid in the sense of having no elementary self-embeddings or elementary embeddings between them. In this case, I worry that the construction is circular, since it seems that we have come around to the very same question of the post again.... – Joel David Hamkins May 6 '15 at 12:34
• Or can one undertake the coding-into-graphs construction with first-order models in an arbitrary language without already having examples as in the original question? – Joel David Hamkins May 6 '15 at 12:34
• I see, it is as I suspected, since that fact already answers the question, if you look at pointed graphs like that, on a set of size $\lambda$. Is the proof of that fact non-set-theoretic? I can prove it using set-theoretic ideas... – Joel David Hamkins May 6 '15 at 13:57
• A certain graph is built on $\lambda$ by viewing it as the ordinal $\lambda+2$. Some care is needed to ensure that cardinals with cofinality $\omega$ and certain sequences approaching them are fixed by any endomorphism $f$, and then by considering the sup of the iterates of the critical point of $f$ (which has cofinality $\omega$) a contradiction is obtained. The appearance of $\lambda+2$ and an iteration argument (which, I gather from wikipedia appears in the proof of Kunen inconsistency) suggest that maybe this is a similar idea to that used in Kunen inconsistency? – Tim Campion May 6 '15 at 15:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8790632486343384, "perplexity": 323.80465804408107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146123.78/warc/CC-MAIN-20200225141345-20200225171345-00394.warc.gz"} |
http://www.gradesaver.com/charlie-and-the-chocolate-factory/q-and-a/how-did-mike-teavee-appear-on-the-television-242241 | # How did Mike Teavee appear on the television?
This question is from chapter 27, "Mike Teavee Is Sent by Television". | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9650156497955322, "perplexity": 13612.940618587767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189686.31/warc/CC-MAIN-20170322212949-00227-ip-10-233-31-227.ec2.internal.warc.gz"}
https://healthnwealth1.wordpress.com/tag/breast/ | DCIS Breast Cancer
What is DCIS breast cancer? DCIS (Ductal carcinoma in situ) breast cancer is a non-invasive breast cancer. Ductal carcinoma refers to a cancerous growth that initiates in the milk duct and the surrounding breast tissue that covers the internal organs. The term in situ … Read more
http://bit.ly/2tMdOF3
Triple Negative Breast Cancer
What Is Triple Negative Breast Cancer? Triple-negative breast cancer is a sub-group of breast cancer. The name of the condition denotes a characteristic feature of the diagnosis. Three important breast cancer features, which are common in other subtypes of breast … Read more
http://bit.ly/2tbpSMw
HER2 Positive Breast Cancer
What is HER2 positive breast cancer? The excessive presence of HER2 (human epidermal growth factor) protein receptors in breast tissue leads to malignant growth, clinically termed HER2 positive breast cancer. In general, breast tissues contain HER2 (human … Read more
http://bit.ly/2ql3xv1
Inflammatory Breast Cancer – Pictures, Symptoms, Signs, Survival Rate, Prognosis
What is Inflammatory Breast Cancer? Inflammatory Breast Cancer is one of the rarest forms of breast cancer. The cancer grows rapidly and also changes the skin significantly, making the affected breast red and swollen. A very distinct cancer … Read more
http://bit.ly/2fhNtYU | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8633338809013367, "perplexity": 14838.987640699203}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813691.14/warc/CC-MAIN-20180221163021-20180221183021-00158.warc.gz"} |
http://math.stackexchange.com/questions/92223/uniqueness-of-solution-to-1st-order-pdes | # Uniqueness of solution to 1st order pdes
I am given a 1st order partial differential equation $y{\partial \psi\over\partial x}+x{\partial \psi\over\partial y}=0$ subject to the boundary condition $\psi(x,0)=\exp(-x^2)$. I have found that a solution is $\psi(x,y)=\exp(y^2-x^2)$. But I am asked when the solution is unique. Could someone please explain how to answer this? Thanks.
It would be useful if you could tell us how you came up with your solution. Perhaps by the method of characteristics? – Jeff Dec 18 '11 at 17:21
Also, please include the domain of your PDE. Is it all of $\mathbb{R}^2$? The answer to your question will depend on the domain. – Jeff Dec 18 '11 at 18:13
Consider the parametric curves $x = A e^t + B e^{-t}$, $y = A e^t - B e^{-t}$, which satisfy $x' = y$, $y' = x$. Along such a curve any solution $\psi$ must be constant, according to the chain rule: $$\frac{d}{dt} \psi(x(t),y(t)) = \psi_x \frac{dx}{dt} + \psi_y \frac{dy}{dt} = 0$$ Now the curve intersects $y=0$ if and only if $A$ and $B$ are either both positive (i.e. $x > |y|$), both negative ($x < -|y|$), or both $0$ ($x=y=0$). So a boundary condition on $y=0$ produces uniqueness only in the regions $|x| \ge |y|$. In the region $|y| > |x|$ the solution is not unique. For example, you could add $f(y^2 - x^2)$ to $\psi(x,y)$ where $f$ is differentiable with $f(s) = 0$ for $s \le 0$.
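As a quick sanity check (my addition, not part of the original post), one can verify symbolically with sympy both that $\psi$ solves the problem and that any differentiable $f(y^2-x^2)$ is annihilated by the operator, which is the source of the non-uniqueness:

    import sympy as sp

    x, y = sp.symbols('x y')
    psi = sp.exp(y**2 - x**2)

    # psi solves y*psi_x + x*psi_y = 0 and matches the data psi(x,0) = exp(-x^2)
    print(sp.simplify(y*sp.diff(psi, x) + x*sp.diff(psi, y)))  # 0
    print(psi.subs(y, 0))                                      # exp(-x**2)

    # any differentiable f(y^2 - x^2) is also annihilated by the operator,
    # which is where the non-uniqueness in the region |y| > |x| comes from
    f = sp.Function('f')
    g = f(y**2 - x**2)
    print(sp.simplify(y*sp.diff(g, x) + x*sp.diff(g, y)))      # 0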
how do we know that if the characteristic curve intersects the initial curve then there is a unique solution?. i.e., how do we know that there is a unique solution in this case iff the curve intersects $y = 0$? – user27182 Mar 26 '13 at 18:26
Since the solution must be constant on the curve, the value on $y=0$ determines the solution on the curve if the curve intersects $y=0$. Note by the way that (except for the trivial case $A=B=0$) the curve will intersect $y=0$ at only one point $t = -\ln(A/B)/2$. – Robert Israel Mar 29 '13 at 23:03
Uniqueness can be addressed in the following way. Let us suppose that there exists another solution $\phi(x,y)$ such that $\phi(x,0)=e^{-x^2}$; then, your equation being linear, $\phi'(x,y)=a\psi(x,y)+b\phi(x,y)$, with two arbitrary coefficients $a$ and $b$, is also a solution. The boundary condition will give $a+b=1$. So, unless you give another condition on $y$, your solution cannot be unique.
Of course, you also have the other condition in the given problem. From the fact that $\psi(x,0)=e^{-x^2}$, and from the other fundamental result that your equation has the general solution (characteristic method cited in the comments) $\psi(x,y)=\psi(x^2-y^2)$, it is enough for you to set $\psi(0,0)=1$ and your solution is unique.
Finally, I would like to point out the simple way the solution the OP proposed can be found. One has to search for a solution in the form
$$\psi(x,y)=\phi(y)e^{-x^2}$$
with $\phi(0)=1$ and the solution is immediately obtained, consistent with the characteristic method.
You are assuming the existence of another solution $\phi \neq \psi$. – Jeff Dec 18 '11 at 17:06
If I want to prove uniqueness I have to guess that another solution does exist and then, to prove that this is not independent from the other. This is standard matter and I do not see the reason to downvote unless my argument is wrong. – Jon Dec 18 '11 at 17:32
Maybe I am mistaken, but your argument seems to be: assume another solution exists, try to prove it is linearly dependent with the first solution, and then if you can't succeed, it must imply non-uniqueness? This is wrong. – Jeff Dec 18 '11 at 18:09
I think you have some difficulties with foundations en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem. Here the idea is taken from the fixed point uniqueness. I just repeat: Before to downvote, think! – Jon Dec 18 '11 at 18:14
Your solution is still wrong, and has nothing to do with the Picard Lindelof Theorem. Try to think about why your solution would imply non-uniqueness (and remember this is not an ODE). – Jeff Dec 18 '11 at 18:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9600587487220764, "perplexity": 155.87460011193133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464050919950.49/warc/CC-MAIN-20160524004839-00102-ip-10-185-217-139.ec2.internal.warc.gz"} |
http://koreascience.or.kr/search.page?keywords=integral+abutment | • Title, Summary, Keyword: integral abutment
### Behavior of Pile Foundation of Skewed Plate Girder Bridge with Integral Abutment (일체교대식 판형교의 사각변화에 따른 파일기초 거동분석)
• 서혜선;이성우
• Proceedings of the Computational Structural Engineering Institute Conference / pp.389-396 / 1998
• One solution to prevent deterioration due to expansion joint and to extend lifetime of short span bridges, is jointless integral abutment bridge. To understand behavior of pile foundation of skewed plate girder bridge with integral abutment, finite element analysis was performed for the model of different skew angle from 90° to 50°. Comparison of stresses at pile and abutment was made for each case. It is found that effect of temperature change is major factor to influence the behavior of skewed integral abutment bridge.
### Analytical Investigation on the Behavior of Simple Span Integral Abutment Bridge (단경간 일체식교대 교량의 거동에 대한 해석적 연구)
• 홍정희;정재호;박종면;유성근;윤순종
• Proceedings of the Computational Structural Engineering Institute Conference / pp.99-106 / 2002
• This paper presents an analytical investigation on the behavior of simple span integral abutment bridge. An integral abutment bridge is a simple span or multiple span continuous deck type bridge having the deck integral with the abutment wall. Although the temperature variation and earth pressure are the major attributor to the total stress in integral abutment bridge, the superstructure has been designed by modeling it as a simple or continuous beam In order to investigate the effect of temperature change and earth pressure on the superstructure of integral bridge, the simple span integral bridge is modeled as a plane frame element. Performing frame analysis, the variations of bending moment and axial force of superstructure due to the various loading combination are investigated with respect to the flexural rigidity of piles, and the bending moment and axial force obtained by frame analysis are compared with the maximum bending moment obtained by conventional design method and initial prestressing force respectively.
### Spring Modeling for the Passive Earth Pressure Acting on the Integral Abutment Bridge (일체식교대 교량에 작용하는 수동토압의 스프링 모델링)
• 정재호;홍정희;유성근;윤순종
• Proceedings of the Computational Structural Engineering Institute Conference / pp.420-427 / 2002
• In this paper, a simplified structural spring model of integral abutment bridge is proposed to account for the passive earth pressure due to the change of temperature. The magnitude of earth pressure acting on integral bridge abutment mainly depends on the amount and shape of displacement of abutment according to the thermal expansion of superstructure. The proposed simplified model is developed based on the possible displacement shape of integral abutment bridge. Performing the direct stiffness method, the analysis is done by using the proposed method and the results of new model is compared with those of conventional design approach. The study show that it may be possible to obtain more rational and economical design values for integral abutment bridge by applying the proposed design method.
### A Study on Utilization and Application of Integral Abutment PC Beam Bridge (PC Beam을 이용한 일체식교대 교량의 실용화 연구)
• 이재혁;박종면;유성근;정경자
• Proceedings of the Korea Concrete Institute Conference / pp.769-776 / 1997
• An integral abutment bridge refers to a jointless bridge with capped-pile stub type abutment. It has been used for more than 50 years in the United States and Canada. This paper briefly describes design and utilization of the PC beam integral abutment bridge which is adapted for Korea and shows its excellent performance compared with that of a jointed bridge. This study introduces the characteristics of structural behaviors of the integral bridge and also mentions about its attributes and limitations.
### Analysis of Structural Behavior for Abutment Integral Approach Slabs (교대일체식 접속슬래브의 구조적 거동 분석)
• Nam, Young-Kug;Lee, Heung-Su
• Proceedings of the Korea Concrete Institute Conference / pp.1-2 / 2009
• Abutment Integral Approach Slabs are proposed to improve road traveling performance of bridge approaches and evaluated analysis application possibility of approach slabs in abutment integral approach slabs as comparing between Abutment Integral Approach Slabs and approach slabs in general bridges.
### Retrofitting of steel pile-abutment connections of integral bridges using CFRP
• Mirrezaei, Seyed Saeed;Barghian, Majid;Ghaffarzadeh, Hossein;Farzam, Masood
• Structural Engineering and Mechanics / v.59 no.2 / pp.209-226 / 2016
• Integral bridges are typically designed with flexible foundations that include one row of piles. The construction of integral bridges solves difficulties due to the maintenance of expansion joints and bearings during serviceability. It causes integral bridges to become more economic comparing with conventional bridges. Research has been focused not only to enhance the seismic performance of newly designed bridges, but also to develop retrofit strategies for existing ones. The local performance of the pile to abutment connection will have a major effect on the performance of the structure and the embedment length of pile inside the abutment has a key role to provide shear and flexural resistance of pile-abutment connections. In this paper, a simple method was developed to estimate the initial value of embedment length of the pile for retrofitting of specimens. Four specimens of pile-abutment connections were constructed with different embedment lengths of pile inside the abutment to evaluate their performances. The results of the experimentation in conjunction with numerical and analytical studies showed that retrofitting pile-abutment connections with CFRP wraps increased the strength of the connection up to 86%. Also, designed connections with the proposed method had sufficient resistance against lateral load.
### Fragility evaluation of integral abutment bridge including soil structure interaction effects
• Sunil, J.C.;Atop, Lego;Anjan, Dutta
• Earthquakes and Structures / v.20 no.2 / pp.201-213 / 2021
• Contrast to the conventional jointed bridge design, integral abutment bridges (IABs) offer some marked advantages like reduced maintenance and enhanced service life of the structure due to elimination of joints in the deck and monolithic construction practices. However, the force transfer mechanism during seismic and thermal movements is a topic of interest owing to rigid connection between superstructure and substructure (piers and abutments). This study attempts to model an existing IAB by including the abutment backfill interaction and soil-foundation interaction effects using Winkler foundation assumption to determine its seismic response. Keeping in view the significance of abutment behavior in an IAB, the probability of damage to the abutment is evaluated using fragility function. Incremental Dynamic Analysis (IDA) approach is used in this regard, wherein, nonlinear time history analyses are conducted on the numerical model using a selected suite of ground motions with increasing intensities until damage to abutment. It is concluded from the fragility analysis results that for a MCE level earthquake in the location of integral bridge, the probability of complete damage to the abutment is minimal.
### Experimental Study on Behaviors of Pile-Abutment Joint in Integral Abutment Bridge (일체식 교대 교량의 파일-교대 연결부 거동에 관한 실험적 연구)
• Kim, Sang-Hyo;Yoon, Ji-Hyun;Ahn, Jin-Hee;Lee, Sang-Woo
• Journal of The Korean Society of Civil Engineers / v.29 no.6A / pp.651-659 / 2009
• This study dealt with the behavior of pile-abutment joints in integral abutment bridges. Two types of pile-abutment joints were proposed to strengthen its rigid action. One was fabricated with transverse rebars which penetrated the H-pile in the abutment. The other was composed of stud shear connectors on the flanges of the H-pile. Three half scaled pile-abutment joint specimens were fabricated and loading tests were performed to evaluate the behavior of proposed joints. The results showed that the initial stiffness in elastic region of all specimens was sufficient to be applied for the integral abutment bridges. However, the performances of the proposed joints were shown to be more effective in rigid action compared to the joints types suggested by the Integral Bridge Design Guideline. The results from stiffness, strength, rotation and crack propagation tests supported this matter.
### A Parametric Study on the Behavior of Integral Abutment PSC Beam Bridge (일체식교대 PSC빔 교량의 거동에 관한 매개변수 해석)
• 홍정희;정재호;유성근;박종면;윤순종
• Proceedings of the Computational Structural Engineering Institute Conference / pp.412-419 / 2002
• This paper presents a parametric study on the behavior of integral abutment PSC beam bridge. An integral abutment bridge is a simple span or multiple span continuous deck type bridge having the deck integral with the abutment wall. The rational structural model and design load combinations accounting for each construction stage are proposed. It can be used for defining the effect of earth pressure and temperature change in the design process including for determining maximum flexural responses. The bending moment at each response location due to the design load combination is investigated according to the change of flexural rigidity of piles and abutment height. The flexural responses of proposed model are computed for the cases of applying the Rankine passive earth pressure and the earth pressure based on the soil-structure interaction respectively, and the results are discussed.
### Effect of superstructure-abutment continuity on live load distribution in integral abutment bridge girders
• Dicleli, Murat;Erhan, Semih
• Structural Engineering and Mechanics / v.34 no.5 / pp.635-662 / 2010
• In this study, the effect of superstructure-abutment continuity on the distribution of live load effects among the girders of integral abutment bridges (IABs) is investigated. For this purpose, two and three dimensional finite element models of several single-span, symmetrical integral abutment and simply supported (jointed) bridges (SSBs) are built and analyzed. In the analyses, the effect of various superstructure properties such as span length, number of design lanes, girder size and spacing as well as slab thickness are considered. The results from the analyses of two and three dimensional finite element models are then used to calculate the live load distribution factors (LLDFs) for the girders of IABs and SSBs as a function of the above mentioned parameters. LLDFs for the girders are also calculated using the AASHTO formulae developed for SSBs. Comparison of the analyses results revealed that the superstructure-abutment continuity in IABs produces a better distribution of live load effects among the girders compared to SSBs. The continuity effects become more predominant for short span IABs. Furthermore, AASHTO live load distribution formulae developed for SSBs lead to conservative estimates of live load girder moments and shears for short-span IABs. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9191621541976929, "perplexity": 5200.450084718143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066981.0/warc/CC-MAIN-20210416130611-20210416160611-00436.warc.gz"} |
http://mathhelpforum.com/algebra/132698-solving-equation-print.html | Solving an Equation
• Mar 8th 2010, 09:36 AM
StephenPoco
Solving an Equation
Simply solve:
fg(x) = 3x^(2)-6x+17
Last question on me stoopid Maths paper. Can't really think of what to do.
Silly me. I think I know how to do it now.
Thanks.
• Mar 8th 2010, 10:00 AM
e^(i*pi)
Quote:
Originally Posted by StephenPoco
Simply solve:
fg(x) = 3x^(2)-6x+17
Last question on me stoopid Maths paper. Can't really think of what to do.
Silly me. I think I know how to do it now.
Thanks.
Solve for what?
If $fg(x)=0$ use the quadratic formula
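For instance (my addition, not from the thread), applying the quadratic formula to $3x^2-6x+17=0$ gives a negative discriminant, so the roots are complex:

    import cmath

    # roots of 3x^2 - 6x + 17 = 0 via the quadratic formula
    a, b, c = 3, -6, 17
    disc = b**2 - 4*a*c                      # 36 - 204 = -168 < 0
    r1 = (-b + cmath.sqrt(disc)) / (2*a)
    r2 = (-b - cmath.sqrt(disc)) / (2*a)
    print(r1, r2)                            # 1 ± 2.1602...j, i.e. 1 ± i*sqrt(42)/3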
• Mar 8th 2010, 10:01 AM
masters
Quote:
Originally Posted by StephenPoco
Simply solve:
fg(x) = 3x^(2)-6x+17
Last question on me stoopid Maths paper. Can't really think of what to do.
Silly me. I think I know how to do it now.
Thanks.
Hi StephenPoco,
I'm not sure what fg(x) is, but I would set the expression = 0 and use the quadratic formula to solve.
$3x^2-6x+17=0$
$x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9692550301551819, "perplexity": 3962.426272994844}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988722653.96/warc/CC-MAIN-20161020183842-00075-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://mulanpa.sourceforge.net/ | ## Purpose
### If you have something like this:
    TYPE modell_CommentsBehind__function(TYPE Parameter)  // this is the modell of a function
                                                          // with comments behind of the commands
    {
      TYPE   ReturnValue;                              //comment ReturnValue
      TYPE_A LokalVariable_1 = Value1;                 //variable-comment 1
      TYPE_A LokalVariable_2;                          //variable-comment 2
      TYPE_Z LokalVariable_N = ValueN;                 //variable-comment N
      operation_1(& LokalVariable_1, Parameter);       //operation-comment 1
      LokalVariable_2 = operation_2(LokalVariable_N);  //operation-comment 2
      if(LokalVariable_1 == LokalVariable_2)           //statement-comment for true
      {
        ReturnValue = DefaultValue;                    //operation-comment for true
      }
      else                                             //statement-comment for false
      {
        while(ParameterStatment)                       //loop-comment
          ReturnValue = operation_3();                 //operation-comment 3
      };
      return(ReturnValue);                             //comment return-line
    }
### MuLanPa will generate something like this:
    TYPE modell_CommentsBehind__function ( TYPE Parameter )  // this is the modell of a function
                                                             // with comments behind of the commands
    {
      TYPE   ReturnValue ;                                //comment ReturnValue
      TYPE_A LokalVariable_1 = Value1 ;                   //variable-comment 1
      TYPE_A LokalVariable_2 ;                            //variable-comment 2
      TYPE_Z LokalVariable_N = ValueN ;                   //variable-comment N
      operation_1 ( & LokalVariable_1 , Parameter ) ;     //operation-comment 1
      LokalVariable_2 = operation_2 ( LokalVariable_N ) ; //operation-comment 2
      if ( LokalVariable_1 == LokalVariable_2 )           //statement-comment for true
      {
        ReturnValue = DefaultValue ;                      //operation-comment for true
      }
      else                                                //statement-comment for false
      {
        while ( ParameterStatment )                       //loop-comment
          ReturnValue = operation_3 ( ) ;                 //operation-comment 3
      } ;
      return ( ReturnValue ) ;                            //comment return-line
    }
## Introduction
MuLanPa stands for Multi-Language-Parser and is the name of the project. But the binary was renamed into abc2xml to make clearer what it does. The name of the tool stands for the conversion of a text written in a language (abc) with a defined syntax into (2) xml. The binary abc2xml is designed as a source-code analysing program that generates xml-files which represent the algorithm and data-structure of the source. It has its own source-parser system that is configured by an external grammar-description. Thus it may be used for several programming-languages. Additional configurations of abc2xml are placed in an xml-file. The output of abc2xml should be used as input for tools like Moritz, a structogram-generator for Doxygen. But it may also be used as a data-base for other tools like project-browsers for code-editors or other code-structure viewers. MuLanPa comes along with a second binary, xml2abc, that will be used for documentation purposes only. Both binaries are console- or terminal-applications which have to be started via the command-line. With configuration-files you may control the general style of the output.
abc2xml is designed as one tool in a chain of tools. It may also be used as a stand-alone application, but this is not the native use-case. abc2xml itself has its own source-parser and creates xml-files, but no diagrams or other graphics. Thus the output of abc2xml should be post-processed by another tool.
To parse the sources written in a programming-language or a special script-language, abc2xml needs the description of this language in the form of a grammar-file. abc2xml reads this grammar first to learn how to analyse the sources or scripts. This grammar itself has to be offered in a special notation as a text-file or as part of the base-configuration. The native destination-tool is the binary of MuLanPa called xml2abc, which reads the xml-files and creates several script-files to describe the diagrams as a base for graphical output via a script-interpreting tool.
The output-format of abc2xml is xml, so a common html-browser may be used to view its content. But since this is not a very comfortable solution, it is better to use an additional tool that is able to interpret the content of the abc2xml-output and/or that generates an output that shows the user what he really wants to see.
To control the whole process of generating the xml-scripts with abc2xml, several files are used:
• First of all the user starts a batch-file or terminal-script which calls abc2xml and perhaps additional tools to interpret the output of abc2xml sequentially.
• abc2xml needs a configuration file in xml-format. The work of abc2xml will be done in a sequence of processes which can be configured to adapt them for other languages.
At the moment the following processes are used:
• Directives (preprocessor): to solve compiler-switches in languages like C/C++
• Context: to split the source-text into comments and active code
• Comment: to analyse the comments if they contain special commands
• Line: to analyse line changes and cut out signs of logical line-connections
• Source: to parse the prepared source-text
• Merge: to create one xml-file that contains the results of all other processes
The grammar-texts to teach abc2xml the language of the sources are analysed by a special process that knows the notation of the grammar, that builds the process-parsers, and that is able to create a special terminal-output as information for the user. Every process has its own parser that has to be defined in the form of grammar-texts. These texts are written in a notation that can be read by the notation-process. In the future more than one notation may be used.
• It is possible to split the configurations of the tool into a user-file and some more detailed configurations.
In the distribution you will find the folder cfg where the configurations for the common user are placed. For example, you place here the information about the files you wish to analyse.
The folder LangPack contains, for every supported programming language, its own sub-folder used as the location for the detailed configuration. In parallel to the detailed xml-configurations you will find in addition some other files. The a2x-files contain the grammar description of the programming-language, and the x2a-files the script-snippets used to assemble the syntax-diagrams for documentation purposes.
• Additionally used tools may need their own configuration-files. To learn more about them it is recommended to use their manuals.
## Controlling MuLanPa via Shell- or Batch-Scripts
This is a possible algorithm for the controlling terminal-script
This bash-script shows how to use MuLanPa together with the program doxygen (www.doxygen.org).
In this file only command-line parameters are defined. Other important adjustments are made in the configuration-files.
void scrHTMim ( void)
parameter-settings
    XMLPATH             = "./xml/"                   path of the xml-files created by abc2xml
    DESTINATIONPATH_DOT = "./dot/"                   path for the syntax-diagrams created by xml2abc
    CONFIGURATION_XML   = "./cfg/abc2xml_cfg.xml"    configuration of abc2xml to transfer the source-files to xml-files
    CONFIGURATION_DOT   = "./cfg/grm2abc_cfg.xml"    configuration of xml2abc to transfer the used grammar-rules into dot-based syntax-diagrams
    MULANPAPATH         = "./bin/"                   location of MuLanPa
delete old outputs of doxygen and MuLanPa
    removeDirectoryContent ( XMLPATH, "*.xml")               old output of abc2xml
    removeDirectoryContent ( DESTINATIONPATH_DOT, "*.dt")    old output of xml2abc
doxygen and MuLanPa in action
    ( MULANPAPATH ) abc2xml CFCONFIGURATION_XML    run abc2xml to generate xml-files which contain the algorithm-structure
    ( MULANPAPATH ) xml2abc CFCONFIGURATION_DOT    run xml2abc to generate dot-based syntax diagrams
    doxygen ( "./doxygen/Doxyfile_html")           run doxygen to generate a documentation that contains the syntax-diagrams of the used programming-language
(note, this diagram contains no valid script-text, it should only describe the sequence-steps)
This is a possible content of the controlling terminal-script:
    rem ****************************************************************************
    rem * Example-batch to demonstrate how to use MuLanPa
    rem *
    rem * This batch-file shows how to use MuLanPa together with the program doxygen
    rem * (www.doxygen.org) to create dot-based syntax diagrams.
    rem *
    rem * In this file only command-line parameters are defined. Other important
    rem * adjustments are made in the configuration-files. They may be also
    rem * responsible for problems.
    rem * Use the commented pause-commands if you are looking for sources of
    rem * trouble.
    rem ****************************************************************************

    rem ****************************************************************************
    rem parameter-settings
    rem ****************************************************************************
    rem path of the xml-sources generated by abc2xml
    set DESTINATION_XML=.\xml\
    rem Pause
    rem path for the dot-based syntax diagrams generated by xml2abc
    set DESTINATION_DOT=.\dot\
    rem Pause
    rem abc2xml-configuration to generate xml files
    set CONFIGURATION_XML=.\cfg\abc2xml_cfg.xml
    rem Pause
    rem xml2abc-configuration to generate syntax diagrams
    set CONFIGURATION_DOT=.\cfg\grm2abc_cfg.xml
    rem Pause
    rem location of MuLanPa
    set MULANPAPATH=.\bin\
    rem Pause

    rem ****************************************************************************
    rem delete old outputs of doxygen and MuLanPa
    rem ****************************************************************************
    rem old output-sources of abc2xml
    del %DESTINATION_XML%*.xml
    rem Pause
    rem outputs of xml2abc
    del %DESTINATION_DOT%*.dt
    del %DESTINATION_DOT%*.html
    rem Pause

    rem ****************************************************************************
    rem doxygen and MuLanPa in action
    rem ****************************************************************************
    rem run abc2xml to transfer the source-files into xml-files
    %MULANPAPATH%abc2xml CF%CONFIGURATION_XML%
    rem >>log.txt
    rem Pause
    rem run xml2abc to generate files which contain dot-based syntax diagrams
    %MULANPAPATH%xml2abc CF%CONFIGURATION_DOT%
    rem >>log.txt
    rem Pause
    rem run doxygen to generate a documentation that contains the syntax-diagrams
    rem of the used programming-language
    doxygen.exe .\doxygen .\cfg\Doxyfile_html
    rem pause
## How It Works
Parsing a source written in a programming language like C/C++ is not really trivial. Furthermore, it is one goal of MuLanPa to support different programming languages. Thus the following section is only a rough overview. More details can be found in the documentation you can download from the SourceForge project.
### Grammar
Parsing the sources or scripts is one of the basic steps for abc2xml to convert the input into the output. But this process itself depends on the language of the input-text. Therefore it is necessary to configure abc2xml by defining the grammar of the source- or script-language. This grammar itself has to be defined in a special file with the extension .a2x, or as part of the xml-configuration, written in a special notation that abc2xml knows. At the moment there is only one kind of notation that can be used. It is based on the Spirit parser-library that is used to implement the parsing-process. It is planned to implement other notations also, like ebnf or regex.
This is an example to describe the construction of names:
    /* Spirit 1.8.5 Grammar-Example */
    ENDMARKER = "ENDMARKER";
    INDENT    = "INDENT";
    DEDENT    = "DEDENT";
    NEWLINE   = "NEWLINE";

    NON_NAME  = ENDMARKER | INDENT | DEDENT | NEWLINE | KEYWORD;

    KEYWORD   = "and"      | "del"     | "from"   | "not"   | "while"
              | "as"       | "elif"    | "global" | "or"    | "with"
              | "assert"   | "else"    | "if"     | "pass"  | "yield"
              | "break"    | "except"  | "import" | "print"
              | "class"    | "exec"    | "in"     | "raise"
              | "continue" | "finally" | "is"     | "return"
              | "def"      | "for"     | "lambda" | "try";

    NAME      = (   (range_p('a','z') | range_p('A','Z') | '_')
                 >> *(range_p('a','z') | range_p('A','Z') | range_p('0','9') | '_')
                ) - NON_NAME;
The current implementation of the grammar knows several basic parsers and operators to describe the structure of a non-context-sensitive language. Every combination of basic parsers and operators is also a parser. This combined parser can be used as a sub-block in a more complex combination that also describes a parser, or it can define, as a parser-rule, a completely new parser. Every parser defined in a parser-rule has a name or identifier, a string-literal that represents this parser as an element in other parser-rules.
### Scanner and Parser
#### 1. Scanning the Text to parse for Tokens
A token is the base-element of a language. It may be a single character or a sequence of characters. In some languages some special properties of text-parts are also defined as tokens, for example the indentation or dedentation of a line. But since abc2xml uses special processes to insert special strings for these non-textual tokens, a parser of abc2xml does not have to deal with non-textual tokens.
Basic token-parsers take a look at every character and compare it to their individual search-pattern. If the current character fits the search-pattern it will be noticed by the parser. If the current character is not permitted by a search-pattern, the corresponding parser drops its current part-result. If the current character is the last part of the token described by the search-pattern and the parser now has the complete token, this is a so-called parser-hit and the found token is now an input for a higher-level expression-parser. It's a little bit like playing bingo. The scanner calls out the content of the text to analyse, character by character. If a token-parser finds the character on its rule-card as allowed, it will be checked. But if the current character is forbidden, the token-parser will be excluded (which is not the case if you are playing bingo). If one token is found, the next token will be searched in the same manner, and so the scanner and the token-parsers together transform a sequence of characters into a sequence of tokens.
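To make this concrete, here is a minimal scanner sketch in Python (an illustration only, not MuLanPa code): each token class has its own search-pattern (the NAME pattern mirrors the grammar-rule shown above), and every completed match is a parser-hit that is handed on as one token.

    import re

    # toy token classes with their search-patterns
    TOKEN_PATTERNS = [
        ("NAME",   r"[A-Za-z_][A-Za-z0-9_]*"),
        ("NUMBER", r"[0-9]+"),
        ("OP",     r"[+\-*/=();]"),
        ("SKIP",   r"\s+"),                  # whitespace is dropped
    ]
    master = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_PATTERNS))

    def tokens(text):
        """Transform a sequence of characters into a sequence of tokens."""
        for m in master.finditer(text):
            if m.lastgroup != "SKIP":
                yield (m.lastgroup, m.group())

    print(list(tokens("x1 = foo(42) + 7;")))
    # [('NAME', 'x1'), ('OP', '='), ('NAME', 'foo'), ('OP', '('),
    #  ('NUMBER', '42'), ('OP', ')'), ('OP', '+'), ('NUMBER', '7'), ('OP', ';')]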
#### 2. Combine Tokens to single Expressions
An expression (in the sense of this chapter of the documentation) is a language-element that contains a token or a combination of tokens.
Every expression-parser is constructed as a combination of token-parsers, where one token-parser may be used by several expression-parsers. In the end the parsing of expressions works similarly to the parsing of tokens. As long as the current token fits the search-pattern of the expression-parser, the parsing goes on until the last token is reached or a forbidden token stops the work of the parser. If a parser has a hit, its result may be the input for a more complex expression that will be searched by another parser.
#### 3. Create the Parser-Output from the Expressions
The result of a successful search will be stored as a parser-tree that reflects the structure of the used expressions, sub-expressions and tokens. Every parser uses this tree-structure to store each single result and gives it to its receiving parsers, which add it as a part-result to their own parser-tree if it fits the search-pattern.
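In the same illustrative spirit (again my sketch, not MuLanPa's implementation), a tiny recursive-descent parser that combines such tokens into expressions and stores every hit as a node of a growing parser-tree:

    # tiny recursive-descent parser over (kind, text) tokens:
    #   expr := term ('+' term)* ;  term := NUMBER | NAME | NAME '(' expr ')'
    def parse_expr(toks, i=0):
        node, i = parse_term(toks, i)
        while i < len(toks) and toks[i] == ("OP", "+"):
            rhs, i = parse_term(toks, i + 1)
            node = ("+", node, rhs)          # part-results grow into a tree
        return node, i

    def parse_term(toks, i):
        kind, text = toks[i]
        if kind == "NUMBER":
            return ("num", int(text)), i + 1
        if kind == "NAME" and i + 1 < len(toks) and toks[i + 1] == ("OP", "("):
            arg, j = parse_expr(toks, i + 2)
            return ("call", text, arg), j + 1   # j points at the closing ')'
        return ("name", text), i + 1

    toks = [("NAME", "foo"), ("OP", "("), ("NUMBER", "42"), ("OP", ")"),
            ("OP", "+"), ("NUMBER", "7")]
    print(parse_expr(toks)[0])
    # ('+', ('call', 'foo', ('num', 42)), ('num', 7))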
### Directive Process
Especially C and C++ sources contain not only parts written in the programming language itself but also parts in a different language, the preprocessor-directives. Simple preprocessor-commands can be treated like normal C/C++ commands, so the used C/C++ source-parser contains a grammar for the preprocessor-directives also. But it is always possible that compiler-switches contain source-snippets which cannot be parsed, since starting and/or ending parts are not part of the snippet. In these cases a special process has to be used to construct out of the original source a special one where compiler-switches with broken source-content are solved, so that the new source contains valid code only. Since not every programming language knows preprocessor-directives which may contain broken code-parts, this process has to be activated by using special configuration-parts. Currently this is only possible for C and C++.

The directive-process has its own parser that describes the directives of the preprocessor and the expressions used in the switch-directives. All other details of the source will be described as simple text-lines. Since the directive-process knows the core-flow, it is possible to try out whether the content of the switch-paths contains complete code that can be parsed by the core-flow. As user-output the directive-process generates an xml-file for every variant that contains the parsing and the information about the activity and parseability of each switch-path. As indirect output the source-variant will be assembled, and this will be handed over to the core-flow, which works with it like with a normal source.

The parsing-part of the directive-process works like the source-process, but with its own grammar. This grammar is split into two parts: the description of the directive-syntax and a detailed description of the switch-expressions. Once the source is parsed, the result contains a detailed description-tree of the expressions. It is possible for the user to define a set of constant-values for each variant he wants to analyse. During the evaluation the user-given constants will be used to decide which switch-path is active and which not. Additionally, the source-snippets inside of the switch-paths will be tested by using the core-flow. Configured by the user, those switches with parseable code may be kept in the source. While assembling the source-variant, inactive switch-paths will be commented out. Thus they are still part of the source, not as active code but as comments.
### Core-Flow Processes
Before a source or a script can be analysed, every process except the merge-process needs a parser. Every process used to analyse the sources or scripts has its own parser, and that is defined in an external text-file or as part of the xml-configuration. The notation-process is the only one with a built-in parser, since this process has to know how to analyse the grammar-texts. By analysing the grammar-rules for each other process, the notation-process creates their parsers. The merge-process needs no parser since it works with the output of the other processes.

After each process is configured by the config.xml and the parsers are created, the analysing starts for each source or script. Each process will save its results for each source or script in an extra xml-file in the destination-folder if needed. The first process is used to create context-depending part-sources. This ensures that each process gets no content that is invalid for its parser. For example, comments may occur everywhere in the original source. This makes it very difficult to define a parser that is able to deal with all possible combinations of active source-parts and comments. It is easier to cut out all comments beforehand and process them in an own sequence. Here the comments will be saved together with some position-information. In a parallel sequence the active part of the code will be analysed, where in a first step the line-changes will be analysed. This is necessary for languages like Python where the indentation-changes are used as tokens. After that, the rest of the source will be analysed.

After all processes have analysed the source or script, their result is split into different objects, since each process produces its own result-output. The merge-process builds out of this detail-data one additional result-output that includes the content of all process-outputs of the comment- and code-sequence. The merge-process tries to take care of the text-position of all parts and sorts its output so that it reflects the architecture of the original source or script.
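As a toy illustration of the context idea (my Python sketch, not MuLanPa code): split '//'-style line comments from the active code while keeping the position-information that the merge-process needs later.

    def split_context(source):
        """Separate '//' line comments from active code (toy version:
        ignores strings and block comments)."""
        code_lines, comments = [], []
        for lineno, line in enumerate(source.splitlines(), start=1):
            code, sep, comment = line.partition("//")
            code_lines.append(code.rstrip())
            if sep:
                comments.append((lineno, comment.strip()))  # keep the position
        return "\n".join(code_lines), comments

    code, comments = split_context("int x = 1; // counter\nx += 2;\n")
    print(code)      # the active code, ready for the line- and source-process
    print(comments)  # [(1, 'counter')] for the comment-sequence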
If you take a look at the download-section of MuLanPa you will find several files in the main-release of MuLanPa:
• ### MuLanPa_WIN32_YYYY_MM_DD.zip / MuLanPa_Linux_YYYY_MM_DD.zip
These are the distributions for Windows and Linux (SUSE Linux was used to build the Linux-version, but it's only a terminal-program, so it should work on other Linux-distributions also, I hope).
If you unzip the files you will get a directory that contains several sub-directories:
• bin: binaries of MuLanPa: abc2xml, the parser-tool to transfer the source-content into an xml parser-tree, and xml2abc, the generator-tool to create the syntax-diagram-describing scripts
• cfg: user-configuration of MuLanPa and Doxygen
• src: example sources
• xml: output-files of abc2xml, which are the input-files for xml2abc
• dot: dot-based syntax diagrams generated by xml2abc
• AddTxt and picture: additional input-files for Doxygen to create the user documentation
• html and chm: user documentation created by Doxygen
The batch-file "xyz_create.bat" or the shell-script "xyz_create.sh" controls the generation of the files by MuLanPa for the programming-language xyz and the generation of a documentation by Doxygen .
• ### MuLanPa_UserProject_YYYY_MM_DD.zip
This archive contains only a subset of the folders and configuration-files available in the real distribution. But it can be used as a project-template. The idea behind it is to have the distribution only once in your system and to use several copies of the user-project folder for several source-projects.
The user-project contains templates of all necessary configuration-files commonly used to define the files to analyse and the basic behaviour of MuLanPa. Furthermore, it contains all folders necessary to store the results.
Thus the distribution itself contains no parts associated with a special source-project. But it contains all parts which are used for all projects in the same way.
• ### MuLanPa_UserDoku_xyz_YYYY_MM_DD.chm / MuLanPa_UserDoku_xyz_YYYY_MM_DD.zip
This is the user-documentation where the examples are written in the programming-language xyz. On Windows you may prefer the chm-file; for all other operating-systems use the zip-file, which contains the documentation in html-format.
This text-file contains a short introduction, the latest user information, some info to build it from the sources, and the change-history of MuLanPa.
• ### src_MuLanPa_YYYY_MM_DD.zip
The zip-file contains the source-files of xml2abc if you want to build MuLanPa by yourself. In addition you will find in the latest versions a project-file for the freeware IDE Code::Blocks. If you want to build the diagram-tool on your own as well, please download its source files at Moritz. The download-structure of this project is similar to the one of MuLanPa.
Note! Since both binaries are using the parser-library Spirit, which is part of the huge boost-package, you have to download boost separately. Once you have extracted boost, you have to correct the search-path inside the Code::Blocks project-file.
If you build abc2xml and xml2abc you will only get these binaries. Thus you also have to download one of the zip-distributions to get the base-version of the configuration-files. Without these files MuLanPa will not work.
(Note the postfix "_YYYY_MM_DD" is the date of creation in the format Year,Month,Day)
Some releases contain developer-documentation also. These are the results of using Doxygen and MuLanPa together to document the code of MuLanPa. Some release-steps make it necessary to redesign the code to make the sources less complex. In these cases no developer-documentation is added to the release, because it would be no good example. If you are interested in it, please download the developer-documentation of an older release or create the documentation by yourself. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5120348930358887, "perplexity": 3970.5099102121494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720468.71/warc/CC-MAIN-20161020183840-00445-ip-10-171-6-4.ec2.internal.warc.gz"}
http://nickjames.co.nz/equestrian-brands-zpqv/54-kg-to-lbs-7feb41 | 54 kg to lbs

The kilogram or kilogramme (symbol: kg) is the base unit of mass in the International System of Units (SI); a gram is 1/1000 of a kilogram. One pound (symbol: lb), the international avoirdupois pound, is legally defined as exactly 0.45359237 kilograms. The conversion factor 2.20462262 is the result of the division 1 / 0.45359237 (pound definition).

If M (kg) represents mass in kilograms and M (lb) represents mass in pounds, then the formula for converting kg to lbs is:

M (lb) = 2.204622621849 × M (kg)

For example, M (lb) = 2.204622621849 × 3 = 6.613867865546 lbs.

Step 1: Convert from kilograms to pounds by multiplying by this factor. Thus, for 54 kilograms in pounds we get 119.04962158 lbs (54 kg = 119.04962158 lbs); in the other direction, 54 pounds equal 24.49398798 kilograms (54 lbs = 24.49398798 kg), and 14.54 pounds equal 6.5952330598 kilograms.

Kg to stone and pounds (the table is truncated in the original):
40 kg: 6.3 st (6 st 4.2 lb)
40.5 kg: 6.38 st (6 st 5.3 lb)
41 kg: 6.46 st (6 st …)
It is also needed/We also want to point out that whole this article is devoted to a specific amount of kilograms - this is one kilogram. If you want more accurate results down to the decimals, you should try our 5.54 kg to lbs converter. How to convert 54.4 Pounds (lbs) to kilograms (kg). M (lb) = 2.204622621849 × M (kg) = 2.204622621849 × 3 = 6.613867865546 lbs. If M (kg) represents mass in kilograms and M (lb) represents mass in pounds, then the formula for converting kg to lbs is: M (lb) = 2.204622621849 × M (kg) Example. 54 kg are equal to 54 x 2.20462262 = 119.049622 pounds. The answer is 0.453592. Easily enter your kilogram weight and instantly get the result in pounds. How to convert 54 kilograms to pounds To convert 54 kg to pounds you have to multiply 54 x 2.20462, since 1 kg is 2.20462 lbs . kg to pounds kg to lb + oz. Q: How many Pounds in 1.54 Kilograms? While the emphasis here is on 14.54 kg, you can use other kilograms. Learn how to convert from lb to kg and what is the conversion factor as well as the conversion formula. 119.07. The answer is 119.050 . Thus, for 54.1 kilograms in pound we get 119.270083842 lbs. The kg to lbs conversion calculator is based on formulas which is not errorless. Kg Stones st and pounds; 40kg: 6.3st: 6st 4.2lb: 40.5kg: 6.38st: 6st 5.3lb: 41kg: 6.46st: 6st … If messing around with numbers and multiplying and dividing are not your thing, our 54.8 kg to lbs conversion chart can do it for you. 1 kilogram is equal to 2.204622621849 pounds or lbs and … One pound (symbol: lb), the international avoirdupois pound, is legally defined as exactly 0.45359237 kilograms. ›› Quick conversion chart of kg to lbs. 0.54 kg to lbsto convert 0.54 kilograms to pounds and find out how many pounds is 0.54 kg. The kilogram (kg) is the SI unit of mass. Step 2: Convert the decimal part of pounds to ounces An answer like "3.388 pounds" might not mean much to you because you may want to express the decimal part, which is in pounds, in ounces which is a smaller unit. 54 Pounds to Kilograms Conversion breakdown and explanation 54 lbs to kg conversion result above is displayed in three different forms: as a decimal (which could be rounded), in scientific notation (scientific form, standard index form or standard form in the United Kingdom) and as a fraction (exact result). 1 kg to lbs = 2.20462 lbs. 1 kilogram = 2.2 x pounds, so, 54.5 x 1 kilogram = 54.5 x 2.2 pounds (rounded), or. Approximation An approximate numerical result would be: fifty-four kilograms is about one hundred and nineteen point zero four pounds , or alternatively, a pound is about zero point zero one times fifty-four kilograms . Convert 54 Kilograms to Pounds To calculate 54 Kilograms to the corresponding value in Pounds, multiply the quantity in Kilograms by 2.2046226218488 (conversion factor). To convert 53 kg to lbs, multiply 53 by 2.205. But if you’re just looking for a rounded off figure, you can also use the 10.54 kg to lbs conversion chart above. Kilograms: Pounds (lb) = Detailed result here. We do not take the responsibility for errors caused by lbs into kg converter. Kilograms to Pounds Converter. 54 kg are equal to 54 x 2.20462262 = 119.049622 pounds. - 54 kg is equal to 119.05 pounds. kg to pounds kg to lb + oz. But if you’re just looking for a rounded off figure, you can also use the 54.8 kg to lbs conversion chart above. There are 16 lb 9 15/16 oz (ounces) in 7.54 kg. From people to cars to everyday items, kg is the standard. 54.6 kilograms equal 120.372395153 pounds (54.6kg = 120.372395153lbs). 
Use this page to learn how to convert between kilograms and pounds. 5.54 kg to lbs. 54.8 kilograms equal 120.813319677 pounds (54.8kg = 120.813319677lbs). The 54 kg in lbs formula is [lb] = 54 * 2.2046226218. Simply use our calculator above, or apply the formula to change the length 54.8 kg to lbs. Use this page to learn how to convert between kilograms and pounds. How many pounds in 5.54 Kilograms? Convert 0.54 kg to pounds. The factor 2.20462262 is the result from the division 1 / 0.45359237 (pound definition). … If you want more accurate results down to the decimals, you should try our 54.8 kg to lbs converter. Kilograms [kg] The kilogram , or kilogramme, is … Definition of kilogram. 54.00. It can also be expressed as: 54 kilograms is equal to 1 0.0083998587037037 pounds. Kilogram abbreviation: “kg”, Pound abbreviation: “lb. A kilogram (also kilograms and abbreviated as kg), is a unit of mass. One kilogram is a unit of masss (not weight) and equals approximately 2.2 pounds. 54.5 Kilograms to Pounds shows you how many pounds are equal to 54.5 kilograms as well as in other units such as grams, metric tons, milligrams, micrograms, stones and ounces. 54.5 kg to lbs to convert 54.5 kilograms to pounds and find out how many pounds is 54.5 kg. 10 kg to lbs = 22.04623 lbs. You can also press the arrow so you can select other weight units that you could convert. One pound equals 16 ounces exactly. How to convert 0.54 to pounds? A kilogram (also kilograms and abbreviated as kg), is a unit of mass. Thus, for 54 kilograms in pound we get 119.04962158 lbs. So, if you want to calculate how many pounds are 54 … or lbs”. Converting 54.8 kg to lb is easy. Convert 54 kg to pounds. To use this calculator, simply type the value in any box at left or at right. 5.54 kilograms or 5540 grams equals 12.21 pounds. In many parts of the world, kilogram is the unit used to measure weight and mass. 1 kilogram is equal to 2.2046226218488 lbs. To convert 54.1 kg to lbs multiply the mass in kilograms by 2.2046226218. Converting 54 lb to kg is easy. One pound (symbol: lb), the international avoirdupois pound, is legally defined as exactly 0.45359237 kilograms. Convert kg to lbs; 54 Kilograms to Pounds; Convert 54 Kilograms to Pounds. Step 2: Convert the decimal part of pounds to ounces. Formula for converting kilogram to ounces . An infographic chart is further down the page (60kg to 130kg).. Kg to stone, pounds ›› Quick conversion chart of kg to lbs. How many is 518.54 kilograms in pounds. Using our kilograms to stones and pounds converter you can get answers to questions like: - How many stones and pounds are in 54 kg? How to convert 54.4 Pounds (lbs) to kilograms (kg) Kilograms [kg] The kilogram, or kilogramme, is the base unit of weight in the Metric system. Do you want to know how much is 2.54 kg equal to lbs and how to convert 2.54 kg to lbs? Formula for converting kilogram to ounces 1 kilogram is equal to 2.204622621849 pounds or lbs and 1 pound is equal to 16 ounces or oz. This whole article is dedicated to kilogram to pound conversion - both theoretical and practical. 119.04962 Pounds (lb) Kilograms : The kilogram (or kilogramme, SI symbol: kg), also known as the kilo, is the fundamental unit of mass in the International System of Units. Kilograms to Pounds Conversions. Kilograms to Pounds Converter. More information from the unit converter. This method is easy, quick and reliable. lb. Here you go. 
One kilogram equals 2.20462262 pounds, to convert 54 kg to pounds we have to multiply the amount of kg by 2.20462262 to obtain amount in pounds. What is 54.5 kg in pounds? - 54 kg is equal to 119.05 pounds. If messing around with numbers and multiplying and dividing are not your thing, our 5.54 kg to lbs conversion chart can do it for you. This prototype is a platinum-iridium international prototype kept at the International Bureau of Weights and Measures. Use these charts to quickly look up common weight conversions for kilograms to stone and pounds. Using our kilograms to stones and pounds converter you can get answers to questions like: - How many stones and pounds are in 54.54 kg? The final formula to convert 54 Kg to Lb is: [Kg] = 54 / 0.453592 = 119.05 Kilogram is the SI unit of mass. 54.5 kilograms = 119.9 pounds. 54.01. Here you go. 54.02. How many lbs and oz in 7.54 kilograms? 3.54 kg to lbs. How to convert 14.54 lbs to kg? Kilograms to Pounds Converter. To calculate a kilogram value to the corresponding value in pounds, just multiply the quantity in kilogram by 2.20462262 (the conversion factor). 0.54 kilograms are equal to 0.24494 pound. 10.54 pounds it is equal 4.7808635798 kilograms, so 10.54 lb is equal 4.7808635798 kgs. Kilograms: Pounds (lb) = Detailed result here. Type in your own numbers in the form to convert the units! lbm. If messing around with numbers and multiplying and dividing are not your thing, our 65.54 kg to lbs conversion chart can do it for you. A single kilogram is equal to 2.20 lbs. How to convert 54 kg to lb To calculate a value in kg to the corresponding value in lb, just multiply the quantity in kg by 2.2046226218488 (the conversion factor). 24.04 kg: 54 lb: 24.49 kg: 55 lb: 24.95 kg: 56 lb: 25.40 kg: 57 lb: 25.85 kg: 58 lb: 26.31 kg: 59 lb: 26.76 kg: Onces en Grammes; Grammes en Onces; Onces en Livres; Livres en Onces; Table de conversion métrique Application pour iPhone & Android Poids Température Longueur Superficie Volume Vitesse Temps Monnaie. 0.54 kg to lbs What is 0.54 kg in pounds? It is also needed/We also want to emphasize that all this article is devoted to a specific amount of kilograms - that is one kilogram. Until 20 May 2019, it remains defined by a platinum alloy cylinder, the International Prototype Kilogram (informally Le Grand K or IPK), manufactured in 1889, and carefully stored in Saint-Cloud, a suburb of Paris. 2.54 kg to lbs. Converting 54 kg to lb is easy. 2.54 kilograms = 5.588 pounds. How Heavy Is 54.4 Pounds in Kilograms? 1 lbs = 0.453592 kg. Thus, for 54.9 kilograms in pound we get 121.03378194 lbs. Remember that our calculator from lbs to kg or kg to lbs located on the site can sometimes show the wrong results. 518.54 kg to lbs. One kg is approximately equal to 2.20462262 pounds. One kilogram equals 2.20462262 pounds, to convert 54.7 kg to pounds we have to multiply the amount of kg by 2.20462262 to obtain amount in pounds. 54.7 kg to lbs conversion result above is displayed in three different forms: as a decimal (which could be rounded), in scientific notation (scientific form, standard index form or standard form in the United Kingdom) and as a fraction (exact result). 54 pounds equal 24.49398798 kilograms (54lbs = 24.49398798kg). 54 kg to lbs to convert 54 kilograms to pounds and find out how many pounds is 54 kg. The answer is 0.453592. 20 kg to lbs = 44.09245 lbs. Type in your own numbers in the form to convert the units! - 54.4 kilograms is equal to how many stones and pounds? 54.5kg to lbs. 
Whether you opt for a 54.4 kilograms to pounds conversion chart or a 54.4 kg to lbs converter, there is no questioning the need for them. Simply use our calculator above, or apply the formula to change the length 54.6 kg to lbs. The 54.1 kg in lbs formula is [lb] = 54.1 * 2.2046226218. 3870 Liters to Kilograms 181200 Liters to Pounds 1.8 Milligram to Grams 64 Pounds to Kilograms 385 Milliliters to Pounds 188.48 Pounds to Liters 147.15 Pounds to Liters 14.26 Pounds to Liters 76.32 Pounds to Liters 146.46 Pounds to Liters 256 Ounces to Pounds How many pounds in 3.54 Kilograms? Defined as being equal to the mass of the International Prototype Kilogram (IPK), that is almost exactly equal to the mass of one liter of water. To use this calculator, simply type the value in any box at left or at right. - 54.5 kg is equal to 120.15 pounds. Others Weight and Mass converter. Converting 54 lb to kg is easy. To convert 54 kg to lbs multiply the mass in kilograms by 2.2046226218. If you need to be super precise, you can use one kilogram = 2.2046226218488 pounds. 54 Kilograms (kg) =. 3.44 kilograms equals 7.58 pounds: 3.54 kilograms equals 7.80 pounds: 3.64 kilograms equals 8.02 pounds: 3.74 kilograms equals 8.25 pounds: 3.84 kilograms equals 8.47 pounds: 3.94 kilograms equals 8.69 pounds: 4.04 kilograms equals 8.91 pounds: 4.14 kilograms equals 9.13 pounds: 4.24 kilograms equals 9.35 pounds: 4.34 kilograms equals 9.57 pounds Kilograms: Pounds (lb) = Detailed result here. The kilogram (kg) is the SI unit of mass. Step 2: Convert the decimal part of pounds to ounces. Easily convert Kilograms to pounds, with formula, conversion chart, auto conversion to common weights, more 119.05. Use our free metric conversion tool to convert 54 lb (pounds) to kg (kilograms) KGtoLBS.com Convert kilograms into pounds quickly. Definition of kilogram. The answer is 3.395119 The kilogram (kg) is the SI unit of mass. If messing around with numbers and multiplying and dividing are not your thing, our 54.11 kg to lbs conversion chart can do it for you. Kilograms [kg] The kilogram , or kilogramme, is … How Many Pounds in a Kilogram? kg to pounds kg to lb + oz. If M (kg) represents mass in kilograms, M (lb) represents nass in pounds and M (oz) represents mass in ounces, then the formula for converting kg … So finally 54 kg = 119.04962157983 lbs. Supose you want to convert 54 kg into lb. Conclusion: 54 kg ≈ 119.0496204 lb Conversion in the opposite direction The inverse of the conversion factor is that 1 pound is equal to 0.0083998587037037 times 54 … 106.7 kg to stones and lbs Disclaimer While every effort is made to ensure the accuracy of the information provided on this website, neither this website nor its authors are responsible for any errors or omissions, or for the results obtained from the use of this information. 25 kg to lbs = 55.11557 lbs Here is the formula: Value in lb = value in kg × 2.2046226218488. Kilograms to Pounds Converter. What is a Kilogram? If we want to calculate how many Pounds are 54 Kilograms we have to multiply 54 by 100000000 and divide the product by 45359237. Simply use our calculator above, or apply the formula to change the length 54 kg to lbs. In this case we should multiply 54 Kilograms by 2.2046226218488 to get the equivalent result in Pounds: Kilograms: Pounds (lb) = Detailed result here. Using our kilograms to stones and pounds converter you can get answers to questions like: - How many stones and pounds are in 54.4 kg? The 54 kg in lbs formula is [lb] = 54 * 2.2046226218. 
kg to pounds kg to lb + oz. It is the approximate weight of a cube of water 10 centimeters on a side. Kilograms to Stone and Pounds Chart. (some results rounded) kg. - 54.54 kilograms is … But if you’re just looking for a rounded off figure, you can also use the 65.54 kg to lbs conversion chart above. More information from the unit converter. To convert 54 kg to lbs, multiply 54 by 2.205. Defined as being equal to the mass of the International Prototype Kilogram (IPK), that is almost exactly equal to the mass of one liter of water. What is 53 kg in pounds? 5 kg to lbs = 11.02311 lbs. What is 54 kg in pounds? In many parts of the world, kilogram is the unit used to measure weight and mass. But if you’re just looking for a rounded off figure, you can also use the 54.11 kg to lbs conversion chart above. 54 kg to lbs to convert 54 kilograms to pounds and find out how many pounds is 54 kg. It is equal to the mass of the international prototype of the kilogram. Once this is very close to 2.2 pounds, you will almost always … How many pounds in 1.54 Kilograms? To use this calculator, simply type the value in … What is 54 kg in pounds? 0.54 kg to lbs. 15 kg to lbs = 33.06934 lbs. To convert 54 kg to lbs, multiply 54 by 2.205. Use our calculator below to transform any kg or grams value in lbs and ounces. 54 Kilograms (kg) = 119.04962 Pounds (lb) Kilograms : The kilogram (or kilogramme, SI symbol: kg), also known as the kilo, is the fundamental unit of mass in the International System of Units. Q: How many Kilograms in 1 Pounds? 54.9 Kilogram Conversion Table Do you need to know how much is 54.28 kg equal to lbs and how to convert 54.28 kg to lbs? Kg to Lbs converter. 1 kilogram is equal to 1000 grams, and 1 gold bar is equivalent to 1 kg. 3.54 kilograms or 3540 grams equals 7.80 pounds. To convert 3 kilograms to pounds: M (kg) = 3 . So for 54 we have: (54 × 100000000) ÷ 45359237 = 5400000000 ÷ 45359237 = 119.04962157983 Pounds. It accepts fractional values. 54 Kilograms (kg) = 119.050 Pounds (lbs) 1 kg = 2.204623 lbs. - 0.54 kg is equal to 1.19 pounds. Simply use our calculator above, or apply the formula to change the length 54 lbs to kg. How many pounds in 518.54 kilograms. One kilogram equals 2.20462262 pounds, to convert 54 kg to pounds we have to multiply the amount of kg by 2.20462262 to obtain amount in pounds. One pound (symbol: lb), the international avoirdupois pound, is legally defined as exactly 0.45359237 kilograms. 0.54 kilogram or 540 grams equals 1.19 pounds. Q: How many Pounds in 54 Kilograms? 1.54 kilograms = 3.388 pounds. To convert 54 kg to lbs multiply the mass in kilograms by 2.2046226218. 1.54 kg to lbs. Convert 54.7 kg to pounds. Simply use our calculator above, or apply the formula to change the length 54 lbs to kg. 0.54 kilogram or 540 grams equals 1.19 pounds. From people to cars to everyday items, kg is the standard. 1 kilogram = 2.2 x pounds, so, 2.54 x 1 kilogram = 2.54 x 2.2 pounds (rounded), or. It is part of the Standard International (SI) System of Units. 54 KG (Kilograms) = 119.04962158 LBS (Pounds) Two Decimal Point Results 54 KG (Kilograms) is equal to 119.05 LBS (Pounds) $$36 kg*{2.2046 lbs \over 1 kg} = 79.123 lbs$$ The Kilogram. 54.7 kg are equal to 54.7 x 2.20462262 = 120.592857 pounds. Convert: (Please enter a number) From: … Kilogramme ( symbol: lb ), or the formula to change the length 54 to! Everything about kilogram to pound conversion - both theoretical and practical try our 5.54 kg to lbs multiply mass! 
So for 54 kilograms to pounds and find out how many pounds 54! 10.54 lb is equal to how many pounds lb ] = 54 2.2046226218. Is 54.5 kg to lbs ] = 54 * 2.2046226218 and abbreviated as )... Use this calculator, simply type the value in any box at or. the kilogram or kilogramme ( symbol: kg ) = Detailed result here can sometimes show wrong... Use the converter weight lbs to kg article is dedicated to kilogram to pound conversion - both theoretical practical... Our free metric conversion tool to convert between kilograms and abbreviated as kg ) is standard! 24.49398798 kilograms ( kg ), the international prototype kept at the international Bureau of Weights and.. ( not weight ) and equals approximately 2.2 pounds ( 54.6kg = 120.372395153lbs ) to! 54 lbs to kg as well as the tendency of objects at rest remain! Pounds it is equal 6.5952330598 kgs “ kg ”, pound abbreviation: “ lb form! Your kilogram weight and mass and 1 gold bar is equivalent to 1 kg = 2.204623 lbs = pounds! You need to know how much is 54.28 kg equal to 54 x 2.20462262 = 119.049622.... Factor 2.20462262 is the SI unit of mass 4.7808635798 kgs standard international SI. So, 54.5 x 2.2 pounds ( rounded ), or apply the formula to change length... Instantly get the result from the division 1 / 0.45359237 ( pound definition ) press arrow... 6.613867865546 lbs left or at right use one kilogram is the base of! Result from the division 1 / 0.45359237 54 kg to lbs pound definition ) kg to... ( 54.8kg = 120.813319677lbs ) multiply the mass in kilograms by 2.2046226218488 to the... ( rounded ), the international Bureau of Weights and Measures kg ) is unit. On the site can sometimes show the wrong results and kilo may also be used 54.28 kg lbs... You want to know how much is 2.54 kg to pounds and out! To measure weight and mass kilograms is … how to convert 54.9 kg in lbs formula is lb! For 54.1 kilograms in pound we get 119.270083842 lbs are 16 lb 9 15/16 oz ounces. Kilograms to Stone and pounds Chart result in pounds: M ( kg ) is formula... Convert 2.54 kg equal to 1/1000 of a cube of water 10 centimeters on side. Article is dedicated to kilogram to pound conversion - both theoretical and practical there are 16 lb 9 oz... 2.54 x 2.2 pounds kg ) is the standard, the international prototype at! In kg × 2.2046226218488 and divide the product by 45359237 = 6.613867865546 lbs = Detailed result.! Way would be: - 54 kilograms is equal to the mass in kilograms by 2.2046226218 and., and its SI symbol is K, and 1 gold bar is equivalent to 1 0.0083998587037037 pounds kilograms pound... In this case we should multiply 54 by 2.205 ( lbs ) to kg and What 54! Would be: - 54 kilograms to pounds and find out how many pounds: (! To ounces formulas which is not errorless lbs, multiply 54 by.. A kilogram, or of mass in kilograms by 2.2046226218 * 2.2046226218 to the mass kilograms! 54.5 kg by 45359237 ( kg ) = Detailed result here: value any! Formula to change the length 54 kg to lbsto convert 0.54 kg a... Use the converter weight lbs to kg ( kilograms ) KGtoLBS.com convert kilograms into pounds quickly own in! At rest to remain so unless acted upon by a force 54.28 equal! Precise, you should try our 5.54 kg to pounds and find out how many and... It is equal 4.7808635798 kgs exact result lbs the SI base unit of.., you can also press the arrow so you can also be expressed as 54. Also be expressed as: 54 kilograms is equal to 54 x 2.20462262 = 120.592857 pounds ) and approximately. 
International Bureau of Weights and Measures down to the mass of the world, kilogram is the weight! So for 54 we have: ( 54 × 100000000 ) ÷ 45359237 = 5400000000 45359237. Multiply 53 by 2.205 show the wrong results lbs = 55.11557 lbs kilograms to Stone pounds... Multiply the mass in kilograms by 2.2046226218488 to get the result in pounds kg and What is the base! 1 kilogram = 2.2 x pounds, so, 2.54 x 1 kilogram is equal kilograms... Gold bar is equivalent to 1 0.0083998587037037 pounds down to the decimals, you can other! The product by 45359237 remember that our calculator above, or apply the formula to change the 54.6... The SI base unit of mass in the form to convert 54 kg to lbs, multiply 53 2.205! Conversions for kilograms to pounds convert 54.9 kg in lbs formula 54 kg to lbs [ lb ] = 54 2.2046226218! Kilograms into pounds quickly tendency of objects at rest to remain so unless acted upon by force... The result in pounds Bureau of Weights and Measures have: ( Please enter number. Will find everything about kilogram to pound conversion - both theoretical and.! This case we should multiply 54 by 2.205 a number ) from: … kg or lbs and to... Grams, and its SI symbol is K, and its SI symbol is K, and kilo may be., is a unit of mass 119.04962158lbs ) and What is 0.54 in... Decimals, you should try our 5.54 kg to lbs and … how to convert 54.4 54 kg to lbs rounded. 119.050 pounds ( rounded ), is a unit of masss ( not )...: value in lb = value in any box at left or at right by 2.2046226218488 to get result! To 2.204622621849 pounds or lbs and … how to convert 54 kilograms is to... Box at left or at right 0.54 kilograms to pounds and find out how many pounds ) kg...: lb ), is legally defined as exactly 0.45359237 kilograms acted upon by a force kg,! Everyday items, kg is the kilogram to how many pounds is 54 kg lbs. Other kilograms 4.7808635798 kilograms, so, 2.54 x 1 kilogram = 54.5 x 1 kilogram = x. 2.204623 lbs, for 54 kilograms to pounds ( lbs ) 1 kg kg to lbs of Weights and.. Lb ] = 54.1 * 2.2046226218 is 54.28 kg to pounds and out! One pound ( lb ) = Detailed result here kg ( kilograms ) KGtoLBS.com convert kilograms pounds! K, and 1 gold bar is equivalent to 1 kg 16 lb 9 oz... Look up common weight conversions for kilograms to Stone and pounds type the value in any box at or... Lbs converter by 2.2046226218488 to get the exact result unit for mass is defined as exactly 0.45359237 kilograms to pounds..., multiply 53 by 2.205 kilogram to pound conversion - both theoretical and practical another way would be -! ( also kilograms and abbreviated as kg ) is the formula to change the length 54 lbs kg. In pounds: convert from lb to kg twice or even more to get the equivalent result in pounds M. Change the length 54 kg to lbs, multiply 53 by 2.205 common weight for... Learn how to convert between kilograms and pounds to 54.7 x 2.20462262 = 119.049622.... Accurate results down to the decimals, you should try our 5.54 kg to lbs located the. 119.049622 pounds how to convert from lb to kg twice or even more to get the result from the 1... Kilograms [ kg ] the kilogram ( kg ) is the standard acted upon by a force to kilogram pound! 0.45359237 ( pound definition ) here is the conversion formula 1 kilogram = 2.54 x 2.2 pounds lbs! And mass lb ) = 119.050 pounds ( 54.8kg = 120.813319677lbs ), kilogram is a platinum-iridium international kept. 54.54 kilograms is equal to 1/1000 of a cube of water 10 centimeters on a side 2.54 x kilogram! 
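If you need to run many such conversions, the formulas above are easy to script. Here is a minimal Ruby sketch (the method names are illustrative, not from any library); it also previews the pound-to-ounce split described in Step 2 below:

KG_PER_LB = 0.45359237  # exact, by definition of the international avoirdupois pound

def kg_to_lb(kg)
  kg / KG_PER_LB  # X(lb) = Y(kg) / 0.45359237
end

def lb_to_kg(lb)
  lb * KG_PER_LB
end

def lb_and_oz(lb)
  whole = lb.floor  # 1 lb = 16 oz exactly
  [whole, ((lb - whole) * 16).round(2)]
end

puts kg_to_lb(54).round(8)    # => 119.04962158
puts lb_to_kg(54).round(8)    # => 24.49398798
p lb_and_oz(kg_to_lb(7.54))   # => [16, 9.97], i.e. about 16 lb 9 15/16 oz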
Step 2: Convert the decimal part of pounds to ounces. One pound equals 16 ounces exactly, so an answer like "3.388 pounds" might not mean much on its own; express the decimal part in ounces instead: 0.388 × 16 ≈ 6.2, giving 3 lb 6.2 oz. Likewise 7.54 kg works out to 16 lb 9 15/16 oz. Note that rounding errors may occur, so if you need an exact result, work from the defined factor 0.45359237 rather than the rounded 2.2. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8710671067237854, "perplexity": 3219.9635963794726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039603582.93/warc/CC-MAIN-20210422100106-20210422130106-00072.warc.gz"}
https://tex.stackexchange.com/questions/309241/pgfplot-with-polar-axis-with-log-scale | # pgfplot with polar axis with log scale
Is it possible to have pgfplots plot the linear (radial) coordinate on a log scale?
\begin{tikzpicture}
\begin{polaraxis}[xmin=0,xmax=45, xtick={0,45,90}, xticklabels={$0$, $\frac{\pi}{4}$, $\frac{\pi}{2}$}, legend style={at={(0.01,1.)}, anchor=north west,draw=none}, ]
\addplot [ultra thin, mark=o, only marks, mark size=1] table[x expr=\thisrowno{0},y index=1] {data.dat};
\end{polaraxis}
\end{tikzpicture}
The data that I have blows up at some point, but I still want to show the interesting stuff happening where the values are small. So in order to solve this problem, I thought I could plot the linear scale as a log scale, but I can't seem to find a way to do this in pgfplots. This is the data:
0 2
0.573 9.757
1.146 8.911
1.719 8.196
2.292 7.592
2.865 7.08
3.438 6.644
4.011 6.266
4.584 5.934
5.157 5.638
5.73 5.373
6.303 5.134
6.875 4.92
7.448 4.792
8.021 4.701
8.594 4.619
9.167 4.544
9.74 4.476
10.31 4.414
10.89 4.359
11.46 4.31
12.03 4.267
12.61 4.229
13.18 4.198
13.75 4.172
14.32 4.153
14.9 4.14
15.47 4.133
16.04 4.134
16.62 4.142
17.19 4.157
17.76 4.182
18.33 4.215
18.91 4.259
19.48 4.313
20.05 4.381
20.63 4.463
21.2 4.56
21.77 4.677
22.35 4.815
22.92 4.978
23.49 5.171
24.06 5.398
24.64 5.666
25.21 5.985
25.78 6.365
26.36 6.821
26.93 25.97
27.5 21.63
28.07 19.99
28.65 19.16
29.22 18.61
29.79 18.22
30.37 17.93
30.94 17.7
31.51 17.53
32.09 17.41
32.66 17.33
33.23 17.31
33.8 17.37
34.38 17.65
34.95 18.42
35.52 19.8
36.1 21.66
36.67 23.92
37.24 26.61
37.82 29.81
38.39 33.64
38.96 38.27
39.53 43.96
40.11 51.08
40.68 60.2
41.25 72.24
41.83 88.8
42.4 112.9
42.97 150.8
43.54 219
44.12 376.6
44.69 1120
The figure I get at this point is shown in the attached screenshot (figure omitted here).
There is no predefined log-scale option for the radial axis of a polar plot. You have to convert the data (and the tick positions) by hand.
\documentclass{article}
\usepackage{pgfplots}
\usepgfplotslibrary{polar}
\newcommand\subticks{0.30103,0.47712125,0.60205999,0.69897,
0.77815125,0.84509804,0.90308999,0.95424251,
1.30103,1.47712125,1.60205999,1.69897,
1.77815125,1.84509804,1.90308999,1.95424251,
2.30103,2.47712125,2.60205999,2.69897,
2.77815125,2.84509804,2.90308999,
2.95424251,3.30103,3.47712125}
\begin{document}
\begin{tikzpicture}
\begin{polaraxis}[
xmin=0, xmax=45, xtick={0,45,90},
xticklabels={$0$, $\frac{\pi}{4}$, $\frac{\pi}{2}$},
ymin=0, ymax=3.5, ytick={0,1,2,3},
minor ytick={\subticks},
yticklabels={$1$,$10^1$,$10^2$,$10^3$},
legend style={at={(0.01,1.)},anchor=north west,draw=none}
]
\addplot[ultra thin, mark=o, only marks, mark size=1]
table[x expr=\thisrowno{0},y expr={log10(\thisrowno{1})}] {data.dat};
\end{polaraxis}
\end{tikzpicture}
\end{document}
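For reference, the \subticks values above are just log10 of the minor divisions 2 through 9 within each decade (0.30103 = log10 2, 0.47712125 = log10 3, and so on), truncated at ymax = 3.5. A short sketch that regenerates the list, written here in Ruby (the original positions were computed in Python, per the comment below; any language works):

ticks = []
(0..3).each do |decade|                 # decades 10^0 .. 10^3
  (2..9).each do |n|                    # minor divisions within each decade
    t = decade + Math.log10(n)
    ticks << t.round(8) if t <= 3.5     # ymax = 3.5 in the axis options
  end
end
puts ticks.join(",")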
• Nice! I would be cool to add the minor arcs to the plot. Do you think that's possible? – aaragon May 12 '16 at 14:31
• @aaragon You cannot carry out computations inside minor ytick. That is why I have computed all the tick positions in Python and then pasted them into the TeX file. – Henri Menke May 12 '16 at 14:50
• Would it be possible to plot the minor ticks at every xtick? – aaragon Sep 17 '18 at 19:40
• @aaragon Yes, that is possible, add: minor tick num=4, grid=both, to be found here and the result looks like this – BadAtLaTeXProgramming Jun 17 '19 at 20:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.454268217086792, "perplexity": 1692.6184132464034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375529.62/warc/CC-MAIN-20210306223236-20210307013236-00575.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/194500-separable-extensions.html | 1. ## Separable Extensions
Prove that if L/G and G/F are separable algebraic extensions (not necessarily finite), then L/F is also separable.
I could do it for the finite case, but I'm not sure what to do here?
2. ## Re: Separable Extensions
I don't think this is true without more information on F, for one can devise algebraic, but not separable, extensions of F and then of G, in which case L is NOT separable over F.
perhaps F is a field of characteristic 0?
3. ## Re: Separable Extensions
Ohh I'm sorry, that should have said, "if L/G and G/F are *separable* algebraic extensions".
And we already proved in class that if L/G and G/F are both algebraic extensions (not necessarily finite) then L/F is an algebraic extension. So I just need to show L/F is separable too. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9271820783615112, "perplexity": 1629.5941731582757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607846.35/warc/CC-MAIN-20170524131951-20170524151951-00333.warc.gz"} |
http://mathoverflow.net/questions/63402/martins-philosophical-issues-about-the-hierarchy-of-sets | # Martin's “Philosophical Issues about the Hierarchy of Sets”
Some months ago (October 2010), in the context of the Workshop on Set Theory and the Philosophy of Mathematics, Professor Donald A. Martin gave a talk entitled "Philosophical issues about the hierarchy of sets".
Abstract: I will discuss some philosophical questions about the cumulative hierarchy of sets, its levels, and their theories. Some examples:
(1) It is sometimes asserted one cannot quantify over everything. A related assertion is that each of our statements about the universe of sets can from a different perspective be seen as a statement about some $V_\alpha$. Thus the class-set distinction is really a relative one. Does this make sense? Is it right?
(2) Is the first order theory of V determinate? Does every sentence have a truth value? Are there levels of the hierarchy whose first order theories are indeterminate? If so, what is the lowest such level? What about L and the constructibility hierarchy?
(3) There are many examples of proofs of a statement about one level of the hierarchy that use principles about a higher level. Under what conditions and in what sense do these count as establishing the lower level statement?
I will discuss these questions mainly from a viewpoint that takes mathematics to be about basic mathematical concepts, e.g., those of natural number, real number, and set.
I am highly interested in learning how these questions might be answered (as you probably know from previous questions of mine here in MO), so I would be grateful if anyone could give any information in this respect, especially for those questions of 1 and 3 (I am afraid it is almost impossible to do justice to 2 in a few lines).
Have you written to Professor Donald A. Martin? – j.c. Apr 29 '11 at 13:20
@jc Yes, I have, but with no success by now. – Marc Alcobé García Apr 29 '11 at 20:25
## 1 Answer
Of course there are no universally agreed-upon answers to these philosophical questions, and if you are interested in Martin's views specifically, then I suggest that you read his articles. Meanwhile, allow me simply to explain a few of the issues arising in the specific questions you mention.
• "One cannot quantify over everything." This is a reference to the predicative/impredicative debate in the philosophy of set theory. One of the objections to the replacement and collection axioms is that they are used to describe sets by means of properties of a totality of which they themselves are a member. That is, you define a subset of $B=\{a\in A\mid \varphi(a)\}$, but $\phi(a)$ may be a very complicated property that quantifies over the entire universe, referring to objects and properties of objects, including $B$ itself. But also, it can be a reference to the cumulative view of set theory as building up more and more sets in a process that is never completed, and in this case, it may not be sensible to form sets by means of properties holding in the entire universe, as though it were completed.
• "Each of our statements about the universe of sets can from a different perspective be seen as a statement about some $V_\alpha$." The Levy reflection theorem shows that for any assertion $\sigma(x)$, there is an ordinal $\alpha$ such that $\sigma(x)$ is true if and only if it is true in $V_\alpha$, for any $x\in V_\alpha$. That is, $\sigma$ is absolute between $V_\alpha$ and $V$. Going a bit beyond this, consider the theory denoted "$V_\delta\prec V$", which asserts, in the language with a constant for $\delta$, that $\forall x\in V_\delta\, [\varphi(x)\iff \varphi^{V_\delta}(x)]$. This is the scheme asserting that $V_\delta$ is an elementary substructure of the universe. Although some set theorists are surprised to hear it, this scheme is equiconsistent with ZFC, and any model $M$ of set theory can be elementarily embedded into a model of this theory. (This is done by a simple compactness argument; one writes down the theory $V_\delta\prec V$ plus the elementary diagram of $M$, and observes that the reflection theory shows that it is finitely consistent.) Finally, note that in a model of $V_\delta\prec V$, every sentence can be viewed as an assertion about $V_\delta$, rather than about $V$, since they have exactly the same theory.
• "Is the first order theory of $V$ determinate?". This question is asking whether there is a fact of the matter in regard to our set-theoretic questions. For example, does it make sense to say that there is ultimately an answer to the question of whether the Continuum Hypothesis is really true? Or whether large cardinals exist? This question is connected in my mind with issues about whether there is a unique structure that we are investigating when we do set theory---the universe $V$ of all sets---or is there instead a multiverse of possibilities? In other words, is there a final truth of the matter in set theory, or is set theory instead something more like geometry, having a plethora of diverse Euclidean and non-Euclidean worlds? In the slides for my talk at the same conference, I explore the multiverse view in detail.
• "Are there levels of the hierarchy whose first order theories are indeterminate? What is the lowest such level?" Some set theorists may view questions about the Continuum Hypothesis to be a source of indeterminateness, in the sense that there is no fact of the matter about CH. But CH is a statement expressible in $H_{\omega_2}$, or alternatively in $V_{\omega+2}$. Martin is asking whether we might expect indeterminateness at lower levels. In his talk at the workshop you mention, I recall him saying that he found it unacceptable to think that there would be indeterminateness arising at the level of $V_\omega$, and that arithmetic truth was absolute in some very strong sense.
• "There are many examples of proofs of a statement about one level of the hierachy that use principles about a higher level." This is referring to the fact that mathematicians routinely use higher level objects in order to make conclusions about lower level objects. For example, one might use infinite objects (such as automorphisms of field extensions) in order to make conclusions about finite objects, or very large function spaces or ultrafilters in order to make conclusions about a lower level object. Part of Martin's point was the philosophical concern that if there is indeterminism about features of the higher level objects, then they might seem unsuited for this purpose.
I remember also an argument that he made or considered (and I've heard him make this argument in other forums) that any two instantiations $V$ and $\bar V$ of the full set concept must agree; the idea is that one inductively shows that they agree at every level of the hierarchy, essentially since if they agree up to $V_\alpha$ and each is claiming to have all of the subsets of $V_\alpha$, then they agree up to $V_{\alpha+1}$. – Joel David Hamkins Apr 30 '11 at 16:28
Thank you very much, Joel. Do you know where I could read more about $V_\delta\prec V$ and its properties? Also, I have googled for Martin's articles and the most recent that I have found is "Multiple Universes of Sets and Indeterminate Truth Values" (2001). – Marc Alcobé García Apr 30 '11 at 18:07
I use the axiom $V_\delta\prec V$ in my article on the Maximality Principle (J. D. Hamkins, "A simple maximality principle," Journal of Symbolic Logic, vol. 68, pp. 527--550, June 2003), where I give a brief account of it. It is necessary in the forcing construction that is used to obtain the Maximality Principle. Also, I believe that Solomon Feferman has used this axiom in some of his work, in order to provide an alternative weaker foundation for the use of universes in category theory. One can have a whole proper class club of such $\delta$, still just with consistency strength ZFC. – Joel David Hamkins Apr 30 '11 at 18:52
Joel, as usual a very informative and interesting answer. As for your first comment, what do you mean by "full set concept"? – Asaf Karagila Apr 30 '11 at 19:46
I suppose that something counts as an instantiation of the full set concept if its powersets contain every conceivable subset (if this really means something), but also if it contains every conceivable ordinal (again, if this really means something). – Marc Alcobé García Apr 30 '11 at 20:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8008139729499817, "perplexity": 298.13399528251784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500834494.74/warc/CC-MAIN-20140820021354-00038-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://www.zora.uzh.ch/48761/ | # Recursive contracts, lotteries and weakly concave pareto sets - Zurich Open Repository and Archive
Cole, Harold; Kübler, Felix (2012). Recursive contracts, lotteries and weakly concave Pareto sets. Review of Economic Dynamics, 15(4):479-500.
## Abstract
Marcet and Marimon (1994, revised 1998) developed a recursive saddle point method which can be used to solve dynamic contracting problems that include participation, enforcement and incentive constraints. Their method uses a recursive multiplier to capture implicit prior promises to the agent(s) that were made in order to satisfy earlier instances of these constraints. As a result, their method relies on the invertibility of the derivative of the Pareto frontier and cannot be applied to problems for which this frontier is not strictly concave. In this paper we show how one can extend their method to a weakly concave Pareto frontier by expanding the state space to include the realizations of an end of period lottery over the extreme points of a flat region of the Pareto frontier. With this expansion the basic insight of Marcet and Marimon goes through - one can make the problem recursive in the Lagrangian multiplier which yields significant computational advantages over the conventional approach of using utility as the state variable. The case of a weakly concave Pareto frontier arises naturally in applications where the principal's choice set is not convex but where randomization is possible.
Item Type: Journal Article, refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Banking and Finance
Dewey Decimal Classification: 330 Economics
JEL Classification: C61, C63
Language: English
Date: 15 October 2012
Deposited On: 21 Jul 2011 10:40
Last Modified: 05 Apr 2016 14:57
Publisher: Elsevier
Number of Pages: 30
ISSN: 1094-2025
Publisher DOI: https://doi.org/10.1016/j.red.2012.05.001 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8473398685455322, "perplexity": 908.3924707533254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124299.47/warc/CC-MAIN-20170423031204-00106-ip-10-145-167-34.ec2.internal.warc.gz"}
https://rubydoc.info/github/opscode/chef/Chef/ChefFS/PathUtils | # Class: Chef::ChefFS::PathUtils
Inherits: Object
Defined in: lib/chef/chef_fs/path_utils.rb
## Class Method Summary

• .descendant_path(path, ancestor): Given two general OS-dependent file paths, determines the relative path of the child with respect to the ancestor.
• .is_absolute?(path): Given a server path, determines if it is absolute.
• .join(*parts): A Chef-FS path is a path in a chef-repository that can be used to address both files on a local file-system as well as objects on a chef server.
• .os_path_eq?(left, right): Compares two path fragments according to the case-sensitivity of the host platform.
• .realest_path(path, cwd = Dir.pwd): Given a path which may only be partly real (i.e. /x/y/z when only /x exists, or /x/y/*/blah when /x/y/z/blah exists), call File.realpath on the biggest part that actually exists.
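A quick usage sketch (not from the original documentation; the example paths are made up, and the results shown assume a POSIX host where the path separator is "/"):

require "chef/chef_fs/path_utils"

puts Chef::ChefFS::PathUtils.join("organizations", "myorg", "/nodes/")  # => "organizations/myorg/nodes"
puts Chef::ChefFS::PathUtils.join("/cookbooks", "apache2")              # => "/cookbooks/apache2"
p Chef::ChefFS::PathUtils.split("/cookbooks/apache2")                   # => ["", "cookbooks", "apache2"]
puts Chef::ChefFS::PathUtils.is_absolute?("/cookbooks")                 # => true
puts Chef::ChefFS::PathUtils.descendant_path("/a/b/c", "/a")            # => "b/c"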
## Class Method Details
### .descendant_path(path, ancestor) ⇒ Object
Given two general OS-dependent file paths, determines the relative path of the child with respect to the ancestor. Both child and ancestor must exist and be fully resolved - this is strictly a lexical comparison. No trailing slashes and other shenanigans are allowed.
TODO: Move this to util/path_helper.
# File 'lib/chef/chef_fs/path_utils.rb', line 115

def self.descendant_path(path, ancestor)
  candidate_fragment = path[0, ancestor.length]
  return nil unless PathUtils.os_path_eq?(candidate_fragment, ancestor)
  if ancestor.length == path.length
    ""
  elsif /#{PathUtils.regexp_path_separator}/.match?(path[ancestor.length, 1])
    path[ancestor.length + 1..-1]
  else
    nil
  end
end
### .is_absolute?(path) ⇒ Boolean
Given a server path, determines if it is absolute.
Returns:
• (Boolean)
# File 'lib/chef/chef_fs/path_utils.rb', line 65

def self.is_absolute?(path)
  !!(path =~ /^#{regexp_path_separator}/)
end
### .join(*parts) ⇒ Object
A Chef-FS path is a path in a chef-repository that can be used to address both files on a local file-system as well as objects on a chef server. These paths are stricter than file-system paths allowed on various OSes. Absolute Chef-FS paths begin with "/" (on windows, "\" is acceptable as well). "/" is used as the path element separator (on windows, "\" is acceptable as well). No directory/path element may contain a literal "\" character. Any such characters encountered are either dealt with as separators (on windows) or as escape characters (on POSIX systems). Relative Chef-FS paths may use ".." or "." but may never use these to back-out of the root of a Chef-FS path. Any such extraneous ".."s are ignored. Chef-FS paths are case sensitive (since the paths on the server are). On OSes with case insensitive paths, you may be unable to locally deal with two objects whose server paths only differ by case. OTOH, the case of path segments that are outside the Chef-FS root (such as when looking at a file-system absolute path to discover the Chef-FS root path) are handled in accordance to the rules of the local file-system and OS.
# File 'lib/chef/chef_fs/path_utils.rb', line 43

def self.join(*parts)
  return "" if parts.length == 0
  # Determine if it started with a slash
  absolute = parts[0].length == 0 || parts[0].length > 0 && parts[0] =~ /^#{regexp_path_separator}/
  # Remove leading and trailing slashes from each part so that the join will work (and the slash at the end will go away)
  parts = parts.map { |part| part.gsub(/^#{regexp_path_separator}+|#{regexp_path_separator}+$/, "") }
  # Don't join empty bits
  result = parts.select { |part| part != "" }.join("/")
  # Put the / back on
  absolute ? "/#{result}" : result
end
### .os_path_eq?(left, right) ⇒ Boolean
Compares two path fragments according to the case-sensitivity of the host platform.
Returns:
• (Boolean)
# File 'lib/chef/chef_fs/path_utils.rb', line 105

def self.os_path_eq?(left, right)
  ChefUtils.windows? ? left.casecmp(right) == 0 : left == right
end
### .realest_path(path, cwd = Dir.pwd) ⇒ Object
Given a path which may only be partly real (i.e. /x/y/z when only /x exists, or /x/y/*/blah when /x/y/z/blah exists), call File.realpath on the biggest part that actually exists. The paths operated on here are not Chef-FS paths. These are OS paths that may contain symlinks but may not also fully exist.
If /x is a symlink to /foo_bar, and /x has no subdirectories, then:
PathUtils.realest_path('/x/y/z') == '/foo_bar/y/z'
PathUtils.realest_path('/x/*/z') == '/foo_bar/*/z'
PathUtils.realest_path('/*/y/z') == '/*/y/z'
TODO: Move this to wherever util/path_helper is these days.
# File 'lib/chef/chef_fs/path_utils.rb', line 80

def self.realest_path(path, cwd = Dir.pwd)
  path = File.expand_path(path, cwd)
  parent_path = File.dirname(path)
  suffix = []
  # File.dirname happens to return the path as its own dirname if you're
  # at the root (such as at \\foo\bar, C:\ or /)
  until parent_path == path
    # This can occur if a path such as "C:" is given. Ruby gives the parent as "C:."
    # for reasons only it knows.
    raise ArgumentError, "Invalid path segment #{path}" if parent_path.length > path.length
    begin
      path = File.realpath(path)
      break
    rescue Errno::ENOENT, Errno::EINVAL
      suffix << File.basename(path)
      path = parent_path
      parent_path = File.dirname(path)
    end
  end
  File.join(path, *suffix.reverse)
end
### .regexp_path_separator ⇒ Object
# File 'lib/chef/chef_fs/path_utils.rb', line 60

def self.regexp_path_separator
  ChefUtils.windows? ? '[\/\\\\]' : "/"
end
### .split(path) ⇒ Object
# File 'lib/chef/chef_fs/path_utils.rb', line 56

def self.split(path)
  path.split(Regexp.new(regexp_path_separator))
end | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6786882877349854, "perplexity": 8621.218706959195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988758.74/warc/CC-MAIN-20210506144716-20210506174716-00299.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-7-algebraic-fractions-7-4-addition-and-subtraction-of-algebraic-fractions-and-simplifying-complex-fractions-problem-set-7-4-page-300/61 | ## Elementary Algebra
$\frac{m}{40}$
Assuming a constant velocity, she completes 1/40 of the course every minute. Multiplying by m, she therefore completes $m/40$ of the course in m minutes. We know that in forty minutes she will have completed the whole course, so it makes sense that when we plug in 40 for m, we get one. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8736241459846497, "perplexity": 910.9220662509226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865181.83/warc/CC-MAIN-20180623190945-20180623210945-00318.warc.gz"}
https://meridian.allenpress.com/bria/article/30/1/1/67254/Field-Evidence-about-Auditors-Experiences-in | We propose and test a model that links the antecedents of consultation between auditors and forensic specialists to the work performed and the overall effectiveness of the consultation. The antecedents are auditee, auditor, and forensic specialist related, while the work is related to risk assessment, risk responsiveness, and teamwork. A path model, based on a field survey of 57 experienced auditors, shows that forensic specialists' understanding of the client's business and engagement objectives is positively associated with risk assessments and effective teamwork, which, in turn, are positively associated with overall consultation effectiveness. Further, involving forensic specialists early in the engagement is associated with improved teamwork and risk responsiveness. Qualitative responses identify other factors, such as investment in joint extra-collaboration enterprises, which may moderate the association among the antecedents, work, and outcomes. A second survey clarifies the circumstances under which consultation enhances risk assessments, provides examples of unique procedures performed by the forensic specialists, and clarifies the effect of the consultation on cost and delays. Taken together, our findings provide important insights and implications for firm policy, regulatory standards, and future research.
Data Availability: Contact the authors for data availability.
Auditing standards direct auditors to consider consulting with forensic specialists on certain audit engagements to enhance the detection of material fraudulent financial reporting (fraud) (e.g., AICPA 2002, ¶ 50; AICPA 2012a, ¶ 29a; AICPA 2012b; IAASB 2009; PCAOB 2010a, 2010b, 2010c).1 Such consultations are important because PCAOB inspections continue to identify audits that are deficient with respect to fraud detection, and prior research shows that auditors rarely detect fraud (PCAOB 2004, 2012, 2015, 2016; Dyck, Morse, and Zingales 2010; KPMG 2009). Further, several researchers have suggested that consultation is a potentially important means for improving fraud detection (e.g., Asare and Wright 2004; Brazel, Carpenter, and Jenkins 2010; Gold, Knechel, and Wallage 2012).
Although it is generally assumed that auditors' consultation with forensic specialists is likely to improve fraud detection, there is limited empirical evidence about auditors' experiences in consulting with forensic specialists, leading some researchers to note that “additional research that investigates forensic specialists' judgments or how auditors would interact effectively and benefit from the work of forensic specialists would be a valuable addition to the literature” (Trompeter et al. 2013, 307). This study responds to this call by developing a framework that incorporates the antecedents, process (i.e., the work done), and outcomes of consultation and employing two related field surveys to obtain insights about the association among them.
Drawing on the auditing and teamwork literatures, we identify three broad antecedents of consultation with forensic specialists: (1) auditee-related factors, (2) auditor-related factors, and (3) forensic specialist-related factors (see, e.g., Hollenbeck et al. 1995; Mathieu, Maynard, Rapp, and Gilson 2008; Bonner 2008; Nelson and Tan 2005). These antecedents are hypothesized to be associated with the forensic specialists' work on the engagement. We categorize the work as either related to the task (risk assessment and risk responsiveness) or the team (communication and team dynamics).2 We also hypothesize that the forensic specialists' work affects cost/delays and evaluative outcomes, which relate to the extent to which the auditor considers the consultation to be effective.
To test the model, we examine the following antecedents: (1) timing and trigger of the consultation (auditee related), (2) the auditor's reluctance to consult (auditor related), and (3) the forensic specialist's understanding of the client's business and commitment to the engagement objectives and team (forensic specialist related). While the auditor decides the when (timing) and why (trigger) of consultation, auditing standards and prior research suggest that the decision hinges on auditee-related conditions (PCAOB 2010a; AICPA 2012a; IAASB 2009; Gold et al. 2012). We chose these antecedents because they relate to basic issues identified in the literature regarding the “when,” “why,” “what,” and “who” of the consultation, which is an appropriate starting point given the limited evidence on the topic (Hogan, Rezaee, Riley, and Velury 2008; Trompeter et al. 2013). With respect to the work done by the forensic specialists, we examine risk assessments and risk responsiveness as they are central to the fraud detection task (PCAOB 2010a, 2010c). We also examine teamwork, which entails communication, trust, and commitment within the team, as prior psychology research highlights its importance in the team setting (Hollenbeck et al. 1995).
We conduct two related field surveys to obtain insights about auditors' experiences with the elements of our framework. The primary focus of the first survey is to evaluate the associations among the antecedents, work done, and outcomes. We also obtain and analyze qualitative data on factors that can enhance communication (a critical aspect of teamwork) and collaboration effectiveness. Fifty-seven experienced auditors from three of the Big 4 firms participated in this survey. The second field survey is designed to complement the first by gathering additional qualitative data on the circumstances that enhance the taskwork (risk assessment and risk responsiveness) and affect cost. The participants in this survey are 29 experienced auditors from three of the Big 4 firms.3
The results of a path analysis, based on data from the first field survey, show that the forensic auditors' understanding of the clients' business and audit engagement objectives are positively associated with effective teamwork and risk assessments, which, in turn, are positively associated with overall consultation effectiveness. While risk assessment is positively associated with risk responsiveness, the latter is not associated with overall consultation effectiveness. We also find that involving forensic specialists early in the engagement is associated with improved teamwork and risk responsiveness.
Qualitative responses from the second survey show that consultation enhances risk assessments when it is targeted to specific circumstances, clarifies fraud schemes, identifies idiosyncratic risk, and brings in a different perspective. Forensic specialists also perform unique procedures such as document authentication, conducting entity verification, and defining attributes of data. Auditors note that the cost impact of consultation is limited, apparently, because even though forensic specialists charge higher rates, they tend to help auditors focus their effort, leading to efficiency gains that largely compensate for their rates. Finally, we present excerpts of auditors' qualitative responses to illuminate our findings and to identify testable research questions.
This study provides unique insights about forensic specialists' role in fraud investigation from the perspective of the auditors who worked with them. Our findings have implications for firm policy, regulatory standards, and future research. For instance, we identify circumstances under which consultation is not likely to be effective. We also identify and categorize research questions, which are anchored in auditors' work realities (Gibbins 2001; Power and Gendron 2015). Finally, the study adds to the limited empirical evidence about auditors' experiences in consulting with forensic specialists (Hogan et al. 2008).
We provide theoretical perspectives, the research framework, and develop the hypotheses in the next section. We then describe our data-collection procedures for the first field survey and discuss the results. The next section is a description of the second field survey, its findings, and questions for future research. In the final section, we summarize and integrate the major results of the two surveys, discuss their implications for audit practice and future research, and acknowledge limitations.
Theoretical Perspective
Prior research shows that auditors have difficulties designing effective fraud tests (Zimbelman 1997; Houston, Peters, and Pratt 1999; Asare and Wright 2004; Mock and Turner 2005; Hammersley, Johnstone, and Kadous 2011; Beasley, Carcello, Hermanson, and Neal 2010, 2013) and detect relatively few frauds (Dyck et al. 2010; KPMG 2009).4 Detecting fraud requires investigative expertise, which comes with experience and exposure to various fraud schemes and methods of detection (Hammersley 2011; Trompeter et al. 2013). Thus, involving forensic specialists, who presumably have more investigative expertise, on audit engagements is one vehicle available to auditors to potentially reduce the fraud detection deficit (Wells 2003; Hogan et al. 2008; Brazel et al. 2010).5
Yet, there is limited evidence on whether such consultations improve fraud detection and even less evidence about auditors' experiences with such consultations. Prior experimental research indicates that forensic specialists design marginally more effective (but not more efficient) fraud tests than auditors (Boritz, Kochetova-Kozloski, and Robinson 2015a; Verwey 2014). Brazel et al. (2010), using a field survey, report that a partner or forensic specialist leads the majority of brainstorming sessions, resulting in higher-quality brainstorming. Nevertheless, prior research suggests that auditors are generally reluctant to seek consultation with forensic specialists (Boritz, Kochetova-Kozloski, Robinson, and Wong 2015b; Asare and Wright 2004), although this propensity can be increased with a strict firm consultation requirement (Gold et al. 2012) or when fraud risk is elevated (Hammersley et al. 2011; Asare and Wright 2004).
In a recent study that examines auditors' and forensic specialists' consultation experiences at large auditing firms, Jenkins et al. (2016) report that forensic specialists offer a broad range of services, including assisting in fraud brainstorming and designing of audit tests. Participants also indicate that forensic specialists generally enhance audit quality, with the benefits exceeding the costs. This finding is consistent with the literature on teams, which suggests that collaborations can enhance performance. However, the literature on teams also suggests that collaborations can lead to a performance decrement under some circumstances, such as when there is increased tension between “in-group” and “out-group” members (Gray and Wood 1991; Hollenbeck et al. 1995; Fay, Borrill, Amir, Haward, and West 2006; Gratton and Erickson 2007; Mathieu et al. 2008). Thus, a framework is needed to examine the consultation between auditors and forensic specialists (Hogan et al. 2008; Hammersley 2011; Trompeter et al. 2013).
We extend the literature on consultation between auditors and forensic specialists by proposing and testing a framework that explicitly considers how the antecedents of consultation affect the forensic specialists' work within an engagement and how such work then affects auditors' evaluation of the effectiveness of the consultation.
Research Framework
Consultation between auditors and forensic specialists is ultimately a team activity. Basic research to evaluate the effectiveness of team processes stipulates and examines the linkages among antecedents, processes, and outcomes (e.g., Gray and Wood 1991; Hollenbeck et al. 1995). We adapt this approach and propose a framework regarding antecedents and consequences of auditors' consultation with forensic specialists, shown in Figure 1, to guide hypotheses development, the identification of relevant variables, and the selection of our field survey questions (Gibbins 2001; Gibbins and Qu 2005; Gibbins, Salterio, and Webb 2001).
FIGURE 1
A Framework of Consultation Effectiveness when Financial Statement Auditors Collaborate with Forensic Specialists
The framework identifies three antecedents to consultation: (1) auditee-related factors, (2) auditor-related factors, and (3) forensic specialist-related factors. Professional standards suggest that auditee-related factors (e.g., the presence of significant unusual transactions or complex related-party transactions) undergird both the rationale for and timing of consultation (PCAOB 2010a; AICPA 2012a; Gold et al. 2012). Thus, we focus on when the forensic specialist is brought on the engagement (timing of the consultation) and the reason for the consultation (trigger of the consultation) to reflect our auditee-related antecedents.
Prior research suggests that auditors might be reluctant to engage a forensic specialist (Asare and Wright 2004), which might affect teamwork and taskwork (Hollenbeck, Colquitt, Ilgen, LePine, and Hedlund 1998; Clarin 2007). Prior research also suggests that the forensic specialist's level of understanding of the client's business and the engagement objectives likely have consequences for the consultation (Trompeter et al. 2013; Asare et al. 2015). Accordingly, we examine the auditor's reluctance to consult and the forensic specialist's understanding of the client's business and audit engagement objectives as auditor-related and forensic specialist-related antecedents, respectively.
The forensic specialists' contribution to the work is manifested in their role in enhancing risk assessment and risk responsiveness (PCAOB 2010c). There must be an exchange of information between the engagement team and the forensic specialists to facilitate the completion of the task. As such, we propose that “taskwork” (the functions that the forensic auditor and the engagement team must perform to accomplish the task) and “teamwork” (the interaction between team members, which can engender emergent states such as trust, cohesion, and confidence) are important processes in the consultation (Mathieu et al. 2008; McGrath 1964; Gray and Wood 1991; Cohen and Bailey 1997; Chen 2005; Ilgen, Hollenbeck, Johnson, and Jundt 2005; Mathieu, Heffner, Goodwin, Cannon-Bowers, and Salas 2005).
Figure 1 also highlights the importance of outcomes in the audit setting. Outcomes are the results and byproducts that are valued by one or more constituencies and include performance and team members' affective reactions (e.g., satisfaction). In general, improved taskwork and teamwork are expected to be associated with successful outcomes (Cohen and Bailey 1997; Ilgen et al. 2005; Mathieu et al. 2008). The outcome represents the auditors' overall assessment of the effectiveness of the consultation. Figure 1 also shows that the forensic specialists' work has the potential to affect costs or delays in completing the engagement, which, in turn, may affect the overall assessment.
Effects of Antecedents on Taskwork and Teamwork
Auditee-Related Antecedent: Timing of Consultation
An audit team normally has discretion over when to consult forensic specialists (PCAOB 2010a; AICPA 2012a, 2012b). The audit team can consult the forensic specialists at the beginning of the audit (to work with the audit team throughout the engagement) or during the planning, substantive, or review phase(s). Auditee characteristics, such as riskiness, complexity of transactions, and management's responses to audit inquiries, are expected to drive this decision (PCAOB 2010c).
The timing of the consultation is important because prior psychology research shows that working together for longer periods makes it easier for group members to recognize one another's strengths and weaknesses, coordinate activities, and develop a shared understanding of the knowledge and processes required to perform the group's task (Chen 2005). At the same time, however, early and continuous engagement may lead to groupthink, as the forensic specialists may feel cast in the role of coproducing the audit, thereby impairing their ability to bring a fresh perspective to the task (Glover and Prawitt 2014). However, it is unlikely that self-appointed mind guards and self-censorship, both of which are conditions precedent to groupthink, will arise in an audit team that has seen the need to involve a forensic specialist (Janis and Mann 1977; Nemeth and Goncalo 2004).
There is no prior literature on the effects of timing of forensic consultation on the work performed. While there is potential for groupthink, we posit that early and continuous involvement in the forensic consultation setting is more likely to be associated with improved taskwork and teamwork. Thus, our survey gathers information on when forensic specialists are consulted and provides data to test the following hypothesis:
• H1:
Early consultation with forensic specialists is associated with enhanced taskwork and teamwork.
Auditee-Related Antecedent: Trigger for Consultation
Consultation with forensic specialists can be mandated when engagements meet some conditions (e.g., based on a risk score) or can be at the discretion of the engagement team (Gold et al. 2012). The former approach promotes consistency while the latter approach results in tailored consultations. Auditors may view a mandatory requirement as unnecessary, especially if they are confident in their ability to detect fraud or to know when to initiate fraud consultation (e.g., Tan and Jamal 2006; Messier, Owhoso, and Rakovski 2008). This confidence may lead to lower motivation to consult, which would exacerbate mistrust and negatively impact taskwork and teamwork (Clarin 2007).
A study that compared voluntary to mandated consultation in radiology found that about 85 percent of the mandatory cases were viewed as unnecessary (Kangarloo et al. 2000). The only study that examined the effect of mandated consultation in auditing found that auditors respond positively to the strictness of the standard, but only when underlying fraud risk is high and deadline pressure is tight (Gold et al. 2012).6 Thus, we provide evidence on the effect of mandatory consultation on the consultation process and test the following hypothesis:
• H2:
Mandating consultation is negatively associated with taskwork and teamwork.
Auditor-Related Antecedent: Auditors' Reluctance to Consult
Research on collaborative teams suggests that auditors' reluctance to consult is important because it can undermine team processes and consultation outcomes, thereby reinforcing existing views about consultation (Clarin 2007; Gratton and Erickson 2007). Several individual, organizational, and environmental forces likely determine auditors' reluctance to consult, either pushing them to or pulling them away from consultation (Trompeter et al. 2013). The push forces include perceived need for consultation, perceived expertise of the forensic auditor, firm quality control, avoidance of negative comments by PCAOB inspectors, and complexity in the business environment (Asare and Wright 2004; Hogan et al. 2008; Trompeter et al. 2013). On the other hand, auditors' reluctance to consult may be driven by pull forces such as cost, deadlines, delays, confidence in the auditors' own forensic skills, lack of appreciation of what forensic specialists can add, the low base rate of fraud, incentives, and apprehensions about out-group members (Asare and Wright 2004; Hogan et al. 2008; Trompeter et al. 2013).
We propose that auditors who are reluctant to consult are likely to view the consultation with skepticism, be less open in their communications, and view the forensic specialists as externally imposed rather than as valuable team members (Clarin 2007; Gratton and Erickson 2007). Further, reluctance can result in a potential self-fulfilling prophecy (i.e., an expectation that consultation does not add value impairs communication and trust, which then leads to unsuccessful outcomes). On the other hand, auditors may be reluctant to consult but once consultation commences, they may leverage the knowledge of the specialists leading to identification of risks, program effectiveness, and positive affective reactions. Our next hypothesis provides evidence on the effect of reluctance to consult:
• H3:
Auditors' reluctance to consult is negatively associated with taskwork and teamwork.
Forensic Specialist-Related Antecedent: Forensic Specialists' Understanding of the Clients' Business and Engagement Objectives
Forensic specialists have varied backgrounds, which can affect the extent to which they understand the client's business and engagement objectives (Asare et al. 2015). Some specialists may have audit backgrounds and acquire forensic expertise through firm training and education. Others may have a pure forensic background (e.g., prior education in criminology and work experience with the FBI or other investigative agencies) and have limited interest in and understanding of client considerations and engagement budgets (Bell, Peecher, and Thomas 2005; Public Oversight Board [POB] 2000, 76). An absence of a shared mental model of the task can create a social identity crisis (i.e., tension between auditors [“in-group”] and forensic specialists [“out-group”]). Research has shown that teams are more effective if they have shared mental models (Levesque, Wilson, and Wholey 2001; Mathieu et al. 2005) and strategic consensus (Ensley and Pearce 2001; Kellermanns, Walter, Lechner, and Floyd 2005), where the former refers to a common understanding or mental representation of knowledge (Mathieu et al. 2008), and the latter is defined as a shared understanding of strategic priorities (Kellermanns et al. 2005).
We anticipate that forensic specialists who also understand the client and engagement objectives (i.e., shared mental model) will best leverage their forensic skills and improve the value of the overall consultation experience. It is also likely that the forensic experts' understanding of the clients' business and engagement objectives will facilitate teamwork, since the auditor and forensic expert will be sharing a common base level of knowledge. Thus, the following hypothesis examines the effect of the forensic specialists' understanding of the clients' business and engagement objectives on the work to be performed:
• H4:
There is a positive association between forensic specialists' understanding of the clients' business and engagement objectives and taskwork and teamwork.
Effects of Taskwork and Teamwork on Consultation Outcomes
The most important taskwork in the forensic consultation setting likely includes fraud risk assessments and program planning decisions (e.g., Zimbelman 1997; Glover, Prawitt, Schultz, and Zimbelman 2003; Asare and Wright 2004; Hammersley et al. 2011; Boritz et al. 2015a). To the extent that the forensic specialists can bring unique risk assessments and risk responsiveness strategies to the engagement, it is expected they will add value and increase overall consultation effectiveness. This discussion leads to the following hypothesis:
• H5:
Improved taskwork is positively associated with consultation outcomes.
Achieving a high level of taskwork requires teamwork between the audit team and the forensic specialists. In this context, Hollenbeck et al. (1995) suggest that team information sharing is critical, leading us to focus on auditors' experiences with communications with the forensic expert. We expect better communications to engender trust and enhance team commitment, and thereby enhance perceived consultation effectiveness, as posited in the following hypothesis:
• H6:
Improved teamwork is positively associated with consultation outcomes.
Research Approach
We use two independent field surveys, which required auditors to respond to tailored questions designed to elicit their experiences in consulting with forensic specialists on actual audit engagements.7 The primary focus of the first survey is to provide quantitative data on the antecedents, work, and outcomes of the consultation in order to allow us to test research hypotheses and the model proposed in Figure 1. In addition, we gathered qualitative data on factors that enhance communication and collaboration. The second field survey, conducted after the first, is designed to gather qualitative data to augment the findings of the first study. We first describe the participants, administration, and results of the first survey.
Field Survey 1
Participants
The director of the Center for Audit Quality (CAQ) contacted representatives of three of the Big 4 firms who had previously agreed to participate in the study and asked each representative to randomly identify a sample of recent financial statement engagements that involved consultation with forensic specialists. The firms agreed to select 60 engagements. The representatives were asked to identify a senior member of the audit team on each engagement to complete the research instrument. A secured electronic survey link was then sent to each of the identified participants, providing assurance that each response represented experiences with a different client. A total of 57 completed responses were received, representing a 95 percent response rate. Due to confidentiality concerns, the responses do not identify the firms or the participating auditors.
Respondents had a mean (standard deviation) of 9.45 (2.76) years of audit experience and had worked with forensic specialists a mean (standard deviation) of 5.46 (2.61) times. They had participated in audits that resulted in material errors a mean (standard deviation) of 2.25 (2.55) times and in fraud 0.44 (1.423) times.8 The demographic profile of our participants indicates that they have strong domain and task experience as well as good familiarity with consulting forensic auditors.
Research Instrument
By opening the survey link, each auditor accessed a research instrument that contained five parts. The first part was an introductory screen that explained the objectives of the study (gathering and analyzing data on the collaboration between auditors and forensic specialists), assured participants of anonymity and confidentiality, and provided them with the contact information of the director of the CAQ if they had any questions. In the second part, they were reminded that they had been selected because they recently consulted a forensic specialist on an audit engagement. They were asked to select the engagement on which they had consulted and to respond to questions regarding their consulting experience.9
Participants were also told to feel free at any time during the survey to refer to the audit working papers or other documentation related to the selected engagement. In this part, they responded to general questions about the consultation (recency, timing, and the primary reason for the consultation) and about the client (annual revenue, ownership, industry, and audit tenure).10 In the third part, participants responded to questions on the consultation process that correspond to elements of the framework in Figure 1. Thus, they answered questions about the following antecedents: timing and trigger of the consultation (auditee related), reluctance of the auditor to consult (auditor related), and the forensic specialists' understanding of the clients' business and engagement objectives (forensic specialist related). In addition, the questions addressed a number of factors related to taskwork (risk assessment and risk responsiveness) and teamwork (level of communications, commitment, and trust). Last, they were asked about the outcomes of the consultation (i.e., overall effectiveness, satisfaction, and the cost of the engagement and any related delays). The specific questions posed, and the related descriptive statistics (discussed below), are presented in Table 1.
TABLE 1
Field Survey 1 Questions Mapped to Model of Consultation Effectiveness and Related Descriptive Statistics
In the fourth part, we asked participants two exploratory open-ended questions aimed at obtaining additional insights on factors enhancing communication and collaboration: (1) What are some ways or factors that can enhance the communications between financial statement auditors and forensic specialists? And (2) What do you think are the most important factors that enhance the collaboration between financial statement auditors and forensic specialists on audit engagements? Finally, participants responded to demographic questions.
Client Demographic Profile
About 37 percent (21) of the clients' revenues exceed $5 billion; 35 percent (20) are between $1 billion and $5 billion; 14 percent (8) are between $500 million and $1 billion; 8 percent (5) are between $100 million and $500 million; and 5 percent (3) have less than $100 million in revenues. With respect to ownership structure, about 91 percent of the clients are public companies. The engagements represent a broad range of industries (eight industry categories) with the highest frequencies in manufacturing (28 percent) and in financial services (15.8 percent). Last, a substantial majority (73 percent) of the consultations were recurring audit engagements with tenure of more than five years.
Descriptive Statistics
With respect to when the forensic specialists are brought in on an audit engagement (timing), approximately 44 percent of the forensic specialists are consulted at the beginning of the engagement and remain on the engagement, about 49 percent are brought in for the planning phase, and about 7 percent are brought in during the substantive testing and review phases. These results suggest that a substantial majority of forensic consultation is occurring early enough for the forensic specialist to, if needed, participate in planning and brainstorming.
With respect to why auditors consult with forensic specialists (trigger), the primary reason was a firm mandate (47 percent of the responses). In contrast, only about 11 percent of the consultations were triggered by a high fraud risk assessment. Other reasons triggered consultation in about 42 percent of the sample. Analysis of the qualitative responses showed that 17 (approximately 71 percent) of the participants in the "other" category indicated that the consultation occurred as a result of "participation in firm's forensic program." Because there was an option to choose "required by firm policy," which these participants did not select, we interpret their choice as voluntary participation in the firm's program in contrast to a mandated requirement. However, we cannot validate this assumption. Other participants indicated that "it was a prior year restatement"; "the expert was brought in to revamp the fraud and COSO procedures"; and "I am not sure if it was required." Thus, firm mandates or special programs triggered the majority of consultations.
As reported in Table 1, the mean extent of reluctance to consult is only 11.61. While the level of reluctance appears low, it is not directly comparable to prior studies (see footnote 3). Table 1 also shows that auditors assessed the forensic specialists as having a good understanding of the client's business (mean of 75.32), as well as the engagement objectives (mean of 86.07), and they were highly committed to the attainment of the engagement objectives (mean of 86.25).
With respect to taskwork, consultation was found to lead to a less than moderate increase in the identification of additional fraud risks (mean of 34.20) and the identification of unique procedures (mean of 31.48). It also led to a moderate increase (mean of 51.52) in the use of effective procedures. Thus, consultation with forensic auditors is generally, albeit moderately, evaluated as improving program effectiveness. However, consultation had a more limited effect on the change in assessed fraud risk (mean of 18.75), the change in the allocation of hours to various staff levels (mean of 12.64), and the change to the timing of audit procedures (mean of 12.29).
The descriptive results also show that auditors experienced a high and positive level of teamwork. For instance, the mean level of trust between the forensic specialist and the audit team is 85.23, and mean inter-role communication effectiveness is 78.95. Further, the consultation was found to have only a minimal impact on increasing cost (mean of 19.50) and on delays (mean of 6.0). Finally, the overall level of effectiveness of the consultation process and the overall level of audit team satisfaction with the consultation process are relatively high (mean of 69.75 and 62.16, respectively).
Effect of Clients' Demographic Profile
We examine the effect of the clients' demographic profile (tenure and client size) on teamwork and taskwork and discuss only the significant effects.11 Consultation on engagements with tenure of more than five years leads to a significantly lower change in the allocation of hours to various staff levels (9.12 versus 23.2, t54 = 2.159; two-tailed p = 0.035); lower cost (15.88 versus 30.36, t54 = 2.285; two-tailed p = 0.026); less delay (3.60 versus 13.21, t54 = 2.516; two-tailed p = 0.015); and engenders less commitment of the forensic specialist to the team (81.62 versus 92.86, t54 = 1.972; two-tailed p = 0.054). Consultation on larger-size clients (revenue in excess of $5 billion) leads to the identification of more additional fraud risks (42.62 versus 29.14, t54 = 1.758; two-tailed p = 0.084).

Bivariate Associations between Variables

We also examine the association between the variables in our framework with Pearson correlations and discuss below only the significant correlations (p < 0.05). As expected, the correlation between timing and effective communication is −0.36, suggesting that early consultation is associated with more effective communication.12 There is a positive association between the trigger of the consultation and the identification of additional fraud risks (0.25) and change in the assessed fraud risk (0.24).13 This finding suggests that consultations are more likely to be associated with improved risk assessments when they are not mandated.

We found a positive association between forensic specialists' understanding of the clients' business and effective procedures (0.35), change in scope (0.28), trust (0.41), communication effectiveness (0.57), and affective reactions (perceived performance [0.54] and satisfaction [0.67]). Further, understanding the clients' business has a negative association with reluctance to consult (−0.48). Thus, forensic specialists' understanding of the clients' business seems important to collaborative success on multiple dimensions.

Regarding reluctance to consult, there is a statistically significant negative relationship between reluctance to consult and effective procedures (−0.35), trust (−0.32), communication effectiveness (−0.45), and affective reactions (perceived effectiveness [−0.40] and satisfaction [−0.50]). Thus, although reluctance to consult with forensic experts is on average low, the more reluctant an auditor is to consult, the less positive is the consultation experience on multiple dimensions.14
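For readers who want to see the mechanics of these tests, the sketch below shows how the tenure comparison and the timing-communication correlation reported above could be computed. It is a minimal illustration, not the authors' code: the data file and the column names (`tenure_years`, `cost_impact`, `timing`, `communication_effectiveness`) are hypothetical stand-ins for the survey variables.

```python
# Hedged sketch: the style of group comparison (t-test) and Pearson
# correlation reported above. The file and column names are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey1.csv")  # hypothetical file, one row per engagement

# Tenure effect on cost impact: engagements with tenure > 5 years vs. others
long_tenure = df.loc[df["tenure_years"] > 5, "cost_impact"]
short_tenure = df.loc[df["tenure_years"] <= 5, "cost_impact"]
t_stat, p_two_tailed = stats.ttest_ind(long_tenure, short_tenure)
print(f"t = {t_stat:.3f}, two-tailed p = {p_two_tailed:.3f}")

# Association between timing of consultation and communication effectiveness
r, p_value = stats.pearsonr(df["timing"], df["communication_effectiveness"])
print(f"r = {r:.2f}, p = {p_value:.3f}")
```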
Hypotheses Testing

We test our framework and hypotheses with a path analysis of variables representing the constructs in Figure 1.15 The path analysis results show that this model has a good fit with the data (χ2 = 6.624, df = 4; p = 0.157; RMSEA of 0.105, PCLOSE = 0.215; CFI = 0.969; IFI = 0.979).16 Table 2 presents all the path coefficients. All p-values to test the path coefficients are one-tailed to reflect our directional hypotheses.

TABLE 2
Path Analysis for the Full Model with Standardized Path Coefficients

H1 states that early consultation with forensic specialists is associated with enhanced taskwork and teamwork. That is, H1 posits that timing is negatively associated with taskwork and teamwork (see footnote 12). The path model shows that early consultation is significantly associated with more effective risk responsiveness (β = −0.26; p = 0.022) but not with risk assessment (β = 0.09; p = 0.252). Further, early consultation is significantly associated with teamwork, albeit marginally (β = −0.16; p = 0.062). Thus, H1 is supported, but only as it relates to risk responsiveness.

H2 states that mandating consultation is negatively associated with taskwork and teamwork. That is, H2 posits that voluntary consultations are associated with enhanced taskwork and teamwork (see footnote 13). Table 2 shows that voluntarily triggered consultations are associated with improved risk assessments, albeit marginally (β = 0.21; p = 0.060), but not with risk responsiveness (β = 0.03; p = 0.420) or teamwork (β = −0.03; p = 0.393). Thus, H2 is not supported.

H3 states that auditors' reluctance to consult is negatively associated with taskwork and teamwork. The path results show insignificant associations between auditors' reluctance to consult and both risk assessment (β = −0.10; p = 0.237) and risk responsiveness (β = −0.13; p = 0.316). However, auditors' reluctance to consult is associated with teamwork, albeit marginally, consistent with the notion that the more reluctant an auditor is to consult, the less positive is teamwork (β = −0.16; p = 0.060). Taken together, H3 is not supported.

H4 states that there is a positive association between forensic specialists' understanding of the clients' business and engagement objectives (shared mental model) and taskwork and teamwork. The path model indicates that a shared mental model is positively associated with risk assessment (β = 0.40; p = 0.007) and teamwork (β = 0.59; p = 0.001), but not with risk responsiveness (β = 0.20; p = 0.281). Thus, H4 is supported for risk assessment and teamwork.

H5 states that improved taskwork is associated with consultation outcomes. Consistent with H5, the path model shows that risk assessment is positively associated with overall effectiveness (β = 0.34; p = 0.003). Further, risk assessment is also positively associated with risk responsiveness (β = 0.40; p = 0.001). However, while risk responsiveness is not associated with overall effectiveness (β = 0.05; p = 0.359), it is positively associated with cost and delay (β = 0.44; p = 0.001). This suggests that more effective procedures have a tendency to increase costs and delays. In turn, there is an unexpected positive association between cost and delay and overall effectiveness (β = 0.21; p = 0.036).17

H6 states that improved teamwork is positively associated with consultation outcomes. Consistent with H6, there is a positive association between teamwork and overall effectiveness (β = 0.48; p = 0.001). However, unexpectedly, albeit marginally, effective teamwork is negatively associated with risk assessment (β = −0.27; p = 0.059), and it is not associated with either risk responsiveness (β = −0.20; p = 0.113) or cost and delay (β = −0.16; p = 0.176). Table 2 also shows that none of the antecedents had a direct effect on cost and delay.

Final Path Model

We used only predictors that are at least marginally significant (one-tailed p ≤ 0.06) to run a new path model, whose results are presented in Figure 2. As expected, the reduced model fits the data, providing support for our framework (χ2 = 15.668, df = 18, p = 0.616; RMSEA of 0.000, PCLOSE = 0.748; CFI = 1.00; IFI = 1.021).18

FIGURE 2
Reduced Path Model of Consultation Effectiveness: Standardized Path Coefficients (p-values)
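The paper does not disclose the software used to estimate the path model. As a hedged sketch of how such a model could be specified and fit in Python, the example below uses the semopy package with one plausible encoding of the hypothesized paths; the variable names and the regression structure are our illustrative assumptions, not the authors' actual specification.

```python
# Hedged sketch (not the authors' code): one plausible specification of the
# hypothesized paths, assuming a DataFrame with one row per engagement and
# hypothetical column names for the constructs in Figures 1 and 2.
import pandas as pd
import semopy

MODEL_DESC = """
risk_assessment ~ shared_mental_model + voluntary_trigger
teamwork ~ shared_mental_model + reluctance + early_timing
risk_responsiveness ~ risk_assessment + early_timing
cost_delay ~ risk_responsiveness
overall_effectiveness ~ risk_assessment + teamwork + cost_delay
"""

df = pd.read_csv("survey1.csv")       # hypothetical data file
model = semopy.Model(MODEL_DESC)
model.fit(df)                          # maximum-likelihood estimation

print(model.inspect())                 # path estimates and p-values
print(semopy.calc_stats(model).T)      # chi-square, df, CFI, RMSEA, etc.
```

A fit report of this kind is where indices such as the χ2, RMSEA, and CFI values quoted above would come from, although the cutoffs used to judge them remain a matter of judgment (see Chen, Curran, Bollen, Kirby, and Paxton 2008).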
Qualitative Analysis of Collaboration and Communication Effectiveness

Our two open-ended questions focus on factors that enhance communication and collaboration effectiveness. Forty participants provided qualitative responses. One of the researchers and a research assistant with public accounting experience independently coded the responses into distinct idea units and classified the ideas according to the variables in our research framework. The overall level of initial agreement was 82 percent, indicating good inter-coder reliability. After the independent coding, the two coders met and jointly resolved the differences.
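To make the reliability figure concrete, the sketch below computes percent agreement of the kind reported here, plus Cohen's kappa as a chance-corrected check; kappa is our addition rather than a statistic the paper reports, and the category labels are invented stand-ins for the idea-unit codes.

```python
# Hedged sketch: percent agreement between two coders, plus Cohen's kappa
# as a chance-corrected check (kappa is our addition, not reported in the
# paper). The labels below are invented stand-ins for the actual codes.
from sklearn.metrics import cohen_kappa_score

coder_a = ["timing", "teamwork", "firm policy", "timing", "attributes"]
coder_b = ["timing", "teamwork", "attributes", "timing", "attributes"]

agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Percent agreement: {agreement:.0%}; Cohen's kappa: {kappa:.2f}")
```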
The coders also extracted several illustrative excerpts under each variable in the model. After several readings of the excerpts, we selected the most representative and meaningful ones, as presented below.

In response to the question about ways to enhance consultation effectiveness, approximately 40 percent of participants mentioned the forensic specialists' and auditors' attributes as important (the forensic specialists' understanding of the client [17.5 percent], the forensic specialists' understanding of how their work relates to the audit objectives [12.5 percent], and the financial auditors' understanding of what the specialists have to offer [10 percent]). Timing of the consultation was mentioned by 20 percent of the participants and teamwork by about 28 percent. Firm policy was mentioned by 10 percent of the participants.

The dominant comment on timing is "upfront involvement of the forensic specialists in the planning phase." The comment from this participant sums up this point and further clarifies the importance of communications:

"Include forensic auditors in the planning process. In addition, constant open communication between the auditors will ensure the forensic auditors are focused on addressing the needs of the financial statement auditors in addressing relevant fraud risks."

But others felt the involvement must be continuous:

"For our engagement, we interacted with the forensic auditor during the planning phase of the audit only. In future audits to enhance the benefit of the process we should continue to follow up with the forensic auditors to discuss the results of our testing and changes in account balances."

Two participants clarified how upfront involvement of the forensic specialists benefits the process. Specifically, early involvement puts the forensic specialists "on the same page" with respect to understanding the key planning parameters and understanding the engagement objectives:

"An upfront explanation on the nature of the engagement, scope of audit work, and historical completion workpapers greatly enhance the forensic auditor's understanding of the entity."

"The consultation process goes better when forensic specialists are invited to a brainstorming session, without an agenda, because that is when the most relevant audit procedures are designed."

Other participants identified sharing information regarding engagement deadlines and shared goals:

"The forensic specialist must be kept up to date on expectations from audit committee members and management. The timing of the consultation must be appropriate and forensic specialists must have an understanding of client-imposed deadlines and issues. We must work together to achieve a common goal—not looking for things that don't exist but truly providing value added insight to the team."

Regarding the forensic specialists' attributes, participants' experiences relate to the forensic specialists' skillset and the need for them to take ownership of their work. The following are illustrative comments:

"It [collaboration] can be enhanced by having the forensic auditor own more of the audit area and its deliverable content in the same manner that other specialists would (i.e., IT control specialists, actuaries, valuation specialists, etc.). From my experience, the forensic auditor's participation has generally been geared more toward consultation and execution of specific procedures, with the auditor left to document their work versus receiving a specialist's report and accepting the findings as the auditor's responsibility."

Another participant highlighted that the forensic specialists should be part of the engagement team:

"It is important for them to be part of the team, not just to bring them in for fraud discussion, as it makes it seem that the team is not educated enough to have these types of discussions."

Above all, it seems imperative that the forensic specialist should know how to carry out an investigation as well as communicate effectively with the engagement team, as indicated in this response:

"Effectiveness is going to be about their forensic skillset, but also about their communication approach given the sensitivity of such situations."

A participant mentioned the importance of the auditors' comfort with the forensic specialists' responses:

"It is important that the forensic specialist's involvement increases our confidence level that the company has had an appropriate response to fraud and that we have a defensible audit response."

This remark is consistent with the suggestion that audit teams engage in interaction rituals to produce and distribute comfort (i.e., assurance) (Pentland 1993).

Finally, several comments provide insights on how firm-level activities can enhance collaboration effectiveness. For instance, a participant emphasized the use of relationships to build a consultation culture:

"Long-standing relationships of trust are very helpful in building a consultation culture among senior team executives. For the senior team members, it is about building ongoing communication regarding fraud into the planning dialogue. For staff members, the key is to bring real-world fraud experiences into the audit to increase the awareness and sensitivity to fraud issues and an awareness of the possibility of forensic techniques in the audit. Most staff members can use help in identifying the key fraud risks and maintaining skepticism. For them, coaching and training communications are important."

This participant takes a long-term view and believes fostering a consultation culture that changes fraud mindsets at the managerial and staff level is the ultimate path to consultation effectiveness. A participant who calls for closer relationships at the highest levels expresses a similar thought:

"I think that assurance partners should develop closer relationships with forensic auditors so that they feel comfortable approaching them for assistance."

One participant recommends shared training as a way of building relationships, increasing information sharing, improving learning, and ultimately reducing the cost of consultation:

"Forensic auditors would be less costly and equally effective if they spent more time training and sharing experiences with financial auditors, and less time on audit engagements. Often forensic auditors' contribution to the audit does not justify the cost, but due to firm policy they must be invited to participate nonetheless."
Finally, a participant calls for "having consistency of forensic team members each year." Whether this familiarity will erode or possibly enhance some of the benefits of consultation is unknown.

The experiences also raised the importance of firm commitment to the effectiveness of collaboration, as explained by this participant:

"Formalized approaches and policies help ensure that the two groups work together on a routine basis. The firm's overall commitment to doing so is a key factor. Also, it is helpful to have designated champions on both the forensic and auditing sides of the house to encourage collaboration."

Some participants' remarks, epitomized by the excerpt below, seem to address and suggest ways around the social identity conflict between in-group and out-group members (Ensley and Pearce 2001; Levesque et al. 2001; Mathieu et al. 2008; Kellermanns et al. 2005):

"Each group could have a greater understanding of what the other does on a day-to-day basis. This can be done through joint trainings, work shadowing, etc."

In effect, this participant proposes that a firm's commitment to investing in joint extra-collaboration enterprises is needed to enhance shared mental models and strategic consensus (Ensley and Pearce 2001; Levesque et al. 2001). Although the participant clearly sees the forensic specialists as belonging to a different group, she also welcomes drawing them closer through these joint exercises outside the audit engagement (see Gratton and Erickson 2007).

The foregoing excerpts enrich our quantitative findings and suggest various testable propositions and research questions, which are enumerated in Parts 1 and 2 of Table 3. For instance, are the screens that auditors use in voluntarily deciding to seek consultation effective in identifying troubled audits? Can early involvement of the forensic specialist enable groupthink and, if so, how can it be curbed? Should consultations be targeted, or should forensic specialists be made members of the audit team? How should long-standing relationships between the forensic specialists and the audit team be structured to avoid ritualizing the process? Under what circumstances do auditors not follow the forensic specialists' recommendations?

TABLE 3
Summary of Research Questions Suggested by Field Surveys

With respect to communication effectiveness, approximately half of the respondents identified the timing of including the forensic specialist on the engagement as a factor that can enhance communications between the auditors and the specialists. Of these respondents, 55 percent identified involving the specialist up front in the planning stage and 45 percent noted involving the specialist throughout the engagement. Analysis of the comments suggests that while timing of consultation is important, its effect is likely to be moderated by the scheduling and frequency of meetings, the use of an agenda, and the channels of communication (e.g., face-to-face versus virtual meetings). These observations, in turn, implicate several testable research questions that are summarized in Part 3 of Table 3.
For instance, can the use of an agenda and regularly scheduled meetings overcome the communication limitations that are inherent in late consultations? Are regularly scheduled meetings more effective than ad hoc meetings in enhancing communications? To what extent do familiarity with the forensic specialists and geographical proximity affect teamwork? How do other sub-elements of teamwork (e.g., trust and commitment) develop over time, and how do they interact with other elements (e.g., timing) to affect work and outcomes?

Field Survey 2

Overview

The second field survey was designed to allow auditors to provide reflexive responses about their consultation experiences regarding the specific nature of additional risks that were identified, unique procedures recommended, conditions under which risk assessments and procedures are enhanced, and the cost of consultation. Thus, we were interested in auditors' cumulative consultation experience on these matters across clients rather than their specific experiences on a particular engagement.19

Procedures and Participants

We sent a secured electronic survey link to four partners of the Big 4 firms who had agreed to recruit participants from their firms. A total of 29 completed responses from 22 partners and seven managers were received. Due to confidentiality concerns, the responses do not identify the firms or the participating auditors. Respondents had a mean (standard deviation) of 16.7 (7.91) years of audit experience and had worked with forensic specialists a mean (standard deviation) of 6.15 (3.5) times.

Research Instrument

The electronic survey had three parts. The introductory part explained the objectives of the study (gathering and analyzing data on the collaboration between auditors and forensic specialists) and assured participants of anonymity and confidentiality. In the second part, they were asked if they had ever consulted a forensic specialist. Only those responding affirmatively were allowed to continue the survey. They were asked to think about some of the engagements on which they consulted forensic specialists and answer questions on risk assessment, risk responsiveness, and cost, as excerpted in Table 4.

TABLE 4
Excerpts of Questions from Survey 2

One of the researchers and a research assistant with public accounting experience coded the unique risks and procedures into discernible audit-related themes (e.g., scope of the fraud, type of procedure, corruption, embezzlement). An important observation is the commonality of themes raised by auditors on these issues, which facilitated coding. The overall level of initial agreement was 86 percent, indicating good inter-coder reliability. Subsequently, the two coders met to jointly resolve differences. The coders also identified representative quotes for each category and for circumstances under which the consultation enhanced risk assessment and procedures. Similarly, we excerpted the most representative themes on the effect of consultation on cost. Last, common themes that do not fit under taskwork or cost are presented as additional findings. For brevity, we discuss comments related to risk assessment, audit procedures, and cost. Detailed quotes and additional findings are presented in Appendix A.
Forensic Specialists' Role in the Risk Assessment Process

On the question of the unique risks identified by the forensic specialists, we categorize auditors' responses into five themes: (1) scope of the fraud, (2) fictitious parties and documents, (3) corruption, (4) industry-wide issues, and (5) embezzlement. Auditors mentioned the expansion of the "scope of the fraud" as the most common unique risk identified by the forensic specialists. On the question of how consultation can enhance risk assessment, auditors highlighted the importance of a consultation that focuses on a specific issue and exposes auditors to various fraud schemes. Consultations that are unfocused or that do not lead to a way forward can result in an emphasis on "trivial risks," "scope creep," or "a wild goose chase."

In sum, the forensic specialists' role in risk assessment includes clarifying risk situations, identifying fraud schemes, providing a different perspective, and focusing the auditor on important issues. However, there must be a clear rationale for involving the forensic specialists in the risk assessment phase to avoid a wild goose chase, as suggested by some experts at a PCAOB advisory group meeting (see Garver 2007).

Forensic Specialists' Role in Risk Responsiveness

Participants' comments suggest that the forensic specialist can enrich the auditors' risk response by customizing audit procedures, enhancing traditional procedures, ensuring the comprehensiveness of the audit program, participating in scope determination, taking part in brainstorming, and recommending and performing unique procedures. Regarding unique procedures, the responses fell into three categories: a detailed review of problem areas, definition of attributes of the data that match the fraud, and authentication and email searches. The detailed review covered areas such as payroll, sales terms, loan existence, corporate credit cards, vendor invoices, foreign payments, and employee reimbursements. Authentication (i.e., establishing validity) includes documents, vendors, entities, and customers. Defining attributes of the data included data analysis, data mining, Computer Assisted Audit Techniques (CAAT), and forensic analytics.
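The survey names data analysis, CAATs, and forensic analytics only at a general level. As one hedged example of what such an analytic might look like, the sketch below screens a hypothetical population of vendor-invoice amounts against Benford's law, a common first-digit test in forensic work; the file, column, and choice of test are our assumptions, since the paper does not disclose the specialists' actual procedures.

```python
# Hedged illustration: a first-digit (Benford's law) screen, one common
# forensic analytic. File, column, and test choice are assumptions; positive,
# nonzero invoice amounts are assumed.
import numpy as np
import pandas as pd

amounts = pd.read_csv("vendor_invoices.csv")["amount"]  # hypothetical data
first_digit = amounts.abs().astype(str).str.lstrip("0.").str[0].astype(int)

observed = first_digit.value_counts(normalize=True).sort_index()
expected = pd.Series({d: np.log10(1 + 1 / d) for d in range(1, 10)})

# Digits whose observed frequency departs most from Benford's expectation
# point to transaction strata that may warrant a targeted, detailed review.
screen = pd.DataFrame({"observed": observed, "expected": expected}).fillna(0.0)
screen["abs_gap"] = (screen["observed"] - screen["expected"]).abs()
print(screen.sort_values("abs_gap", ascending=False))
```

A screen like this does not establish fraud; it only directs the kind of detailed review of problem areas that participants describe above.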
Cost Considerations and Timeliness of Engagement Completion

Participants' comments indicate that if the forensic expertise is matched to the task, then it can result in labor efficiencies and cost savings. Otherwise, the involvement may increase cost without a corresponding increase in benefit. That is, even though the forensic specialists' charging rate is high, their costs can likely be passed on to the client if their involvement is targeted to specific risks rather than generalized risks. In effect, cost goes up when a fraud is discovered because of additional time and work. However, this is not because of the consultation but because the "engagement is now beyond the normal scope."

On the question of whether consulting with the forensic specialists affects the timeliness of the completion of an audit engagement, participants' comments clarify that engagements might be delayed, but not because of involving the forensic specialist. Rather, it is the necessity of resolving the potential or actual management fraud that is causing the delay.

Summary, Implications, and Limitations

Prior research suggests that involving forensic specialists on an audit engagement may reduce auditors' fraud detection deficit (e.g., Abbott 1988; Hollenbeck et al. 1995; Hollenbeck, Colquitt, Ilgen, LePine, and Hedlund 1998; Smith-Lacroix, Durocher, and Gendron 2012; Trompeter et al. 2013; Asare et al. 2015). We develop and test a framework of the consultation process using two related field surveys. Our results show that the forensic auditors' understanding of the clients' business and audit engagement objectives is positively associated with effective teamwork and risk assessments, which, in turn, are positively associated with overall consultation effectiveness. While risk assessment is positively associated with risk responsiveness, the latter is not associated with overall consultation effectiveness. We also find that involving forensic specialists early in the engagement is associated with improved teamwork and risk responsiveness. We found marginally significant support that auditors' reluctance to consult is negatively associated with effective teamwork, while early consultation is positively associated with risk responsiveness.

Auditors' comments support the conclusion that consultation enhances risk assessments when it is targeted to specific circumstances, clarifies fraud schemes, identifies idiosyncratic risk, and brings in a different perspective. Auditors noted that the cost impact of consultation is limited, apparently because even though forensic specialists charge higher rates, they tend to help auditors focus their effort, leading to efficiency gains that largely compensate for their rates.

Our findings have important research implications and suggest avenues for future research, several of which have been listed in Table 3. We highlight the effect of when the forensic specialists are deployed as an intriguing avenue for research. The participants appear to be in favor of early involvement. While this approach may ensure that all parties share a common initial understanding, it presents the risk of making the forensic specialists a co-producer of the audit plan, which can present later challenges, including groupthink (Bamber, Watson, and Hill 1996). Only carefully controlled and systematic studies can bring clarity to these issues.

Another avenue for research is to evaluate the screens that auditors use in deciding whether to involve forensic specialists when such consultation is not mandatory. The underlying presumption in using such screens is that auditors are good at identifying engagements that require the use of forensic specialists. Yet, the rationale for the assumption is unclear. For instance, if one were to review SEC enforcement releases that deal with fraud in the last five years, what proportion of those engagements involved forensic specialists? While the level of disclosure in such releases is often limited and uneven, it can nevertheless provide some important insights on the efficacy of auditors' screens and the forensic specialists' effectiveness. Alternatively, carefully designed experiments can shed light on the screens that auditors employ in deciding when to consult with forensic specialists.

Although participants' qualitative responses suggest that they generally follow the advice of forensic specialists, they also raise the possibility that they would allow their judgment of materiality or relevance to override the forensic specialists' recommendation. This finding raises the question of the circumstances, including client pressure and transaction complexity, under which a forensic specialist's recommendations are not followed.
In this regard, the judge-advisor paradigm can be a particularly useful framework to address questions such as whether auditors have an appropriate level of, too much, or too little trust in forensic specialists (see, e.g., Bonaccio and Dalal 2006; Yaniv 2004).

An important party that has been omitted from the discussion is the client. Not only do auditors attempt to pass forensic costs on to the client, but also client personnel have to deal with forensic specialists, sometimes unexpectedly. As an example, future research can examine the effect of involving a forensic specialist on an audit engagement on the client's comfort (or discomfort) levels and the corresponding strategies adopted by the client.

Participants noted that forensic specialists sometimes indicate their prior experiences with frauds and the specific tactics that they employed to address the forensic situation. Are references to such tactics, compared to their omission, more persuasive to the engagement team? The effect of the background of the forensic specialists can also be an important area of inquiry.

From a practical perspective, regulators and firms may consider policies that ensure that auditors and the forensic specialists are in concurrence with respect to engagement goals and strategies to avoid the dysfunctional “wild goose chase” or “scope creep” phenomena. In a related vein, firm and regulatory policies that reduce auditors' reluctance to consult and enhance forensic specialists' shared mental model are important to improved consultation.

Our findings must be interpreted cognizant of the limitations of our research approach and choices. In particular, our research is not designed to address questions such as whether auditors are consulting in circumstances when they should or not consulting when they should have. Our focus is limited to engagements for which there were consultations. Thus, an important avenue for future research is to explore whether and the circumstances under which audit teams over-consume or under-consume the forensic specialists' services.

Also, although participants were free to refer to working papers in responding to questions, we did not gather data on how many actually did so. Thus, potential lack of accurate recall may be a limitation. However, of note, responses predominantly relate to recent consultations (82.5 percent of the consultations occurring less than a year earlier). Further, the participating firms did not allow us to gather data on whether the engagement team discovered fraud, thus limiting our ability to draw inferences about actual accuracy. In the absence of these data, we asked auditors to assess the overall effectiveness of the consultation process in leading to a higher-quality audit. To the extent that auditors' evaluation of the effectiveness of the consultation does not reflect accuracy in detecting fraud, our conclusions about effectiveness are similarly limited.

We are also limited by the sample selection method. In particular, auditors may have identified engagements for which consultations worked better than usual. This potential sampling bias may affect the conclusions about how different factors impact consultation effectiveness. Our use of structured questions, rather than interviews, limited our ability to ask tailored follow-up questions. However, the choice was driven by the protocol acceptable to the participating firms. Finally, the forensic specialist consults with the engagement team, not an individual auditor.
Thus, ideally it is important to obtain the perspectives of multiple team members on an engagement to avoid the potential of a single-rater bias. However, given the difficulty of obtaining participants at all, recruiting multiple members from the same engagement is very difficult. In sum, our proposed model is descriptive, allowing us to uncover how auditors interact effectively and benefit from the forensic specialists' work and providing a platform for future research to address related and relevant questions that are rooted in auditors' work realities.

REFERENCES

Abbott, A. 1988. The System of Professions. Chicago, IL: The University of Chicago Press.
American Institute of Certified Public Accountants (AICPA). 2002. Consideration of Fraud in a Financial Statement Audit. Statement on Auditing Standards No. 99 (Supersedes SAS No. 82). New York, NY: AICPA.
American Institute of Certified Public Accountants (AICPA). 2012a. Consideration of Fraud in a Financial Statement Audit. Statement on Auditing Standards No. 122, AU-C Section 240. New York, NY: AICPA.
American Institute of Certified Public Accountants (AICPA). 2012b. Using the Work of a Specialist. Statement on Auditing Standards No. 122, AU-C Section 620. New York, NY: AICPA.
Asare, S., and A. Wright. 2004. The effectiveness of alternative risk assessment and program planning tools in a fraud setting. Contemporary Accounting Research 21 (2): 325–352.
Asare, S., A. Wright, and M. Zimbelman. 2015. Challenges facing auditors in detecting financial statement fraud: Insights from fraud examiners. Journal of Forensic and Investigative Accounting 7 (2).
Bamber, M., R. Watson, and C. Hill. 1996. The effects of group support system technology on audit group decision making. Auditing: A Journal of Practice & Theory 15 (1): 122–134.
Beasley, M., J. Carcello, D. Hermanson, and T. Neal. 2010. Fraudulent Financial Reporting 1998–2007: An Analysis of U.S. Public Companies. New York, NY: AICPA.
Beasley, M., J. Carcello, D. Hermanson, and T. Neal. 2013. An Analysis of Alleged Auditor Deficiencies in SEC Fraud Investigations: 1998–2010.
Bell, T., M. Peecher, and H. Thomas. 2005. The 21st Century Public Company Audit. New York, NY: KPMG LLP.
Bonaccio, S., and R. S. Dalal. 2006. Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences. Organizational Behavior and Human Decision Processes 101 (2): 127–151.
Bonner, S. E. 2008. Judgment and Decision Making in Accounting. Upper Saddle River, NJ: Pearson Prentice Hall.
Boritz, E., N. Kochetova-Kozloski, and L. Robinson. 2015a. Are fraud specialists relatively more effective than auditors at modifying audit programs in the presence of fraud risk? The Accounting Review 90 (3): 881–915.
Boritz, E., N. Kochetova-Kozloski, L. Robinson, and C. Wong. 2015b. Auditors' and Specialists' Views about the Use of Specialists during an Audit. Working paper, University of Waterloo.
Brazel, J., T. Carpenter, and G. Jenkins. 2010. Auditors' use of brainstorming in the consideration of fraud: Reports from the field. The Accounting Review 85 (4): 1273–1301.
Chen, F., P. Curran, K. Bollen, J. Kirby, and P. Paxton. 2008. An empirical evaluation of the use of fixed cutoff points in RMSEA test statistics in structural equation models. Sociological Methods and Research 36 (4): 462–494.
Chen, G. 2005. Newcomer adaptation in teams: Multilevel antecedents and outcomes. Academy of Management Journal 48 (1): 101–116.
Clarin, O. 2007. Strategies to overcome barriers to effective nurse practitioner and physician collaboration. The Journal for Nurse Practitioners 3 (8): 538–548.
Cohen, S., and D. Bailey. 1997. What makes teams work: Group effectiveness research from the shop floor to the executive suite. Journal of Management 23 (3): 239–290.
Dyck, A., A. Morse, and L. Zingales. 2010. Who blows the whistle on corporate fraud? The Journal of Finance 65 (6): 2213–2253.
Ensley, M., and C. Pearce. 2001. Shared cognition in top management teams: Implications for new venture performance. Journal of Organizational Behavior 22 (2): 145–160.
Fay, D., C. Borrill, Z. Amir, R. Haward, and M. West. 2006. Getting the most out of multidisciplinary teams: A multi-sample study of team innovation in health care. Journal of Occupational and Organizational Psychology 79 (4): 553–567.
Garver, R. 2007. Forensic Audits: Got a Clue?
Gibbins, M. 2001. Incorporating context into the study of judgment and expertise in public accounting. International Journal of Auditing 5 (3): 225–236.
Gibbins, M., and J. Newton. 1994. An empirical investigation of complex accountability in public accounting. Journal of Accounting Research 32 (2): 165–186.
Gibbins, M., and S. Q. Qu. 2005. Eliciting experts' context knowledge with theory-based experiential questionnaires. Behavioral Research in Accounting 17 (1): 71–88.
Gibbins, M., S. McCracken, and S. Salterio. 2005. Negotiations over accounting issues: The congruency of audit partner and chief financial officer recalls. Auditing: A Journal of Practice & Theory 24 (Supplement): 171–193.
Gibbins, M., S. McCracken, and S. Salterio. 2007. The chief financial officer's perspective on auditor-client negotiations. Contemporary Accounting Research 24 (2): 387–422.
Gibbins, M., S. McCracken, and S. Salterio. 2010. The auditor's strategy selection for negotiation with management: Flexibility of initial accounting position and nature of the relationship. Accounting, Organizations and Society 35 (6): 579–595.
Gibbins, M., S. Salterio, and A. Webb. 2001. Evidence about auditor-client management negotiation concerning the client's financial reporting. Journal of Accounting Research 39 (3): 535–563.
Glover, S., and D. Prawitt. 2014. Enhancing auditor professional skepticism: The professional skepticism continuum. Current Issues in Auditing 8 (2): 1–10.
Glover, S., D. Prawitt, J. Schultz, Jr., and M. Zimbelman. 2003. A test of changes in auditors' fraud-related planning judgments since the issuance of SAS No. 82. Auditing: A Journal of Practice & Theory 22 (2): 237–251.
Gold, A., R. Knechel, and P. Wallage. 2012. The effect of the strictness of consultation requirements on fraud consultation. The Accounting Review 87 (3): 925–949.
Gratton, L., and T. Erickson. 2007. Eight ways to build collaborative teams. Harvard Business Review 85 (11): 100–109, 153.
Gray, B., and D. Wood. 1991. Collaborative alliances: Moving from practice to theory. The Journal of Applied Behavioral Science 27 (1): 3–22.
Hammersley, J. 2011. A review and model of auditor judgments in fraud-related planning tasks. Auditing: A Journal of Practice & Theory 30 (4): 101–128.
Hammersley, J., M. Bamber, and T. Carpenter. 2010. The influence of documentation specificity and priming on auditors' fraud risk assessments and evidence evaluation decisions. The Accounting Review 85 (2): 547–571.
Hammersley, J., K. Johnstone, and K. Kadous. 2011. How do audit seniors respond to heightened fraud risk? Auditing: A Journal of Practice & Theory 30 (3): 81–101.
Hoffman, V., and M. Zimbelman. 2009. Do strategic reasoning and brainstorming help auditors change their standard audit procedures in response to fraud risk? The Accounting Review 84 (3): 811–837.
Hogan, C. E., Z. Rezaee, R. Riley, Jr., and U. Velury. 2008. Financial statement fraud: Insights from the academic literature. Auditing: A Journal of Practice & Theory 27 (2): 231–252.
Hollenbeck, J. R., J. A. Colquitt, D. R. Ilgen, J. A. LePine, and J. Hedlund. 1998. Accuracy decomposition and team decision making: Testing theoretical boundary conditions. The Journal of Applied Psychology 83 (3): 494–500.
Hollenbeck, J., D. Ilgen, D. Sego, J. Hedlund, D. Major, and J. Phillips. 1995. Multilevel theory of team decision making: Decision performance in teams incorporating distributed expertise. The Journal of Applied Psychology 80 (2): 292–316.
Houston, R., M. Peters, and J. Pratt. 1999. The audit risk model, business risk, and audit planning decisions. The Accounting Review 74 (3): 281–298.
Ilgen, D. R., J. R. Hollenbeck, M. J. Johnson, and D. Jundt. 2005. Teams in organizations: From input-process-output models to IMOI models. Annual Review of Psychology 56 (1): 517–543.
International Auditing and Assurance Standards Board (IAASB). 2009. The Auditor's Responsibilities Relating to Fraud in an Audit of Financial Statements. International Standard on Auditing (ISA) 240. New York, NY: IFAC.
Janis, I. L., and L. Mann. 1977. Decision-Making: A Psychological Analysis of Conflict, Choice and Commitment. New York, NY: Free Press.
Jenkins, J. G., E. M. Negangard, and M. Oler. 2016. Usage of Forensic Professionals in the Audit Process: Evidence from the Field. Working paper, Virginia Polytechnic Institute and State University.
Kangarloo, H., J. Valdez, L. Yao, S. Chen, J. Curran, D. Goldman, U. Sinha, J. Dionisio, R. Taira, J. Sayre, L. Seeger, R. Johnson, Z. Barbaric, and R. Steckel. 2000. Improving the quality of care through routine teleradiology consultation. Academic Radiology 7 (March): 149–155.
Kellermanns, F., J. Walter, C. Lechner, and S. Floyd. 2005. The lack of consensus about strategic consensus: Advancing theory and research. Journal of Management 31 (5): 719–737.
Kenny, D. A., B. Kaniskan, and D. B. McCoach. 2015. The performance of RMSEA in models with small degrees of freedom. Sociological Methods & Research 44 (3): 486–507.
KPMG. 2009. KPMG Fraud Survey 2009. New York, NY: KPMG LLP.
Levesque, L., J. Wilson, and D. Wholey. 2001. Cognitive divergence and shared mental models in software development project teams. Journal of Organizational Behavior 22 (2): 135–144.
Lillis, A. 1999. A framework for the analysis of interview data from multiple field research sites. Accounting & Finance 39 (1): 79–105.
Lillis, A., and J. Mundy. 2005. Cross-sectional field studies in management accounting research—Closing the gaps between surveys and case studies. Journal of Management Accounting Research 17 (1): 119–141.
Lukka, K., and E. Kasanen. 1995. The problem of generalizability: Anecdotes and evidence in accounting research. Accounting, Auditing & Accountability Journal 8 (5): 71–90.
Mathieu, J., T. Maynard, T. Rapp, and L. Gilson. 2008. Team effectiveness 1997–2007: A review of recent advancements and a glimpse into the future. Journal of Management 34 (3): 410–476.
Mathieu, J. E., T. Heffner, G. Goodwin, J. Cannon-Bowers, and E. Salas. 2005. Scaling the quality of teammates' mental models: Equifinality and normative comparisons. Journal of Organizational Behavior 26 (1): 37–56.
McGrath, J. E. 1964. Social Psychology: A Brief Introduction. New York, NY: Holt, Rinehart & Winston.
Messier, W. F., Jr., V. Owhoso, and C. Rakovski. 2008. Can audit partners predict subordinates' ability to detect errors? Journal of Accounting Research 46 (5): 1241–1264.
Mock, T., and J. Turner. 2005. Auditor identification of fraud risk factors and their impact on audit programs. International Journal of Auditing 9 (1): 59–77.
Nelson, M. W., and H.-T. Tan. 2005. Judgment and decision making research in auditing: A task, person, and interpersonal interaction perspective. Auditing: A Journal of Practice & Theory 24: 41–71.
Nemeth, C., and J. Goncalo. 2004. Influence and Persuasion in Small Groups. Available at: http://irle.berkeley.edu/files/2004/Influence-and-Persuasion-in-Small-Groups.pdf
Nieschwietz, R., J. Schultz, and M. Zimbelman. 2000. Empirical research on external auditors' detection of financial statement fraud. Journal of Accounting Literature 19: 190–246.
Pentland, B. 1993. Getting comfortable with the numbers: Auditing and the micro-production of macro-order. Accounting, Organizations and Society 18 (7/8): 605–620.
Power, M. 1996. Making things auditable. Accounting, Organizations and Society 21 (2/3): 289–315.
Power, M. K., and Y. Gendron. 2015. Broadening horizons: Engaging with qualitative research. Auditing: A Journal of Practice & Theory 34 (2): 147–165.
Public Company Accounting Oversight Board (PCAOB). 2004. Standing Advisory Group Meeting: Financial Fraud. New York, NY: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2010a. Identifying and Assessing Risks of Material Misstatements. Auditing Standard No. 12 (Recodified as AS 2110). New York, NY: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2010b. Auditor Considerations Regarding Significant Unusual Transactions. Staff Audit Practice Alert No. 5. New York, NY: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2010c. The Auditor's Responses to the Risks of Material Misstatements. Auditing Standard No. 13 (Recodified as AS 2301). New York, NY: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2012. Standing Advisory Group Meeting: Consideration of Outreach and Research Regarding the Auditor's Approach to Detecting Fraud. Available at: https://pcaobus.org/News/Events/Pages/11152012_SAGMeeting.aspx
Public Company Accounting Oversight Board (PCAOB). 2015. Inspection Observations Related to PCAOB “Risk Assessment” Auditing Standards (No. 8 through No. 15). PCAOB Release No. 2015-007. Available at: https://pcaobus.org/Inspections/Documents/Risk-Assessment-Standards-Inspections.pdf
Public Company Accounting Oversight Board (PCAOB). 2016. Staff Inspection Brief: Preview of Observations from 2015 Inspections of Auditors of Issuers. Vol. 2016/1. Available at: https://pcaobus.org/Inspections/Documents/Inspection-Brief-2016-1-Auditors-Issuers.pdf
Public Oversight Board (POB). 2000. The Panel on Audit Effectiveness: Report and Recommendations.
Smith-Lacroix, J., S. Durocher, and Y. Gendron. 2012. The erosion of jurisdiction: Auditing in a market value accounting regime. Critical Perspectives on Accounting 23 (1): 36–53.
Tan, H., and K. Jamal. 2006. Managing perceptions of technical competence: How well do auditors know how others view them? Contemporary Accounting Research 23: 761–787.
Trompeter, G., T. Carpenter, N. Desai, K. Jones, and R. Riley, Jr. 2013. A synthesis of fraud-related research. Auditing: A Journal of Practice & Theory 32 (Supplement): 287–321.
Verwey, I. 2014. Differences between Public Auditors and Forensic Accountants in Their Ability to Identify Fraud Risks and Plan Effective Procedures to Mitigate Fraud Risks. Ph.D. dissertation, Nyenrode Business University.
Wells, J. 2003.
Yaniv, I. 2004. Receiving other people's advice: Influence and benefit. Organizational Behavior and Human Decision Processes 93 (1): 1–13.
Zimbelman, M. 1997. The effects of SAS No. 82 on auditors' attention to fraud risk factors and audit planning decisions. Journal of Accounting Research 35 (Supplement): 75–97.

Footnotes

1 Similarly, auditors may consult with their firms' forensic specialists if information or other conditions indicate that a material fraud might have occurred (PCAOB 2010a, ¶ 53; PCAOB 2010b; AICPA 2012a; IAASB 2009, ¶ 29(a)). Forensic specialists typically possess both accounting and investigative skills that allow them to systematically gather evidence to address specific issues related to possible wrongdoing, including fraud, in a level of depth suitable for use in various legal arenas, including court proceedings (Wells 2003). Our focus in this paper is consultation with fraud specialists, but we use the broader term forensic specialists to be consistent with the academic literature (e.g., Hammersley 2011; Trompeter, Carpenter, Desai, Jones, and Riley 2013).

2 Risk responsiveness refers to the design and implementation of appropriate responses to the risk of material fraud and includes making changes to the planned nature, timing, and extent of procedures, reassigning of engagement responsibilities, and incorporating elements of unpredictability in the selection of procedures (see PCAOB 2010c). Arguably, risk responsiveness could be characterized as an outcome since it represents how the audit process changed in response to identified fraud risks. However, in our model, we use outcome to refer to the auditor's assessment of the effectiveness of the whole consultation process rather than the end product of different tasks performed during the process.

3 The second field survey is conducted after analyzing the data from the first survey and is designed primarily to shed light on taskwork and cost-related findings from the first survey. However, the surveys were not designed to address the circumstances under which auditors may over-consume or under-consume consultation. Thus, the focus is on auditors' experiences subsequent to a decision being made to consult.
That is, we focus on only those audits where there was consultation with forensic specialists. In this vein, our study differs from others that focus on whether and how auditors involve or exclude forensic specialists (e.g., Asare and Wright 2004; Jenkins, Negangard, and Oler 2016). IRB approval was obtained for the use of human subjects for both field surveys.

4 Prior research has identified the absence of learning opportunities due to the rarity of fraud, evidential gathering and evaluation difficulties, insufficient skepticism, lack of appropriate fraud detection training, and misjudging managers' incentives and opportunities as some of the major underlying reasons for auditors' failure to detect fraud (Nieschwietz, Schultz, and Zimbelman 2000; Beasley et al. 2013; Asare, Wright, and Zimbelman 2015).

5 Prior research has considered other vehicles to enhance fraud detection, including the use of brainstorming (Hoffman and Zimbelman 2009; Hammersley, Bamber, and Carpenter 2010), the use of strategic reasoning (Hoffman and Zimbelman 2009), and the explicit consideration of fraud schemes (Hammersley et al. 2010; Hammersley et al. 2011).

6 Gold et al. (2012) do not obtain data regarding why auditors seek consultation when deadline pressures are tight. We explore the impact of consultation on delays in completing the audit in our surveys and, as reported in the next sections, find that consultation can save time because the forensic expert can focus on the work, which potentially explains why auditors may seek consultation when facing a deadline.

7 Field surveys have the advantages of providing externally valid data and have been used in a number of auditing studies that investigate, for instance, accountability (e.g., Gibbins and Newton 1994), auditor-client negotiations (e.g., Gibbins et al. 2001; Gibbins, McCracken, and Salterio 2005, 2007, 2010), and brainstorming (e.g., Brazel et al. 2010). A field survey is well suited for examining complex issues characterized by the interplay of several variables (Gibbins 2001; Gibbins and Qu 2005).

8 Participants included 39 senior managers, 11 managers, 3 partners, and 4 seniors. We requested that each engagement team select a senior member of the engagement team to complete the instrument. While we have no a priori theoretical expectation of how audit experience might affect responses, the results presented in the next section are qualitatively similar when the seniors are excluded from the analyses.

9 Forty-seven (82.5 percent) participants selected an engagement on which the consultation occurred less than a year earlier, and the remaining consultations occurred between one to three years (10.5 percent) and over three years (7 percent) earlier. Thus, participants predominantly recalled recent consultation experiences.

10 The purpose of gathering the client demographic data was to gain some insights on the type of client for which the consultation occurred. We provide descriptive statistics in the next section.

11 There were not enough private companies to do a meaningful comparison with public companies. Similarly, the distribution of firms by the eight industries led to sample sizes that were too small for a meaningful comparison.

12 We coded bringing in the consultant early as 0, else 1. Thus, if bringing in the consultant early is associated with effectiveness and satisfaction, then the point bi-serial correlation is expected to be negative.

13 For this analysis, explicitly required by firm policy is coded as 0, else 1.
Thus, we expect the point bi-serial correlation to be positive when the consultation is not mandated.

14 We also examined the associations between the task processes and outcomes. In this vein, we found that identification of additional fraud risks was positively associated with perceived effectiveness (0.40) and satisfaction (0.40). With respect to risk responsiveness, the identification of effective procedures is positively associated with perceived effectiveness (0.39) and satisfaction (0.33). Thus, auditors' affective reactions and perceived effectiveness are, in part, based on program effectiveness. Finally, the teamwork variables (expert's commitment to the team, level of trust, and communication effectiveness) are positively associated with perceived effectiveness and satisfaction.

15 Specifically, for risk assessment, we used the mean of the responses to the two questions under “Risk Assessment” in Table 1. For risk responsiveness, we used the mean of the responses to the first three questions under “Risk Responsiveness” in Table 1. For the forensic specialist antecedent, we used the mean of responses to the three questions under “Forensic Specialist Related” in Table 1. We label this mean as the shared mental model. For teamwork, we use the mean of the responses to the three questions under “Teamwork” in Table 1. For cost/delay, we used the mean of responses to the two questions under “Cost and Delays” in Table 1. Cronbach's alphas for the composite scores are 0.757, 0.805, 0.724, 0.789, and 0.721, respectively.

16 A nonsignificant χ2 is an indication of model fit. The traditional cutoff of goodness for the comparative fitness index (CFI) and the incremental fit index (IFI) is 0.90, while that of the RMSEA is 0.10. The RMSEA should be evaluated in light of the small sample size and small degrees of freedom (Kenny, Kaniskan, and McCoach 2015).

17 We obtain similar results if we run the path model using a composite of the six items under “Risk Responsiveness” in Table 1 to measure risk responsiveness. The only exceptions are that this model shows a positive association between the forensic specialist antecedent and risk response (β = 0.40; p = 0.007) and no association between cost and delay and overall effectiveness (β = 0.18; p = 0.099). The model fit metrics for this model are (χ2 = 6.315, df = 4; p = 0.177; RMSEA of 0.102, PCLOSE = 0.236; CFI = 0.977; IFI = 0.984).

18 The RMSEA is 0 and the CFI and IFI ≥ 1 when the degrees of freedom are larger than the χ2 (Chen, Curran, Bollen, Kirby, and Paxton 2008; Kenny et al. 2015).

19 Prior research suggests that asking such general questions is likely to lead auditors to engage in reconstructions based on normative beliefs about what auditors should have done rather than reporting of specific things they did or do in practice. While the questions that we asked are not of the genre where norms are likely to be critical, we nevertheless acknowledge that this manner of questions increases the likelihood of auditors engaging in sensemaking of their environment (see, e.g., Lukka and Kasanen 1995; Lillis 1999; Lillis and Mundy 2005).

20 This harm arises if the client perceives that the forensic specialist is not using the proper channels of communication or if the client is questioned about a potential fraud concern that is subsequently found to be unwarranted.
APPENDIX A
Qualitative Analysis and Excerpts from Field Survey 2

The experiences of four participants are illustrative of how forensic specialists' involvement can expand the scope of issues considered for fraud risk assessment:

The forensic specialists expanded the risk map by explaining that the fraud goes beyond the people and situation identified.

They [forensic specialists] clarified the scope of the perpetration and concealment of the fraud.

Their [forensic specialists] participation in the fraud discussion helped to challenge the audit team to look at their work differently. Viewing the issues from a different viewpoint helps ensure quality.

They sometimes can provide insight into various areas that are susceptible to fraud. They then suggest procedures to address the problems.

These quotes suggest that the forensic specialists enhance fraud detection effectiveness by expanding the fraud risk space to cover more areas and people and bring new insights on how the fraud can be perpetrated or concealed, thus potentially increasing the team's use of strategic reasoning (Hoffman and Zimbelman 2009).

Several participants recalled instances when the forensic specialists questioned the validity of clients' documents or entities. As one participant puts it:

The forensic specialist alerted the engagement team to the possibility of fictitious customers, vendors, and documents, which was used to heighten fraud risk.

Forensic specialists also play a role in matters that involve the intersection between law and auditing. The most common example mentioned was the identification of possible foreign corrupt practice (FCPA) issues, as indicated in this excerpt:

The forensic specialists help clarify how FCPA transactions occur and how to identify them.

Nevertheless, there is evidence that sometimes the forensic specialists may fail to clarify the nature of the fraud risk as noted by this participant:

The forensic auditors need to be able to explain/articulate how the fraud/regulatory/compliance risk translates to the financial statements and how to assess a compliance program—what does a good program look like? This is hard, especially in highly regulated entities and often is identified when there is a regulatory audit versus a financial audit.

In some cases, auditors are unaware of industry-wide fraud. This knowledge was captured in this comment:

They [forensic specialists] share industry-wide fraud matters that were not necessarily known by the auditors. They make the audit team aware of how fraud can be concealed or perpetrated in specialized industries. They bring a different perspective and better procedures to address identified risks.

In some cases, the risk identified related to embezzlement:

The forensic specialists drew attention to risks that led to discovery of a $2.0 million cash fraud over ten years by a controller—$200k per year.
Illustrative comments of how consultation that focuses on a specific issue can enhance risk assessment are:
The forensic auditor is brought in based on an identified issue related to revenue recognition or FCPA allegations.
Another common theme on how forensic specialists can enhance risk assessment is through exposing auditors to various fraud schemes. One participant highlighted this point as follows:
They [forensic specialists] communicate fraud schemes based on actual instances by industry to the audit teams so that the team could hone their brainstorming sessions.
The following excerpts by participants reflect the scope creep:
Sometimes there is not much more to do than the risks that have been identified or the procedures that have been designed. A needless scope creep agenda from the forensic team is not welcome.
When the consultation is not targeted to specific circumstances (i.e., if there are no specific risks that need to be addressed), it can lead to a wild goose chase.
However, not all auditors share in the fear of “scope creep.” Some auditors considered the involvement of forensic specialists as a part of the process to construct comfort (Power 1996), as reflected in this comment by this participant:
Involvement of the forensic specialists is always valuable since, at a minimum, it validates that the engagement team has considered the relevant factors.
Forensic Specialists' Role in Risk Responsiveness
One participant suggested that customization and comprehensiveness of procedures are key to the forensic specialists' role in risk responsiveness:
Forensic specialists help audit teams design custom audit procedures to address the risk or situation identified. The procedures tend to be more comprehensive than a traditional audit team will perform.
However, forensic specialists can also enhance traditional procedures, either by taking it to “a higher level,” “bringing in a new perspective,” or ensuring that the “audit program is comprehensive,” as noted in the excerpts below from three participants:
Although there can never be 100 percent certainty that an issue is fully vetted, the inclusion of procedures performed by forensic specialists take the traditional approach to a higher level.
They [forensic specialists] introduced different perspectives and recommended better procedures to address identified risks.
It is a baked in quality step to ensure the program includes the right elements.
Nevertheless, as with risk assessment, sometimes the forensic specialists help the engagement team to construct comfort (Power 1996):
He did not suggest any additional fraud risk or procedures but it was nice to have a forensic specialist verify that we had correctly identified the appropriate fraud risk and audit responses.
One participant's response confirmed the importance of the forensic specialists' fraud experience in responding to risks:
They [forensic specialists] suggest based on their experience risks needing to be addressed and previously successful tactics for responding to those risks.
Thus, forensic specialists play critical risk responsiveness roles, including scope determination (reflected in comments such as “helps determine level of any incremental fraud work”), brainstorming (e.g., “they discuss previous tactics to address looming risks”), and comprehensiveness testing (“ensure procedures are more comprehensive than a traditional audit team will perform”). Nevertheless, the “wild goose chase” issue was also identified as a potential problem in the risk responsiveness stage, along with the “poor boundary” problem, when there is unintended communication between the forensic specialists and the client (“purveyors”), which may harm auditor-auditee relations or even tip off perpetrators.20
Cost Considerations and Timeliness of Engagement Completion
The following excerpted responses shed additional light on the cost and timeliness effects of involving a forensic expert. It appears the effect of consultation on cost depends on the extent to which the consultation is targeted:
If there are AU 316 matters [suspected fraud], then costs are already up and the forensic specialists can help target what we do. If it is not an AU 316 situation, then it is likely just going to add costs because the involvement will not be focused.
In effect, participants are suggesting that if the forensic expertise is matched to the task, then it can result in labor efficiencies and cost savings. Otherwise, the involvement may increase cost without a corresponding increase in benefit. These experiences suggest that high fraud risk engagements may be less costly when forensic specialists are involved. Another participant is more emphatic in stating that involving forensic specialists does not normally significantly increase cost:
It does not normally significantly increase cost except when there is fraud. And then it becomes a question of what is found and who bears the cost.
This quote highlights that it is necessarily costlier to audit a fraud scenario, but that the cost of consultation can then be shifted to others depending on what the audit reveals. A participant makes this point more crisply:
When it is not billed to the audit client then it gets expensive.
Another participant takes the view that involving the forensic specialist is costly, but it is money well spent:
It is high-rate work if it is necessary, but obviously critical, and I have high regard for the forensic team responsible for such work.
Other comments were more unequivocal in asserting that consultation significantly affects the cost of the audit engagement, as evidenced by this excerpt:
Forensic specialists are typically more seasoned and are used to charging at a higher realization rate thereby increasing fees substantially relative to hours.
Finally, a participant shed some light on conditions under which the client is likely to pay:
Unless it is to uncover a known malfeasance it is a duplicative process, which clients would be unwilling to pay for.
This excerpt extends the point made about the wild goose chase and the comments made by experts at the PCAOB standing advisory meeting (Garver 2007). That is, clients are unlikely to pay for services that they do not need and have not asked for. In turn, this highlights the importance of targeting and tailoring the inclusion of forensic specialists on audit engagements.
On the question of whether consulting with the forensic specialists affects the timeliness of the completion of an audit engagement, a participant noted:
If there is a potential or already identified fraud by management, we need to get comfortable that the company has identified the full exposure and properly accounted for any related accounting and disclosure implications before we can issue our report.
The subtle point by this participant is that the engagement is delayed but not because of involving the forensic specialist. Rather, it is the necessity of resolving the potential or actual management fraud that is causing the delay. The point is made more directly by a participant as follows:
If it is an AU 316 [suspected fraud] issue then the engagement is already delayed but the forensic specialists can help get through faster. If it is not an AU 316 issue, then the time is not well spent.
It also came to light that involvement of the forensic specialist may put the engagement on hold, and has preplanning implications:
Possibly, it can delay the engagement, depending on the issue and when it was identified. Traditionally, an audit engagement is put on hold until the forensic specialists have completed their work. Unfortunately, a number of the issues that forensic specialists are involved with lead the audit engagement teams to reconsider client acceptance.
This comment suggests that forensic specialists' activities can inform the client acceptance and continuance decisions. In turn, this raises the possibility of involving the forensic specialists in such decisions when some conditions are met. This approach is proactive and preventive and can reduce engagement risk.
Participants also identified suspected fraud by management, audit committee request, or periodically scheduled consultation (at least once every three years if high risk or publicly traded) as factors triggering consultation. We also sought evidence on whether the forensic specialist is retained in the year following the initial consultation. In general, participants indicated that there is no requirement and they do not reengage the forensic auditor in the following year. This is because “many of these issues tend to be one-year situations.” The exception to this general rule appears to be “when the prior issues persisted or new fraud issues crop up.”
Regarding when the engagement team will not follow the advice of the forensic specialists, the typical response is captured in this participant's comments:
The audit team would not follow the forensic team's suggestion in only very rare circumstances. It should not happen. The only instance I could see is if the forensic team did not fully understand the business or if the recommendations are for a low risk/materiality area or do not reduce fraud risk.
This is an intriguing response because it appropriately reflects that the auditor retains the ultimate decision rights for the engagement, reflecting the concept of hierarchical sensitivity (Hollenbeck et al. 1995). Nevertheless, it also demonstrates that the auditor can invoke notions of risk, materiality, or the forensic specialist's lack of understanding of the business to discount or ignore advice.
Author notes
We appreciate the helpful comments of Jacqueline S. Hammersley (associate editor), the anonymous referees, Vicki Arnold, Jeffrey Cohen, Ganesh Krishnamoorthy, Jason Smith, Gregory Trompeter, and the workshop participants at Brigham Young University, Rutgers, The State University of New Jersey, University of Central Florida, and University of Nevada, Las Vegas. Funding for the research project described in this article was provided by the Center for Audit Quality.
Competing Interests
The views expressed in this paper are those of the authors alone and not those of the Center for Audit Quality.
Editor's note: Accepted by Jacqueline S. Hammersley.
https://dsp.stackexchange.com/questions/49322/instantaneous-frequency-estimation-by-hilbert-transform-theoretical-justificat?noredirect=1

# Instantaneous Frequency Estimation by Hilbert Transform - Theoretical Justification and Proof
I would like to better understand why the instantaneous frequency estimation by Hilbert transformation works (and especially why it doesn't work / lead to precise results in many cases).
The motivation is to estimate signal $x(t)$ by decomposing it into an amplitude envelope $m(t)$ and phase of cosine $\omega_c (t)$ (or carrier waveform):
$$x(t) = m(t) \cos\left(\omega_c (t)\right)$$
Now, assume that $x(t)$ indeed is a result from such a process.
Questions:
1) There are two "parameters" to be estimated for any $t$, as such some constraints are needed. What are the constraints regarding $m(t)$ and $\omega_c (t)$ that are selected when applying the Hilbert transform decomposition?
2) Is there a proof available somewhere that given the constraints, the estimation indeed finds the correct amplitude envelope and carrier (for continuous and also discrete case)?
• I believe you’d have to assume that $m(t)$ is very slowly varying with respect to the “center” frequency of $\omega_c(t)$ — which is a little mis-named. As you’ve written it, it’s a phase, not a frequency (which is usually what $\omega$ is used for).
– Peter K.
May 19 '18 at 18:50
See my earlier comments here: Meaning of Hilbert transform
Common fractal noise isn't an analytic signal (infinitely differentiable). And a Hilbert transform re-creates the imaginary component of an analytic signal if you have the real component of the complex analytic signal (which one rarely has from real-world data).
However, sufficiently band-pass filtered data might be similar to a finite length segment of something that looks like an infinite length analytic signal (e.g. is from a source whose behavior can be modeled or estimated by a 2nd order linear differential equation).
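To make the estimation procedure itself explicit (this is the standard textbook recipe, stated here for reference): form the analytic signal from the Hilbert transform $\mathcal{H}$, then read the envelope and phase off it:

$$z(t) = x(t) + j\,\mathcal{H}\{x\}(t), \qquad \hat{m}(t) = |z(t)|, \qquad \hat{\phi}(t) = \arg z(t), \qquad \hat{f}(t) = \frac{1}{2\pi}\frac{d\hat{\phi}(t)}{dt}$$

These estimates coincide with the $m(t)$ and $\omega_c(t)$ of the question's model only under additional constraints; loosely, Bedrosian's condition that the spectrum of $m(t)$ lie entirely below the frequencies occupied by the carrier, so that $\mathcal{H}\{m(t)\cos(\omega_c(t))\} = m(t)\sin(\omega_c(t))$. Without such a constraint the decomposition is not unique, and the Hilbert pair is just one of infinitely many $(m, \omega_c)$ pairs consistent with the same $x(t)$.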
• You are giving the definition for an analytic function, which is not identical to that of analytic signal. There is a non-obvious relation between the two, but they are not identical. For example $t\mapsto\exp(-t^2) \sin(t)$ is an analytic function, but it's not an analytic signal. May 19 '18 at 20:01
• I think the OP asked about the actual process of estimating the instantaneous frequency using Hilbert Transform. I think you should point into that in your answer.
– Royi
Jun 18 '18 at 20:40
https://ora-00001.blogspot.com/2010/

## Tuesday, September 28, 2010
### Good news about Oracle XE 11g
Oracle OpenWorld 2010 came and went without any announcement of Oracle 11g Express Edition (XE).
However, according to Bradley D. Brown at TUSC, Oracle has now (re-) confirmed that they are still working on it.
> For some time now the ACE Director community has been asking Mark Townsend (product manager for the Oracle DB) when the 11g version of Express was going to be available. He was consistent in saying that they wouldn't even consider releasing it until 11gR2 was available. Well...it's available now! So the ACED community said "OK, Mark..." and he responded with - it's already in the works and should be out...when asked "when..." we got the standard answer. The good news is that it's coming! Another piece of good news is that they raised the DB size limit from 4GB to 10GB. That makes for a nice free DB engine.
This is good news, since it has been a very long time since XE 10g was released. With Microsoft releasing updates to its free version of SQL Server at the same time as the full version, I think Oracle should do the same.
But hey, it's good, it's free, and it's coming eventually, so I'm not complaining! :-)
## Sunday, August 8, 2010
### Proxy Authentication with Thoth Gateway
Christian Vind submitted an enhancement request for the Thoth Gateway to support Oracle proxy authentication by passing on the current Windows username to the database connection string.
The point of proxy authentication is that
• The proxy user only has "create session" privileges but can't do much else.
• The real user does not have "create session" privileges and cannot log on to the database without knowing the proxy user name and password (and that is only set on the web/application server).
• The USER function returns the real user name, and all standard database auditing, roles, etc. work as usual.
As of version 1.3 of the Thoth Gateway, proxy authentication is now supported. Here is how it works:
### IIS Setup
Set up the application (virtual directory) in IIS where the gateway runs with Integrated Windows Authentication, so that the CGI environment variable LOGON_USER will be populated with the client's Windows username. (If the user is using Internet Explorer to browse the site, his identity will be passed on to the web server/gateway automatically; if using another browser, then an explicit logon is required.)
### Oracle Setup
Define an "application server user", ie the common user that connections will be established through:
-- Log on as DBA (SYS or SYSTEM) that has CREATE USER privilege.
create user appserver identified by eagle;
create user end_user identified by secret;
grant create session to end_user;
alter user end_user grant connect through appserver;
Now test the setup with SQL*Plus, by connecting with the "application server user", and then "becoming" the end user:
-- note we don't specify the end_user password, but still become that user
SQL> connect appserver[end_user]/eagle
Connected.
SQL> select user from dual;
USER
------------------------------
END_USER
SQL>
Note that since the point of this is to take advantage of existing Active Directory accounts, you probably want to create your users like this:
create user "your_domain\end_user" identified externally;
grant create session to "your_domain\end_user";
alter user "your_domain\end_user" grant connect through appserver;
### Thoth Gateway Setup
In web.config, modify the DAD settings (the following example assumes a local Oracle XE installation):
<param name="DatabaseConnectString" value="//127.0.0.1:1521/xe" />
<param name="DatabaseUserName" value="LOGON_USER" />
<param name="DatabaseConnectStringAttributes" value="Enlist=false;Proxy User Id=appserver;Proxy Password=eagle;" />
Notice the value "LOGON_USER" specified for the DatabaseUserName parameter. This is a reserved string that will be replaced with the actual value of the LOGON_USER value from the web request (ie. the user's Windows username, typically "domain\username"). You can also specify "LOGON_USER_NO_DOMAIN" to strip away the domain part of the user name -- what you use will depend on how you have set up your user accounts in Oracle.
### Testing It
To test that everything works at this point, create a procedure similar to the following, and execute it via the gateway (don't forget to grant execute privileges on it to the end-user's account, and create a public synonym for it unless you prefix with the procedure owner's name in the URL).
create or replace procedure test_proxy_auth
as
begin
htp.ulistopen;
htp.listitem ('USER = ' || user);
htp.listitem ('Proxy user = ' || sys_context('userenv', 'proxy_user'));
htp.listitem ('CGI LOGON_USER = ' || owa_util.get_cgi_env('LOGON_USER'));
htp.ulistclose;
end test_proxy_auth;
If successful, the USER function should return the end-user's Windows username, and the Proxy User should display as "appserver".
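If everything is wired up correctly, the output should look something like this (the domain and user names are placeholders for whatever your environment uses):

USER = YOUR_DOMAIN\END_USER
Proxy user = APPSERVER
CGI LOGON_USER = your_domain\end_user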
### Postscript: A little enigma
Actually, if you do as described above, you could possibly get this error when you try to run the procedure via the gateway:
ORA-1045: user %s lacks CREATE SESSION privilege; logon denied
At least, that's what I got. To get around it, I had to explicitly grant this to the "appserver" user:
grant create session to appserver;
The funny thing is that my example above, tested via SQL*Plus, shows that this works without the grant! But when attempting the same connection via ODP.NET, it gives the above error unless the grant is made.
And if I revoke the "create session" from the end_user, the above example doesn't work in SQL*Plus, because of the missing privilege. Which seems to contradict the purpose of proxying, as defined at the top of this blog post.
If anybody knows why SQL*Plus and ODP.NET show different behaviour here, please let me know.
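By the way, a useful sanity check when troubleshooting this kind of setup is to list the proxy/client pairs that the database actually knows about. As far as I know, this is exposed through the PROXY_USERS data dictionary view (run it as a DBA):

-- each row is one "connect through" grant
select proxy, client, authentication
from proxy_users;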
### Thoth Gateway version 1.3 available
Version 1.3 of the Thoth Gateway is now available for download. It contains the following bug fixes and enhancements:
• Bug Fix: Issue with parsing client IP address: Added exception handling to prevent error when parsing client IP address with invalid format.
• Ignore additional request parameters: Certain tools and frameworks may dynamically add additional parameters to a request, which causes the corresponding PL/SQL call to fail, since these parameters are not defined in the procedure signature. As of this version, the gateway will now retry the call after dropping (ignoring) any parameters that cannot be found in the Oracle data dictionary for the procedure being called (see the sketch after this list).
• Support for Oracle proxy authentication (and Single Sign On) via dynamic username substitution: Oracle proxy authentication, combined with Integrated Windows Authentication in IIS, allows you to pass the end-user's identity from the client to the database session (so the function USER will return the end-user's Windows username, with no login required). This is useful in an intranet scenario where users are defined in an Active Directory domain and use Internet Explorer to access the PL/SQL web application.
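As a side note on the parameter-dropping feature: the Oracle data dictionary lookup involved is straightforward. The query below is only an illustrative sketch (the gateway itself is a .NET application, and MY_SCHEMA/MY_PROC are placeholder names); it returns the declared parameter names for a procedure, so any request parameter not in the result can safely be ignored:

-- for procedures inside packages, also filter on package_name
select argument_name
from all_arguments
where owner = 'MY_SCHEMA'
and object_name = 'MY_PROC'
and argument_name is not null;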
## Thursday, July 15, 2010
### Apex on Thoth Gateway and IIS 7
Several people have asked for instructions on how to run the Thoth Gateway (a mod_plsql replacement) on Microsoft Internet Information Server (IIS) 7.
I finally had the chance (and the time!) to test the configuration on a server running IIS 7. The biggest challenge was understanding the new administration console user interface for IIS 7; it was slightly confusing for someone who is used to IIS 6.
After a bit of fiddling, I got the gateway up and running:
I have updated the installation instructions in the latest download package (version 1.2.1) with separate sections for IIS 6 and IIS 7. So download, unzip and read the instructions in the "doc" folder.
## Saturday, June 26, 2010
### Replacing Apex? More like Find and Replace...
I was getting my daily dose of Apex blog posts when I noticed this advertisement:
I was curious, so I clicked the link and got the following page (http://www.wavemaker.com/solutions/oracleforms.html):
"Well, this is strange", I thought... it says "Oracle Forms" there in the URL, and in the illustration in the middle of the page, yet the advertisement and the page heading talks about replacing "APEX". There is also a claim that the latter "costs a fortune", but as we all know Application Express is a no-cost option in the Oracle database. WTF?
There is also a link to an 8-page whitepaper on "Migrating Oracle Apex Applications to Java" (http://www.wavemaker.com/pdf/Migrating-Oracle-Apex-Apps-To-Java-With-WaveMaker.pdf). This whitepaper includes the following screenshot:
As well as this table:
OK, so it is clearly Oracle Forms that is depicted and described here, but labeled as if it was Oracle Apex.
At best, this is a clueless mistake made by some marketing sod who did a global Find & Replace from FORMS to APEX. At worst, this is deliberately misleading.
In any case, it's just wrong, wrong, wrong. I don't know about you, but I would not trust a company that is either incompetent, dishonest, or both. I'll stick with Apex, thank you.
## Tuesday, June 15, 2010
### Small patch for the Thoth Gateway (Apex on IIS)
Just a quick note to announce that a minor patch release (version 1.2.1) of the Thoth Gateway is available for download. The Thoth Gateway allows you to run Oracle Apex applications on the Microsoft IIS web server (a replacement for Apache and mod_plsql).
The following has changed:
• Bug Fix for Content-Length: Fixed an issue where the content-length header would be incorrectly set for non-AL32UTF8 databases if the page contained multibyte characters.
• Set ODP.NET connection string attributes: Added option to specify additional connection string attributes in the DAD configuration. This allows you to fine-tune the connection properties (a small example follows below). See the ODP.NET documentation for more details.
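For example, to tune connection pooling you could set something like the following in web.config. The attribute names here are standard ODP.NET connection string attributes, and the values are just made-up numbers that you would adjust for your own environment:

<param name="DatabaseConnectStringAttributes" value="Min Pool Size=5;Max Pool Size=50;Connection Timeout=30;" />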
## Friday, May 7, 2010
### ApexGen has a new home
My first open-source project was ApexGen, a utility to generate Oracle Application Express (Apex) pages from PL/SQL, in a fraction of the time it takes to create Apex pages manually (using point-and-click). I originally hosted it on SourceForge. However, since then I have started a few other projects hosted on Google Code, so I decided to move ApexGen to Google Code as well.
To summarize, here are my current Oracle, Apex and PL/SQL projects, all on Google Code:
• Thoth Gateway, a mod_plsql replacement that runs on Microsoft Internet Information Server (IIS). It allows you to use IIS as the web server for Apex applications (instead of Apache or the Embedded PL/SQL Gateway), and it has a few extra features as well, such as CLOB support, automatic Web Services published from PL/SQL, XDB integration, and integrated Windows authentication out-of-the-box.
• JQGrid Integration Kit for PL/SQL, a set of PL/SQL packages that allows you to use the JQGrid component to display and edit tabular data in your Apex applications. It is faster, better-looking and more flexible than the built-in tabular forms in Apex.
• ApexGen, a utility to generate Oracle Application Express (Apex) pages from PL/SQL, in a fraction of the time it takes to create Apex pages manually. With Apex 4.0 just around the corner, I believe ApexGen will be due for an overhaul soon, as the export files are likely to have changed quite a bit.
## Saturday, April 10, 2010
### SELECT * FROM spreadsheet (or How to parse a CSV file using PL/SQL)
I recently needed to retrieve/download a comma-separated values (CSV) file from a website, and insert the data in an Oracle database table.
After googling around a bit, I found various pieces of the solution on AskTom, ExpertsExchange and other sites, which I put together in the following generic utility package for CSV files.
### Usage
Because I have implemented the main parsing routine as a pipelined function, you can process the data either using straight SQL, or in a PL/SQL program.
For example, you can download a CSV file as a CLOB directly from the web and return it as a table with a single statement:
select *
from table(csv_util_pkg.clob_to_csv(httpuritype('http://www.foo.example/bar.csv').getclob()))
And maybe do a direct insert via INSERT .. SELECT :
insert into my_table (first_column, second_column)
select c001, c002
from table(csv_util_pkg.clob_to_csv(httpuritype('http://www.foo.example/bar.csv').getclob()))
You can of course also use SQL to filter the results (although this may affect performance):
select *
from table(csv_util_pkg.clob_to_csv(httpuritype('http://www.foo.example/bar.csv').getclob()))
where c002 = 'Chevy'
Or you can do it in a more procedural fashion, like this:
create table x_dump
(clob_value clob,
dump_date date default sysdate,
dump_id number);
declare
l_clob clob;
cursor l_cursor
is
select csv.*
from x_dump d, table(csv_util_pkg.clob_to_csv(d.clob_value)) csv
where d.dump_id = 1;
begin
l_clob := httpuritype('http://www.foo.example/bar.csv').getclob();
insert into x_dump (clob_value, dump_id) values (l_clob, 1);
commit;
dbms_lob.freetemporary (l_clob);
for l_rec in l_cursor loop
dbms_output.put_line ('row ' || l_rec.line_number || ', col 1 = ' || l_rec.c001);
end loop;
end;
### Auxiliary functions
There are a few additional functions in the package that are not necessary for normal usage, but may be useful if you are doing any sort of lower-level CSV parsing. The csv_to_array function operates on a single CSV-encoded line (so to use this you would have to split the CSV lines yourself first, and feed them one by one to this function):
declare
l_array t_str_array;
l_val varchar2(4000);
begin
l_array := csv_util_pkg.csv_to_array ('10,SMITH,CLERK,"1200,50"');
for i in l_array.first .. l_array.last loop
dbms_output.put_line('value ' || i || ' = ' || l_array(i));
end loop;
-- should output SMITH
l_val := csv_util_pkg.get_array_value(l_array, 2);
dbms_output.put_line('value = ' || l_val);
-- should give an error message stating that there is no column called DEPTNO because the array does not contain seven elements
-- leave the column name out to fail silently and return NULL instead of raising exception
l_val := csv_util_pkg.get_array_value(l_array, 7, 'DEPTNO');
dbms_output.put_line('value = ' || l_val);
end;
### Installation
In order to compile the package, you will need these SQL types in your schema:
create type t_str_array as table of varchar2(4000);
/
create type t_csv_line as object (
line_number number,
line_raw varchar2(4000),
c001 varchar2(4000),
c002 varchar2(4000),
c003 varchar2(4000),
c004 varchar2(4000),
c005 varchar2(4000),
c006 varchar2(4000),
c007 varchar2(4000),
c008 varchar2(4000),
c009 varchar2(4000),
c010 varchar2(4000),
c011 varchar2(4000),
c012 varchar2(4000),
c013 varchar2(4000),
c014 varchar2(4000),
c015 varchar2(4000),
c016 varchar2(4000),
c017 varchar2(4000),
c018 varchar2(4000),
c019 varchar2(4000),
c020 varchar2(4000)
);
/
create type t_csv_tab as table of t_csv_line;
/
UPDATE 04.04.2012: The latest version of the package itself (CSV_UTIL_PKG) can be found as part of the Alexandria Utility Library for PL/SQL.
### Performance
On my test server (not my laptop), it takes about 35 seconds to process 12,000 rows in CSV format. I don't consider this super-fast, but probably fast enough for many CSV processing scenarios.
If you have any performance-enhancing tips, do let me know!
### Bonus: Exporting CSV data
You can also use this package to export CSV data, for example by using a query like this:
select csv_util_pkg.array_to_csv (t_str_array(company_id, company_name, company_type)) as the_csv_data
from company
order by company_name
THE_CSV_DATA
--------------------------------
260,Acorn Oil & Gas,EXT
261,Altinex,EXT
263,Atlantic Petroleum,EXT
264,Beryl,EXT
265,BG,EXT
266,Bow Valley Energy,EXT
267,BP,EXT
This might come in handy, even in these days of XML and JSON ... :-)
## Tuesday, April 6, 2010
### Using TRUNC and ROUND on dates
Maybe this is old news to some, but I recently became aware that it is possible to use TRUNC and ROUND not just on a NUMBER, but also on a DATE value.
For example, you can get the start of the month for a given date (using TRUNC), or the "closest" start of the month, rounded forward or backward in time as appropriate (using ROUND):
select sysdate,
trunc(sysdate, 'YYYY') as trunc_year,
trunc(sysdate, 'MM') as trunc_month,
round(sysdate, 'MM') as round_month,
round(sysdate + 15, 'MM') as round_month2
from dual
The above gives the following results:
SYSDATE TRUNC_YEAR TRUNC_MONTH ROUND_MONTH ROUND_MONTH2
------------------------- ------------------------- ------------------------- ------------------------- -------------------------
06.04.2010 20:10:56 01.01.2010 00:00:00 01.04.2010 00:00:00 01.04.2010 00:00:00 01.05.2010 00:00:00
Somewhat related to this topic is the relatively obscure (?) EXTRACT function, which allows you to extract a part of a DATE:
select sysdate,
extract(day from sysdate) as extract_day,
extract(month from sysdate) as extract_month,
extract(year from sysdate) as extract_year
from dual
Which gives the following results:
SYSDATE EXTRACT_DAY EXTRACT_MONTH EXTRACT_YEAR
------------------------- ---------------------- ---------------------- ----------------------
06.04.2010 20:13:01 6 4 2010
If you try to extract the "hour", "minute" or "second" from a DATE, however, you get an ORA-30076: invalid extract field for extract source.
For some reason, these only work on TIMESTAMP values, not on the DATE datatype (which seems like an arbitrary limitation to me). Nevertheless:
select systimestamp,
extract(hour from systimestamp) as extract_hour,
extract(minute from systimestamp) as extract_minute,
extract(second from systimestamp) as extract_second
from dual
The above gives the following results:
SYSTIMESTAMP EXTRACT_HOUR EXTRACT_MINUTE EXTRACT_SECOND
------------- ---------------------- ---------------------- ----------------------
06.04.2010 20.17.12,047000000 +02:00 18 17 12,047
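Note that the extracted hour (18) differs from the local hour shown in SYSTIMESTAMP (20): for values with a time zone, EXTRACT(HOUR ...) returns the hour in UTC. And as a workaround for the DATE limitation above (a small sketch, standard SQL), you can CAST the DATE to a TIMESTAMP first:
select sysdate,
extract(hour from cast(sysdate as timestamp)) as extract_hour,
extract(minute from cast(sysdate as timestamp)) as extract_minute,
extract(second from cast(sysdate as timestamp)) as extract_second
from dual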
## Sunday, March 7, 2010
### jQGrid Integration Kit for PL/SQL and Apex
I started developing applications back in the good (?) old client/server days. I was fortunate enough to discover Delphi quite early. Even from the start, the lowly 16-bit Delphi version 1 had a kick-ass DBGrid control which allowed you to quickly and easily build data-centric applications. Just write a SQL statement in a TDataSet component, connect it to the grid, and voila! Instant multi-row display and editing out of the box, without any coding.
Fast forward a decade. While I do enjoy building web applications (with PL/SQL and Apex) these days, I've always missed the simplicity of that DBGrid in Delphi. Creating updateable grids with Apex is pretty tedious work (not being entirely satisfied with the built-in updateable tabular forms, I've employed a combination of the apex_item API, page processes for updates and deletes, and custom-made Javascript helpers). It doesn't help that you have to refer to the tabular form arrays by number, rather than by name (g_f01, g_f02, etc.), and that you are restricted to a total of 50 columns per page.
Enter jQGrid, "an Ajax-enabled JavaScript control that provides solutions for representing and manipulating tabular data on the web".
jQGrid can be integrated with any server-side technology, so I decided to integrate it with PL/SQL and Apex.
## Features
As of version 1.0, the jQGrid for PL/SQL and Apex has the following features:
• Single line of PL/SQL code to render grid (see the sketch after this list)
• Populate data based on REF CURSOR or SQL text (with or without bind variables). The REF CURSOR support is based on my REF Cursor to JSON utility package.
• Define display modes (read only, sortable, editable) and edit types (checkbox, textarea, select list) per column
• Store grid configuration in database, or specify settings via code (for read-only grids)
• Ajax updates (insert, update, delete) based on either automatic row processing (dynamic SQL) or against your own package API
• Multiple grids per page
• Integrated logging and instrumentation
• Usable without Apex (for stand-alone PL/SQL Web Toolkit applications) or with Apex, optionally integrated with Apex session security
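To give a flavor of the first feature, here is a purely hypothetical call (the procedure name and parameters below are invented for illustration; the kit's actual API may differ), as it might appear inside an Apex PL/SQL region:
-- hypothetical API, shown for flavor only
jqgrid_pkg.render_grid (
  p_grid_name => 'EMP_GRID',
  p_sql       => 'select empno, ename, job, sal from emp'
);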
The jQGrid Integration Kit for PL/SQL is free and open source. Download and try it now!
## Thursday, February 11, 2010
### REF Cursor to JSON
REF Cursors are cool. They allow you to encapsulate SQL queries behind a PL/SQL package API. For example, you can create a function called GET_EMPLOYEES that returns a SYS_REFCURSOR containing the employees in a specific department:
function get_employees (p_deptno in number) return sys_refcursor
as
l_returnvalue sys_refcursor;
begin
open l_returnvalue
for
select empno, ename, job, sal
from emp
where deptno = p_deptno;
return l_returnvalue;
end get_employees;
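As a minimal sketch (assuming the standard EMP table used above), a PL/SQL caller consumes the cursor like any other result set:
declare
  l_rc    sys_refcursor;
  l_empno emp.empno%type;
  l_ename emp.ename%type;
  l_job   emp.job%type;
  l_sal   emp.sal%type;
begin
  l_rc := get_employees (p_deptno => 10);
  loop
    fetch l_rc into l_empno, l_ename, l_job, l_sal;
    exit when l_rc%notfound;
    dbms_output.put_line (l_ename || ' (' || l_job || '): ' || l_sal);
  end loop;
  close l_rc;
end;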
The client (an application written in Java, .NET, PHP, etc.) can call your API and process the returned REF Cursor just as if it was a normal result set from a SQL query. The benefits are legion. The client no longer needs to contain embedded SQL statements, or indeed know anything about the actual database structure and query text. Privileges on the underlying tables can be revoked. The API can be shared and reused among different clients, whether they are written in Java, .NET, or any number of other languages.
That is, unless your client is Oracle Application Express (Apex). Apex unfortunately lacks the ability to process REF Cursors, or, more accurately, you cannot create report regions in Apex based on REF Cursors. For standard reports, you have to either embed the SQL statement in the region definition, or return the SQL text string from a function (and hope that the string you built is valid SQL when it gets executed). For interactive reports, only embedded SQL statements are supported.
I dislike having to scatter literal SQL statements all around my Apex applications, and not be able to take advantage of a package-based, shared and reusable PL/SQL API to encapsulate queries. I submitted a feature request to the Apex team back in 2007, asking for the ability to base report regions on REF Cursors, but so far this has not been implemented.
The problem, as far as I know, is that Apex uses (and must use) DBMS_SQL to "describe" a SQL statement in order to get the metadata (column names, data types, etc.) for a report region. But not until Oracle 11g did DBMS_SQL include a function (TO_CURSOR_NUMBER) that allows you to convert a REF Cursor into a DBMS_SQL cursor handle. So, as long as the minimum supported database version for Apex is Oracle 10g, support for REF Cursors is unlikely to be implemented.
In the meantime, there are a couple of alternatives:
## Option 1: Pipelined functions
It's possible to encapsulate your queries behind a PL/SQL API by using pipelined functions. For example, the above example could be rewritten as...
create type t_employee as object (
empno number(4),
ename varchar2(10),
job varchar2(9),
sal number
);
create type t_employee_tab as table of t_employee;
function get_employees (p_deptno in number) return t_employee_tab pipelined
as
begin
for l_rec in (select empno, ename, job, sal from emp where deptno = p_deptno) loop
pipe row (t_employee (l_rec.empno, l_rec.ename, l_rec.job, l_rec.sal));
end loop;
return;
end get_employees;
And used from Apex (in a report region) via the TABLE statement:
select *
from table(employee_pkg.get_employees (:p1_deptno))
## Option 2: XML from REF Cursor
The DBMS_XMLGEN package can generate XML based on a REF Cursor. While this does not "describe" the REF Cursor per se, it does give us a way (from PL/SQL) to find the column names of an arbitrary REF Cursor query, and perhaps infer the data types from the data itself. A couple of blog posts from Tom Kyte explain how this can be used to generate HTML based on a REF Cursor.
So back to Apex, you could generate a "report" based on a PL/SQL region with code similar to this:
declare
l_clob clob;
l_rc sys_refcursor;
begin
l_rc := get_employees (:p1_deptno);
l_clob := fncRefCursor2HTML (l_rc);
htp_print_clob (l_clob);
end;
It would also be possible to pass your own XSLT stylesheet into the conversion function (perhaps an Apex report region template fetched from the Apex data dictionary?) to control the appearance of the report.
I put "report" in quotes above, because until the Apex team implements report regions based on REF Cursors, you will miss all the nice built-in features of standard (and interactive) reports, such as sorting, paging, column formatting, linking, etc.
## Option 3: JSON from REF Cursor
Bear with me, I am finally getting to the point of this blog post.
JSON is cool, too, just like REF Cursors. It's the fat-free alternative to XML, and JSON data is really easy to work with in Javascript.
For triple coolness, I want to use an API based on REF Cursors in PL/SQL, client-side data manipulation based on JSON, and Apex to glue the two together.
What I need is the ability to generate JSON based on a REF Cursor.
Apex does include a few JSON-related procedures in the APEX_UTIL package, including JSON_FROM_SQL. Although this procedure does support bind variables, it cannot generate JSON from a REF Cursor. (Also, the fact that it is a procedure rather than a function makes it less flexible than it could be. Dear Apex Team, can we please have overloaded (function) versions of these JSON procedures?)
## REF Cursor to JSON: The (10g) solution
So I came up with this solution: Use DBMS_XMLGEN to generate XML based on a REF Cursor, and then transform the XML into JSON by using an XSLT stylesheet.
Note: As mentioned above, in Oracle 11g you can use DBMS_SQL to describe a REF Cursor, so you could write your own function to generate JSON from a REF Cursor, without going through XML first. (And perhaps in Oracle 12g the powers that be at Redwood Shores will provide us with a built-in DBMS_JSON package that can both generate and parse JSON?)
In the meantime, for Oracle 10g, I created the JSON_UTIL_PKG package.
Here is the code for the REF_CURSOR_TO_JSON function:
function ref_cursor_to_json (p_ref_cursor in sys_refcursor,
p_max_rows in number := null,
p_skip_rows in number := null) return clob
as
l_ctx dbms_xmlgen.ctxhandle;
l_num_rows pls_integer;
l_xml xmltype;
l_json xmltype;
l_returnvalue clob;
begin
/*
Purpose: generate JSON from REF Cursor
Remarks:
Who Date Description
------ ---------- -------------------------------------
MBR 30.01.2010 Created
*/
l_ctx := dbms_xmlgen.newcontext (p_ref_cursor);
dbms_xmlgen.setnullhandling (l_ctx, dbms_xmlgen.empty_tag);
-- for pagination
if p_max_rows is not null then
dbms_xmlgen.setmaxrows (l_ctx, p_max_rows);
end if;
if p_skip_rows is not null then
dbms_xmlgen.setskiprows (l_ctx, p_skip_rows);
end if;
-- get the XML content
l_xml := dbms_xmlgen.getxmltype (l_ctx, dbms_xmlgen.none);
l_num_rows := dbms_xmlgen.getnumrowsprocessed (l_ctx);
dbms_xmlgen.closecontext (l_ctx);
close p_ref_cursor;
if l_num_rows > 0 then
-- perform the XSL transformation
l_json := l_xml.transform (xmltype(get_xml_to_json_stylesheet));
l_returnvalue := l_json.getclobval();
else
l_returnvalue := g_json_null_object;
end if;
l_returnvalue := dbms_xmlgen.convert (l_returnvalue, dbms_xmlgen.entity_decode);
return l_returnvalue;
end ref_cursor_to_json;
## Examples of usage
Get a small dataset
declare
l_clob clob;
l_cursor sys_refcursor;
begin
l_cursor := employee_pkg.get_employees (10);
l_clob := json_util_pkg.ref_cursor_to_json (l_cursor);
dbms_output.put_line (substr(l_clob, 1, 200));
end;
{"ROWSET":[{"EMPNO":7782,"ENAME":"CLARK","JOB":"MANAGER","MGR":7839,"HIREDATE":"09.06.1981","SAL":2450,"COMM":null,"DEPTNO":10},{"EMPNO":7839,"ENAME":"KING","JOB":"PRESIDENT","MGR":null,"HIREDATE":"31.01.2005","SAL":5000,"COMM":null,"DEPTNO":10},{"EMPNO":7934,"ENAME":"MILLER","JOB":"CLERK","MGR":7782,"HIREDATE":"23.01.1982","SAL":1300,"COMM":null,"DEPTNO":10}]}
A large dataset, with paging
declare
l_clob clob;
l_cursor sys_refcursor;
begin
l_cursor := test_pkg.get_all_objects;
l_clob := json_util_pkg.ref_cursor_to_json (l_cursor, p_max_rows => 3, p_skip_rows => 5000);
dbms_output.put_line (substr(l_clob, 1, 1000));
end;
{"ROWSET":[{"OBJECT_ID":5660,"OBJECT_NAME":"LOGMNRT_SEED$","OBJECT_TYPE":"TABLE","LAST_DDL_TIME":"07.02.2006"},{"OBJECT_ID":5661,"OBJECT_NAME":"LOGMNRT_MDDL$","OBJECT_TYPE":"TABLE","LAST_DDL_TIME":"07.02.2006"},{"OBJECT_ID":5662,"OBJECT_NAME":"LOGMNRT_MDDL\$_PK","OBJECT_TYPE":"INDEX","LAST_DDL_TIME":"07.02.2006"}]}
It works with nested datasets, too!
select d.deptno, d.dname,
cursor (select e.*
from emp e
where e.deptno = d.deptno) as the_emps
from dept d
declare
l_json clob;
begin
l_json := json_util_pkg.sql_to_json ('select d.deptno, d.dname,
cursor (select e.*
from emp e
where e.deptno = d.deptno) as the_emps
from dept d');
dbms_output.put_line (substr(l_json, 1, 10000));
end;
{"ROWSET":[{"DEPTNO":10,"DNAME":"ACCOUNTING",
"THE_EMPS":[{"EMPNO":7782,"ENAME":"CLARK","JOB":"MANAGER","MGR":7839,"HIREDATE":"09.06.1981","SAL":2450,"COMM":null,"DEPTNO":10},
{"EMPNO":7839,"ENAME":"KING","JOB":"PRESIDENT","MGR":null,"HIREDATE":"31.01.2005","SAL":5000,"COMM":null,"DEPTNO":10},
{"EMPNO":7934,"ENAME":"MILLER","JOB":"CLERK","MGR":7782,"HIREDATE":"23.01.1982","SAL":1300,"COMM":null,"DEPTNO":10}]},
{"DEPTNO":20,"DNAME":"RESEARCH",
"THE_EMPS":[{"EMPNO":7369,"ENAME":"SMITH","JOB":"SALESMAN","MGR":7902,"HIREDATE":"17.12.1980","SAL":880,"COMM":null,"DEPTNO":20},
{"EMPNO":7566,"ENAME":"JONES","JOB":"MANAGER","MGR":7839,"HIREDATE":"02.04.1981","SAL":2975,"COMM":null,"DEPTNO":20},
{"EMPNO":7788,"ENAME":"SCOTT","JOB":"ANALYST","MGR":7566,"HIREDATE":"09.12.1982","SAL":3000,"COMM":null,"DEPTNO":20},
{"EMPNO":7902,"ENAME":"FORD","JOB":"ANALYST","MGR":7566,"HIREDATE":"03.12.1981","SAL":3000,"COMM":null,"DEPTNO":20},
{"EMPNO":9999,"ENAME":"BRATEN","JOB":"CLERK","MGR":7902,"HIREDATE":"05.05.2009","SAL":1000,"COMM":null,"DEPTNO":20},
{"EMPNO":9998,"ENAME":"DOE","JOB":"CLERK","MGR":7902,"HIREDATE":"25.04.2009","SAL":500,"COMM":null,"DEPTNO":20}]},
{"DEPTNO":30,"DNAME":"SALES",
"THE_EMPS":[{"EMPNO":7499,"ENAME":"ALLEN","JOB":"SALESMAN","MGR":7698,"HIREDATE":"20.02.1981","SAL":1600,"COMM":300,"DEPTNO":30},
{"EMPNO":7521,"ENAME":"WARD","JOB":"SALESMAN","MGR":7698,"HIREDATE":"22.02.1981","SAL":3200,"COMM":500,"DEPTNO":30},
{"EMPNO":7654,"ENAME":"MARTIN","JOB":"SALESMAN","MGR":7698,"HIREDATE":"28.09.1981","SAL":1250,"COMM":1400,"DEPTNO":30},
{"EMPNO":7698,"ENAME":"BLAKE","JOB":"MANAGER","MGR":7839,"HIREDATE":"01.05.1981","SAL":2850,"COMM":null,"DEPTNO":30},
{"EMPNO":7844,"ENAME":"TURNER","JOB":"SALESMAN","MGR":7698,"HIREDATE":"08.09.1981","SAL":1500,"COMM":0,"DEPTNO":30},
{"EMPNO":7900,"ENAME":"JAMES","JOB":"CLERK","MGR":7788,"HIREDATE":"03.12.1981","SAL":950,"COMM":null,"DEPTNO":30}]},
{"DEPTNO":40,"DNAME":"OPERATIONS",
"THE_EMPS":null}]}
Passing a REF Cursor directly to the function call by using the CURSOR function:
select json_util_pkg.ref_cursor_to_json(cursor(select * from emp))
from dual
You can download the complete package, including the XSLT stylesheet, here (spec) and here (body).
Update 12.02.2011: This package can now be downloaded as part of the Alexandria library for PL/SQL.
Note that to compile the packages you need the following SQL type defined in your schema:
create type t_str_array as table of varchar2(4000);
/
## Thursday, February 4, 2010
### My first Apex 4 plugin: Flight Info from Web Service
One of the exciting new features in Apex 4 is the support for plugin regions and items. This feature has huge potential, and will make development with Apex even more efficient, productive, and fun. There are already several plugins out there, and I think we will see a lot of interesting work in this area after Apex 4 is released.
Here is my own first attempt at a (useful) plugin: A region plugin that displays up-to-date flight information for airports in Norway, based on public flight data provided by Avinor, the company that operates the Norwegian airport network.
Avinor has a simple web service that provides flight information in XML format.
I am sure there are similar (web) services for flight information in other countries (feel free to leave a comment below if you know of any).
Here is the PL/SQL code behind the plugin:
procedure render_my_plugin (
p_region in apex_plugin.t_region,
p_plugin in apex_plugin.t_plugin,
p_is_printer_friendly in boolean )
as
l_clob clob;
l_airport_code varchar2(20) := p_region.attribute_01;
l_direction varchar2(20) := p_region.attribute_02;
begin
l_clob := apex_web_service.make_rest_request(
p_url => 'http://flydata.avinor.no/XmlFeed.asp',
p_http_method => 'GET',
p_parm_name => apex_util.string_to_table('airport:direction'),
p_parm_value => apex_util.string_to_table(l_airport_code || ':' || l_direction )
);
if l_direction = 'D' then
htp.p('<p><b>Departures from ' || l_airport_code || '</b></p>');
else
htp.p('<p><b>Arrivals to ' || l_airport_code || '</b></p>');
end if;
htp.p('<table width="100%">');
htp.p('<tr><td>AIRLINE</td><td>FLIGHT</td><td>AIRPORT</td><td>TIME</td><td>GATE</td></tr>');
for l_rec in (
SELECT *
FROM XMLTABLE ('//airport/flights/flight'
PASSING XMLTYPE(l_clob)
COLUMNS unique_id varchar2(100) path '@uniqueID',
airline varchar2(10) path 'airline',
flight_id varchar2(20) path 'flight_id',
airport varchar2(20) path 'airport',
schedule_time varchar2(100) path 'schedule_time',
gate varchar2(100) path 'gate')
ORDER BY airline, flight_id) loop
htp.p('<tr><td>' || l_rec.airline || '</td><td>' || l_rec.flight_id || '</td><td>' || l_rec.airport || '</td><td>' || l_rec.schedule_time || '</td><td>' || l_rec.gate || '</td></tr>');
end loop;
htp.p('</table>');
htp.p('<a href="http://www.avinor.no">Flight data from Avinor.</a> Last updated: ' || to_char(sysdate, 'dd.mm.yyyy hh24:mi:ss'));
end render_my_plugin;
The code illustrates several concepts:
• How to render a region plugin using the PL/SQL Web Toolkit (HTP.P) calls
• How to retrieve values from the attributes defined for the plugin
• Using the new APEX_WEB_SERVICE.MAKE_REST_REQUEST function to retrieve a web page as a CLOB
• Using the XMLTABLE function to transform XML into a recordset that can be used in a SELECT
An export of my plugin can be downloaded here, and installed into your own Apex 4 application.
After the plugin has been installed, using the plugin is as simple as adding a Region (of type Plugin) to the page, and configuring the values for Airport and Direction (the plugin attributes) in the region definition.
You can see a live demo of the plugin here (public page, does not require authentication):
http://tryapexnow.com/apex/f?p=test4ea:plugin_demo:0
Note that for this page, I've also taken advantage of the built-in region caching feature of Apex. The region cache duration is set to 10 minutes, which prevents us from hitting the remote web service for every page view. I really like that you can switch on region caching in Apex without writing a single line of code.
Conclusion: Apex 4 plugins rock!
## Wednesday, January 27, 2010
### ODP.NET minimal, non-intrusive install
This might be of interest for those who use .NET to connect to Oracle databases. (Including yours truly, who wrote the Thoth Gateway, a mod_plsql replacement that runs on Microsoft IIS, using C# and ODP.NET.)
A while back, Microsoft officially deprecated their ADO.NET driver for Oracle (System.Data.OracleClient).
Fortunately, Oracle offers its own .NET driver, known as the Oracle Data Provider for .NET (ODP.NET). This driver is a better choice for Oracle connectivity, since it supports a wider range of Oracle-specific features and offers improved performance.
However, ODP.NET, unlike, say, the thin JDBC drivers, still requires the normal Oracle client to be present on the machine. This Oracle client can be something of a beast, with the install package upwards of 200 megabytes. Couple this with the fact that you may have several different Oracle client versions installed on your machine (or application server), all specific to some application that you dare not touch for fear of breaking it.
### A non-intrusive install
So, here is how you can use ODP.NET with the following advantages:
• Small footprint (between 30 and 100 megabytes)
• XCopy deployment
• No dependency on shared files, all files in your own application's folder
• No registry or system environment changes required
• No tnsnames.ora file required
• No interference from other Oracle client installs on the same machine
Sounds good, doesn't it? Let's see how this can be accomplished...
### 1. Download ODP.NET

http://www.oracle.com/technology/software/tech/windows/odpnet/utilsoft.html
Unzip the file and locate the following 2 files:
• OraOps11w.dll
• Oracle.DataAccess.dll
Copy these files to your application's "bin" folder.
### 2. Download the Oracle Instant Client

http://www.oracle.com/technology/software/tech/oci/instantclient/index.html

You have a choice between the following two versions of the Instant Client:
a) Instant Client Basic (approx. 100 megabytes)
Unzip the file and locate the following 3 files:
• oci.dll
• orannzsbb11.dll
• oraociei11.dll
b) Instant Client Basic Lite (approx. 30 megabytes): This version is smaller but only supports certain character sets (WE8MSWIN1252 and AL32UTF8 are among them). It only has English messages, so in case you wonder what "ORA-06556: The pipe is empty" sounds like in your own language, go for the non-Lite version.
Unzip the file and locate the following 3 files:
• oci.dll
• orannzsbb11.dll
• oraociicus11.dll
Whichever version you choose, copy these files to your application's "bin" folder. You now have a total of 5 new files in your "bin" folder.
### 3. Connection string
In your .NET program, use a connect string in the following format, to make sure you don't need to rely on any network configuration files (tnsnames.ora, etc.).
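For example (a sketch; host, port, service name and credentials below are placeholders), embedding the full descriptor in the Data Source avoids any tnsnames.ora lookup:

// C#, using the ODP.NET provider
using Oracle.DataAccess.Client;

string connStr =
    "User Id=scott;Password=tiger;" +
    "Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost.example.com)(PORT=1521))" +
    "(CONNECT_DATA=(SERVICE_NAME=orcl)))";

using (OracleConnection conn = new OracleConnection(connStr))
{
    conn.Open(); // connects without any tnsnames.ora or registry configuration
}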
### 4. Configuration
This is mostly relevant if you have other Oracle client installations already on the same machine/server.
In your configuration file (web.config), you can explicitly set the path to the Oracle DLLs you want to use. Set the "DllPath" parameter to the name of your "bin" folder.
<configuration>
  <oracle.dataaccess.client>
    <settings>
      <!-- the value is a placeholder; point it at your application's bin folder -->
      <add name="DllPath" value="C:\MyApp\bin"/>
    </settings>
  </oracle.dataaccess.client>
</configuration>
### 5. That's it!
You should now be able to run your ODP.NET application from your "bin" folder.
https://scicomp.stackexchange.com/questions/2541/interpolate-2d-data/2547 | # Interpolate 2D data
I generated a cartesian grid in Python using NumPy's linspace and meshgrid, and I obtained some data over this 2D grid from an unknown function. I want to get an approximation of the value of the function over some points inside the boundaries of the grid which are not part of it. I do not have some other unstructured grid, I just want to know the value in certain points. I assume I have to interpolate the data somehow, but I am very clueless on this and reading the documentation about the interpolate module of SciPy and some related components doesn't help. How can I find out this interpolated data?
Note: I know what I want to accomplish, but I have never done a task like this before and I have some problems formulating this question in terms of proper concepts and vocabulary. If I am not clear enough please help me improve my question.
I particularly like the bivariate spline class for what I think you are describing. You can use it to make a function (i.e. it is callable at any point) which interpolates the data using a spline. If you want just a linear interpolation then you simply set the kx and ky values to 1. If you want a smoother function then increase the order of the spline (arguably 3 is a good choice); you can even use a smoothing factor, which I never do. The x and y values you would use are the ones that linspace gave you, and the z values would be the function values.
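A minimal sketch along those lines, presumably with scipy.interpolate.RectBivariateSpline for gridded data (the axes and the sample function below are stand-ins for your own linspace/meshgrid data; note that RectBivariateSpline expects z with shape (len(x), len(y))):

import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(0.0, 1.0, 50)                  # grid axes from linspace
y = np.linspace(0.0, 2.0, 80)
z = np.sin(x[:, None]) * np.cos(y[None, :])    # stand-in for your sampled data, shape (50, 80)

# kx=ky=1 gives plain bilinear interpolation; use 3 for smoother cubic splines
spline = RectBivariateSpline(x, y, z, kx=1, ky=1)

print(spline.ev(0.37, 1.21))                   # interpolated value at an off-grid point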
You can find a good overview of methods and vocabulary on interpolation in two dimensions at http://en.wikipedia.org/wiki/Multivariate_interpolation
• Thank you Arnold! This is indeed very helpful. I am checking this and see if now I can solve the problem, or at least improve the question. – astrojuanlu Jun 15 '12 at 20:03
There is a nice video made by Travis Oliphant where he discusses 2D interpolation using Python: see the YouTube video Python Interpolation 3 of 4: 2d interpolation with Rbf and interp2d.
• Didn't know the video, thank you for the resource. – astrojuanlu Jun 16 '12 at 17:44
Let's say you have a 2D grid with the X-axis running from $\{0,1,\ldots,i,\ldots,M\}$ and the Y-axis running from $\{0,1,\ldots,j,\ldots,N\}$. Each $i,j$ is a non-negative integer.
Your data over the grid can be viewed as a function of the grid locations $(i,j)$. In effect, the data $z = f(i,j)$.
Let's say you want the value of this function at $(i',j')$, where $i'=i+\delta{i}$ and $j'=j+\delta{j}$, such that $\delta{i}$ and $\delta{j}$ are the fractional decimal values between $0$ and $1$. Your problem is to find $z' = f(i',j') = f(i+\delta{i}, j+\delta{j})$.
There are several options for interpolating on such a grid. One of the simplest methods is the nearest neighbor interpolation. In this kind of interpolation, you simply assign to $(i',j')$, the value of the closest grid point. A naive way of doing this is to round $i'$ and $j'$ to the nearest integer.
A slightly better interpolation scheme would use a weighted combination of its closest neighbors that lie on the grid. For example, with linear interpolation, you would use the four closest grid points $(i,j)$, $(i+1,j)$, $(i, j+1)$ and $(i+1,j+1)$ to find the appropriate interpolate value at $(i',j')$.
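Concretely, the bilinear weights are products of the fractional offsets (the standard formula, stated here for completeness):

$$z' = (1-\delta{i})(1-\delta{j})\,f(i,j) + \delta{i}(1-\delta{j})\,f(i+1,j) + (1-\delta{i})\delta{j}\,f(i,j+1) + \delta{i}\,\delta{j}\,f(i+1,j+1)$$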
For fast, easy spline interpolation on a uniform grid in 1d, 2d, 3d and up, I recommend scipy.ndimage.map_coordinates; see the plot and example code under multivariate-spline-interpolation-in-python-scipy on SO.
For smoothly-varying nonuniform grids, there's a helper class Intergrid.
https://crypto.stackexchange.com/questions/64780/does-composition-of-compressing-collision-resistant-hash-functions-h-h-2h | # Does Composition of compressing Collision Resistant Hash Functions $H^{*}=H_2(H_1(x))$ is Collision Resistant?
I'm trying to solve the following problem:
Given two CRHF $$H_1:2^{4n}\to2^{2n}$$, $$H_2:2^{2n}\to 2^n$$, construct the following hash function $$H^{*}=H_2(H_1(x))$$ compressing from $$2^{4n}\to 2^n$$.
We want to demonstrate that if $$H_1$$ and $$H_2$$ are collision resistant then $$H^*$$ must be collision resistant too.
However, I'm still unsure how to handle the "BAD event" in which $A^{H^*}$ outputs a collision for $H_{s_1}$; in this case $x^* = x'$ and the second part of the reduction doesn't work.
• Sure, I don't have it handy right now. I will post my solution as soon as possible. Thanks! Dec 11, 2018 at 20:20
• If $H_1$ and $H_2$ are collision resistant, then $H^*$ should be; if you demonstrate a collision in $H^*$, then you have a collision in either $H_1$ or $H_2$ (hence demonstrating that $H_1$ or $H_2$ wasn't collision resistant after all) Dec 12, 2018 at 15:27
• @poncho I had the same idea in the beginning but I was trying to make a straightforward demonstration. However now I am seriously thinking of making a reduction like you said. Thanks! Dec 12, 2018 at 15:48
• @kelalaka yeah, and the more I tried to put my demonstration in a formally correct way, the more it looked incorrect. Anyway, thank you both guys ;) Dec 13, 2018 at 21:49
• @kelalaka I've edited my post with one of my ideas Dec 15, 2018 at 15:40
Assume that $H_1$ and $H_2$ are collision resistant. We show that $H^*$ is collision resistant as well.

Let $x_1 \neq x_2$ be inputs such that $H^*(x_1) = H^*(x_2)$, i.e. $H_2(H_1(x_1)) = H_2(H_1(x_2))$: a collision pair for $H^*$.

Case 1: $H_1(x_1) = H_1(x_2)$. Then $(x_1, x_2)$ is itself a collision for $H_1$, which an efficient adversary can produce only with negligible probability.

Case 2: $y_1 = H_1(x_1) \neq H_1(x_2) = y_2$. Then $H_2(y_1) = H_2(y_2)$, so $(y_1, y_2)$ is a collision for $H_2$, again producible only with negligible probability.

Either way, a collision for $H^*$ yields a collision for $H_1$ or for $H_2$. Since both are collision resistant, $H^*$ must be collision resistant, too.
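In advantage notation (a standard way to summarize the reduction; $B_1$ and $B_2$ denote the adversaries built from $A$ in the two cases above):

$$\mathsf{Adv}^{\mathrm{coll}}_{H^*}(A) \;\leq\; \mathsf{Adv}^{\mathrm{coll}}_{H_1}(B_1) + \mathsf{Adv}^{\mathrm{coll}}_{H_2}(B_2)$$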
https://www.gamedev.net/forums/topic/481550-phsyx-actor-from-x/ | • 14
• 12
• 9
• 10
• 13
# PhysX Actor From X
## Recommended Posts
Hi, I have a problem. I have added some static objects to my scene with Ageia and now I want to create a sphere and move it around. This is the code to add the static stuff:

NxTriangleMeshDesc TriMeshDesc;
TriMeshDesc.numVertices = NumVerticies;
TriMeshDesc.numTriangles = NumTriangles;
TriMeshDesc.pointStrideBytes = sizeof(NxVec3);
TriMeshDesc.triangleStrideBytes = 3*sizeof(short);
TriMeshDesc.points = &verts[0].x;
TriMeshDesc.triangles = &tris[0];
TriMeshDesc.flags = NX_MF_16_BIT_INDICES;

MemoryWriteBuffer buf;
bool status = NxCookTriangleMesh(TriMeshDesc, buf);
ShapeDesc.meshData = g_CAgeia->m_PhysicsSDK->createTriangleMesh(MemoryReadBuffer(buf.data));

I do not add a body desc, and then I create the actor. Now if I use the Ageia code:

NxBodyDesc SphereBodyDesc;
SphereBodyDesc.angularDamping = 0.5f;
SphereBodyDesc.linearVelocity = NxVec3(0.0f, 0.0f, 0.0f);

NxSphereShapeDesc SphereDesc;
SphereDesc.radius = 1.0f;

NxActorDesc SphereActorDesc;
SphereActorDesc.shapes.pushBack(&SphereDesc);
SphereActorDesc.body = &SphereBodyDesc;
SphereActorDesc.density = 10.0f;
SphereActorDesc.globalPose.t = NxVec3(-70.0f, 32.0f, -190.0f);

and create a sphere with a body desc so it can move, it works perfectly: it collides with the scene. But if I want to create an NxActor from an ID3DXMesh* and move around with it, I have no idea what to do. If I do it like in step 1 and add a body desc, it falls down but never collides with anything; it just falls infinitely. So do I need another MeshDesc? I tried:

NxConvexMeshDesc convexDesc;
convexDesc.numVertices = NumVerticies;
convexDesc.pointStrideBytes = sizeof(NxVec3);
convexDesc.points = &verts[0].x;
convexDesc.numTriangles = NumTriangles;
convexDesc.triangles = &tris[0];
convexDesc.triangleStrideBytes = 3*sizeof(short);
convexDesc.flags = NX_CF_COMPUTE_CONVEX | NX_CF_16_BIT_INDICES;

but if I do this with:

NxInitCooking();
MemoryWriteBuffer buf2;
bool status2 = NxCookConvexMesh(convexDesc, buf2);
convexShapeDesc.meshData = g_CAgeia->m_PhysicsSDK->createConvexMesh(MemoryReadBuffer(buf2.data));

NxCookConvexMesh returns false and createConvexMesh fails. Any idea? Thanks
I'm assuming, since you had success with the static triangle meshes, that the data structure types that you are passing into NxConvexMeshDesc are correct. I would recommend doing the following:

convexDesc.points = verts;

instead of

convexDesc.points = &verts[0].x;

since the latter makes an assumption about the internal implementation of NxVec3, and you cannot be 100% certain that the implementation will be the same in the future, or that the member 'x' will always remain public. That's just a general coding comment. (Oh, and if you happen to be using pointers to DirectX data structures here instead of copying to Ageia data structures... it might work fine, but be aware that you are getting lucky in a way. No guarantee that DX and Ageia will always define their vectors in a way that you can cast between them like this, e.g., what if Ageia's vector class goes double precision even while keeping .xyz in the same order?)
So, I see two possible things that might be causing NxCookConvexMesh to fail.
First, you are using the flag NX_CF_COMPUTE_CONVEX and also passing in triangles via convexDesc.triangles and convexDesc.triangleStrideBytes. The flag tells the API to generate triangles by using a convex hull algorithm on a point cloud. If you tell it to do this but you are already giving it triangles (suggesting that you want it to use your triangles), it may be failing because you are giving it conflicting instructions.
Second, it might be failing because your triangles don't make up a convex mesh...maybe it is in reality concave.
Here's what I'd try, in this order:
1) Remove the NX_CF_COMPUTE_CONVEX flag and see what happens. If no failure, go with that. Your own triangles are a good, convex mesh.
2) If (1) fails, add back the NX_CF_COMPUTE_CONVEX flag and assign ConvexDesc.triangles = 0 and ConvexDesc.triangleStrideBytes = 0. See what happens.
My guess, assuming all else is good, is that one of these things will fix it.
A third option would be to treat your movable object using NxCookTriangleMesh, as you did with the static objects, just adding a body also to make it dynamic. But I would not do this at all unless you really have a concave mesh and need it to collide as concave. This will be your most expensive option.
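Putting options (1) and (2) together, here is a sketch of the dynamic convex setup (it uses only calls already shown in this thread, plus the final createActor; variable names are from the original post, and the scene pointer is assumed):

// Sketch: dynamic convex actor from a point cloud (PhysX 2.x / Ageia)
NxConvexMeshDesc convexDesc;
convexDesc.numVertices = NumVerticies;
convexDesc.pointStrideBytes = sizeof(NxVec3);
convexDesc.points = verts;               // point cloud only
convexDesc.flags = NX_CF_COMPUTE_CONVEX; // let the cooker build the hull

NxInitCooking();
MemoryWriteBuffer buf;
if (NxCookConvexMesh(convexDesc, buf))   // fails if the resulting hull is too complex
{
    NxConvexShapeDesc convexShapeDesc;
    convexShapeDesc.meshData = g_CAgeia->m_PhysicsSDK->createConvexMesh(MemoryReadBuffer(buf.data));

    NxBodyDesc bodyDesc;                 // a body desc makes the actor dynamic
    NxActorDesc actorDesc;
    actorDesc.shapes.pushBack(&convexShapeDesc);
    actorDesc.body = &bodyDesc;
    actorDesc.density = 10.0f;
    actorDesc.globalPose.t = NxVec3(-70.0f, 32.0f, -190.0f);
    NxActor* actor = gScene->createActor(actorDesc); // gScene: your NxScene* (assumed)
}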
1) Am I right that I cannot create an actor with an NxTriangleMeshDesc that performs dynamic collision detection? If I create one with such a struct and a body struct, it falls through everything and never collides with anything :S

2) Without the NX_CF_COMPUTE_CONVEX flag it still fails.

3) Now I set it like this:

NxConvexMeshDesc convexDesc;
convexDesc.numVertices = NumVerticies;
convexDesc.pointStrideBytes = sizeof(NxVec3);
convexDesc.points = verts;
convexDesc.numTriangles = 0; //NumTriangles;
convexDesc.triangles = 0; //&tris[0];
convexDesc.triangleStrideBytes = 0; //3*sizeof(short);
convexDesc.flags = NX_CF_COMPUTE_CONVEX | NX_CF_16_BIT_INDICES;
NxConvexShapeDesc convexShapeDesc;
convexShapeDesc.localPose.t = NxVec3(0,0,0);
NxInitCooking();
After this, NxCookConvexMesh still fails.
4) If I use a TriangleMesh and assign a body, NxCookTriangleMesh does not fail but the mesh never collides.

5) I don't really get what you mean by concave and convex, but I can send you the file if you want ;)
thank you
Look in the SDK docs, under collisions; there is a matrix that shows what works and doesn't, depending on the type of mesh you set up with. IIRC, forget the triangle mesh; it was almost useless to me for anything other than a static object, and it will not work with a terrain mesh.

BTW, you set it to short? It should be unsigned short. It shouldn't matter, since both are 16-bit, but it isn't correct:

TriMeshDesc.triangleStrideBytes = 3*sizeof(short);
of_ownage,
I would agree with MARS_999... actually, regardless of whether it technically would work for you, if you can avoid them, I would. They are going to be more expensive than the standard collision shapes. General triangle meshes are worse than convex meshes. The standard collision shapes use highly optimized, shape-specific code and can be very fast.
Now...you said something in your PM to me that gave me another clue about what might be going wrong. You said that your shapes had around 600 triangles. That is a problem! Ageia supports convex meshes only up to 256 triangles! Look at the NxConvexMesh documentation. It clearly states this triangle/polygon limit.
Quote:
Original post by MARS_999: Look in the SDK docs, under collisions; there is a matrix that shows what works and doesn't, depending on the type of mesh you set up with. IIRC, forget the triangle mesh; it was almost useless to me for anything other than a static object, and it will not work with a terrain mesh. BTW, you set it to short? It should be unsigned short. It shouldn't matter, since both are 16-bit, but it isn't correct: TriMeshDesc.triangleStrideBytes = 3*sizeof(short);

The size of a short is the same as the size of an unsigned short, so this line isn't a problem... though for clarity I personally would have made it sizeof(unsigned short).

The bigger issue is, of course, that of_ownage needs to make sure that the "tris" array is an array of unsigned shorts.
Quote:
Original post by grhodes_at_work: The size of a short is the same as the size of an unsigned short, so this line isn't a problem... though for clarity I personally would have made it sizeof(unsigned short). The bigger issue is, of course, that of_ownage needs to make sure that the "tris" array is an array of unsigned shorts.

I agree it's not an issue (like I stated before, both are 2 bytes), but for obvious reasons it's sloppy coding practice.

On the topic of 256 triangles: I know that is in the SDK docs too, but I'm not sure whether that reflects the 2.7.3 SDK, as I have a mesh with 2,000 polygons and it works fine... I have more issues with the mesh being concave and not convex enough; then cooking it blows up.

If you're using concave meshes, break the mesh up into chunks and try to make each one an NxConvexMeshDesc type.
MARS_999, your 2,000-polygon meshes are being cooked as convex? Interesting.
I'll also point to John Ratcliff's CreateDynamics source code, built to work with PhysX (and partly while he was employed by Ageia), which can do the convex decomposition for you if you have a concave mesh to begin with. It is SLOW, so it is more of a preprocess than a runtime thing.
John Ratcliff's Source Code (Look for "CreateDynamics")
OK, I use boxes now and they work OK for me. But sometimes their behavior is really strange: very small boxes stay above the ground while others fall onto it.

Maybe I'm confusing something with the creation of an Ageia actor. As far as I know, I need two things: the dimensions and the world position.

I have an ID3DXMesh* with a translate, a rotate and a scale matrix, and I have a local box with a min and a max point.

So first I compute the length/height/width of the new box, which is:

D3DXVECTOR3 newStats;
newStats.x = box.maxPoint.x - box.centerPoint.x;
newStats.y = box.maxPoint.y - box.centerPoint.y;
newStats.z = box.maxPoint.z - box.centerPoint.z;

Now I have a vector with the half length, half height and half width, and Ageia wants that.

Now, since my mesh is scaled and rotated, I need to do that with my local box too, so I do this:

D3DXMATRIX combined = rotate * scale;
D3DXVec3TransformCoord(&newStats, &newStats, &combined);

Now I have a scaled, rotated new local box and pass it to Ageia:

boxDesc.dimensions.set(NxVec3(newStats.x, newStats.y, newStats.z));

After this I set actorDesc.globalPose.t to pos, which is the position of my ID3DXMesh* in world space.

Is this correct or am I doing something wrong?
Quote:
OK, I use boxes now and they work OK for me. But sometimes their behavior is really strange: very small boxes stay above the ground while others fall onto it.

Not sure if you're already doing this, but using the PhysX Remote Visual Debugger is VERY helpful, as it shows you exactly what your actors are doing. To use it, just add this line to your program after you create your PhysX SDK instance:

gPhysicsSDK->getFoundationSDK().getRemoteDebugger()->connect("localhost");

N.B. you can replace localhost with the IP address of another machine for remote debugging.

Then open the Remote Debugger tool at:

Start->All Programs->AGEIA PhysX SDK 2.7.3->Tools->Remote Debugger

It will run concurrently with your program and show you exactly what's going on in the PhysX simulation. This has gotten me out of a few scrapes in the past, so I hope you find it as useful.
https://www.physicsforums.com/threads/integration-find-the-x-coordinate-and-the-area-under-the-curve.556807/ | # Integration - Find the x coordinate and the area under the curve
• #1
## Homework Statement
http://imageshack.us/photo/my-images/703/dfdfu.jpg
## Homework Equations
To find the x coordinate
1. Set the two equations equal to each other, isolate the exponentials and take logs. I'm not sure how to do this and I've tried but keep getting the wrong answer.
2. To find the area, subtract the lower curve from the higher curve and integrate. I can do this. I only need help finding the x coordinate.
## The Attempt at a Solution
1. y = e^(x-3)/2 and y = e^(2x-7)
e^(x-3)/2 = e^(2x-7)
Now take logs of both sides. How?
• #2
$$\ln (e^{f(x)})=f(x)$$ But watch out for the fraction over 2. First, move it over to the other side. Then be careful, as $\ln (2e^{2x-7}) \neq 2e^{2x-7}$
• #3
$$\ln (e^{f(x)})=f(x)$$ But watch out for the fraction over 2. First, move it over to the other side. Then be careful, as $\ln (2e^{2x-7}) \neq 2e^{2x-7}$
1/2e^(x-3)=e^(2x-7)
e^(x-3)=2(e^(2x-7))
lne^(x-3)=ln(e^(2x-7))
(x-3)/2=2x-7
x=11/3
• #4
Whoops! Your 2 disappeared for a line, then it came back. You can't do that. You still need to figure out what $\ln (2e^{2x-7})$ is. Hint: $\ln (a) + \ln (b)=\ln (ab)$
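For the record, carrying the hint through (assuming the first curve is $y=\frac{1}{2}e^{x-3}$, as post #3 treats it):

$$e^{x-3}=2e^{2x-7} \;\Rightarrow\; x-3=\ln 2 + (2x-7) \;\Rightarrow\; x = 4-\ln 2$$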
https://www.physicsforums.com/threads/instantaneous-displacement-of-a-sound-wave.794364/ | # Instantaneous Displacement of a Sound Wave
1. Jan 26, 2015
### Mtscorpion12
1. The problem statement, all variables and given/known data
A sinusoidal sound wave moves through a medium and is described by the displacement wave function s(x,t) = 2.00cos(15.7x - 858t) where s is in micrometers, x is in meters, and t is in seconds. Find a) the amplitude, b) the wavelength, and c) the speed of this wave. D) Determine the instantaneous displacement from equilibrium of the elements of the medium at the position x=0.050 m and t=3.00 ms. E) Determine the maximum speed of the element's oscillatory motion.
2. Relevant equations
3. The attempt at a solution
I have already figured out A, B, and C but cannot figure out D or E. I believe D is just to find the derivative and plug in the given X and T, but I do not know how to find the derivative of a three variable function. Also, I believe E just requires setting the derivative equal to 0 and finding when it is a maximum.
2. Jan 26, 2015
### Nathanael
No need; it asks for the instantaneous displacement. The given function s(x,t) gives you the displacement at every position and every time.
That would give you the maximum displacement, (not the maximum speed,) but you don't even need to differentiate to find that; it's simply the amplitude.
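For reference, a worked check of the numbers (D from direct substitution into s(x,t); E from the amplitude of $\partial s/\partial t$, i.e. $\omega s_{max}$):

$$s(0.0500\,\mathrm{m},\ 3.00\,\mathrm{ms}) = 2.00\cos(15.7 \cdot 0.0500 - 858 \cdot 0.00300)\,\mu\mathrm{m} = 2.00\cos(-1.79)\,\mu\mathrm{m} \approx -0.433\,\mu\mathrm{m}$$

$$v_{max} = \omega s_{max} = (858\,\mathrm{s^{-1}})(2.00\,\mu\mathrm{m}) \approx 1.72\,\mathrm{mm/s}$$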
3. Jan 26, 2015
### Mtscorpion12
Clearly I overthought this problem way too much. I got it now.
Thank you very much.
https://mathoverflow.net/questions/99111/orbifold-vs-alexandrove-space-vs-limit-of-manifolds | # Orbifold vs Alexandrove space vs Limit of manifolds
I have a question about some definitions: orbifold, Alexandrov space, and limit of manifolds in the Gromov-Hausdorff distance sense.
Consider following example.
Let $r > 0$ and define

$L_c = \{ (x\cos\theta,\; x\sin\theta,\; cx) \mid x \geq 0,\; 0 \leq \theta < 2\pi \}$

$S:\; (z-\sqrt{2}\,r)^2 + x^2 + y^2 = r^2$

$T:\; (z-\sqrt{2}\,R)^2 + x^2 + y^2 = R^2$

If $R$ is sufficiently large, then we have a two-dimensional sphere $U_c$ enclosed by $L_c$, $S$, and $T$.

First notice the following: $\lim_{r \rightarrow 0} U_c$ is an orbifold for some $c$.
Question: Is every orbifold a limit of manifolds in the Gromov-Hausdorff sense?
If this question is too broad, we can restrict to the case of nonnegatively curved orbifolds: is a nonnegatively curved $n$-orbifold a limit of positively curved $n$-manifolds?
Question 2: In the following paper, a space with curvature $\geq k$ is defined.
Yu. Burago, M. Gromov, and G. Perelman, A. D. Alexandrov spaces with curvature bounded below, Uspekhi Mat. Nauk 47 (2) (1992), 3–51.
Is an $n$-dimensional space with curvature $\geq k$, which is smooth except at finitely many points, a limit of $n$-manifolds of sectional curvature $\geq k$? I believe that this question is easy and that the answer is yes.
I do not think that all orbifolds or spaces with curvature $\geq k$ are limits of manifolds. However, I cannot rule it out.
Since ${\bf R}^3={\bf R}^4 /S^1 = \lim_{k \rightarrow \infty} {\bf R}^4/{\bf Z}_k$, orbifolds are different from spaces with curvature $\geq k$. But they are obtained from sequences of manifolds.
MOTIVATION: Hsiang-Kleiner classified positively curved manifolds with $S^1$-action. I want to extend this result to positively curved orbifolds with $S^1$-action.

If an orbifold is a limit of manifolds, then the problem is simple.

Accordingly, I want to know the answers to these questions. Thank you for your attention.
• Every compact length space is a Gromov-Hausdorff limit of two-dimensional Riemannian manifolds. You need to be more precise in your question. Do the approximating manifolds have to have the same lower curvature bound? Same dimension? – Sergei Ivanov Jun 8 '12 at 12:38
• @ IVanov : Thank you. Your suggestion is helpful for me. – Hee Kwon Lee Jun 8 '12 at 13:26
• One technical point: Orbifolds are different animals than topological spaces. An orbifold is a space plus some extra structure. That said, I think that your question is fairly self explanatory (at least in this regard). – Spice the Bird Jun 8 '12 at 19:53
Q1. Note that one oriented orthonormal frame bundle $FO$ over a smooth orbifold $O$ is a smooth manifold. This frame bundle admits a one-parameter family of metrics which collapse to the original manifold and its curvature can be made bounded from below.
Proof. Equip $FO$ with $SO(n)$-invariant metric, consider product space $[\varepsilon\cdot SO(n)]\times FO$ and factorize along diagonal action. (This often called Cheeger's trick, but I think it was known before Cheeger.)
Q2. It is an open question. It is expected that cones over some positively curved manifolds cannot be approximated. Vitali Kapovitch has examples of such $n$-dimensional cones which cannot be approximated by $m$-dimensional manifolds with $m < n+8$.
By Perelman's Stability Theorem, if a (compact) limit of $n$-dimensional Alexandrov spaces of curvature $\ge k$ has the same dimension, then the convergent spaces are eventually homeomorphic to the limit space. So in this context a limit of manifolds is always a topological manifold.
And there exist non-manifold examples of Alexandrov spaces, e.g., $\mathbb R^3/\{id,-id\}$ which is the cone over $\mathbb {RP}^2$. Or, if you want a compact example, take $S^3/\mathbb Z_2$ where $\mathbb Z_2$ acts on $S^3$ with two fixed points (think of $S^3$ lying in $\mathbb R^4$ and consider the reflection in some 1-dimensional axis). This is a compact orbifold of curvature $\ge 1$ and it is not a limit of manifolds of the same dimension with a uniform lower curvature bound. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9251065850257874, "perplexity": 386.92690089013934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528687.63/warc/CC-MAIN-20190723022935-20190723044935-00428.warc.gz"} |
https://documentation.aimms.com/language-reference/optimization-modeling-components/automatic-benders-decomposition/benders-decomposition-textbook-algorithm.html | # Benders’ Decomposition - Textbook Algorithm
Master problem
The basic Benders’ decomposition algorithm as explained in several textbooks (e.g., [NW88], [Mar99]) works as follows. After introducing an artificial variable $$\eta = d^Ty$$, the master problem relaxation becomes:
\begin{align}
& \text{minimize}   & & c^Tx + \eta \\
& \text{subject to} & & Ax \leq b \\
&                   & & \eta \geq \overline{\eta} \\
&                   & & x \in \mathbb{Z}^n_+
\end{align}
Here $$\overline{\eta}$$ is a lower bound on the variable $$\eta$$ that AIMMS will automatically derive. For example, if the vector $$d$$ is nonnegative then we know that 0 is a lower bound on $$d^Ty$$ since we assumed that the variable $$y$$ is nonnegative, and therefore we can take $$\overline{\eta} = 0$$. We assume that the master problem is bounded.
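The relaxed master is an ordinary mixed-integer program, so any MIP solver can handle it. As a minimal, hedged sketch (not AIMMS code; the generic cut format $g^Tx + h\eta \leq w$ and all names here are ours), it could look as follows using SciPy:

```python
# A minimal sketch of the relaxed master problem, assuming SciPy >= 1.9
# (scipy.optimize.milp) and NumPy arrays for the problem data. The cut
# triple (g, h, w), meaning g^T x + h*eta <= w, is our own convention.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def solve_master(c, A, b, eta_lb, cuts):
    """min c^T x + eta  s.t.  A x <= b,  eta >= eta_lb,
    x integer >= 0, plus the accumulated Benders' cuts."""
    n = len(c)
    obj = np.append(c, 1.0)                         # variables: (x, eta)
    cons = [LinearConstraint(np.hstack([A, np.zeros((len(b), 1))]), ub=b)]
    for g, h, w in cuts:                            # optimality/feasibility cuts
        cons.append(LinearConstraint(np.atleast_2d(np.append(g, h)), ub=w))
    integrality = np.append(np.ones(n), 0)          # x integer, eta continuous
    bounds = Bounds(lb=np.append(np.zeros(n), eta_lb))
    res = milp(obj, constraints=cons, integrality=integrality, bounds=bounds)
    return res.x[:n], res.x[n]                      # (x*, eta*)
```

Under this encoding, an optimality cut $\eta \geq \overline{\pi}^T(r - Tx)$ becomes $g = -T^T\overline{\pi}$, $h = -1$, $w = -\overline{\pi}^Tr$, and a feasibility cut uses the same $g$ and $w$ with $h = 0$.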
Subproblem
After solving the master problem we obtain an optimal solution, denoted by $(x^*,\eta^*)$ with $x^*$ integer. This solution is fixed in the subproblem, which we denote by $PS(x^*)$:
\begin{align}
& \text{minimize}   & & d^Ty \\
& \text{subject to} & & Qy \leq r - Tx^* \\
&                   & & y \in \mathbb{R}^m_+
\end{align}
Note that this subproblem is a linear programming problem in which the continuous variable $$y$$ is the only variable.
Dual subproblem
Textbooks that explain Benders’ decomposition often use the dual of this subproblem because duality theory plays an important role, and the Benders’ optimality and feasibility cuts can be expressed using the variables of the dual problem. The dual of the subproblem $$PS(x^*)$$ is given by:
\begin{align}
& \text{maximize}   & & \pi^T(r - Tx^*) \\
& \text{subject to} & & \pi^TQ \leq d^T \\
&                   & & \pi \leq 0
\end{align}
We denote this problem by $$DS(x^*)$$.
Optimality cut
If this dual subproblem has a bounded optimal solution, let $z^*$ denote the optimal objective value and $\overline{\pi}$ an optimal solution of $DS(x^*)$. If $z^* \leq \eta^*$ then the current solution $(x^*,\eta^*)$ is a feasible and optimal solution of our original problem, and the Benders' decomposition algorithm only needs to solve $PS(x^*)$ to obtain optimal values for the variable $y$. If $z^* > \eta^*$ then the Benders' optimality cut $\eta \geq \overline{\pi}^T (r - Tx)$ is added to the master problem and the algorithm continues by solving the master problem again.
Feasibility cut
If the dual subproblem is unbounded, implying that the primal subproblem is infeasible, then an unbounded extreme ray $\overline{\pi}$ is selected and the Benders' feasibility cut $\overline{\pi}^T (r - Tx) \leq 0$ is added to the master problem. Modern solvers like CPLEX and Gurobi can provide an unbounded extreme ray in case an LP problem is unbounded. After adding the feasibility cut, the Benders' decomposition algorithm continues by solving the master problem.
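To make the per-iteration logic concrete, here is a hedged sketch (ours, not AIMMS's) of solving $DS(x^*)$ with SciPy and classifying the outcome; note that SciPy's linprog does not expose unbounded rays, which is why the infeasible case below is left as a placeholder for a solver such as CPLEX or Gurobi:

```python
# A sketch only: solves DS(x*) = max pi^T (r - T x*) s.t. pi^T Q <= d^T,
# pi <= 0 (the dual written above), using SciPy's HiGHS interface.
import numpy as np
from scipy.optimize import linprog

def solve_dual_subproblem(T, Q, r, d, x_star):
    rhs = r - T @ x_star
    res = linprog(c=-rhs,                           # linprog minimizes
                  A_ub=Q.T, b_ub=d,                 # pi^T Q <= d^T
                  bounds=[(None, 0.0)] * len(r),    # pi <= 0
                  method="highs")
    if res.status == 0:                             # bounded optimum
        z_star, pi_bar = -res.fun, res.x
        return "optimality", pi_bar, z_star         # cut: eta >= pi_bar^T (r - T x)
    if res.status == 3:                             # dual unbounded => PS(x*) infeasible
        raise NotImplementedError("a feasibility cut needs an unbounded "
                                  "extreme ray, which linprog does not return")
    raise RuntimeError(res.message)
```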
http://cs.stackexchange.com/users/2596/arani | Arani
reputation 18 · member for 2 years, 3 months · seen Jun 6 '13 at 8:06 · profile views 6
Answers:
- 5 · Is there any theoretically proven optimal compression algorithm?
- 4 · Big Omega of $n \log n$
- 2 · Is the codomain/range of a hash function always $\mathbb{Z}$ or $\mathbb{N}$?
- 2 · LFSR sequence computation
- 1 · Complexity calculations, assumptions on basic costs
353 Reputation
- +5 Reduction from set cover problem to vertex cover problem
- +10 Big Omega of $n \log n$
- +28 Is the codomain/range of a hash function always $\mathbb{Z}$ or $\mathbb{N}$?
- +5 How to modify semantic actions when removing left-recursion from a grammar
4 Questions
- 8 · Applying algorithms on large data
- 3 · Reduction from set cover problem to vertex cover problem
- 1 · How to modify semantic actions when removing left-recursion from a grammar
- 0 · Decomposition of a relation to 3NF
23 Tags
5 algorithms ×3 · 4 asymptotics · 5 data-compression · 2 hash · 5 information-theory · 2 terminology · 4 landau-notation · 2 cryptography · 4 master-theorem · 2 pseudo-random-generators
12 Accounts
History 2,165 rep 724 · Computer Science 353 rep 18 · Stack Overflow 178 rep 19 · Academia 151 rep 4 · Personal Productivity 143 rep 3
https://www.physicsforums.com/threads/rotational-kinematics-acceleration.138380/ | # Rotational kinematics - acceleration
1. Oct 14, 2006
### physgirl
so there's a wheel with radius R and mass M. there's also a hub attached to the wheel's center with radius r and mass m. there's also a mass X suspended from a massless string that's wound around the hub. if the axle has negligible radius and mass and both wheel and the hub are solid with uniform density, how would you find the acceleration of the suspended mass after it's released?
i thought what it was asking for was the tangential acceleration, which i found to be equal to (F*r^2)/I.... (since tan acc is r*alpha where alpha is r*F/I because torque = I*alpha and also r*F)... so i tried doing:
[r^2*X*g]/0.5[M*R+m*r]
but that doesn't work and i'm not sure what i'm doing wrong... can someone point me in the right direction?
thanks!
2. Oct 14, 2006
### Staff: Mentor
Two problems:
(1) The force F pulling on the hub does not equal the weight of the hanging mass. It does equal the tension in the string.
(2) The rotational inertia of a disk is 0.5MR^2.
Set up equations (Newton's 2nd law) for both wheel/hub and hanging mass and solve them together to get the acceleration.
3. Oct 14, 2006
### physgirl
1- isn't the tension of the string the same thing as mg?
also, what do you mean by "set up equations... to get the acceleration"? the 2nd law equation for the wheel/hub would be (m+M)alpha=F and for the hanging mass it would be F=Xg?...? i'm lost :(
4. Oct 15, 2006
### Staff: Mentor
No. Think about it: if the tension in the string was equal to mg, then the net force on the mass would be zero. It would just sit there. (This is what would happen if you hung the mass from a string that was fixed to the roof, say.) But since that string moves, the tension will be less than mg.
Here are the equations you need:
(a) Torque = I alpha ==> Fr = I_total*alpha = I_total*a/r
(b) Xg - F = Xa
Note: We know that the acceleration is down, so I take down to be positive.
5. Oct 15, 2006
### physgirl
I see, this is probably a dumb question but why did you go from Fr=I_total*alpha => F = I_total*a/r (that is, alpha to a?)
and if you didn't mean to switch over... I know I'm supposed to be solving for acceleration, so how do i know what alpha is?
lastly... I don't really understand the concept of "I"... the mass and radius of what object is supposed to be involved...? is it everything that's involved in the whole system? or just the wheel that's actually doing the turning? or just the hub that the string is directly attached to? or both?
6. Oct 15, 2006
### Staff: Mentor
Why? Because we are trying to find "a", not alpha. Since the wheel/hub is connected to the hanging mass via the string, alpha can be related to "a" via: a = alpha*r (where r is the radius of the hub that the string wraps around).
They are directly related. (See above.)
The whole thing turns as one piece, so you must use the "I" for the entire wheel/hub object--which is just the sum of I_wheel and I_hub.
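For anyone reading along: putting those two equations together gives the closed form (just the algebra, using the quantities defined above):
$$Fr = I_{\rm total}\frac{a}{r} \;\Rightarrow\; F = \frac{I_{\rm total}\,a}{r^2}, \qquad Xg - \frac{I_{\rm total}\,a}{r^2} = Xa \;\Rightarrow\; a = \frac{Xg}{X + I_{\rm total}/r^2},$$
with $I_{\rm total} = \tfrac{1}{2}MR^2 + \tfrac{1}{2}mr^2$.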
https://www.zora.uzh.ch/id/eprint/92213/ | # Search for the standard model Higgs boson produced in association with a W or a Z boson and decaying to bottom quarks
CMS Collaboration; Chatrchyan, S; Khachatryan, V; Sirunyan, A M; et al; Chiochia, V; Kilminster, B; Robmann, P (2014). Search for the standard model Higgs boson produced in association with a W or a Z boson and decaying to bottom quarks. Physical Review D (Particles, Fields, Gravitation and Cosmology), 89(1):012003.
## Abstract
A search for the standard model Higgs boson (H) decaying to bb̄ when produced in association with a weak vector boson (V) is reported for the following channels: W(μν)H, W(eν)H, W(τν)H, Z(μμ)H, Z(ee)H, and Z(νν)H. The search is performed in data samples corresponding to integrated luminosities of up to 5.1 fb⁻¹ at √s = 7 TeV and up to 18.9 fb⁻¹ at √s = 8 TeV, recorded by the CMS experiment at the LHC. An excess of events is observed above the expected background with a local significance of 2.1 standard deviations for a Higgs boson mass of 125 GeV, consistent with the expectation from the production of the standard model Higgs boson. The signal strength corresponding to this excess, relative to that of the standard model Higgs boson, is 1.0 ± 0.5.
https://www.omnimaga.org/profile/?area=showposts;sa=messages;u=1341 | ### Show Posts
This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.
### Messages - XVicarious
1
##### Minecraft Discussion / Re: [NOW OPEN] Modded Minecraft Server
« on: March 21, 2014, 05:17:44 pm »
Oh yeah. I mean I never actually said this in the thread, but WE ARE OPEN. You can join the server with instructions in the first post.
2
##### Minecraft Discussion / Re: [NOW OPEN] Modded Minecraft Server
« on: March 14, 2014, 01:15:01 pm »
Done, imo_inx. You're whitelisted.
3
##### Minecraft Discussion / Re: Modded Minecraft Server
« on: March 12, 2014, 01:52:39 pm »
4
##### Minecraft Discussion / Re: Modded Minecraft Server
« on: March 12, 2014, 01:20:32 pm »
Now taking whitelists. I will add you at my own discretion, generally if you aren't like super new and have a good amount of post. I want some people I can trust.
5
##### Minecraft Discussion / Re: Modded Minecraft Server
« on: March 11, 2014, 02:23:04 pm »
I knew I forgot a NEI addition/plugin. I was trying to stay away from IC2, but you're right and rubber can come in handy from those trees.
This isn't my first pack; in fact, it is the 3rd iteration of my modpack.
EDIT: Also, could a mod move this to Minecraft Discussion? I just realized that there is such a subforum.
6
##### Minecraft Discussion / [NOW OPEN] Modded Minecraft Server
« on: March 06, 2014, 08:21:50 pm »
I'm getting a server soon and was setting up a modded minecraft server for my friends, and since I like you guys here I wanted to open it up to you as well.
If you have any suggestions for mods, feel free; I haven't finalized anything yet (or even gotten the server).
The modpack is available on the Technic Platform: http://www.technicpack.net/modpack/details/xvtestpack.159749
As of now, it is being hosted on my PC, but will move to a dedicated server eventually. So expect it to go down when windows decides it wants to install updates when I'm not there.
The server is WHITELIST ONLY and the IP is: gateway.xvicario.us
EDIT: If you don't know how to add a modpack to the Technic Launcher: http://youtu.be/bq0aZF70eAY?t=5m55s
7
##### Art / Re: [Request] Space sprites! 32x32
« on: January 04, 2014, 12:58:37 am »
So I'm still in need of a few things if anyone is interested!
8
##### Computer Projects and Ideas / Re: DiSGame -- A Game for Windows, Linux, and Android
« on: December 22, 2013, 12:19:21 am »
It is because of HTML5. There is something with it. I'll check out the scaling for it. On the native Windows and Android versions, it doesn't blur like that.
Also no shooting. You navigate through the puzzle.
9
##### Computer Projects and Ideas / Re: DiSGame -- A Game for Windows, Linux, and Android
« on: December 22, 2013, 12:04:48 am »
Updated first post!
10
##### Art / Re: [Request] Space sprites! 32x32
« on: December 21, 2013, 08:20:10 pm »
Thanks Hexatron! I'll still give you credit for them, simply because you deserve it.
11
##### Art / Re:
« on: December 21, 2013, 06:43:20 pm »
Hexatron those are awesome! If I do end up selling would you like compensation? You get credit for your work either way.
Also ben, it is a top-down puzzle maze type game. I mean yeah, they're going to be in color; I should have made that clear.
You can play an earlier demo here: http://xvicario.us/bin
And the more up-to-date Android demo here (I recommend 4.0 and up; haven't tried earlier): http://xvicario.us/files/DiS-debug.apk
For Android, the touch screen is split into 4 equal regions for movement:
Code: [Select]
up    right
left  down

I was skeptical about the controls at first, but I like them.
12
##### Art / [Request] Space sprites! 32x32
« on: December 21, 2013, 02:17:44 pm »
So I have some mockups for a game I'm working on and I need sprites for it, as the ones I made don't cut it. What am I looking for?
5 asteroids, all of different sizes, each within 32*32.
A spaceship shuttle thing; the shuttle must fit in a 24*24 space, plus the space for the flames/rocket-engine thing. I would like a variation in the flames for an animation, plus one without flames.
A monster, an eye monster to be specific. (Bad art, but done in a hurry.) The eyeball should fit in a 26*26 area, plus the area for the strands off of it.
Also, I do plan on selling this for about a dollar, so if the artist wants compensation that can be arranged.
Thanks
13
##### Other Programming Languages / Tilemapper acting odd.
« on: December 17, 2013, 11:22:19 pm »
I'm writing a game in HaXe, but really, expertise in any language is enough to figure this out. It's something in my logic.
My problem is that one level (01.level) loads correctly, but another (02.level) does not. Screenshots below.
02.level
01.level
The files
Spoiler For 01.level:
Code: [Select]
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
1,2,0,0,0,0,0,0,1,3,1,1,1,1,1,1,1,1,1,1
1,1,1,1,1,1,1,0,0,0,0,0,1,1,1,1,0,0,0,1
1,9,0,0,1,1,1,1,1,0,1,0,1,1,1,1,0,1,0,1
1,1,1,0,1,1,1,1,1,0,1,0,0,0,0,0,0,1,0,1
1,0,1,0,1,0,0,0,1,0,1,1,1,1,1,1,1,1,0,1
1,3,1,0,1,0,1,0,1,0,1,1,0,0,0,1,1,1,0,1
1,0,1,0,0,0,1,0,1,0,0,4,0,0,0,0,0,0,0,1
1,1,1,1,1,1,1,0,1,0,1,0,1,1,1,1,1,1,1,1
1,0,0,4,0,0,1,0,1,0,1,0,1,1,1,1,1,1,1,1
1,1,1,1,1,1,1,0,1,0,1,0,1,1,1,1,1,1,1,1
1,0,0,0,0,0,0,0,0,4,0,0,0,0,0,1,1,1,1,1
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
Spoiler For 02.level:
Code: [Select]
11111111111111111111
19111111000011000311
10100000011011010001
10101111131000011001
10001100001111111001
11111101100001111011
11160000000000001001
11111101101101111011
11000101101001111001
11010001101011111101
11011111111000000101
12011111111111110001
11111111111111111111
Now they look different, but it parses both correctly.
The code I have is the following
Code: [Select]
function drawMap() {
    var psudoi:Int = 0; // note: never used
    var psudox:Int = 0;
    var psudoy:Int = 0;
    for (i in 0...map.getMap().length) {
        if (i > 20*psudoy-1) { // crossed into the next 20-tile row
            psudoy = psudoy + 1;
            psudox = 0;
        }
        if (map.getMap()[i] == 1) { add(new Block(psudox*32, (psudoy-1)*32)); }
        if (map.getMap()[i] == 2) { add(new Player(psudox*32, (psudoy-1)*32)); }
        if (map.getMap()[i] == 3) { add(new Enemy(psudox*32, (psudoy-1)*32, 2)); }
        if (map.getMap()[i] == 4) { add(new Enemy(psudox*32, (psudoy-1)*32, 0)); }
        if (map.getMap()[i] == 6) { add(new Enemy(psudox*32, (psudoy-1)*32, 1)); }
        if (map.getMap()[i] == 9) { add(new EndPortal(psudox*32, (psudoy-1)*32)); }
        psudox = psudox + 1;
    }
}
It works for one, but not the other. Am I just getting lucky with one? I only have 2 of the 15 or 30 levels done, so I can't try another.
EDIT IM AN IDIOT
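(Side note for later readers: the psudox/psudoy bookkeeping above is just integer division and modulo on the tile index; a quick sketch in Python rather than the project's HaXe.)
Code: [Select]
# Equivalent tile-position computation for a 20-tile-wide map.
def tile_position(i, width=20, tile=32):
    col, row = i % width, i // width
    return (col * tile, row * tile)  # pixel (x, y) of tile i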
14
##### Miscellaneous / Re: Rubik's Cube
« on: November 25, 2013, 01:25:11 am »
I want to get the 10x10x10. Sadly I don't think it's in white yet...
15
##### Miscellaneous / Re: Rubik's Cube
« on: November 24, 2013, 09:13:13 pm »
Yup, but why single-colored cubes?
I haven't stickered them. I've had them for a while.
https://www.atmos-chem-phys.net/18/1437/2018/ | Journal topic
Atmos. Chem. Phys., 18, 1437–1456, 2018
https://doi.org/10.5194/acp-18-1437-2018
Research article | 01 Feb 2018
# Representation of solar tides in the stratosphere and lower mesosphere in state-of-the-art reanalyses and in satellite observations
Takatoshi Sakazaki (1,2,3), Masatomo Fujiwara (4), and Masato Shiotani (3)
• 1 International Pacific Research Center, University of Hawai'i at Manoa, Honolulu, HI 96822, USA
• 2 Japan Society for Promotion of Science Overseas Research Fellow, Tokyo, 102-0083, Japan
• 3 Research Institute for Sustainable Humanosphere, Kyoto University, Uji, 611-0011, Japan
• 4 Faculty of Environmental Earth Science, Hokkaido University, Sapporo, 060-0810, Japan
Correspondence: Takatoshi Sakazaki (tsakazak@hawaii.edu)
Abstract
Atmospheric solar tides in the stratosphere and the lower mesosphere are investigated using temperature data from five state-of-the-art reanalysis data sets (MERRA-2, MERRA, JRA-55, ERA-Interim, and CFSR) as well as TIMED SABER and Aura MLS satellite measurements. The main focus is on the period 2006–2012 during which the satellite observations are available for direct comparison with the reanalyses. Diurnal migrating tides, semidiurnal migrating tides, and nonmigrating tides are diagnosed. Overall the reanalyses agree reasonably well with each other and with the satellite observations for both migrating and nonmigrating components, including their vertical structure and the seasonality. However, the agreement among reanalyses is more pronounced in the lower stratosphere and relatively weaker in the upper stratosphere and mesosphere. A systematic difference between SABER and the reanalyses is found for diurnal migrating tides in the upper stratosphere and the lower mesosphere; specifically, the amplitude of trapped modes in reanalyses is significantly smaller than that in SABER, although such difference is less clear between MLS and the reanalyses. The interannual variability and the possibility of long-term changes in migrating tides are also examined using the reanalyses during 1980–2012. All the reanalyses agree in exhibiting a clear quasi-biennial oscillation (QBO) in the tides, but the most significant indications of long-term changes in the tides represented in the reanalyses are most plausibly explained by the evolution of the satellite observing systems during this period. The tides are also compared in the full reanalyses produced by the Japan Meteorological Agency (i.e., JRA-55) and in two parallel data sets from this agency: one (JRA-55C) that repeats the reanalysis procedure but without any satellite data assimilated and one (JRA-55AMIP) that is a free-running integration of the model constrained only by observed sea surface temperatures. Many aspects of the tides are closer in JRA-55C and JRA-55AMIP than these are to the full reanalysis JRA-55, demonstrating the importance of the assimilation of satellite data in representing the diurnal variability of the middle atmosphere. In contrast to the assimilated data sets, the free-running model has no QBO in equatorial stratospheric mean circulation and our results show that it displays no quasi-biennial variability in the tides.
1 Introduction
Atmospheric solar tides are global-scale inertia-gravity waves with periods that are integer fractions of a solar day (Chapman and Lindzen, 1970). They are primarily driven by diurnally varying diabatic heating, such as the absorption of solar radiation by tropospheric water and stratospheric ozone, and the latent heat release associated with tropical convection. The diurnal (S1) and semidiurnal (S2) variations around the globe can be decomposed into zonal harmonics with the “migrating” (Sun-synchronous) components for the S1 and S2 tides represented by westward-propagating wavenumber one and two, respectively. The remainder of the tidal zonal harmonics are “nonmigrating components” and are excited mainly by zonally asymmetric variations in (local time) heat sources or topography. Tides propagate vertically with amplitudes typically reaching a maximum in the mesosphere and lower thermosphere (MLT) region. There have been many studies of the tides in MLT as seen in ground-based measurements, satellite measurements, and numerical simulations (e.g., Lieberman, 1991; Hagan et al., 1995; Forbes and Wu, 2006; Zhang et al., 2006; Ward et al., 2010).
The amplitudes of tidal variations in the region from the troposphere to the lower mesosphere are generally smaller than in the MLT and so fewer studies have investigated the tides in this lower altitude region. Nevertheless, tidal variations in the lower atmosphere are worth investigating not only because they provide a “lower boundary condition” for tides in upper air but also because the tide and the resultant diurnal cycle in stratospheric ozone (Sakazaki et al., 2013a) need to be considered when constructing a homogenized data set of temperature and ozone from different satellites with different measurement local times (e.g., Zou et al., 2014; WMO, 2014; Nash and Saunders, 2015; Sakazaki et al., 2015c). Also it is now established that tides excited in the stratosphere can play a significant role in tropospheric meteorology, particularly in the diurnal cycle of tropical rainfall (Woolnough et al., 2004; Sakazaki et al., 2017).
The global pattern of stratospheric tides has been investigated based on temperature data from ground-based (radiosonde, lidar) and satellite measurements (e.g., Wallace and Hartranft, 1969; Keckhut et al., 1996; Leblanc et al., 1999; Xu et al., 2009; Mukhtarov et al., 2009; Huang et al., 2010; see also Sakazaki et al., 2012 and references therein). However, the available direct measurements have important limitations in temporal and spatial coverage. For example, SABER, which is so far the most commonly used data set for tidal studies, has difficulty detecting tides in regions poleward of ∼50° latitude. Meteorological reanalyses, which provide temporally and spatially homogeneous data over the globe, could be a useful data set for tidal studies. Using a fixed assimilation–forecast model system, the reanalyses provide best estimates of past atmospheric states in many dynamical variables. Currently available reanalyses from major centers provide estimates of atmospheric variables from the surface to the upper stratosphere or the lower mesosphere with time resolutions of 3 or 6 h.
Reanalysis of relatively high-frequency features such as the tides is particularly challenging in the region above the usual ∼10 hPa upper boundary of conventional balloon soundings. Above the middle stratosphere the reanalyses must rely on direct observations only from satellites (e.g., Fujiwara et al., 2017), which have limited local-time and space coverage, so the reanalysis representation of tides in this region may be particularly dependent on the tidal simulation in the forecast models employed.
Previous studies evaluating the representation of solar tides in the stratosphere and the lower mesosphere in reanalyses have mainly considered only S1 (24 h) tides. An early study by Swinbank et al. (1999) investigated the S1 migrating tide in the stratosphere as represented in the Goddard Earth Observing System (GEOS) version 2 analysis data (one of the predecessors of the MERRA and MERRA-2 reanalyses). They found that the GEOS-2 tidal amplitude in the free-running model is reduced by assimilating satellite data, particularly data from the stratospheric sounding unit (SSU). Sakazaki et al. (2012, hereafter referred to as S12) compared S1 migrating tides in the stratosphere by using data from TIMED SABER and six types of reanalysis data: MERRA, ERA-Interim, CFSR, JRA-25–JCDAS, NCEP1, and NCEP2. They found that the overall latitude–altitude structure and its seasonality were reproduced qualitatively by the three newer reanalyses (MERRA, ERA-Interim, and CFSR), but the amplitude in the reanalyses was underestimated by 30–50 % in the upper stratosphere and lower mesosphere.
Only a few studies have examined S2 tides in reanalyses (although the solar S2 surface pressure oscillation has been more extensively studied; e.g., Ray and Ponte, 2003; Saha et al., 2010; Díaz-Argandoña et al., 2016; Hamilton and Sakazaki, 2017). Hsu and Hoskins (1989) and Kohyama and Wallace (2014) derived S2 migrating tides in the stratosphere by using the ECMWF operational analysis and ERA-Interim, respectively. Li et al. (2015) used the CFSR reanalysis to examine the seasonality of S2 migrating tides. Kopp et al. (2015) compared the S2 tides derived from lidar measurements over Kühlungsborn (54° N, 12° E) with those from the MERRA reanalysis. Note that no intercomparison of the S2 tides as represented in different reanalyses has so far been performed. Note also that nonmigrating tides have not been examined with reanalysis data as far as the authors are aware.
Since the study of S12, several new reanalysis data sets have been released, including MERRA-2 and JRA-55. The present study is a follow-up to S12 including these new reanalyses and extending the analysis to the S2 migrating tides and to nonmigrating tides. The Japan Meteorological Agency has produced a unique resource in which the full state-of-the-art JRA-55 reanalysis is supplemented with two additional global data sets (collectively called the JRA-55 family): JRA-55C, which assimilates only conventional surface and balloon sounding observations, and JRA-55AMIP, which employs a free-running version of the forecast model. The comparison of JRA-55 family members enables us to investigate the effects of data assimilation on the representation of tides in the global data sets. In addition to SABER data, data from Aura MLS (only assimilated in MERRA-2) will also be analyzed as a measure of S1 migrating tides.
The remainder of the paper is organized as follows. Section 2 describes the reanalysis and observational data sets employed, while Sect. 3 describes our method to extract tidal components. Section 4 shows the results for S1 migrating tides, S2 migrating tides, and nonmigrating tides (mainly for S1). Section 5 examines the long-term changes in migrating tides as represented in the reanalyses over the last 3 decades, while major findings are summarized in Sect. 6. The work described in the present paper contributes to the SPARC Reanalysis Intercomparison Project (S-RIP), chap. 11: “Upper stratosphere and lower mesosphere” (see Fujiwara et al., 2017, for details about S-RIP).
2 Data sets
We analyze and compare data from global reanalyses and two satellite observational data sets: SABER (not assimilated in any reanalyses) and Aura MLS (only assimilated in MERRA-2). In Sect. 5 below we intercompare several reanalyses over a long period (1980–2012) that includes most of the modern era in which satellite radiances have been assimilated. However, our detailed evaluation will focus on the 7-year period 2006–2012 during which it seems that the satellite data sources used in the global assimilations were fairly stable (e.g., Kawatani et al., 2016; Fujiwara et al., 2017) and during which we have two other satellite data sets (SABER and Aura MLS) not included in most of the assimilations and thus providing independent estimates of the diurnal variability of temperature in the stratosphere and mesosphere. Note that all the reanalysis data sets employed extend over the full 1980–2012 period with the exception of CFSR, whose integration with the original CDAS-T382 system ends in December 2010 (see Fujiwara et al., 2017).
## 2.1 Reanalyses
We compare results in satellite data sets with those from seven different global gridded data sets produced at major meteorological centers. Five of these are standard state-of-the-art global atmospheric reanalyses: (1) MERRA-2 (Gelaro et al., 2017), (2) MERRA (Rienecker et al., 2011), (3) JRA-55 (Kobayashi et al., 2015), (4) ERA-Interim (Dee et al., 2011), and (5) CFSR (Saha et al., 2010). We will not consider the JRA-25, ERA-40, and NCEP1/2 reanalyses, which are the predecessors of JRA-55, ERA-Interim, and CFSR, respectively. S12 showed that the global structure and seasonality of the S1 migrating tide represented in JRA-25 and NCEP1/2 were less consistent with available observations than the newer reanalyses data sets.
In addition to the five full reanalyses, we also analyze the tides in two other gridded products produced by the Japan Meteorological Agency that are parallel to their full JRA-55 reanalysis. One (JRA-55C) repeats the reanalysis procedure but assimilates only conventional surface and upper-air data, not satellite data, and the other (JRA-55AMIP) is a free-running integration of the forecast model constrained only by observed sea surface temperatures (Kobayashi et al., 2014). Here we refer to the three JMA data sets (JRA-55, JRA-55C, and JRA-55AMIP) as the JRA-55 family. The comparison of these family members will help us examine the effects of data assimilation on the representation of the solar tides.
Each reanalysis system is comprehensively described by Fujiwara et al. (2017) and thus only key aspects are summarized here (see also Table 1). Data are available on a 3-hourly basis at 00:00, 03:00, 06:00, 09:00, 12:00, 15:00, 18:00, and 21:00 UTC for MERRA and MERRA-2 and on a 6-hourly basis at 00:00, 06:00, 12:00, and 18:00 UTC for the remaining data sets. Data provided on the output pressure levels are used for MERRA and MERRA-2, CFSR, and JRA-55AMIP, with the number of levels being 42 (up to 0.1 hPa), 37 (up to 1 hPa), and 37 (up to 1 hPa), respectively. For JRA-55, JRA-55C, and ERA-Interim, we interpolated data provided on model levels onto the 42 MERRA and MERRA-2 pressure levels up to 0.1 hPa.
Table 1. List of reanalyses used in this study. Also included is the JRA-55AMIP data set, which represents the results of a free-running version of the forecast model used to produce the JRA-55 and JRA-55C reanalyses.
## 2.2 Satellite measurements
### 2.2.1 SABER
The SABER instrument is onboard the TIMED satellite, which was launched on 7 December 2001 (Russell et al., 1999). It measures CO2 infrared limb radiance to retrieve the kinematic temperature profiles between 20 and 120 km (Remsberg et al., 2008). Data are continuously obtained between 53° S and 53° N. The TIMED satellite is not in a Sun-synchronous orbit, and the local time of SABER measurements changes by about 12 min per day, meaning that a full diurnal cycle (24 h in local time) is covered over a period of 60 days using ascending and descending nodes. Note that data are not acquired by SABER near local noon. The vertical resolution of the measurements is ∼2 km.
In our study, version 2.0 temperature data on pressure levels are analyzed for January 2006 through December 2012 (S12 analyzed version 1.07 data). As described by Sakazaki et al. (2015b), before further analysis, data were averaged in bins of 15° longitude, 5° latitude, and 2 km in log-pressure vertical coordinates for each day and for each ascending and descending node. We emphasize that SABER data were not assimilated into any of the reanalyses used in this study.
### 2.2.2 Aura MLS
The MLS instrument is onboard the Aura satellite, which was launched in July 2004. It uses the microwave limb sounding technique to observe atmospheric dynamical parameters and chemical constituents (Waters et al., 2006). The Aura orbit is Sun synchronous at 705 km of altitude with 98° of inclination, and the Equator-crossing local time is 13:45 for the ascending nodes. The MLS fields of view look in the forward direction (an almost north–south direction) and vertically scan the limb of the atmosphere. In the tropics (10° S–10° N), the actual measurement local times were 13:45 and 01:45 on average for the ascending and descending nodes, respectively.
MLS temperature is retrieved from bands near O2 spectral lines for the region between 261 and 0.001 hPa pressure levels with a vertical resolution of 3–6 km. In this study, version 4.2 and 3.3 data are utilized after a data screening performed based on the criteria shown by Livesey et al. (2017) and Livesey et al. (2011), respectively. The MLS measurements were assimilated into the MERRA-2 data set at pressure levels less than 5 hPa (after 2004; McCarty et al., 2016, their Table 1; Gelaro et al., 2017), but not into any of the other reanalyses that we employed.
3 Analysis methods
## 3.1 Migrating and nonmigrating tides
In this study, the (1) diurnal (S1) migrating tide, (2) semidiurnal (S2) migrating tide, and (3) nonmigrating tides are extracted and diagnosed individually. The analysis procedure to extract these three components basically follows the method proposed by Sakazaki et al. (2015b) as briefly explained below.
First, all diurnal variations, which includes both migrating and nonmigrating components, are calculated based on universal time (UT) as follows. For SABER data, since 24 local times are covered by 60-day measurements by ascending and descending nodes, a time series of the 60-day running mean that can be regarded as the daily mean is calculated for each latitude–longitude bin; this is then subtracted from the original temperatures for each day, for each bin, and for each descending and ascending node to produce the anomaly from the daily mean. These anomaly temperatures are binned and averaged into hourly UT time bins to obtain 1-hourly diurnal variations. For reanalyses, 3- or 6-hourly diurnal variations in UT are extracted at each grid point through composite analysis based on UT after the subtraction of the daily mean; in fact, we downloaded and analyzed the diurnal monthly mean (monthly mean for each UTC snapshot) data provided by each reanalysis center (e.g., for JRA-55, diurnal monthly mean data are the monthly averages for 00:00, 06:00, 12:00, and 18:00 UTC). Clearly the 6-hourly data (ERA-Interim, JRA-55, and CFSR) cannot resolve S2 at each grid point, but the “migrating component” of S2 can be extracted by using data at grid points on the same latitude belt, as explained in the following (Ray and Ponte, 2003; Díaz-Argandoña et al., 2016; Hamilton and Sakazaki, 2017).
Next, by averaging data at the same local time (LT) for each latitude band, migrating tides that are a function of LT are calculated; for example, for 6-hourly reanalyses, data at 00:00 LT are the average of data points at 00:00 UT (0 E), 06:00 UT (90 E), 12:00 UT (180 E), and 18:00 UT (270 E). Then, the harmonic fitting is performed for the diurnal variations in LT to extract the migrating S1 and S2 components. Finally, nonmigrating tides are calculated by subtracting migrating tides (for reanalyses, the S1 plus S2 migrating tides are used for actual calculation) from the total tidal variations.
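As an illustration of this step (a minimal sketch; the array names and the least-squares formulation are ours, not the paper's), the S1 and S2 migrating harmonics can be fitted to the composite anomalies on one latitude circle as follows; the fit works even with 6-hourly data because different longitudes sample different local times:

```python
# Minimal sketch: fit S1/S2 migrating harmonics to composite temperature
# anomalies temp[i_ut, i_lon] (K) on one latitude circle. Migrating
# components depend only on local time LT = UT + lon/15.
import numpy as np

def migrating_tides(temp, ut_hours, lon_deg):
    lt = (np.asarray(ut_hours)[:, None] + np.asarray(lon_deg)[None, :] / 15.0) % 24.0
    y = temp.ravel()
    cols = [np.ones_like(y)]                       # daily-mean offset
    for s in (1, 2):                               # diurnal, semidiurnal
        arg = 2.0 * np.pi * s * lt.ravel() / 24.0
        cols += [np.cos(arg), np.sin(arg)]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    out = {}
    for k, s in enumerate((1, 2)):
        a, b = beta[1 + 2 * k], beta[2 + 2 * k]
        amp = np.hypot(a, b)                       # amplitude (K)
        phase = (np.arctan2(b, a) * 24.0 / (2.0 * np.pi * s)) % (24.0 / s)
        out[f"S{s}"] = (amp, phase)                # phase = LT of maximum
    return out
```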
For nonmigrating tides, the zonal wavenumber decomposition is also applied for the S1 component, following the method proposed by Dai and Wang (1999). Before the analysis, the tidal component ($X$) at any longitude ($\lambda$), latitude ($\theta$), and vertical pressure level ($z$) was decomposed into the symmetric ($X_S$) and antisymmetric ($X_A$) components with respect to the Equator as
$$X(\lambda,\theta,z) = X_S(\lambda,\theta,z) + X_A(\lambda,\theta,z), \tag{1}$$

where

$$X_S(\lambda,\theta,z) \equiv \frac{1}{2}\big\{X(\lambda,\theta,z) + X(\lambda,-\theta,z)\big\}, \tag{2}$$

$$X_A(\lambda,\theta,z) \equiv \frac{1}{2}\big\{X(\lambda,\theta,z) - X(\lambda,-\theta,z)\big\}. \tag{3}$$
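In code, Eqs. (1)–(3) amount to one flip of the latitude axis (a sketch assuming a latitude grid symmetric about the Equator; not from the paper):

```python
import numpy as np

def sym_antisym(X):
    """Split X[..., ilat] into equatorially symmetric and antisymmetric
    parts, Eqs. (1)-(3), assuming the last axis is a latitude grid that
    is symmetric about the Equator."""
    X_flip = X[..., ::-1]                 # X(lambda, -theta, z)
    return 0.5 * (X + X_flip), 0.5 * (X - X_flip)
```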
Figure 1. Latitude–altitude distribution of amplitude for the annual mean diurnal (S1) migrating tide in temperature, as derived from (a) SABER, (b) JRA-55, (c) JRA-55C, (d) JRA-55AMIP, (e) MERRA-2, (f) MERRA, (g) ERA-Interim, and (h) CFSR.
## 3.2 Difference between ascending and descending nodes for MLS (A–D difference)
The difference between the ascending (A) and descending (D) nodes of MLS temperature measurements is calculated in the tropics (10° S–10° N) as a measure of tidal amplitude (hereafter referred to as the A–D difference). Since the measurement times are fixed and are 12 h apart in local time (i.e., 01:45 and 13:45), the zonal mean A–D difference can be caused only by odd harmonics of the migrating tides (the 24 h, 8 h components, etc.). In addition, because the S1 component is predominant over the higher-order harmonics (e.g., the ratio of the 8 h to 24 h components in SABER data was <30 % for most parts of the tropical stratosphere and the lower mesosphere), the A–D difference can be mostly attributed to the S1 migrating tide. The same quantity is calculated for the other data sets as well (i.e., SABER and the reanalyses) by using the migrating tides (i.e., diurnal variations in LT) in each data set (for the reanalyses, migrating tides reconstructed from the S1, S2, and terdiurnal harmonic components are considered).
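Explicitly, writing the S1 migrating tide as $\overline{T}\cos(\omega(t-\overline{\alpha}))$, the two measurement local times being exactly 12 h apart means that even harmonics such as S2 cancel in the difference, and the S1 contribution is (a one-line check):
$$T(13{:}45)-T(01{:}45)=\overline{T}\big[\cos\big(\omega(13.75-\overline{\alpha})\big)-\cos\big(\omega(1.75-\overline{\alpha})\big)\big]=2\,\overline{T}\cos\big(\omega(13.75-\overline{\alpha})\big),$$
since the two arguments differ by $\omega\cdot 12\,\mathrm{h}=\pi$.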
Figure 2. As for Fig. 1 but for phase (LT at which the S1 temperature variation is maximum).
Figure 3. Vertical profile of the (a, c) amplitude and (b, d) phase of the annual mean diurnal (S1) migrating tide averaged for (a, b) 15° S–15° N and (c, d) 30–45° N, derived from different data sets. Horizontal bars show 95 % confidence levels in a t test for SABER results. For the statistical test, the error is defined as the 95 % confidence level for the daily anomaly (composite value) at each hourly universal time; this quantity has been propagated to the error of amplitude and phase for diurnal migrating tides following error propagation theory.
4 Results
## 4.1 Diurnal (S1) migrating tide
Figures 1 and 2 show the latitude–altitude distribution of amplitude and phase, respectively, for annual mean S1 migrating temperature tides computed from SABER data and from the various reanalyses during 2006–2012. Figure 3 compares the vertical profile of amplitude and phase averaged over 15° S–15° N (tropics) and 30–45° N (midlatitudes). All data sets show that the tidal amplitude increases with altitude in the tropics (up to ∼4 K in the lower mesosphere in the SABER data, somewhat less in the various reanalyses). The amplitude has maxima in the upper stratosphere (45–50 km or 1 hPa) at ∼3.5 K for SABER (again somewhat less in the reanalyses) in the extratropics of both hemispheres. Over the tropics, the phase shows a downward progression (except for SABER at 40–55 km; see below for further discussion). At extratropical latitudes, on the other hand, the phase is almost constant around 18:00 LT except for the region above ∼55 km (0.3 hPa).
Figure 4. Vertical profile of the (a, c) amplitude and (b, d) phase of (a, b) the first propagating Hough mode and (c, d) the first trapped Hough mode for the annual mean diurnal (S1) migrating tide.
There is quite good agreement among the full reanalysis data sets below ∼45 km, while the spread among the reanalysis results becomes larger above the upper stratosphere. It is inferred that the reanalyses may be well constrained by satellite measurements up to the upper stratosphere while being somewhat more model dependent in the lower mesosphere.
Apart from the differences among the reanalyses, we see a systematic difference between SABER and all the reanalyses both for amplitude and phase above 40 km. Notably, (1) the amplitude in SABER is ∼1 K larger than that in the reanalyses (see, e.g., the extratropical maximum at ∼45 km in Fig. 3c) and (2) the phase is locally constant (or shows an upward progression) in the tropics for SABER at 40–55 km, while it shows a continuous downward progression for most reanalyses. The present SABER results, including the extratropical maxima at 45–50 km and the phase stagnation at 40–50 km, are quantitatively consistent with previous studies using SABER, even though earlier investigators used a different procedure to extract tides (Mukhtarov et al., 2009; Xu et al., 2009; Huang et al., 2010).
In Fig. 3 it is apparent that the JRA-55C and JRA-55AMIP results for the S1 migrating tide stand out from those obtained with the full reanalyses. Notably, the JRA-55C and JRA-55AMIP results are close together and differ substantially from the JRA-55 results. The amplitude in JRA-55C and JRA-55AMIP is no larger than that in JRA-55 for the entire stratosphere and is substantially smaller in some regions. This suggests that, contrary to the finding by Swinbank et al. (1999), the assimilation of satellite measurements does not act to damp the tidal amplitude in JRA-55, at least for the recent period (Swinbank et al. analyzed data in the early 1990s).
We analyzed these results further with some guidance from the expectations of so-called classical tidal theory, which solves for the linear response of the global atmosphere to monochromatic heating ignoring mean winds and horizontal temperature gradients in the mean state. The classical tidal theory equations are separable in the zonal, vertical, and meridional directions, and conventionally the solutions are written as the product of zonal harmonics and meridional modes known as Hough functions (e.g., Chapman and Lindzen, 1970). As shown by Sakazaki et al. (2013b), the S1 migrating tide in the stratosphere can be reasonably well represented by a superposition of only a few (∼4) Hough modes, each of which has its own vertical propagation characteristics. For the annual mean tidal temperatures, which are almost symmetric about the Equator (Fig. 1), even the two symmetric Hough modes (the (1, 1) mode and the (1, −1) mode shown in Fig. A1a) are enough to represent the overall structures. That is, the S1 migrating tidal temperatures ($T_{S_1\text{-mig}}$) determined from SABER and each of the reanalyses are approximated as
$$T_{S_1\text{-mig}}(\theta,z,t) = \overline{T}(\theta,z)\cos\big(\omega(t-\overline{\alpha}(\theta,z))\big) \cong \sum_{n=1}^{2} \tilde{T}_n^1(z)\,\Theta_n^1(\theta)\cos\big(\omega(t-\tilde{\alpha}_n^1(z))\big), \tag{4}$$
where $t$ is local time (hr); $\overline{T}$ and $\overline{\alpha}$ are the amplitude (K) and phase (LT), respectively, at each latitude and pressure level; $\omega = 2\pi/24\ \mathrm{hr}^{-1}$; and $\tilde{T}_n^1$ and $\tilde{\alpha}_n^1$ are the amplitude (K) and phase (LT) of the $n$th Hough mode ($\Theta_n^1$; in this case, $n=1$ is the (1, 1) mode and $n=2$ is the (1, −1) mode). Note that the equatorially trapped (1, 1) mode is associated with vertical phase propagation, while the (1, −1) mode represents disturbances we expect to be vertically trapped.
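The coefficients $\tilde{T}_n^1$ and $\tilde{\alpha}_n^1$ can be obtained by a least-squares projection of the complex tidal field onto the Hough functions at each level (a sketch; the paper does not spell out its exact projection, so the area weighting here is an assumption of ours):

```python
import numpy as np

def hough_fit(T_complex, hough, lat_deg):
    """Project a complex tidal profile T(theta) (amplitude * exp(i*phase*omega))
    onto Hough modes hough[n, itheta] by weighted least squares; the modulus
    and argument of each coefficient give the mode amplitude and phase."""
    sw = np.sqrt(np.cos(np.deg2rad(lat_deg)))     # assumed area weighting
    A = (hough * sw).T                            # design matrix (nlat, nmodes)
    coef, *_ = np.linalg.lstsq(A, T_complex * sw, rcond=None)
    return coef
```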
Figure 4 shows the vertical profile of amplitude and phase of the two modes (i.e., $\tilde{T}_n^1(z)$ and $\tilde{\alpha}_n^1(z)$ of Eq. (4), respectively). For the propagating (1, 1) mode, the amplitude grows exponentially with increasing altitude and the phase shows a downward progression. The vertical wavelength is ∼25 km, which is quite consistent with the prediction by classical tidal theory (∼28 km; Chapman and Lindzen, 1970). For the trapped (1, −1) mode, the amplitude is localized around the peak ozone heating region (∼50 km) and the phase is almost constant with altitude at around 18:00 LT. Notably, the systematic difference between SABER and the reanalyses seen in Fig. 3 is projected mostly onto the amplitude of the trapped mode (Fig. 4c); the amplitude of the trapped mode in the reanalyses is 1.5–2.5 K, which is significantly smaller than that in SABER (3–4 K). For the propagating mode, by contrast, there is no clear systematic difference between SABER and the reanalyses (Fig. 4a and b). Because the magnitude of the trapped mode is smaller in the reanalyses than in SABER, the amplitude is small at all latitudes and the phase can propagate vertically in the tropics (i.e., in SABER, the phase is almost constant at 40–55 km as affected by the strong trapped mode; Fig. 3). The magnitude of the trapped mode in SABER is consistent with the analysis by Mukhtarov et al. (2009; ∼4 K peak both in March and July). These findings imply two possible reasons for the SABER–reanalyses difference: (1) (if SABER is "true") the ozone heating, which is largely responsible for the trapped mode in the upper stratosphere, may be underestimated in the reanalyses, or (2) (if the reanalyses are "true") SABER might have a local-time-dependent bias with a latitudinal structure similar to that of the trapped mode (i.e., almost constant with latitude).
Figure 5. Vertical profile of the difference between ascending and descending profiles of temperature in MLS measurements compared to the 13:45 minus 01:45 LT temperature difference sampled from the SABER data and various reanalysis data sets (average between 10° S and 10° N). For MLS, solid and dashed curves show the results from v4.2 and v3.3, respectively. Horizontal bars show 95 % confidence levels with a t test (only for MLS (v4.2) and SABER). Results are annual means for the 7-year period 2006–2012.
To supplement the above discussion concerning the S1 migrating tide, we examined the A–D difference (i.e., 13:45 LT minus 01:45 LT) in MLS temperature measurements. As mentioned in Sect. 2, the zonal mean A–D difference is expected to result from S1 migrating tides (if there is no "instrumental" bias between A and D profiles). The vertical profile of the 13:45 LT minus 01:45 LT difference averaged over 10° S–10° N is shown in Fig. 5 for the MLS determinations (both for versions 4.2 and 3.3) and for the SABER data and each of the reanalyses. The A–D difference in MLS and all the reanalyses, but not SABER, basically changes its sign vertically, with its absolute value increasing with altitude. This feature means that the amplitude increases with altitude and the phase shows a vertical progression. The profile by SABER, by contrast, is mostly positive over the entire upper stratospheric region; this corresponds to the fact that the phase from SABER shows little vertical progression at 40–55 km (Fig. 3b).
Figure 6. Month–altitude distribution of the amplitude of diurnal (S1) migrating tides averaged between 15° S and 15° N from (a) SABER, (b) JRA-55, (c) JRA-55C, (d) JRA-55AMIP, (e) MERRA-2, (f) MERRA, (g) ERA-Interim, and (h) CFSR.
It may be worth comparing the present results with previous findings, especially for the upper stratosphere and the lower mesosphere. Wu et al. (1998) analyzed temperature measurements from the MLS onboard the UARS. In the tropics (15° S–15° N), they showed that the amplitude of S1 migrating tides is ∼1 K at 1 hPa (see their Fig. 2); this is between our SABER (∼1.5 K) and reanalysis (∼0.3 K) results (Fig. 3a). Swinbank et al. (1999) also analyzed MLS measurements (in 1992 only) and showed that the extratropical maximum in the upper stratosphere is 3–3.5 K in January; our analysis showed that it is >4 K for SABER and ∼3 K in the reanalyses in January (for the 2006–2012 mean; not shown). Keckhut et al. (1996) reported that UARS MLS results are quite consistent with lidar measurements over a station in southern France (at 44° N). This latitude is close to the location of the amplitude maxima in the extratropical upper stratosphere that we find for the S1 migrating tide (Fig. 1). Huang et al. (2010) pointed out that the local upward phase progression between 35 and 60 km in SABER (Fig. 3b) is not observed in the measurements from CRISTA during 5–11 November 1994 (Oberheide et al., 2000); the CRISTA results look similar to the present tidal determinations in the various reanalysis data sets. To summarize, it seems that there is enough uncertainty concerning the S1 migrating tide represented in the SABER data that further investigation may be needed before attributing the systematic differences we found between SABER and the reanalyses considered here.
The seasonal variation in the amplitude of S1 migrating tides averaged in the tropics (15° S–15° N) is shown in Fig. 6. Monthly tides during 2006–2012 are calculated both for SABER and the reanalyses, but for SABER, the results of each month are derived from 60-day data (e.g., the results in January are from 15 December through 15 February). All data sets show that the amplitude maximizes in February–March and again in July–August–September in the stratosphere and the lower mesosphere; this semiannual variation is consistent with previous studies (e.g., Mukhtarov et al., 2009; Huang et al., 2010; S12). Such seasonality has been attributed to the antisymmetric Hough-mode strengthening due to the meridional gradient of the zonal-mean zonal wind in the tropics (McLandress, 2002; Sakazaki et al., 2013b). In the extratropics, on the other hand, all data sets show that the amplitude maximizes in local summer in the stratosphere (not shown), presumably due to the enhanced ozone heating in the summer hemisphere.
Figure 7: As for Fig. 1 but for the semidiurnal (S2) migrating tide.
Figure 8: As for Fig. 2 but for the semidiurnal (S2) migrating tide.
Figure 9: As for Fig. 3 but for the semidiurnal (S2) migrating tide.
## 4.2 Semidiurnal (S2) migrating tide
Figures 7 and 8 show the latitude–altitude distribution of amplitude and phase, respectively, for annual mean S2 migrating tides in temperature. Figure 9 compares the vertical profiles of S2 amplitude and phase averaged over 15 S–15 N and 30–45 N. The amplitude is largest in the tropics, showing a local maximum at around 45 km (up to ∼1.2K), i.e., close to the location of the ozone heating maximum. In the tropics, the phase shows a slight upward progression below ∼40km (Fig. 9b), indicating that the energy propagates downward from the ozone heating layer. Above ∼40km, the phase is almost constant, at least in the tropics. The long vertical wavelength and the significant downward energy propagation from the stratosphere are consistent with classical tidal theory for S2 migrating tides (Chapman and Lindzen, 1970).
Figure 10: As for Fig. 4 but for the gravest symmetric Hough mode (2, 2) of the semidiurnal (S2) migrating tide.
The vertical profiles of amplitude and phase are in good agreement among the data sets, particularly below ∼45km, except that ERA-Interim shows a smaller amplitude in the tropics (Fig. 9a). Above ∼45km, the phase diverges among the data sets (Fig. 9b and d). In contrast to the S1 migrating tide, there is no systematic difference between SABER and the reanalyses in the upper stratosphere and the lower mesosphere, but the amplitude in the reanalyses is systematically smaller than that in SABER between 20 and 30 km of altitude (Fig. 9a). Note that the S2 tides in the stratosphere have not previously been examined in detail except with some ground-based lidar measurements (e.g., Keckhut et al., 1996; Leblanc et al., 1999; Kopp et al., 2015); our study demonstrates their meridional–vertical structure for the first time.
Figure 10 shows the vertical profiles of amplitude and phase for the temperature projected onto the (2, 2) mode ($\Theta_2^2$), i.e., the gravest symmetric S2 Hough mode (see Fig. A1b for the structure of this mode). That is, the S2 migrating tide is approximated by $\tilde{T}_2^2(z)\,\Theta_2^2(\theta)\cos\!\left(2\omega\left(t-\tilde{\alpha}_2^2(z)\right)\right)$, and the vertical profiles of $\tilde{T}_2^2(z)$ and $\tilde{\alpha}_2^2(z)$ are shown. Note that classical tidal theory predicts that the S2 tidal response should consist only of vertically propagating modes, in contrast to S1. The profiles of this mode are similar to the observed profiles over the tropics (Fig. 9a and b), meaning that this mode dominates the S2 migrating tide over the tropics. All data sets show that the amplitude maximizes in the upper stratosphere, although the amplitude in ERA-Interim is again smaller than in the other data sets. The phase is in good agreement among the data sets below 45 km, but it shows a difference above 45 km. As for S1 migrating tides, the variance among reanalyses becomes large in the upper stratosphere and the lower mesosphere even for such a large-scale structure.
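For readers who want to reproduce the projection step numerically, here is a minimal sketch (our own illustration; the function and variable names are assumptions, and the meridional structure of the Hough mode itself must be supplied, e.g., precomputed from classical tidal theory):

```python
import numpy as np

def project_onto_hough(lat_deg, tide, hough):
    """Project a complex tidal field tide(lat) onto one Hough mode.

    lat_deg : 1-D array of latitudes in degrees
    tide    : complex tidal coefficient at each latitude
              (encoding amplitude and phase)
    hough   : real meridional structure of the target Hough mode,
              evaluated on the same latitude grid

    Returns the complex expansion coefficient; its modulus and argument
    give amplitude and phase profiles of the kind shown in Figs. 4 and 10.
    """
    mu = np.cos(np.deg2rad(lat_deg))  # area weighting ~ cos(latitude)
    numerator = np.trapz(tide * hough * mu, lat_deg)
    denominator = np.trapz(hough * hough * mu, lat_deg)
    return numerator / denominator
```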
Figure 11: As for Fig. 6 but for the semidiurnal (S2) migrating tide.
Figure 11 shows the month–altitude distribution of amplitude for S2 migrating tides. Although the SABER results are noisy, all data sets basically show that the amplitude maximizes twice in December–January–February and in June–July–August in the upper stratosphere and the lower mesosphere. In the lower and middle stratosphere, by contrast, the amplitude minimizes during June–July–August; notably, this is similar to the seasonality of surface pressure tides (e.g., Díaz-Argandoña et al., 2016; Hamilton and Sakazaki, 2017). Such seasonality in the stratosphere was reported earlier by Li et al. (2015) using the CFSR reanalysis.
Figure 12: Longitude–altitude distribution of annual mean nonmigrating temperature tides at 00:00 UTC averaged between 10 S and 10 N, as derived from (a) SABER, (b) JRA-55, (c) JRA-55C, (d) JRA-55AMIP, (e) MERRA-2, (f) MERRA, (g) ERA-Interim, and (h) CFSR.
## 4.3 Nonmigrating tides
Figure 12 shows the longitude–altitude distribution of annual mean nonmigrating temperature tides at 00:00 UTC averaged between 10 S and 10 N. It is clear that, through the upper troposphere to the lower stratosphere, the wave signals are strongest around South America (80–40 W) and Africa (10–40 E) and second strongest around the Maritime Continent (90–150 E); both westward- and eastward-tilting waves emanate from these locations. This horizontal pattern indicates that nonmigrating tides can be interpreted as the superposition of gravity waves from these geographically localized sources, which is consistent with the findings of Sakazaki et al. (2015b, their Sect. 4), who analyzed data from a high-resolution GCM as well as SABER and COSMIC GPS radio occultation measurements. We also see that westward-tilting (eastward-tilting) waves correspond to the westward-propagating (eastward-propagating) waves that are clear in the western (eastern) hemisphere below ∼40km. Such asymmetry may be explained by two factors. First, the major excitation regions are confined to around −60 to +20 E (see also Sakazaki et al., 2015b); because waves are subject to dissipation during horizontal propagation, westward waves are likely dominant to the west of −60 E and eastward waves are dominant to the east of +20 E. Secondly, westward signals are clearer between −60 and +20 E, even though in this region both westward waves (from Africa) and eastward waves (from South America) might be equally important. This asymmetry is likely because westward waves (mainly wavenumber 5) are more efficiently excited by tropospheric heating than eastward waves (mainly wavenumber 3; see also Fig. 15) due to the difference in their typical vertical wavelengths (e.g., Williams and Avery, 1996).
Figure 13: Longitudinal variation of annual mean nonmigrating tides at 00:00 UTC averaged between 10 S and 10 N at (a) 0.4 hPa, (b) 1 hPa, (c) 3 hPa, (d) 10 hPa, and (e) 30 hPa. Vertical bars show the 95 % confidence level estimated by a t test.
Figure 13 compares in detail the longitudinal variations of nonmigrating tides at several pressure levels. We see that the longitudinal variations agree well among the data sets. There is no systematic difference between SABER and the reanalyses. The biggest outliers are JRA-55C and JRA-55AMIP, which seem to display somewhat larger amplitudes than the full reanalyses. It may be worth mentioning that Sakazaki et al. (2015b) also noted that the amplitude in their model (a free-running model) was significantly larger than that for SABER and COSMIC.
Figure 14: Latitude–altitude distribution of annual mean zonally uniform nonmigrating tides (zonal wavenumber 0 component) at 00:00 UTC, as derived from (a) SABER, (b) JRA-55, (c) JRA-55C, (d) JRA-55AMIP, (e) MERRA-2, (f) MERRA, (g) ERA-Interim, and (h) CFSR. Contour interval is 0.2 K.
Figure 15: Amplitudes for each zonal wavenumber component of annual mean diurnal (S1) nonmigrating tides for the region between 10 S and 10 N, at (a) 0.4 hPa, (b) 1 hPa, (c) 3 hPa, (d) 10 hPa, and (e) 30 hPa. The top and bottom half of each panel show the results of symmetric and antisymmetric components, respectively. Positive and negative wavenumbers are for the eastward- and westward-traveling waves, respectively. The S1 migrating tide (westward wavenumber 1) is not shown.
We note that averaging data between 10 S and 10 N as was done in Figs. 12 and 13 only extracts the symmetric components with respect to the Equator. Sakazaki et al. (2015a) showed that antisymmetric components near the Equator (i.e., as revealed by taking the difference between the 10 S–0 average and the 0–10 N average) have a clear zonally uniform component (zonal wavenumber 0), as do the gravity wave patterns emanating from the continents. Figure 14 shows our present results for the latitude–altitude structure of the annual mean, zonal wavenumber 0 component (zonal mean temperature anomaly from the daily mean) at 00:00 UTC. As found by Kuroda and Chiba (1995) and Sakazaki et al. (2015a), the antisymmetric structure with respect to the Equator is dominant, with a vertical wavelength of ∼15km and confined mainly to within about 15 of the Equator.
Figure 15 shows the zonal wavenumber dependence for the annual mean S1 (24 h) harmonic of nonmigrating tides for each symmetric and antisymmetric component (see Sect. 3.1). All data sets show that zonal wavenumber 0 (so-called D0; particularly for antisymmetric components as seen in Fig. 14), westward zonal wavenumbers 5 and 2 (DW5 and DW2), and eastward zonal wavenumber 3 (DE3) are dominant, which is consistent with previous studies (Forbes and Wu, 2006; Zhang et al., 2006; Sakazaki et al., 2015b). Particularly, DW5 in the stratosphere corresponds to the clear westward-tilting waves in Fig. 12 (Sakazaki et al., 2015b). Although the dominant wavenumbers agree among the data sets, their magnitudes display some differences. A marked difference is seen for DE3; the MERRA and MERRA-2 results are close to SABER, but the other reanalyses have larger amplitudes than SABER above the middle stratosphere (pressures less than 3 hPa).
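The decomposition behind Fig. 15 can be sketched numerically as follows (a simplified illustration under our own assumptions — a regular longitude grid and a composite 24 h cycle; the mapping of the coefficient sign to eastward vs. westward propagation depends on the FFT conventions and should be calibrated against a known case such as the migrating tide):

```python
import numpy as np

def s1_zonal_spectrum(T):
    """Zonal-wavenumber coefficients of the diurnal (24 h) harmonic.

    T : array of shape (24, nlon) -- composite diurnal temperature cycle
        (one row per UTC hour), averaged over a latitude band.
    Returns nlon complex coefficients ordered as in np.fft.fftfreq;
    their moduli give amplitudes per signed zonal wavenumber.
    """
    hours = np.arange(24)
    # complex 24 h harmonic at each longitude
    diurnal = 2.0 * (T * np.exp(-2j * np.pi * hours[:, None] / 24.0)).mean(axis=0)
    # an FFT in longitude then separates the zonal wavenumbers
    return np.fft.fft(diurnal) / diurnal.size

def symmetric_antisymmetric(T_north, T_south):
    """Split fields from two bands (e.g., the 0-10 N and 10 S-0 averages
    used in the text) into equatorially symmetric and antisymmetric parts."""
    return 0.5 * (T_north + T_south), 0.5 * (T_north - T_south)
```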
Figure 16: (a–c) Long-term changes in the amplitude of the diurnal (S1) migrating tide averaged between 10 S and 10 N after applying a 12-month moving average at (a) 0.4 hPa, (b) 3 hPa, and (c) 10 hPa, as derived from reanalyses. (d) Two QBO indices defined as the deseasonalized (12-month moving average), normalized zonal wind over Singapore at (solid gray curve) 10 hPa and (dashed gray curve) 30 hPa.
Figure 17: As for Fig. 16a–c but for the semidiurnal (S2) migrating tide.
Figure 18: Time–altitude distributions of SD among the four reanalyses (MERRA, MERRA-2, ERA-Interim, and JRA-55) for the (a) amplitude of the diurnal (S1) migrating tide and (b) the amplitude of the semidiurnal (S2) migrating tide averaged over 10 S–10 N.
Sakazaki et al. (2015b) in their study of nonmigrating tides found that the westward-propagating waves from the continents penetrate deeply into the mesosphere during equinox but are dissipated near the stratopause around the solstice season, likely due to filtering by the zonal wind associated with the stratospheric semiannual oscillation (SAO). In the present project we confirmed that such features are discernible in all reanalysis data sets (not shown). For the zonally uniform pattern discussed above (i.e., the D0 tide), Sakazaki et al. (2015a) showed that it is most clear in June–July–August; this was also confirmed in all data sets in the present study (not shown).
# 5 Interannual variations and long-term trends in reanalysis representation of tides
This section examines the interannual variations and long-term changes in S1 and S2 migrating tides as represented in the various reanalyses over the extended 1980–2012 period. Figure 16a–c show the monthly amplitude of the S1 migrating tide averaged over 10 S–10 N at selected pressure levels in the stratosphere and the lower mesosphere (0.4, 3, and 10 hPa). The seasonal variations have been removed by applying a 12-month running mean. First, all reanalyses show similar interannual variations with a peak-to-peak difference of up to 0.5 K. The time series of two quasi-biennial oscillation (QBO) indices, the zonal wind at 10 and 30 hPa over Singapore after the deseasonalization (12-month running mean) and normalization, are shown in Fig. 16d. It is clear that the main interannual variations in tides are synchronized with the QBO cycle in stratospheric zonal wind. The modulation of S1 tides by the tropical stratospheric QBO in mean winds has been reported in satellite measurements from the stratosphere through the MLT (e.g., Burrage et al., 1995; Mukhtarov et al., 2009). The QBO in zonal wind itself is represented quite well in the reanalyses considered here (including JRA-55C; Kobayashi et al., 2014; Kawatani et al., 2016). Note that the free-running JRA-55AMIP model does not generate a QBO in the tropical stratospheric mean circulation (Kobayashi et al., 2014), and correspondingly there is no QBO apparent in the S1 tidal amplitudes (Fig. 16b and c).
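For reference, the 12-month smoothing and the QBO-index normalization are simple to reproduce; here is a minimal sketch (our own code, assuming a roughly centered 12-month window — the exact edge handling is not specified in this section):

```python
import numpy as np

def running_mean_12(x):
    """12-month running mean of a monthly series; edge months become NaN."""
    out = np.full(len(x), np.nan)
    smooth = np.convolve(x, np.ones(12) / 12.0, mode="valid")
    out[5:5 + len(smooth)] = smooth  # roughly centered for an even window
    return out

def qbo_index(u_singapore):
    """Deseasonalized (12-month running mean), normalized zonal wind,
    as in the two QBO indices of Fig. 16d."""
    s = running_mean_12(u_singapore)
    return (s - np.nanmean(s)) / np.nanstd(s)
```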
The difference in tidal amplitudes among the reanalyses depends on vertical level, and it changes through the full period. In the lower mesosphere at 0.4 hPa (Fig. 16a), the amplitudes for MERRA and MERRA-2 are larger than those for ERA-Interim and JRA-55. This pattern continues for the 3 decades, except that the MERRA-2 amplitude became smaller after ∼2004, likely corresponding to the assimilation of MLS temperatures starting in 2004. Since no other measurements are assimilated in the lower mesosphere, the reanalyses in this altitude region are presumably strongly dependent on the tides simulated in the forecast model used to produce each reanalysis. Figure 18a shows the variance in the amplitude of S1 migrating tides averaged over 10 S–10 N among the four reanalyses, MERRA-2, MERRA, JRA-55, and ERA-Interim (CFSR is not included because its CDAS-T382 integration ended in December 2010), plotted as a function of altitude and time. In the lower mesosphere the variance among the reanalysis data sets is large (∼1K) and fairly steady throughout the entire record.
In the upper stratosphere at 3 hPa, it is clear that the variance among the reanalyses was much larger before 2000 than after 2000 (Figs. 16b and 18a). Notably, the amplitude in JRA-55 increases abruptly in ∼2000 to approach the results of other reanalyses, while the JRA-55C does not show any systematic changes even after ∼2000. This clearly indicates that the satellite observations, which are assimilated for JRA-55 but not for JRA-55C, are responsible for the drastic improvement around 2000. Actually, the years around 2000 correspond to the timing of the TOVS-to-ATOVS transition. ATOVS has the AMSU-A/B, which has more channels in the upper stratosphere with narrower weighting functions compared to the SSU on TOVS, so that the representation of the stratospheric dynamical fields significantly improved at this time (see Fujiwara et al., 2017, their Sect. 5.2 for more details). For JRA-55, SSU was assimilated until ∼2000, while AMSU started to be assimilated in ∼1999 (both SSU and AMSU were assimilated during 1999–2000; see Fig. 8 of Fujiwara et al., 2017). Artificial jumps around 2000 have been reported for other features of the circulation in the reanalysis data sets, such as climatological temperature (Long et al., 2017) and the zonal wind in the tropical stratosphere (Kawatani et al., 2016).
Finally, in the middle to lower stratosphere at 10 hPa, the variance is relatively small for the entire period compared to that at higher vertical levels. As at the 3 hPa level, an abrupt decrease in variance is observed after ∼2000 (Figs. 16c and 18a). In the 1990s, quite large amplitudes are sometimes observed in ERA-Interim.
Figure 17 shows the monthly amplitude of the S2 migrating tide averaged over 10 S–10 N, while the variance in this quantity among the four reanalyses is shown in Fig. 18b. The QBO-related variation observed for S1 migrating tides (Fig. 16) is not clear for the S2 tide. An abrupt change due to the TOVS-to-ATOVS transition around 2000 does not seem clear for the S2 tidal amplitudes, except possibly for the CFSR data set, which is a strong outlier at 10 hPa before ∼1999 and somewhat more consistent with the other reanalyses after ∼1999 (Fig. 17c; for CFSR, SSU was assimilated until ∼1998; see Fig. 8 of Fujiwara et al., 2017). However, other strange interannual variations are observed, particularly before 2000. Notably, the ERA-Interim shows a “saw-tooth” pattern of changes at 0.4 and 3 hPa until ∼2000. This is likely caused by the orbital drift of TOVS and the transition between different NOAA satellites carrying the TOVS (e.g., Zou et al., 2014). For example, TOVS was onboard NOAA-9 between 1985 and 1988 and was onboard NOAA-11 between 1988 and 1994; the orbital drift of NOAA-9 (NOAA-11) likely corresponds to the gradual increase in S2 amplitude over 1985–1988 (1988–1994), and the transition between the two satellites likely corresponds to the abrupt reduction seen in the ERA-Interim representation of S2 amplitude in 1988 at the 0.4 hPa level (Fig. 17a).
# 6 Summary and discussion
This study investigated the solar tides seen in temperature in the stratosphere and the lower mesosphere using state-of-the-art reanalysis data sets included in the S-RIP intercomparison project and compared them with independent SABER measurements during 2006–2012. Diurnal (S1) migrating tides, semidiurnal (S2) migrating tides, and nonmigrating tides are extracted and discussed individually. Overall, the reanalysis results are found to be quite consistent with those from SABER in a qualitative sense, in terms of the three-dimensional structure, dominant wavenumbers (for nonmigrating tides), and seasonality. The spread among the reanalyses increases with altitude and is fairly large in the lower mesosphere, where few actual observations are assimilated, leaving the reanalysis fields dependent on the tides simulated in the forecast model used in each reanalysis procedure.
A marked systematic difference between SABER and the reanalyses is seen in the amplitude and phase profiles for S1 migrating tides above 40 km. SA12 noticed this issue using MERRA, ERA-Interim, CFSR, and JRA-25, but this study confirmed such a difference for the more recent reanalyses (MERRA-2 and JRA-55) as well. Swinbank et al. (1999) found that the assimilation of SSU measurements damps the representation of the tidal amplitude in a reanalysis in the upper stratosphere. The comparison of the JRA-55 family of data sets in our study, however, suggests that the assimilation does not degrade the tides, at least in the present day (i.e., the 2006–2012 period) and in the JRA-55 system. A Hough-mode decomposition further showed that such SABER–reanalysis differences can be attributed primarily to the amplitude of the trapped (1, 1) mode response in the stratosphere. This could be explained if either the stratospheric ozone heating is underestimated in the forecast models used to produce the reanalyses or the SABER temperatures have some systematic local time biases. We also compared the vertical profile of ascending–descending temperature differences from Aura MLS measurements, which is a good indicator of the magnitude of S1 migrating tides, to reanalysis temperatures sampled at the same local times. Our results suggest that the S1 tides in the reanalyses are closer to those derived from Aura MLS than to those from SABER observations. An intercomparison with available ground-based measurements may be helpful to resolve this issue.
The evolution of tidal amplitudes derived from the reanalyses over the extended 1980–2012 period shows a clear QBO signal, except for the JRA-55AMIP data, which have no QBO in the equatorial stratospheric mean circulation. On the other hand, any long-term changes appear to be primarily artificial, driven by several changes in the input data employed. The largest impact is caused by the TOVS-to-ATOVS transition and the changes in NOAA satellites carrying TOVS. The tides as represented in the MERRA-2 reanalysis are also affected by the incorporation of MLS data starting in 2004. How much influence these changes have on tides depends on each reanalysis system and also on the tidal frequency (i.e., S1 or S2). This finding indicates that intercomparison results depend on the analysis period, and artificial discontinuities in the assimilated data stream make it quite difficult to detect natural long-term trends of the tides in the middle atmosphere.
Some current global atmospheric models cover the region from the surface up to the MLT (often referred to as “whole-atmosphere models”). Such models are sometimes integrated with several dynamical variables nudged toward reanalysis data in order to reproduce the realistic day-to-day variations in the upper atmosphere that are often connected to tidal variations (e.g., Jin et al., 2012; Pedatella et al., 2014). In this respect, tides in reanalysis data provide an important “lower boundary condition” for simulations of upper-air dynamics. Tides in reanalyses are also used for correcting the diurnal anomaly or drift seen in Sun-synchronous satellite measurements. Zou et al. (2014) corrected the local time drift in SSU temperature measurements by using temperature tides in MERRA. The present evaluation of stratospheric tides should thus be helpful for estimating the uncertainty associated with using reanalyses for such applications.
Data availability
Diurnal monthly reanalysis data sets are publicly available as follows.
1. JRA-55: through the DIAS at http://search.diasjp.net/en/dataset/JRA55
2. JRA-55C: through the DIAS at http://search.diasjp.net/en/dataset/JRA55_C
3. JRA-55AMIP: through NCAR RDA at https://doi.org/10.5065/D6T72FHN
4. CFSR: through NCAR RDA at https://doi.org/10.5065/D6DN438J
SABER data can be downloaded from the ftp site at ftp://saber.gats-inc.com/custom/Temp_O3/. MLS data can be downloaded from https://disc.gsfc.nasa.gov/datasets/ML2T_V003/summary. Please see Sects. 2.1 and 2.2 for details.
Appendix A: Major abbreviations and terms
AMSU: Advanced Microwave Sounding Unit
Aura: a satellite in the EOS A-Train satellite constellation
ATOVS: Advanced TIROS Operational Vertical Sounder
CFSR: Climate Forecast System Reanalysis of NCEP
COSMIC: Constellation Observing System for Meteorology, Ionosphere, and Climate
CRISTA: CRyogenic Infrared Spectrometers and Telescopes for the Atmosphere
ECMWF: European Centre for Medium-Range Weather Forecasts
ERA-Interim: ECMWF interim reanalysis
JCDAS: JMA Climate Data Assimilation System
JRA-25: Japanese 25-year Reanalysis
JRA-55: Japanese 55-year Reanalysis
JRA-55AMIP: Japanese 55-year Reanalysis based on AMIP-type simulations
JRA-55C: Japanese 55-year Reanalysis assimilating Conventional observations only
MERRA: Modern-Era Retrospective analysis for Research and Applications
MLS: Microwave Limb Sounder
NCAR: National Center for Atmospheric Research
NCEP: National Centers for Environmental Prediction of NOAA
NOAA: National Oceanic and Atmospheric Administration
SABER: Sounding of the Atmosphere using Broadband Emission Radiometry
SPARC: Stratosphere–troposphere Processes And their Role in Climate
SSU: Stratospheric Sounding Unit
TIMED: Thermosphere–Ionosphere–Mesosphere Energetics and Dynamics
UARS: Upper Atmosphere Research Satellite
Figure A1: Meridional structure of Hough modes for (a) diurnal (S1) migrating temperature tides (westward-propagating, zonal wavenumber 1, diurnal component) and (b) semidiurnal (S2) migrating temperature tides (westward-propagating, zonal wavenumber 2, semidiurnal component). (a) The leading gravest (solid) and trapped (dashed) modes and (b) the gravest symmetric mode.
Competing interests
The authors declare that they have no conflict of interest.
Special issue statement
This article is part of the special issue “The SPARC Reanalysis Intercomparison Project (S-RIP) (ACP/ESSD inter-journal SI)”. It is not associated with a conference.
Acknowledgements
We are grateful to Kevin Hamilton for valuable comments and suggestions on the original manuscript. We thank Yoko Naito for processing the original MLS version 3.3 data and Chiaki Kobayashi and Yayoi Harada for helpful discussions on the JRA-55 results. The comments by three anonymous reviewers were also helpful in improving the paper. We also thank NASA's GMAO, ECMWF, JMA, and NCEP for providing reanalysis data sets. This study was in part supported by the Japan Society for the Promotion of Science (JSPS) through Grants-in-Aid for Scientific Research (15K17761 and 16K05548). Figures were produced using the GFD DENNOU Library. The DIAS data set is archived and provided under the framework of the Data Integration and Analysis System (DIAS) funded by the Japan Ministry of Education, Culture, Sports, Science and Technology (MEXT).
Edited by: Gabriele Stiller
Reviewed by: three anonymous referees
References
Burrage, M. D., Vincent, R. A., Mayr, H. G., Skinner, W. R., Arnold, N. F., and Hays, P. B.: Long-term variability in the equatorial middle atmosphere zonal wind, J. Geophys. Res., 101, 12847–12854, https://doi.org/10.1029/96JD00575, 1995.
Chapman, S. and Lindzen, R. S.: Atmospheric Tides, D. Reidel, Dordrecht, 200 pp., 1970.
Dai, A. and Wang, J.: Diurnal and semidiurnal tides in global surface pressure fields, J. Atmos. Sci., 56, 3874–3891, 1999.
Dee, D. P., Uppala, S. M., Simmons, A. J., Berrisford, P., Poli, P., Kobayashi, S., Andrae, U., Balmaseda, M. A., Balsamo, G., Bauer, P., Bechtold, P., Beljaars, A. C. M., van de Berg, L., Bidlot, J., Bormann, N., Delsol, C., Dragani, R., Fuentes, M., Geer, A. J., Haimberger, L., Healy, S. B., Hersbach, H., Hólm, E. V., Isaksen, L., Kållberg, P., Köhler, M., Matricardi, M., McNally, A. P., Monge-Sanz, B. M., Morcrette, J.-J., Park, B.-K., Peubey, C., de Rosnay, P., Tavolato, C., Thépaut, J.-N., and Vitart, F.: The ERA-Interim reanalysis: configuration and performance of the data assimilation system, Q. J. Roy. Meteor. Soc., 137, 553–597, https://doi.org/10.1002/qj.828, 2011.
Díaz-Argandoña, J., Ezcurra, A., Senz, J., Ibarra-Berástegi, J. G., and Errasti, I.: Climatology and temporal evolution of the atmospheric semidiurnal tide in present-day reanalyses, J. Geophys. Res.-Atmos., 121, 4614–4626, https://doi.org/10.1002/2015JD024513, 2016.
Forbes, J. M. and Wu, D.: Solar tides as revealed by measurements of mesosphere temperature by the MLS experiment on UARS, J. Atmos. Sci., 63, 1776–1797, 2006.
Fujiwara, M., Wright, J. S., Manney, G. L., Gray, L. J., Anstey, J., Birner, T., Davis, S., Gerber, E. P., Harvey, V. L., Hegglin, M. I., Homeyer, C. R., Knox, J. A., Krüger, K., Lambert, A., Long, C. S., Martineau, P., Molod, A., Monge-Sanz, B. M., Santee, M. L., Tegtmeier, S., Chabrillat, S., Tan, D. G. H., Jackson, D. R., Polavarapu, S., Compo, G. P., Dragani, R., Ebisuzaki, W., Harada, Y., Kobayashi, C., McCarty, W., Onogi, K., Pawson, S., Simmons, A., Wargan, K., Whitaker, J. S., and Zou, C.-Z.: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems, Atmos. Chem. Phys., 17, 1417–1452, https://doi.org/10.5194/acp-17-1417-2017, 2017.
Gelaro, R., McCarty, W., Suarez, M., Todling, R., Molod, A., Takacs, L., Randles, C., Darmenov, A., Bosilovich, M., Reichle, R., Wargan, K., Coy, L., Cullather, R., Draper, C., Akella, S., Buchard, V., Conaty, A., Da Silva, A., Gu, W., Kim, G., Koster, R., Lucchesi, R., Merkova, D., Nielsen, J., Partyka, G., Pawson, S., Putman, W., Rienecker, M., Schubert, S., Sienkiewicz, M., and Zhao, B.: The Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2), J. Climate, 30, 5419–5454, 2017.
Global Modeling and Assimilation Office (GMAO): MERRA-2 instU_3d_asm_Np: 3d, diurnal, Instantaneous, Pressure-Level, Assimilation, Assimilated Meteorological Fields V5.12.4, Goddard Earth Sciences Data and Information Services Center (GES DISC), Greenbelt, MD, USA, https://doi.org/10.5067/6EGRBNEBMIYS, 2015.
Hagan, M. E., Forbes, J. M., and Vial, F.: On modeling migrating solar tides, Geophys. Res. Lett., 22, 893–896, 1995.
Hamilton, K. and Sakazaki, T.: A note on apparent solar time and the seasonal cycle of atmospheric solar tides, Q. J. Roy. Meteor. Soc., 143, 2310–2314, https://doi.org/10.1002/qj.3076, 2017.
Hsu, H.-H. and Hoskins, B. J.: Tidal fluctuations as seen in ECMWF data, Q. J. Roy. Meteor. Soc., 115, 247–264, https://doi.org/10.1002/qj.49711548603, 1989.
Huang, F. T., McPeters, R. D., Bhartia, P. K., Mayr, H. G., Frith, S. M., Russell III, J. M., and Mlynczak, M. G.: Temperature diurnal variations (migrating tides) in the stratosphere and lower mesosphere based on measurements from SABER on TIMED, J. Geophys. Res., 115, D16121, https://doi.org/10.1029/2009JD013698, 2010.
Jin, H., Miyoshi, Y., Pancheva, D., Mukhtarov, P., Fujiwara, H., and Shinagawa, H.: Response of migrating tides to the stratospheric sudden warming in 2009 and their effects on the ionosphere studied by a whole atmosphere-ionosphere model GAIA with COSMIC and TIMED∕SABER observations, J. Geophys. Res., 117, A10323, https://doi.org/10.1029/2012JA017650, 2012.
Kawatani, Y., Hamilton, K., Miyazaki, K., Fujiwara, M., and Anstey, J. A.: Representation of the tropical stratospheric zonal wind in global atmospheric reanalyses, Atmos. Chem. Phys., 16, 6681–6699, https://doi.org/10.5194/acp-16-6681-2016, 2016.
Keckhut, P., Gelman, M. E., Wild, J. D., Tissot, F., Miller, A. J., Hauchecorne, A., Chanin, M.-L., Fishbein, E. F., Gille, J., Russel III, J. M., and Taylor, F. W.: Semidiurnal and diurnal temperature tides (30–55 km): climatology and effect on UARS-LIDAR data comparisons, J. Geophys. Res., 101, 10299–10310, https://doi.org/10.1029/96JD00344, 1996.
Kobayashi, C., Endo, H., Ota, Y., Kobayashi, S., Onoda, H., Harada, Y., Onogi, K., and Kamahori, H.: Preliminary results of the JRA-55C, an atmospheric reanalysis assimilating conventional observations only, Scientific Online Letters on the Atmosphere, 10, 78–82, https://doi.org/10.2151/sola.2014-016, 2014.
Kobayashi, S., Ota, Y., Harada, Y., Ebita, A., Moriya, M., Onoda, H., Onogi, K., Kamahori, H., Kobayashi, C., Endo, H., Miyaoka, K., and Takahashi, K.: The JRA-55 reanalysis: general specifications and basic characteristics, J. Meteorol. Soc. Jpn., 93, 5–48, https://doi.org/10.2151/jmsj.2015-001, 2015.
Kohyama, T. and Wallace, J. M.: Lunar gravitational atmospheric tide, surface to 50 km in a global, gridded data set, Geophys. Res. Lett., 41, 8660–8665, https://doi.org/10.1002/2014GL060818, 2014.
Kopp, M., Gerding, M., Höffner, J., and Lübken, F.-J.: Tidal signatures in temperatures derived from daylight lidar soundings above Kühlungsborn (54 N, 12 E), J. Atmos. Sol.-Terr. Phy., 127, 37–50, 2015.
Kuroda, Y. and Chiba, M.: Creation of a zonally symmetric tide due to the interference of the migrating diurnal tide and a quasi-stationary wave, J. Meteorol. Soc. Jpn., 73, 737–746, 1995.
Leblanc, T., McDermid, I. S., and Ortland, D. A.: Lidar observations of the middle atmospheric thermal tides and comparison with the High Resolution Doppler Imager and Global Scale Wave Model: 2. October observations at Mauna Loa (19.5 N), J. Geophys. Res., 104, 11931–11938, https://doi.org/10.1029/1999JD900008, 1999.
Li, X., Wan, W., Yu, Y., and Ren, Z.: Yearly variations of the stratospheric tides seen in the CFSR reanalysis data, Adv. Space Res., 56, 1822–1832, 2015.
Lieberman, R. S.: Nonmigrating diurnal tides in the equatorial middle atmosphere, J. Atmos. Sci., 48, 1112–1123, 1991.
Livesey, N. J., Read, W. G., Froidevaux, L., Lambert, A., Manney, G. L., Pumphrey, H. C., Santee, M. L., Schwartz, M. J., Wang, S., Cofield R. E., Cuddy, D. T., Fuller, R. A., Jarnot, R. F., Jiang, J. H., Knosp, B. W., Stek, P. C., Wagner, P. A., and Wu, D. L.: Version 3.3 and 3.4 Level 2 data quality and description document, Tech. Rep. JPL D-33509, NASA Jet Propulsion Laboratory, Pasadena, 156 pp., 2011.
Livesey, N. J., Read, W. G., Wagner, P. A., Froidevaux, L., Lambert, A., Manney, G. L., Valle, L. F. M., Pumphrey, H. C., Santee, M. L., Schwartz, M. J., Wang, S., Fuller, R. A., Jarnot, R. F., Knosp, B. W., and Martinez, E.: Version 4.2x Level 2 data quality and description document, Tech. Rep. JPL D-33509 Rev. C, NASA Jet Propulsion Laboratory, Pasadena, 163 pp., 2017.
Long, C. S., Fujiwara, M., Davis, S., Mitchell, D. M., and Wright, C. J.: Climatology and interannual variability of dynamic variables in multiple reanalyses evaluated by the SPARC Reanalysis Intercomparison Project (S-RIP), Atmos. Chem. Phys., 17, 14593–14629, https://doi.org/10.5194/acp-17-14593-2017, 2017.
McCarty, W., Coy, L., Gelaro, R., Huang, A., Merkova, D., Smith, E. B., Sienkiewicz, M., and Wargan, K.: MERRA-2 input observations: summary and assessment, Technical Report Series on Global Modeling and Data Assimilation, Goddard Space Flight Center, Greenbelt, 46, 51 pp., 2016.
McLandress, C.: The seasonal variation of the propagating diurnal tide in the mesosphere and lower thermosphere, Part II: The role of tidal heating and zonal mean winds, J. Atmos. Sci., 59, 907–922, 2002.
Mukhtarov, P., Pancheva, D., and Andonov, B.: Global structure and seasonal and interannual variability of the migrating diurnal tide seen in the SABER∕TIMED temperatures between 20 and 120 km, J. Geophys. Res., 114, A02309, https://doi.org/10.1029/2008JA013759, 2009.
Nash, J. and Saunders, R.: A review of stratospheric sounding unit radiance observations for climate trends and reanalyses, Q. J. Roy. Meteor. Soc., 141, 2103–2113, https://doi.org/10.1002/qj.2505, 2015.
Oberheide, J., Hagan, M. E., Ward, W. E., Riese, M., and Offermann, D.: Modeling the diurnal tide for the Cryogenic Infrared Spectrometers and Telescopes for the Atmosphere (CRISTA) 1 time period, J. Geophys. Res., 105, 24917–24929, https://doi.org/10.1029/2000JA000047, 2000.
Pedatella, N. M., Fuller-Rowell, T., Wang, H., Jin, H., Miyoshi, Y., Fujiwara, H., Shinagawa, H., Liu, H.-L., Sassi, F., Schmidt, H., Matthias, V., and Goncharenko, L.: The neutral dynamics during the 2009 sudden stratosphere warming simulated by different whole atmosphere models, J. Geophys. Res.-Space, 119, 1306–1324, https://doi.org/10.1002/2013JA019421, 2014.
Ray, R. D. and Ponte, R. M.: Barometric tides from ECMWF operational analyses, Ann. Geophys., 21, 1897–1910, https://doi.org/10.5194/angeo-21-1897-2003, 2003.
Remsberg, E. E., Marshall, B. T., Garcia-Comas, M., Krueger, D., Lingenfelser, G. S., Martin-Torres, J., Mlynczak, M. G., Russell III, J. M., Smith, A. K., Zhao, Y., Brown, C., Gordley, L. L., Lopez-Gonzales, M. J., Lopez-Puertas, M., She, C.-Y., Taylor, M. J., and Thompson, R. E.: Assessment of the quality of the Version 1.07 temperature-versus-pressure profiles of the middle atmosphere from TIMED∕SABER, J. Geophys. Res., 113, D17101, https://doi.org/10.1029/2008JD010013, 2008.
Rienecker, M. M., Suarez, M. J., Gelaro, R., Todling, R., Bacmeister, J., Liu, E., Bosilovich, M. G., Schubert, S. D., Takacs, L., Kim, G.-K., Bloom, S., Chen, J., Collins, D., Conaty, A., da Silva, A., Gu, W., Joiner, J., Koster, R. D., Lucchesi, R., Molod, A., Owens, T., Pawson, S., Pegion, P., Redder, C. R., Reichle, R., Robertson, F. R., Ruddick, A. G., Sienkiewicz, M., and Woollen, J.: MERRA: NASA's modern-era retrospective analysis for research and applications, J. Climate, 24, 3624–3648, https://doi.org/10.1175/JCLI-D-11-00015.1, 2011.
Russell III, J. M., Mlynczak, M. G., Gordley, L. L., Tansock, J., and Esplin, R.: An overview of the SABER experiment and preliminary calibration results, Proc. SPIE, 3756, 277–288, 1999.
Saha, S., Moorthi, S., Pan, H.-L., Wu, X., Wang, J., Nadiga, S., Tripp, P., Kistler, R., Woollen, J., Behringer, D., Liu, H., Stokes, D., Grumbine, R., Gayno, G., Wang, J., Hou, Y.-T., Chuang, H.-Y., Juang, H.-M. H., Sela, J., Iredell, M., Treadon, R., Kleist, D., van Delst, P., Keyser, D., Derber, J., Ek, M., Meng, J., Wei, H., Yang, R., Lord, S., van den Dool, H., Kumar, A., Wang, W., Long, C., Chelliah, M., Xue, Y., Huang, B., Schemm, J.-K., Ebisuzaki, W., Lin, R., Xie, P., Chen, M., Zhou, S., Higgins, W., Zou, C.-Z., Liu, Q., Chen, Y., Han, Y., Cucurull, L., Reynolds, R. W., Rutledge, G., and Goldberg, M.: The NCEP climate forecast system reanalysis, B. Am. Meteorol. Soc., 91, 1015–1057, https://doi.org/10.1175/2010BAMS3001.1, 2010.
Sakazaki, T., Fujiwara, M., Zhang, X., Hagan, M. E., and Forbes, J. M.: Diurnal tides from the troposphere to the lower mesosphere as deduced from TIMED∕SABER satellite data and six global reanalysis data sets, J. Geophys. Res., 117, D13108, https://doi.org/10.1029/2011JD017117, 2012.
Sakazaki, T., Fujiwara, M., Mitsuda, C., Imai, K., Manago, N., Naito, Y., Nakamura, T., Akiyoshi, H., Kinnison, D., Sano, T., Suzuki, M., and Shiotani, M.: Diurnal ozone variations in the stratosphere revealed in observations from the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES) on board the International Space Station (ISS), J. Geophys. Res., 118, 2991–3006, https://doi.org/10.1002/jgrd.50220, 2013a.
Sakazaki, T., Fujiwara, M., and Zhang, X.: Interpretation of the vertical structure and seasonal variation of the diurnal migrating tide from the troposphere to the lower mesosphere, J. Atmos. Sol.-Terr. Phy., 105, 66–80, https://doi.org/10.1016/j.jastp.2013.07.010, 2013b.
Sakazaki, T., Sasaki, T., Shiotani, M., Tomikawa, Y., and Kinnison, D.: Zonally uniform tidal oscillations in the tropical stratosphere, Geophys. Res. Lett., 42, 9553–9560, https://doi.org/10.1002/2015GL066054, 2015a.
Sakazaki, T., Sato, K., Kawatani, Y., and Watanabe, S.: Three-dimensional structures of tropical nonmigrating tides in a high-vertical-resolution general circulation model, J. Geophys. Res.-Atmos., 120, 1759–1775, https://doi.org/10.1002/2014JD022464, 2015b.
Sakazaki, T., Shiotani, M., Suzuki, M., Kinnison, D., Zawodny, J. M., McHugh, M., and Walker, K. A.: Sunset–sunrise difference in solar occultation ozone measurements (SAGE II, HALOE, and ACE–FTS) and its relationship to tidal vertical winds, Atmos. Chem. Phys., 15, 829–843, https://doi.org/10.5194/acp-15-829-2015, 2015c.
Sakazaki, T., Hamilton, K., Zhang, C., and Wang, Y.: Is there a stratospheric pacemaker controlling the daily cycle of tropical rainfall?, Geophys. Res. Lett., 44, 1998–2006, https://doi.org/10.1002/2017GL072549, 2017.
Swinbank, R., Orris, R. L., and Wu, D. L.: Stratospheric tides and data assimilation, J. Geophys. Res., 104, 16929–16941, https://doi.org/10.1029/1999JD900108, 1999.
Wallace, J. M. and Hartranft, F. R.: Diurnal wind variations, surface to 30 kilometers, Mon. Weather Rev., 97, 446–455, 1969.
Ward, W. E., Oberheide, J., Goncharenko, L. P., Nakamura, T., Hoffmann, P., Singer, W., Chang, L. C., Du, J., Wang, D.-Y., Batista, P., Clemesha, B., Manson, A. H., Riggin, D. M., She, C.-Y., Tsuda, T., and Yuan, T.: On the consistency of model, ground-based, and satellite observations of tidal signatures: initial results from the CAWSES tidal campaigns, J. Geophys. Res., 115, D07107, https://doi.org/10.1029/2009JD012593, 2010.
Waters, J. W., Froidevaux, L., Harwood, R. S., Jarnot, R. F., Pickett, H. M., Read, W. G., Siegel, P. H., Cofield, R. E., Filipiak, M. J., Flower, D. A., Holden, J. R., Lau, G. K., Livesey, N. J., Manney, G. L., Pumphrey, H. C., Santee, M. L., Wu, D. L., Cuddy, D. T., Lay, R. R., Loo, M. S., Perun, V. S., Schwartz, M. J., Stek, P. C., Thurstans, R. P., Boyles, M. A., Chandra, K. M., Chavez, M. C., Chen, G.-S., Chudasama, B. V., Dodge, R., Fuller, R. A., Girard, M. A., Jiang, J. H., Jiang, Y., Knosp, B. W., LaBelle, R. C., Lam, J. C., Lee, K. A., Miller, D., Oswald, J. E., Patel, N. C., Pukala, D. M., Quintero, O., Scaff, D. M., Van Snyder, W., Tope, M. C., Wagner, P. A., and Walch, M. J.: The Earth Observing System Microwave Limb Sounder (EOS MLS) on the Aura satellite, IEEE T. Geosci. Remote, 44, 1075–1092, 2006.
Williams, C. R. and Avery, S. K.: Diurnal nonmigrating tidal oscillations forced by deep convective clouds, J. Geophys. Res., 101, 4079–4091, https://doi.org/10.1029/95JD03007, 1996.
Woolnough, S. J., Slingo, J. M., and Hoskins, B. J.: The diurnal cycle of convection and atmospheric tides in an aqua planet GCM, J. Atmos. Sci., 61, 2559–2573, 2004.
World Meteorological Organization (WMO): Scientific Assessment of Ozone Depletion: 2014, World Meteorological Organization, Global Ozone Research and Monitoring Project – Report No. 55, Geneva, Switzerland, 416 pp., 2014.
Wu, D. L., McLandress, C., Read, W. G., Waters, J. W., and Froidevaux, L.: Equatorial diurnal variations observed in UARS microwave limb sounder temperature during 1991–1994 and simulated by the Canadian Middle Atmosphere Model, J. Geophys. Res., 103, 8909–8917, 1998.
Xu, J., Smith, A. K., Liu, H.-L., Yuan, W., Wu, Q., Jiang, G., Mlynczak, M. G., Russell III, J. M., and Franke, S. J.: Seasonal and quasi-biennial variations in the migrating diurnal tide observed by Thermosphere, Ionosphere, Mesosphere, Energetics and Dynamics (TIMED), J. Geophys. Res., 114, D13107, https://doi.org/10.1029/2008JD011298, 2009.
Zhang, X., Forbes, J. M., Hagan, M. E., Russell III, J. M., Palo, S. E., Mertens, C. J., and Mlynczak, M. G.: Monthly tidal temperatures 20–120 km from TIMED∕SABER, J. Geophys. Res., 111, A10S08, https://doi.org/10.1029/2005JA011504, 2006.
Zou, C.-Z., Qian, H., Wang, W., Wang, L., and Long, C.: Recalibration and merging of SSU observations for stratospheric temperature trend studies, J. Geophys. Res.-Atmos., 119, 13180–13205, https://doi.org/10.1002/2014JD021603, 2014. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 18, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8164836764335632, "perplexity": 6211.434266034187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388012.14/warc/CC-MAIN-20200525063708-20200525093708-00486.warc.gz"} |
https://www.math.princeton.edu/events/quantum-oracle-classification-case-group-structure-2016-12-12t210012 | # Quantum Oracle Classification: The Case of Group Structure
Mark Zhandry, Princeton University
Fine Hall 214
The Quantum Oracle Classification (QOC) problem is to classify a function, given only quantum black-box access, into one of several classes without necessarily determining the entire function. Generally, QOC captures a very wide range of problems in quantum query complexity. However, relatively little is known about many of these problems. In this work, we analyze a subclass of QOC problems where there is a group structure. That is, suppose the range of the unknown function A is a commutative group G, which induces a commutative group law over the entire function space. Then we consider the case where A is drawn uniformly at random from some subgroup A of the function space. Moreover, there is a homomorphism f on A, and the goal is to determine f(A). This class of problems is very general, and covers several interesting cases, such as oracle evaluation; polynomial interpolation, evaluation, and extrapolation; and parity. These problems are important in the study of message authentication codes in the quantum setting, and may have other applications. We exactly characterize the quantum query complexity of every instance of QOC with group structure in terms of a particular counting problem. That is, we provide an algorithm for this general class of problems whose success probability is determined by the solution to the counting problem, and prove its exact optimality. Unfortunately, solving this counting problem in general is a non-trivial task, and we resort to analyzing special cases. Our bounds unify some existing results, such as the existing oracle evaluation and parity bounds. In the case of polynomial interpolation and evaluation, our bounds give new results for secret sharing and information-theoretic message authentication codes in the quantum setting.
https://tushartyagi.com/blog/bloom-filters/ | ## Bloom Filter
Hash tables are data structures which have uses almost everywhere. For each problem that we get, there's a good chance that it can be solved by using a hash table. Arrays are simpler hash tables, as are structures like symbol tables, caches, etc.
The problem with hash tables is that these store both the key as well as its associated value. While the keys can be hashed, the values are not. And if these records run into millions, then storing them all in memory becomes a challenge (although this might not be a concern in the age of big data, but still).
A Bloom filter is a probabilistic data structure. Here we try to find whether a key is in the Bloom filter or not. The Bloom filter, by design, is either 100% sure that the key is not present, or it's fairly certain (not 100%) that it's present. Note that the actual value needs to be stored separately; it is not present in the Bloom filter.
Mathematically, a Bloom filter can return the following values for a given value v, in set S:
• 100% sure that v doesn’t exist in S, or
• Very high probability (less than 100%) that v does exists in S.
So the non-existence is guaranteed but existence is just a high probability.
In some sense, the Bloom filter is a smaller cache in front of the cache. It allows us to know whether a key is present in the cache or not.
### Technical Details
The underlying data structure is a very long bit string, with its size roughly running into millions. Initially, each of the bits is off.
Each key which needs to be added to the Bloom filter is hashed using multiple hash functions, with each function giving a number less than the bit-string size. Each number becomes an index in the bit-string and the associated bit is turned on. So a key hashed using 3 hash functions giving 12, 55, and 87 hash values is going to turn on the bits on index 12, 55, and 87. We keep on adding new keys into the filter.
During lookup, the same procedure is followed and the bits are calculated. If any of the bits is off, that means that the key is not present in the Bloom filter. If all of the bits are on, that either means that the key is actually present in the filter, or maybe a unique combination of some other keys have turned on all the bits. Therefore, the absence is guaranteed, but the presence is not. Any such entry which is not actually present, but the associated bits are turned on is called a false positive.
Designing a Bloom filter means that we need to have a balance between the number of elements which we want to enter, and the proportion of false positives that we are willing to tolerate.
Next we see some combination of properties we can use.
### Ballpark calculations
When we start with a Bloom filter, we need to fix how many elements is the filter going to hold, and what is the percentage of false positives we are willing to bear.
Then the number of hash functions which takes the false positive percentage to a minimum is:
$k = \frac{m}{n} \ln(2)$

where $k$ is the number of hash functions, $m$ is the total number of bits in the filter, and $n$ is the total number of elements (so $m/n$ is the number of bits per element).
A ballpark figure is to have the following bits for false positive tolerance:
• 4.8 bits/element for 10% false positives
• 9.6 bits/element for 1% false positives
• 14.4 bits/element for 0.1% false positives
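These ballpark figures fall out of the standard optimal-sizing formulas. Here is a quick sanity check in Python (the formula for the total bit count m is the textbook one, not something derived in this post):

```python
import math

def bloom_sizing(n, p):
    """Total bits m and optimal hash count k for n items at false-positive rate p."""
    bits_per_element = -math.log(p) / (math.log(2) ** 2)
    m = math.ceil(n * bits_per_element)
    k = round(bits_per_element * math.log(2))  # k = (m/n) * ln 2
    return m, k

for p in (0.10, 0.01, 0.001):
    m, k = bloom_sizing(1_000_000, p)
    print(f"{p:.1%} false positives: {m / 1_000_000:.1f} bits/element, k = {k}")
# 10.0% false positives: 4.8 bits/element, k = 3
# 1.0% false positives: 9.6 bits/element, k = 7
# 0.1% false positives: 14.4 bits/element, k = 10
```

Note how the 0.1% row lines up with the 10 hash functions used in the implementation below.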
### How to get independent hash functions?
Given that having a false positive rate of 0.1% requires 10 different hash functions, we need to understand how we can get so many hash functions.
• Trick 1
Split a large hash value into smaller independent hashes, provided its bits are independent; e.g. a 32-bit wide value can be split into 4x8-bit or 2x16-bit hashes.
The number of bits required to represent n numbers is log2(n).
• Trick 2
Combine two hash functions together:
h(…) = h1(…) + i * h2(…)
e.g.
h3 = h1(…) + 3 * h2(…)
h4 = h1(…) + 4 * h2(…)
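Here is Trick 2 as a tiny, self-contained sketch (in Python for brevity — the C# implementation later in the post does the same thing; using Python's built-in hash to stand in for two independent hashes is a simplification):

```python
def bit_indexes(key, k, m):
    """Derive k bit positions in an m-bit array from two base hashes."""
    h = hash(key) & 0xFFFFFFFF
    h1, h2 = h & 0xFFFF, h >> 16                          # Trick 1: split one 32-bit hash
    return [(h1 + i * h2) % m for i in range(1, k + 1)]   # Trick 2: combine them

print(bit_indexes("example.jpg", k=10, m=14_400_000))
```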
### Example Usecase
Images which are served only once get cached anyway, which is a waste of resources. Solution: Bloom and Cuckoo filters save the cache from being filled with such one-off values.
A Bloom filter allows us to identify one-off requests and keep them out of the cache. So it is like a cache for the cache.
Imagine that there's a cache which has hundreds of thousands of items. The purpose of the cache is to send back the items which are requested by the users without making a trip to the database. But how do we know which items are frequently requested by end users and which are seldom requested? This is one situation where a Bloom filter can be used.
The simple scenario without using a Bloom filter is when the user requests an item which is not in the cache: we return it and save it in the cache. If the item is never requested again, then it is wasting cache space.
The solution would be to have a Bloom filter alongside the cache. In case of a non-cached request by the user, we do the following:
• Return the item from the database,
• Check the Bloom filter if the request has been served before.
• The Bloom filter is either going to be 100% sure that the request has not been served before, or give a good enough probable answer that it has been served before.
• In either case, we enter the value in the Bloom filter.
• If the same item is requested again, then we check the Bloom filter, which tells us that the item has probably been served before.
• This item then goes into the cache.
### Implementation
As per the StackOverflow discussion here, one of the good hash functions for Bloom filters is the Murmur3 function.
We try to target the functionality of having just 0.1% false positives.
We start with a relatively large number of elements so that the algorithm can actually become functional. As per the calculations in the previous section, we need around 14.4 bits per element to keep the number of false positives low. So we start by creating a BitArray with 14.4 times as many bits as expected elements.
Since we already know that we need the Murmur3 hash function, we use the Murmur library. But we also have to use 10 hash functions so that there's enough randomisation of the bits.
The Murmur3 library returns a 32-bit int (there's an option to get 128-bit values as well), so we can split that 32-bit int into two 16-bit values. This works because for our use case of 1 million elements, an unsigned 16-bit value holds at most 65K — well below the size of the array. This takes care of the first 2 hash functions.
For the next 8 functions, we use the same key but have the following arrangement of the functions:
$h_i(k) = h_1(k) + i * h_2(k)$
By the time we reach the last of these hash functions (i = 10), the index can be as large as 65,535 + 10 × 65,535 ≈ 721K — still well below the size of the bit array (14.4 million bits for 1 million elements).
Entering and reading values is just a matter of setting and reading the bits in the BitArray:
```csharp
using System;
using System.Collections;
using System.Text;
using Murmur;

public class BloomFilter
{
    private readonly BitArray _elements;
    private static readonly Murmur32 _murmur = MurmurHash.Create32();

    public BloomFilter(int numElements)
    {
        // ~14.4 bits per expected element targets roughly 0.1% false positives.
        _elements = new BitArray((int)(14.4 * numElements), false);
    }

    // Murmur3 returns a 32-bit hash. BitConverter lets us read it back as two
    // independent 16-bit halves, which become our first two hash functions
    // (Hash1 reads bytes 0-1, Hash2 reads bytes 2-3). Each value is <= 65535.
    private static ushort Hash1(string key)
    {
        byte[] hash = _murmur.ComputeHash(Encoding.ASCII.GetBytes(key));
        return BitConverter.ToUInt16(hash, 0);
    }

    private static ushort Hash2(string key)
    {
        byte[] hash = _murmur.ComputeHash(Encoding.ASCII.GetBytes(key));
        return BitConverter.ToUInt16(hash, 2);
    }

    // Hash functions 3..10 are derived as h_i(key) = h1(key) + i * h2(key).
    // Every index is reduced modulo the array length so that we can never
    // run off the end of the BitArray, even for small filters.
    private int BitIndex(string key, int i)
    {
        return (Hash1(key) + i * Hash2(key)) % _elements.Length;
    }

    public void AddKey(string key)
    {
        _elements.Set(Hash1(key) % _elements.Length, true);
        _elements.Set(Hash2(key) % _elements.Length, true);
        for (int i = 3; i <= 10; i++)
            _elements.Set(BitIndex(key, i), true);
    }

    public bool ContainsKey(string key)
    {
        // A single unset bit proves the key was never added; if all ten bits
        // are set, the key is only *probably* present (false positives!).
        if (!_elements.Get(Hash1(key) % _elements.Length)) return false;
        if (!_elements.Get(Hash2(key) % _elements.Length)) return false;
        for (int i = 3; i <= 10; i++)
            if (!_elements.Get(BitIndex(key, i))) return false;
        return true;
    }
}
```
### Challenges
In the default Bloom filter, deletions are not supported. Because of this, over time, there's a chance that all the bits will be set to 1. When that happens, every lookup will come back positive.
### Applications
Check the relevant section of the Wikipedia article to get more ideas of where to use Bloom filters.
https://betterexplained.com/articles/a-friendly-chat-about-whether-0-999-1/ | Does .999… = 1? The question invites the curiosity of students and the ire of pedants. A famous joke illustrates my point:
A man is lost at sea in a hot air balloon. He sees a lighthouse approaching in the fog. “Where am I?” he shouts desperately through the wind. “You’re in a balloon!” he hears as he drifts off into the distance.
The response is correct but unhelpful. When people ask about 0.999… they aren’t saying “Hey, could you find the limit of a convergent series under the axioms of the real number system?” (Really? Yes, Really!)
No, there’s a broader, more interesting subtext: What happens when one number gets infinitely close to another?
It’s a rare thing when people wonder about math: let’s use the opportunity! Instead of bluntly offering technical definitions to satisfy some need for rigor, let’s allow ourselves to explore the question.
Here’s my quick summary:
• The meaning of 0.999… depends on our assumptions about how numbers behave.
• A common assumption is that numbers cannot be “infinitely close” together — they’re either the same, or they’re not. With these rules, 0.999… = 1 since we don’t have a way to represent the difference.
• If we allow the idea of “infinitely close numbers”, then yes, 0.999… can be less than 1.
Math can be about questioning assumptions, pushing boundaries, and wondering “What if?”. Let’s dive in.
## Do Infinitely Small Numbers Exist?
The meaning of 0.999… is a tricky concept, and depends on what we allow a number to be. Here’s an example: Does “3 – 4” mean anything to you?
Sure, it’s -1. Duh. But the question is only simple because you’ve embraced the advanced idea of negatives: you’re ok with numbers being less than nothing. In the 1700s, when negatives were brand new, the concept of “3-4” was eyed with great suspicion, if allowed at all. (Geniuses of the time thought negatives “wrapped around” after you passed infinity.)
Infinitely small numbers face a similar predicament today: they’re new, challenge some long-held assumptions, and are considered “non-standard”.
## So, Do Infinitesimals Exist?
Well, do negative numbers exist? Negatives exist if you allow them and have consistent rules for their use.
Our current number system assumes the long-standing Archimedean property: if a number is smaller than every other number, it must be zero. More simply, infinitely small numbers don’t exist.
The idea should make sense: numbers should be zero or not-zero, right? Well, it’s “true” in the same way numbers must be there (positive) or not there (zero) — it’s true because we’ve implicitly excluded other possibilities.
But, it’s no matter — let’s see where the Archimedean property takes us.
## The Traditional Approach: 0.999… = 1
If we assume infinitely small numbers don’t exist, we can show 0.999… = 1.
First off, we need to figure out what 0.999… means. Most mathematicians see the problem like this:
• 0.999… represents a series of numbers: 0.9, 0.99, 0.999, 0.9999, and so on
• The question: does this series get so close (converge) to a result that we cannot tell the two apart?
This is the reasoning behind limits: Does our “thing to examine” get so darn close to another number that we can’t tell them apart, no matter how hard we try?
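To make "converge" concrete, here is the standard computation, a sketch under the usual real-number axioms (an infinite decimal is defined as the limit of its finite truncations):

$$0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^k} \;=\; 9 \cdot \frac{1/10}{1 - 1/10} \;=\; 1, \qquad \text{with truncations} \quad s_n = 1 - 10^{-n}.$$

Each extra 9 shrinks the gap $1 - s_n$ by a factor of ten; the question is whether that gap eventually drops below any threshold we set.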
“Well,” you say, “How do you tell numbers apart?”. Great question. The simplest way to compare is to subtract:
• if a – b = 0, they’re the same
• if a – b is not zero, they’re different
The idea behind limits is to find some point at which "a – b" becomes zero for every practical purpose (smaller than any number we can name); that is, we can no longer distinguish the "number to test" from the "result".
## The Error Tolerance
It’s still tough to compare items when they take such different forms (like an infinite series). The next clever idea behind limits: define an error tolerance:
• You give me your tolerance for error / accuracy level (call it “e”)
• I’ll see whether I can get the two things to fall within that tolerance
• If so, they’re equal! If we can’t tell them apart, no matter how hard we try, they must be the same.
Suppose I sell you a raisin granola bar, claiming it’s 100 grams. You take it home, examine the non FDA-approved wrapper, and decide to see if I’m lying. You put the snack on your scale and it shows 100 grams. The scale is accurate to 1 gram. Did I trick you?
You couldn’t know: as far as you can tell, within your accuracy, the granola bar is indeed 100 grams. Our current problem is similar: I’m selling you a “granola bar” weighing 1 gram, but sneaky me, I’m actually giving you one weighing 0.999… grams. Can you tell the difference?
Ok, let’s work this out. Suppose your error tolerance is 0.1 gram. Then if you ask for 1, and I give you 0.99, the difference is 0.01 (one hundredth) and you don’t know you’ve been tricked! 1 and .99 look the same to you.
But that’s child’s play. Let’s say your scale is accurate to 1e-9 (.000000001, a billionth of a gram). Well then, I’ll sell you a granola bar that is .999999999999 grams (only one trillionth of a gram off) and you’ll be fooled again! Hah!
In fact, instead of picking a specific tolerance like 0.01, let’s use a general one (e):
• Error tolerance: e
• Difference: Well, suppose e has “n” digits of precision. Let 0.999… expand until we have a difference requiring n+1 digits of precision to detect.
• Therefore, the difference can always be made smaller than e, so as far as we can measure, it is zero (written out below).
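Written out, the game between tolerance and expansion is a one-liner:

$$1 - \underbrace{0.99\ldots9}_{n \text{ nines}} \;=\; 10^{-n} \;<\; e \quad \text{whenever} \quad n > \log_{10}\frac{1}{e}.$$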
See the trick? Here’s a visual way to represent it:
The straight line is what you’re expecting: 1.0, that perfect granola bar. The curve is the number of digits we expand 0.999… to. The idea is to expand 0.999… until it falls within “e”, your tolerance.
At some point, no matter what you pick for e, 0.999… will get close enough to satisfy us mathematically.
(As an aside, 0.999… isn’t a growing process, it’s a final result on its own. The curve represents the idea that we can approximate 0.999… with better and better accuracy — this is fodder for another post).
With limits, if the difference between two things is smaller than any margin we can dream of, they must be the same.
## Assuming Infinitesimals Exist
This first conclusion may not sit well with you — you might feel tricked. And that’s ok! We seem to be ignoring something important when we say that 0.999… equals 1 because we, with our finite precision, cannot tell the difference.
Newer number systems have developed the idea that infinitesimals exist. Specifically:
• Infinitely small numbers can exist: they aren’t zero, but look like zero to us.
This seems to be a confusing idea, but I see it like this: atoms don’t exist to cavemen. Once they’ve cut a rock into grains of sand, they can go no further: that’s the smallest unit they can imagine. Things are either grains, or not there. They can’t imagine the concept of atoms too small for the naked eye.
Compared to other number systems, we’re cavemen. What we call “tiny numbers” are actually gigantic. In fact, there can be another “dimension” of numbers too small for us to detect — numbers that differ only in this tiny dimension look identical to us, but are different under an infinitely powerful microscope.
I interpret 0.999… like this: Can we make a number a bit less than 1 in this new, infinitely small dimension?
## Hyperreal Numbers
Hyperreal numbers are one system that uses this “tiny dimension” to examine infinitely small numbers. Here, an infinitesimal is usually called “h”, and is treated as 1/H (where big H is an infinite number).
So, the idea is this:
• 0.999… < 1 [We’re assuming it’s allowed to be smaller, and infinitely small numbers exist]
• 0.999… + h = 1 [h is the infinitely small number that makes up the gap]
• 0.999… = 1 – h [Equivalently, we can subtract an infinitely small amount from 1]
So, 0.999… is just a tiny bit less than 1, and the difference is h!
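(One formalization found in the literature, with notation varying by author, reads the repeating decimal as terminating at an infinite hyperinteger rank H:

$$0.\underbrace{99\ldots9}_{H \text{ nines}} \;=\; 1 - 10^{-H}, \qquad h = 10^{-H},$$

which is infinitesimal precisely because H is infinite.)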
## Back to Our Numbers
The problem is, “h” doesn’t exist back in our macroscopic world. Or rather, h looks the same as zero to us — we can’t tell that it’s a tiny atom, not the lack of any matter altogether. Here’s one way to visualize it:
When we switch back to our world, it’s called taking the “standard part” of a number. It essentially means we throw away all the h’s, and convert them to zeroes. So,
• 0.999… = 1 – h [there is an infinitely small difference]
• St(0.999…) = St(1 – h) = St(1) – St(h) = 1 – 0 = 1 [And to us, 0.999… = 1]
The happy compromise is this: in a more accurate dimension, 0.999… and 1 are different. But, when we, with our finite accuracy, try to describe the difference, we cannot: 0.999… and 1 look identical.
## Lessons Learned
Let’s hop back to our world. The purpose of “Does 0.999… equal 1?” is not to spit back the answer to a limit question. That’s interpreting the query as “Hey, within our system what does 0.999… represent?”
The question is about exploration. It’s really, “Hey, I’m wondering about numbers infinitely close together (.999… and 1). How do we handle them?”
Here’s my response:
• Our idea of a number has evolved over thousands of years to include new concepts (integers, decimals, rationals, reals, negatives, imaginary numbers…).
• In our current system, we haven’t allowed infinitely small numbers. As a result, 0.999… = 1 because we don’t allow there to be a gap between them (so they must be the same).
• In other number systems (like the hyperreal numbers), 0.999… is less than 1. Here, infinitely small numbers are allowed to exist, and this tiny difference (h) is what separates 0.999… from 1.
There are life lessons here: can we extend our mental model of the world? Negatives gave us the conception that every number can have an opposite. And you know what? It turns out matter can have an opposite too (matter and antimatter annihilate each other when they come in contact, just like 3 + (-3) = 0).
Let’s think about infinitesimals, a tiny dimension beyond our accuracy:
• Some theories of physics reference tiny “curled up” dimensions which are embedded into our own. These dimensions may be infinitely small compared to our own — we never notice them. To me, “infinitely small dimensions” are a way to describe something which is there, but undetectable to us.
• The physical sciences use “significant figures” and error margins to specify the inherent inaccuracy of our calculations. We know that reality is different from what we actually measure: infinitesimals help make this distinction explicit.
• Making models: An infinitely small dimension can help us create simple but accurate models to solve problems in our world. The idea of “simple but accurate enough” is at the heart of calculus.
Math isn’t just about solving equations. Expanding our perspective with strange new ideas helps disparate subjects click. Don’t be afraid to wonder “What if?”.
## Appendix: Where’s the Rigor?
When writing, I like to envision a super-pedant, concerned more with satisfying (and demonstrating) his rigor than educating the reader. This mythical(?) nemesis inspires me to focus on intuition. I really should give Mr. Rigor a name.
But, rigor has a use: it helps ink the pencil-lines we’ve sketched out. I’m not a mathematician, but others have written about the details of interpreting 0.999… as exactly 1, or as falling infinitesimally short of it:
“So long as the number system has not been specified, the students’ hunch that .999… can fall infinitesimally short of 1, can be justified in a mathematically rigorous fashion.”
My goal is to educate, entertain, and spread interest in math. Can you think of a more salient way to get non-math majors interested in the ideas behind analysis? Limits aren’t going to market themselves. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 40, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7016342282295227, "perplexity": 1515.446510929097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823674.34/warc/CC-MAIN-20181211172919-20181211194419-00338.warc.gz"} |
https://blog.appsignal.com/2019/02/19/elixir-alchemy-pouring-protocols | elixir
# Pouring Protocols in Elixir
Miguel Palhas
In today's Elixir Alchemy, we stir protocols into the potion. Elixir has several mechanisms that allow us to write expressive and intuitive code.
Pattern matching, for instance, is a powerful way of dealing with multiple scenarios without having to go into complicated branching. It allows each of our functions to be clear and concise.
# What Are Protocols?
In a way, Protocols are similar to pattern matching, but they allow us to write more meaningful and context-specific code based on the datatype we’re dealing with.
Let’s take the example of a content-delivery website. This website has multiple types of content: audio clips, videos, texts, and whatever else you can think of.
Each of these content types obviously has different attributes and metadata, so it makes sense to represent them as independent structs. Translating this into Elixir, you'd have the following structures:
defmodule Content.Audio do
defstruct [:title, :album, :artist, :duration, :bitrate, :file]
end
defmodule Content.Video do
defstruct [:title, :cast, :release_date, :duration, :resolution, :file]
end
defmodule Content.Text do
defstruct [:title, :author, :word_count, :chapter_count, :format, :file]
end
Each of these types has a few different fields, most of them unique to the type. We also have a common :file field which will point to the file keeping the actual data.
Now, let’s say you want to make your content as accessible as possible. You may, for instance, want to allow your hearing-impaired users to view the transcripts of both your audio and video. For that, you’ll use your awesome AudioTranscriber and VideoTranscriber modules which provide transcribe_audio/1 and transcribe_video/1 functions, respectively.
The implementation of those functions uses state-of-the-art machine learning and will be delegated to a future blog post. Let’s just assume they work and roll with it.
The two transcribers live in separate modules. Aside from exposing different function names, they might be entirely different libraries. To use both transparently, we'll implement a protocol named Content.Transcribe that gives every type of content the same API.
# Implementing the Protocol
Using protocols, we can easily define what the act of transcribing something means to each of our data types. This is done by first defining a transcribing protocol:
defprotocol Content.Transcribe do
def transcribe(content)
end
and then implementing it separately for each of our types:
defimpl Content.Transcribe, for: Content.Video do
def transcribe(video), do: VideoTranscriber.transcribe_video(video.file)
end
defimpl Content.Transcribe, for: Content.Audio do
def transcribe(audio), do: AudioTranscriber.transcribe_audio(audio.file)
end
defimpl Content.Transcribe, for: Content.Text do
def transcribe(text), do: File.read(text.file)
end
We have separately defined implementations of the same function for all 3 content types.
You may note that for text content, the implementation merely reads the corresponding file, as it's already in text format, while for the other two, we call the corresponding machine-learning-magic function on the file.
We’re then able to call transcribe/1 for all the data types we have an implementation for:
iex> %Content.Video{...} |> Content.Transcribe.transcribe()
{:ok, "We're no strangers to love\nYou know the rules and so do I..."}
iex> %Content.Audio{...} |> Content.Transcribe.transcribe()
{:ok, "Imagine there's no heaven\nIt's easy if you try..."}
iex> %Content.Text{...} |> Content.Transcribe.transcribe()
{:ok, "in a hole in the ground there lived a hobbit..."}
# Fallback Implementations
Now, let’s say we add a new type of media to our platform: games (we’re kidding! We are a very ambitious hypothetical startup, and admittedly, success may be getting into our heads).
What happens when we try to transcribe the newly-added content?
iex> %Content.Game{...} |> Content.Transcribe.transcribe()
** (Protocol.UndefinedError) protocol Content.Transcribe is not implemented for %Content.Game{...}. This protocol is implemented for: Content.Audio, Content.Text, Content.Video
Whoops! We’ve hit an error. Which makes sense. We didn’t provide any transcription implementation for this type.
But it doesn’t really make sense to do so, does it? Games are supposed to be interactive experiences, and there simply may be no way to make them accessible to everyone.
So we could just provide an implementation that always fails:
defimpl Content.Transcribe, for: Content.Game do
def transcribe(game), do: {:error, "not supported"}
end
But this doesn’t seem very scalable, does it? If we keep adding new content types, we'll end up having to duplicate this for every single type that we cannot transcribe.
Instead, we can simply add a fallback implementation for any type we don’t specify. This is done precisely by providing an implementation for the Any type, and then stating in our protocol that we want to fall back to it when necessary.
defimpl Content.Transcribe, for: Any do
def transcribe(_), do: {:error, "not supported"}
end
defprotocol Content.Transcribe do
@fallback_to_any true
def transcribe(content)
end
The implementation for Any can usually be used by asking Elixir to automatically derive implementations from it (you can read more about this in the official Elixir Getting Started guide).
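As a sketch of what deriving would look like for our game type (the struct fields here are invented for illustration):

defmodule Content.Game do
@derive [Content.Transcribe] # reuse the Any implementation for this struct
defstruct [:title, :studio, :file]
end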
But by adding @fallback_to_any true to our protocol, we’re stating that whenever a specific implementation is not found, the Any implementation should be used. This allows us to fail gracefully for any unsupported data type:
iex> %Content.Game{...} |> Content.Transcribe.transcribe()
{:error, "not supported"}
iex> %{key: :value} |> Content.Transcribe.transcribe()
{:error, "not supported"}
# Failed Gracefully
Can we close off any better than with a graceful fail? We'll leave you now that we've experimented with protocols and, gracefully, haven't broken a single alembic today.
If you love experimenting with code, make sure you don't miss an episode of Elixir Alchemy!
This post is written by guest author Miguel Palhas. Miguel is a professional over-engineer @subvisual and organizes @rubyconfpt and @MirrorConf. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15712875127792358, "perplexity": 4462.239320244486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334855.91/warc/CC-MAIN-20220926082131-20220926112131-00046.warc.gz"} |