https://math.stackexchange.com/questions/1973280/show-frac2-pi-mathrmexp-z2-int-0-infty-mathrmexp-z2x/1976649

# Show $\frac{2}{\pi} \mathrm{exp}(-z^{2}) \int_{0}^{\infty} \mathrm{exp}(-z^{2}x^{2}) \frac{1}{x^{2}+1} \mathrm{d}x = \mathrm{erfc}(z)$
I used the result $$\frac{2}{\pi} \mathrm{exp}(-z^{2}) \int\limits_{0}^{\infty} \mathrm{exp}(-z^{2}x^{2}) \frac{1}{x^{2}+1} \mathrm{d}x = \mathrm{erfc}(z)$$ to answer this MSE question. As I mentioned in the link, I obtained this result from the DLMF. I happened to find this solution after failing to evaluate the integral using a variety of substitutions. A solution would be appreciated.
Expanding @Jack D'Aurizio's solution, we have
\begin{align} \frac{2}{\pi} \mathrm{e}^{-z^{2}} \int\limits_{0}^{\infty} \frac{\mathrm{e}^{-z^{2}x^{2}}}{x^{2} + 1} \mathrm{d}x &= \frac{2z}{\pi} \mathrm{e}^{-z^{2}} \int\limits_{0}^{\infty} \frac{\mathrm{e}^{-t^{2}}}{z^{2} + t^{2}} \mathrm{d}t \\ &= \frac{z}{\pi} \mathrm{e}^{-z^{2}} \int\limits_{-\infty}^{\infty} \frac{\mathrm{e}^{-t^{2}}}{z^{2} + t^{2}} \mathrm{d}t \end{align} where the first step uses the substitution $x = t/z$ and the second uses the evenness of the integrand.
For the integral $$\int\limits_{-\infty}^{\infty} \frac{\mathrm{e}^{-t^{2}}}{z^{2} + t^{2}} \mathrm{d}t$$ we let $f(t) = \mathrm{e}^{-t^{2}}$ and $g(t) = 1/(z^{2} + t^{2})$ and take the (unitary) Fourier transform of each, $$\mathrm{F}(s) = \mathcal{F}[f(t)] = \frac{\mathrm{e}^{-s^{2}/4}}{\sqrt{2}}$$ and $$\mathrm{G}(s) = \mathcal{F}[g(t)] = \frac{1}{z}\sqrt{\frac{\pi}{2}} \mathrm{e}^{-z|s|},$$ then invoke Parseval's theorem $$\int\limits_{-\infty}^{\infty} f(t)\overline{g(t)} \mathrm{d}t = \int\limits_{-\infty}^{\infty} \mathrm{F}(s)\overline{\mathrm{G}(s)} \mathrm{d}s.$$ Dropping the constant factor $\frac{1}{\sqrt{2}} \cdot \frac{1}{z}\sqrt{\frac{\pi}{2}}$ for the moment, the integral becomes
\begin{align} \int\limits_{-\infty}^{\infty} \mathrm{e}^{-s^{2}/4} \mathrm{e}^{-z|s|} \mathrm{d}s &= 2\int\limits_{0}^{\infty} \mathrm{e}^{-s^{2}/4} \mathrm{e}^{-zs} \mathrm{d}s \\ &= 2\mathrm{e}^{z^{2}} \int\limits_{0}^{\infty} \mathrm{e}^{-(s+2z)^{2}/4} \mathrm{d}s \\ &= 4\mathrm{e}^{z^{2}} \int\limits_{z}^{\infty} \mathrm{e}^{-y^{2}} \mathrm{d}y \\ &= 2\sqrt{\pi}\mathrm{e}^{z^{2}} \mathrm{erfc}(z) \end{align} where we completed the square in the exponent and used the substitution $y = z + s/2$, which sends the lower limit $s = 0$ to $y = z$.
Putting the pieces together yields our desired result \begin{align} \frac{2}{\pi} \mathrm{e}^{-z^{2}} \int\limits_{0}^{\infty} \frac{\mathrm{e}^{-z^{2}x^{2}}}{x^{2} + 1} \mathrm{d}x &= \frac{z}{\pi} \mathrm{e}^{-z^{2}} \int\limits_{-\infty}^{\infty} \frac{\mathrm{e}^{-t^{2}}}{z^{2} + t^{2}} \mathrm{d}t \\ &= \frac{z}{\pi} \mathrm{e}^{-z^{2}} \frac{1}{\sqrt{2}} \frac{1}{z} \sqrt{\frac{\pi}{2}} 2\sqrt{\pi} \mathrm{e}^{z^{2}} \mathrm{erfc}(z) \\ &= \mathrm{erfc}(z) \end{align}
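The identity is also easy to sanity-check numerically. Here is a minimal sketch, added for verification only (it assumes numpy and scipy are available):

```python
# Compare (2/pi) e^{-z^2} * Int_0^inf e^{-z^2 x^2}/(x^2+1) dx against erfc(z).
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def lhs(z):
    integral, _ = quad(lambda x: np.exp(-z**2 * x**2) / (x**2 + 1), 0, np.inf)
    return 2.0 / np.pi * np.exp(-z**2) * integral

for z in [0.25, 1.0, 2.5]:
    print(f"z = {z}: lhs = {lhs(z):.10f}, erfc(z) = {erfc(z):.10f}")
```

The two columns agree to within quad's default tolerance.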
• The approach I used had already been used in an answer to a related question to evaluate the case $z=1$. – Random Variable Nov 20 '17 at 2:47
With the substitution $x=\frac{t}{z}$, the integral on the left becomes
$$I=\frac{2}{\pi z e^{z^2}}\int_{0}^{+\infty}\frac{e^{-t^2}}{1+\frac{t^2}{z^2}}\,dt = \frac{1}{\pi z e^{z^2}}\int_{-\infty}^{+\infty}\frac{e^{-t^2}}{1+\frac{t^2}{z^2}}\,dt$$ and we may switch to Fourier transforms. Since $$\mathcal{F}(e^{-t^2}) = \frac{1}{\sqrt{2}}e^{-s^2/4},\qquad \mathcal{F}\left(\frac{1}{1+\frac{t^2}{z^2}}\right)=z\sqrt{\frac{\pi}{2}} e^{-z|s|},$$ $I$ boils down to an integral of the form $\int_{0}^{+\infty}\exp\left(-(s-\xi)^2\right)\,ds$ that is straightforward to convert into an expression involving the (complementary) error function.
As an alternative, you may use differentiation under the integral sign to prove that both sides of your equation fulfill the same differential equation with the same initial constraints, then invoke the uniqueness part of the Cauchy-Lipschitz theorem: $$\frac{d}{dz} LHS = -\frac{2}{\pi}\int_{0}^{+\infty}2z e^{-z^2 (x^2+1)}\,dx = -\frac{2}{\sqrt{\pi}}e^{-z^2},\qquad \frac{d}{dz}RHS = -\frac{2}{\sqrt{\pi}}e^{-z^2},$$ where the first derivative is evaluated using $\int_{0}^{+\infty}e^{-z^2 x^2}\,dx=\frac{\sqrt{\pi}}{2z}$. We therefore have $\frac{d}{dz}(LHS-RHS)=0$ and $(LHS-RHS)(0)=0$, since both sides equal $1$ at $z=0$, so the two sides agree for all $z\geq 0$.
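A small symbolic spot check of the two derivatives (added here for verification, assuming sympy is available; not part of the original answer):

```python
# Check that d/dz LHS and d/dz RHS both equal -(2/sqrt(pi)) * exp(-z^2).
import sympy as sp

z, x = sp.symbols('z x', positive=True)
d_lhs = -(2 / sp.pi) * sp.integrate(2 * z * sp.exp(-z**2 * (x**2 + 1)), (x, 0, sp.oo))
d_rhs = sp.diff(sp.erfc(z), z)
print(sp.simplify(d_lhs))          # -2*exp(-z**2)/sqrt(pi)
print(sp.simplify(d_lhs - d_rhs))  # 0
```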
An interesting consequence is the following (tight) approximation for the $\text{erfc}$ function:
$$\text{erfc}(z)=\frac{2e^{-z^2}}{\pi}\int_{0}^{+\infty}\frac{e^{-z^2 x^2}}{x^2+1}\,dx\leq \frac{2e^{-z^2}}{\pi}\int_{0}^{+\infty}\frac{dx}{(x^2+1)(x^2 z^2+1)}=\frac{1}{(1+z)e^{z^2}}.$$
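A quick numerical spot check of this bound, added for illustration (not part of the answer), using scipy's erfc:

```python
# The bound 1/((1+z) e^{z^2}) should hold with equality at z = 0 and stay fairly tight.
import numpy as np
from scipy.special import erfc

for z in [0.0, 0.5, 1.0, 2.0, 3.0]:
    bound = 1.0 / ((1.0 + z) * np.exp(z**2))
    print(f"z={z}: erfc(z)={erfc(z):.6f}  bound={bound:.6f}")
```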
• You have always been generous with your answers on this site, MSE. My vote is $A^{A^{+}}$, thanks from all users! – user243301 Oct 18 '16 at 14:37
• Thanks @Jack D'Aurizio. I expanded your solution and added it to the question. – poweierstrass Oct 20 '16 at 0:00
Assuming $z>0$,
\begin{align}\int_{0}^{\infty} \frac{e^{-z^{2}x^{2}}}{1+x^{2}} \, dx &= \int_{0}^{\infty}e^{-z^{2}x^{2}} \int_{0}^{\infty}e^{-t(1+x^{2})} \, dt \, dx \\ &= \int_{0}^{\infty} e^{-t} \int_{0}^{\infty}e^{-(z^{2}+t)x^{2}} \, dx \, dt \tag{1}\\ &= \frac{\sqrt{\pi}}{2}\int_{0}^{\infty} \frac{e^{-t}}{\sqrt{z^{2}+t}} \, dt \tag{2}\\ &= \frac{\sqrt{\pi}}{2} \, e^{z^{2}}\int_{z^{2}}^{\infty}\frac{e^{-u}}{\sqrt{u}} \, du \tag{3}\\ &= \sqrt{\pi} \, e^{z^{2}} \int_{z}^{\infty} e^{-w^{2}} \, dw \tag{4}\\ &= \frac{\pi}{2} \, e^{z^{2}}\operatorname{erfc}(z) \end{align}
$(1)$ Tonelli's theorem
$(2)$ $\int_{0}^{\infty} e^{-ax^{2}} \, dx = \frac{\sqrt{\pi}}{2} \frac{1}{\sqrt{a}}$ for $a>0$
$(3)$ Let $u = z^{2}+t$.
$(4)$ Let $w=\sqrt{u}$.
• Excellent. Your initial substitution is exactly what I was seeking. – poweierstrass Oct 20 '16 at 10:49
https://tex.stackexchange.com/questions/121641/degrees-as-numbers-or-units-in-si-system

# Degrees, as numbers or units in SI system
When typesetting degrees (angles), the correct way is to make the degree symbol part of the number, with no space between the number and the degree symbol.
Technically, in the SI system, degrees C or degrees F should be typeset with a space between the degrees symbol and the unit.
The \SI{23}{\celsius} does not do this correctly.
Is this a bug or a feature?
• Fahrenheit and Rankine are not part of the SI system. The degree symbol in °C makes it possible to tell the derived unit for Celsius temperature apart from the base unit coulomb (C). It is therefore part of the unit symbol. – Jake Jun 28 '13 at 19:58
It's a feature, siunitx handles both temperatures and angles correctly. In the SI system, there has to be a space between the number and the degree symbol and no space between the degree symbol and the C when typesetting temperatures. See section 5.3.3 of the official SI brochure.
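For reference, here is a minimal sketch of the two cases (my example, not from the question; \SI and \ang are standard siunitx commands, and the exact spacing follows the package defaults):

```latex
\documentclass{article}
\usepackage{siunitx}
\begin{document}
% Temperature: the space goes between the number and the complete unit symbol
% \celsius, so this typesets as "23 °C" (no space between the ° and the C).
\SI{23}{\celsius}

% Plane angle: the degree sign attaches directly to the number, "45°".
\ang{45}
\end{document}
```

Newer siunitx releases also provide \qty as a replacement for \SI, with the same spacing behaviour.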
http://simbad.cds.unistra.fr/simbad/sim-ref?bibcode=2005AJ....129..178K
2005AJ....129..178K - Astron. J., 129, 178-188 (2005/January-0)
The Local Group and other neighboring galaxy groups.
KARACHENTSEV I.D.
Abstract (from CDS):
Over the last few years, rapid progress has been made in distance measurements for nearby galaxies based on the magnitude of stars on the tip of the red giant branch. Current CCD surveys with the Hubble Space Telescope (HST) and large ground-based telescopes bring ∼10% accurate distances for roughly a hundred galaxies within 5 Mpc. The new data on distances to galaxies situated in (and around) the nearest groups (the Local Group, M81 Group, Cen A/M83 Group, IC 342/Maffei Group, Sculptor filament, and Canes Venatici cloud) allowed us to determine their total mass from the radius of the zero-velocity surface, R_0, which separates a group as bound against the homogeneous cosmic expansion. The values of R_0 for the virialized groups turn out to be close to each other, in the range of 0.9-1.3 Mpc. As a result, the total masses of the groups are close to each other as well, yielding total mass-to-blue-luminosity ratios of 10-40 M_☉/L_☉. The new total mass estimates are 3-5 times lower than old virial mass estimates of these groups. Because about half of the galaxies in the Local Volume belong to such loose groups, the revision of the amount of dark matter (DM) leads to a low local density of matter, Ω_m ≃ 0.04, which is comparable with the global baryonic fraction Ω_b but much lower than the global density of matter, Ω_m = 0.27. To remove the discrepancy between the global and local quantities of Ω_m, we assume the existence of two different DM components: (1) compact dark halos around individual galaxies and (2) a nonbaryonic "dark matter ocean" with Ω_DM1 ≃ 0.07 and Ω_DM2 ≃ 0.20, respectively.
Journal keyword(s): Cosmology: Observations - Cosmology: Dark Matter - Galaxies: Distances and Redshifts - Galaxies: Individual: Name: Centaurus A - Galaxies: Individual: Alphanumeric: IC 342 - Galaxies: Individual: Messier Number: M31 - Galaxies: Individual: Messier Number: M81 - Galaxies: Individual: Messier Number: M83 - Galaxies: Individual: Name: Maffei 1 - Galaxies: Individual: NGC Number: NGC 253 - Galaxies: Individual: NGC Number: NGC 4736 - Galaxies: Kinematics and Dynamics - Galaxy: General
CDS comments: Table 3 : galaxy FM1 = [FM2000b] cand 1, HS 117 = [LSK86] 117. Table 5 : galaxy DEEP 1337-33 not identified.
https://lesswrong.ru/aggregator/sources/2?page=13

# LessWrong.com News
A community blog devoted to refining the art of rationality
### test post
Published on August 27, 2021 3:30 PM GMT
why don't you leave a test comment?
### Research productivity tip: "Solve The Whole Problem Day"
Published on August 27, 2021 1:05 PM GMT
(This is about a research productivity strategy that’s been working very well for me personally. But YMMV, consider reversing any advice, etc. etc.)
As a researcher, there’s kinda a stack of "what I'm trying to do", from the biggest picture to the most microscopic task. Like here's a typical "stack trace" of what I might be doing on a random morning:
So as researchers, we face a practical question: How do we allocate our time between the different levels of the stack? If we’re 100% at the bottom level, we run a distinct risk of "losing the plot", and working on things that won't actually help advance the higher levels. If we’re 100% at the top level, with our head way up in the clouds, never drilling down into details, then we’re probably not learning anything or making any progress.
Obviously, you want a balance.
And I've found that striking that balance properly isn't something that takes care of itself by default. Instead, my default is to spend too much time at the bottom of the stack and not enough time higher up.
So to counteract that tendency, I have for many months now had a practice of "Solve The Whole Problem Day". That's one day a week (typically Friday) where I force myself to take a break from whatever detailed thing I would otherwise be working on, and instead I fly up towards the top of the stack, and try to see what I'm missing, question my assumptions, etc.
In my case, "The Whole Problem" = "The Whole Safe & Beneficial AGI Problem". For you, it might be The Whole Climate Change Problem, or The Whole Animal Suffering Problem, or The Whole Becoming A Billionaire Problem, or whatever. (If it's not obvious how to fill in the blank, well then you especially need a Solve The Whole Problem Day! And maybe start here & here & here.)
Implementation details
• The most concrete and obvious way that my Solve The Whole Problem Days are different from my other workdays is that I have a rule that I impose on myself: No neuroscience. ("Awww c'mon, not even a little? Pretty please?" "No!!!!!"). So that automatically forces me up to like Levels 3 & 4 on the bulleted list above, instead of my usual perch at Levels 1 & 2. Of course, there's more to it than that one rule—the point is Solving The Whole Problem, not following self-imposed rules. But still, that rule is especially helpful.
• For example, when I'm answering emails and commenting on blog posts, that's often not about neuroscience, nor about Solving The Whole Problem. So I wouldn't count those towards fulfilling the spirit of Solve The Whole Problem Day.
• The point is not to stay at a high level on the stack. The point is to visit a high level on the stack, and then drill down to lower levels. That's fine … as long as I'm drilling down into lower-level details along a new and different branch of the tree.
• I also have a weekly cleanup and reorganization of my to-do list, but I think of it as a totally different thing from Solve The Whole Problem Day, and indeed I do it on a different day. In fact, a separate sub-list on my Trello board to-do list is a list of tasks that I want to try tackling on an upcoming Solve The Whole Problem Day.
• I have no qualms about Solving The Whole Problem on other days of the week too—I'm trying to correct a particular bias in my own workflow, and am not at risk of overcorrecting.
Why do I need to force myself to do this, psychologically?
It's crazy: practically every Solve The Whole Problem Day, I start the morning with a feeling of dread and annoyance and strong temptation to skip it this week. And I end the day feeling really delighted about all the great things I got done. Why the annoyance and dread? Introspectively, I think there are a few things going on in my mind:
• First, I’m very often immersed in some interesting problem, and reluctant to pause. “Aww,” I say to myself, “I really wanted to know what the nucleus incertus does! What on earth could it be? And now I have to wait all the way until Monday to figure it out? C'mon!!” Not just that, but all my normal heuristics for to-do list prioritization would say that I should figure out the nucleus incertus right now: I need to do it eventually one way or the other, and I'm motivated to do it now, and I'm in an especially good position to do it right now (given that all the relevant context is fresh in my mind), and finally, the "Solve The Whole Problem" activities are not time-sensitive.
• Second, I prefer working on problems that definitely have solutions, even if nobody knows them. The nucleus incertus does something. Its secrets are just waiting to be revealed, if only we know where to look! Other low-level tasks are of the form "Try doing X with method Y", which might or might not succeed, but at least I can figure out whether it succeeds or fails, cross it off my to-do list, and move on. By contrast, higher-level things are sometimes in that awful place where there’s neither a solution, nor a proof that no solution exists. (Think of things like "solve the whole AGI control problem", or "find an interpretability technique that scales to AGI".) If I'm stumped, well maybe it's not just me, maybe there's just no progress to be made. I find that somewhat demotivating and aversive. Not terribly so, but just enough to push me away, if I’m not being self-aware about it.
• Third, I have certain ways of thinking about the bigger-picture context of what I'm working on, and I'm used to thinking that way, and it's comfortable and I like it. But a frequent task of Solve The Whole Problem Day is to read someone coming with a very different perspective, sharing none of my assumptions or proximate goals or terminologies, and try to understand that perspective and get something out of it. Sometimes this is fun and awesome, but also sometimes it's just a really long hard dense slog with no reward at the end. So it feels aversive, and comparatively unproductive.
But again that's just me. YMMV.
### Altruism Under Extreme Uncertainty
Published on August 27, 2021 6:58 AM GMT
I attended an Effective Altruism club today where someone had this to say about longtermism.
I have an intuitive feeling that ethical arguments about small probabilities of helping out extremely large numbers (like $10^{58}$) of people are flawed, but I can't construct a good argument for why this is.
The flaw is uncertainty.
In the early 20th century, many intellectuals were worried about population control. The math was simple. People reproduce at an exponential rate. The amount of food we can create is finite. Population growth will eventually outstrip production. Humanity will starve unless population control is implemented by governments.
What actually happened was as surprising as it was counterintuitive. People in rich, industrial countries with access to birth control voluntarily restricted the number of kids they have. Birthrates fell below replacement-level fertility. This process is called the demographic transition.
We now know that if you want to reduce population growth, the best way to do so is to make everyone rich and then provide free birth control. The side effects of this are mostly beneficial too.
China didn't know about the demographic transition when they implemented the one-child policy (一孩政策). The one-child policy wasn't just a human rights disaster involving tens of thousands of forced abortions for the greater good. It was totally unnecessary. The one-child policy was implemented at a time when China was rapidly industrializing. The Chinese birthrate would have naturally dropped below replacement level without government intervention. Chinese birthrates are still below replacement-level fertility even now that the one-child policy has been lifted. China didn't just pay a huge cost to get zero benefit. They paid a huge cost to gain negative benefit. Their age pyramid and sex ratios are extra messed up now. This is the opposite of what effective population control should have accomplished.
China utterly failed to predict its own demographic transition even though demographic changes on time horizons of a few decades are an unusually easy trend to predict. The UN makes extremely precise predictions on population growth. Most trends are much harder to predict than population growth. If you're making ethical decisions involving the distant future then you need to make predictions about the distant future. Predictions about the distant future necessarily involve high uncertainty.
In theory, a 10% chance of helping 10 people equals a 0.001% chance of helping out 100,000 people. In practice, they are very different because of uncertainty. In the 10% situation, a 0.1% uncertainty is ignorably small. In the 0.001% situation, a 0.1% uncertainty dominates the equation. You have a 0.051% chance of doing good and a 0.049% chance of doing harm once uncertainty is factored in. It's statistical malpractice to even write the probabilities as "0.051%" and "0.049%". They both round to 0.05%.
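The rounding point can be made concrete with a couple of lines of arithmetic; the numbers below simply mirror the paragraph above (an illustrative sketch, not a rigorous model):

```python
# Rough illustration of why a 0.1% uncertainty swamps a 0.001% success probability.
estimate = 0.00001   # the 0.001% chance that the action helps
noise = 0.0005       # +/-0.05% "uncertainty about reality itself" (a 0.1% total spread)

p_good = noise + estimate
p_harm = noise - estimate
print(f"{p_good:.3%}  {p_harm:.3%}")   # 0.051%  0.049%
print(f"{p_good:.2%}  {p_harm:.2%}")   # both round to 0.05%
```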
Is it worth acting when you're comparing a 0.051% chance of doing good to a 0.049% chance of doing harm? Maybe, but it's far from a clean argument. Primum non nocere (first, do no harm) matters too. When the success probability of an altruistic action is lower than my baseline uncertainty about reality itself, I let epistemic humility take over by prioritizing more proximate objectives.
### Could you have stopped Chernobyl?
Published on August 27, 2021 1:48 AM GMT
...or would you have needed a PhD for that?
It would appear the inaugural post caused some (off-LW) consternation! It would, after all, be a tragedy if the guard in our Chernobyl thought experiment overreacted and just unloaded his Kalashnikov on everyone in the room and the control panels as well.
And yet, we must contend with the issue that if the guard had simply deposed the leading expert in the room, perhaps the Chernobyl disaster would have been averted.
So the question must be asked: can laymen do anything about expert failures? We shall look at some man-made disasters, starting of course, with Chernobyl itself.
Chernobyl

[Image: One way for problems to surface]
To restate the thought experiment: the night of the Chernobyl disaster, you are a guard standing outside the control room. You hear increasingly heated bickering and decide to enter and see what's going on, perhaps right as Dyatlov proclaims there is no rule. You, as the guard, would immediately be placed in the position of having to choose to either listen to the technicians, at least the ones who speak up and tell you something is wrong with the reactor and the test must be stopped, or Dyatlov, who tells you nothing is wrong and the test must continue, and to toss the recalcitrant technicians into the infirmary.
If you listen to Dyatlov, the Chernobyl disaster unfolds just the same as it did in history.
If you listen to the technicians and wind up tossing Dyatlov in the infirmary, what happens? Well, perhaps the technicians manage to fix the reactor. Perhaps they don't. But if they do, they won't get a medal. Powerful interests were invested in that test being completed on that night, and some unintelligible techno-gibberish from the technicians will not necessarily convince them that a disaster was narrowly averted. Heads will roll, and not the guilty ones.
This has broader implications that will be addressed later on, but while tossing Dyatlov in the infirmary would not have been enough to really prevent disaster, it seems like it would have worked on that night. To argue that the solution is not actually as simple as evicting Dyatlov is not the same as saying that Dyatlov should not have been evicted: to think something is seriously wrong and yet obey is hopelessly akratic.
But for now we move to a scenario more salvageable by individuals.
The Challenger

[Image: Roger Boisjoly, Challenger warner]
The Challenger disaster, like Chernobyl, was not unforeseen. Morton-Thiokol engineer Roger Boisjoly had raised red flags about the faulty O-rings that led to the loss of the shuttle and the deaths of seven people as early as six months before the disaster. For most of those six months, that warning, as well as those of other engineers, went unheeded. Eventually, a task force was convened to find a solution, but it quickly became apparent the task force was a toothless, do-nothing committee.
The situation was such that Eliezer Yudkowsky, leading figure in AI safety, held up the Challenger as a failure that showcases hindsight bias, the mistaken belief that a past event was more predictable than it actually was:
Viewing history through the lens of hindsight, we vastly underestimate the cost of preventing catastrophe. In 1986, the space shuttle Challenger exploded for reasons eventually traced to an O-ring losing flexibility at low temperature (Rogers et al. 1986). There were warning signs of a problem with the O-rings. But preventing the Challenger disaster would have required, not attending to the problem with the O-rings, but attending to every warning sign which seemed as severe as the O-ring problem, without benefit of hindsight.
This is wrong. There were no other warning signs as severe as the O-rings. Nothing else resulted in an engineer growing this heated the day before launch (from the obituary already linked above):
But it was one night and one moment that stood out. On the night of Jan. 27, 1986, Mr. Boisjoly and four other Thiokol engineers used a teleconference with NASA to press the case for delaying the next day’s launching because of the cold. At one point, Mr. Boisjoly said, he slapped down photos showing the damage cold temperatures had caused to an earlier shuttle. It had lifted off on a cold day, but not this cold.
“How the hell can you ignore this?” he demanded.
How the hell indeed. In an unprecedented turn, in that meeting NASA management was blithe enough to reject an explicit no-go recommendation from Morton-Thiokol management:
During the go/no-go telephone conference with NASA management the night before the launch, Morton Thiokol notified NASA of their recommendation to postpone. NASA officials strongly questioned the recommendations, and asked (some say pressured) Morton Thiokol to reverse their decision.
The Morton Thiokol managers asked for a few minutes off the phone to discuss their final position again. The management team held a meeting from which the engineering team, including Boisjoly and others, were deliberately excluded. The Morton Thiokol managers advised NASA that their data was inconclusive. NASA asked if there were objections. Hearing none, NASA decided to launch the STS-51-L Challenger mission.
Historians have noted that this was the first time NASA had ever launched a mission after having received an explicit no-go recommendation from a major contractor, and that questioning the recommendation and asking for a reconsideration was highly unusual. Many have also noted that the sharp questioning of the no-go recommendation stands out in contrast to the immediate and unquestioning acceptance when the recommendation was changed to a go.
Contra Yudkowsky, it is clear that the Challenger disaster is not a good example of how expensive it can be to prevent catastrophe, since all prevention would have taken was NASA management doing their jobs. Though it is important to note that Yudkowsky's overarching point in that paper, that we have all sorts of cognitive biases clouding our thinking on existential risks, still stands.
But returning to Boisjoly. In his obituary, he was remembered as "Warned of Shuttle Danger". A fairly terrible epitaph. He and the engineers who had reported the O-ring problem had to bear the guilt of failing to stop the launch. At least one of them carried that weight for 30 years. It seems like they could have done more. They could have refused to be shut out of the final meeting where Morton-Thiokol management bent the knee to NASA, even if that took bloodied manager noses. And if that failed, why, they were engineers. They knew the actual physical process necessary for a launch to occur. They could also have talked to the astronauts. Bottom line, with some ingenuity, they could have disrupted it.
As with Chernobyl, yet again we come to the problem that even while eyebrow raising (at the time) actions could have prevented the disaster, they could not have fixed the disaster generating system in place at NASA. And like in Chernobyl: even so, they should have tried.
We now move on to a disaster where there wasn't a clear, but out-of-the-ordinary solution.
Beirut

[Image: Yet another way for problems to surface]
It has been a year since the 2020 Beirut explosion, and still there isn't a clear answer on why the explosion happened. We have the mechanical explanation, but why were there thousands of tons of Nitropril (ammonium nitrate) in some rundown warehouse in a port to begin with?
In a story straight out of The Outlaw Sea, the MV Rhosus, a vessel with a convoluted 27-year history, was chartered to carry the ammonium nitrate from Batumi, Georgia to Beira, Mozambique, by the Fábrica de Explosivos Moçambique. Due to either mechanical issues or a failure to pay tolls for the Suez Canal, the Rhosus was forced to dock in Beirut, where the port authorities declared it unseaworthy and forbade it to leave. The mysterious owner of the ship, Igor Grechushkin, declared himself bankrupt and left the crew and the ship to their fate. The Mozambican charterers gave up on the cargo, and the Beirut port authorities seized the ship some months later. When the crew finally managed to be freed from the ship about a year after detainment (yes, crews of ships abandoned by their owners must remain in the vessel), the explosives were brought into Hangar 12 at the port, where they would remain until the blast six years later. The Rhosus itself remained derelict in the port of Beirut until it sank due to a hole in the hull.
During those years it appears that practically all the authorities in Lebanon played hot potato with the nitrate. Lots of correspondence occurred. The harbor master to the director of Land and Maritime Transport. The Case Authority to the Ministry of Public Works and Transport. State Security to the president and prime minister. Whenever the matter was not ignored, it ended with someone deciding it was not their problem or that they did not have the authority to act on it. Quite a lot of the people aware actually did have the authority to act unilaterally on the matter, but the logic of the immoral maze (seriously, read that) precludes such acts.
There is no point in this very slow explosion in which disaster could have been avoided by manhandling some negligent or reckless authority (erm, pretend that said "avoided via some lateral thinking"). Much like with Chernobyl, the entire government was guilty here.
What does this have to do with AI?
The overall project of AI research exhibits many of the signs of the discussed disasters. We're not currently in the night of Chernobyl: we're instead designing the RBMK reactor. Even at that early stage, there were Dyatlovs: they were the ones who, deciding that their careers and keeping their bosses pleased were most important, implemented and signed off on the design flaws of the RBMK. And of course there were, because in the mire of dysfunction that was the Soviet Union, Dyatlovism was a highly effective strategy. Like in the Soviet Union, plenty of people, even prominent people, in AI are ultimately more concerned with their careers than with any longterm disasters their work, and in particular their attitude, may lead to. The attitude is especially relevant here: while there may not be a clear path from their work to disaster (is that so?), the attitude that the work of AI is, like nearly all the rest of computer science, not life-critical makes it much harder to implement regulations on precisely how AI research is to be conducted, whether external or internal.
While better breeds of scientist, such as biologists, have had the "What the fuck am I summoning?" moment and collectively decided how to proceed safely, a similar attempt in AI seems to have accomplished nothing.
Like with Roger Boisjoly and the Challenger, some of the experts involved are aware of the danger. Just like with Boisjoly and his fellow engineers, it seems like they are not ready to do whatever it takes to prevent catastrophe.
Instead, as in Beirut, memos and letters are sent. Will they result in effective action? Who knows?
Perhaps the most illuminating thought experiment for AI safety advocates/researchers, and indeed, us laymen, is not that of roleplaying as a guard outside the control room at Chernobyl, but rather: you are in Beirut in 2019.
How do you prevent the explosion?
Precisely when should one punch the expert?
The title of this section was the original title of the piece; though it was decided to dial it back a little, it remains as the title of this section, if only to serve as a reminder that the dial does go to 11. Fortunately there is a precise answer to that question: when the expert's leadership or counsel poses an imminent threat. There are such moments in some disasters, but not all, Beirut being a clear example of a failure where there was no such critical moment. Should AI fail catastrophically, it will likely be the same as Beirut: lots of talk occurred in the lead up, but some sort of action was what was actually needed. So why not do away entirely with such an inflammatory framing of the situation?
Why, because us laymen need to develop the morale and the spine to actually make things happen. We need to learn from the Hutu:
Can you? Can I?
The pull of akrasia is very strong. Even I have a part of me saying "relax, it will all work itself out". That is akrasia, as there is no compelling reason to expect that to be the case here.
But what comes after we "hack through the opposition", as Peter Capaldi's The Thick of It character Malcolm Tucker put it? What does "hack through the opposition" mean in this context? At this early stage I can think of a few answers:
1. There is such a thing as safety science, and leading experts in it. They should be made aware of the risk of AI, and scientific existential risks in general, as it seems they could figure some things out. In particular, how to make certain research communities engage with the safety-critical nature of their work.
[Image: This sort of gibberish could be useful. From Engineering a Safer World.]
2. A second Asilomar conference on AI needs to be convened. One with teeth this time, involving many more AI researchers, and the public.
3. Make it clear to those who deny or are on the fence about AI risk that the (not-so-great) debate is over, and it's time to get real about this.
4. Develop and proliferate the antidyatlovist worldview to actually enforce the new line.
Points 3 and 4 can only sound excessive to those who are in denial about AI risk, or those to whom AI risk constitutes a mere intellectual pastime.
Though these are only sketches. We are indeed trying to prevent the Beirut explosion, and just like in that scenario, there is no clear formula or plan to follow.
This Guide is highly speculative. You could say we fly by the seat of our pants. But we will continue, we will roll with the punches, and we will win.
After all, we have to.
You can also subscribe on substack.
### Dialogue on anti-induction.
Published on August 26, 2021 11:21 PM GMT
My uncle, with whom I shared thoughts on anti-induction, remarked that humans are systematically anti-inductive in some situations: he gave the example of gambling, where people can think that losing a lot in a row means that they are poised to win soon.
But this is not a fair example in my opinion, because gamblers are not consciously anti-inductive: when their behavior is exposed as such, they do not defend their decision.
Among my relatives, the gamblers are notoriously irrational. A bayesian might say that a long streak of wins is very weak evidence that they will keep on winning, because they have a strong prior confidence in mathematical analysis of the game, but that hardly tells us anything about how anti-induction arose in the first place.
The following dialogue is intended to showcase a (moderately) intelligent anti-inductor in action, to try to understand the anti-inductors by putting myself in their shoes.
----------------------------------------------------------------------------------------------------
Alice is an anti-inductor. She intuitively believes that things that have happened often typically don't happen again.
Aware of that fact, and of the existence of inductors, she has tried to look into anti-induction to know what it really means to her, and if it is an intuition she should abandon.
She has a friend, Iris, who is an inductor. They are both tentatively rational, as intelligent as each other (and coincidentally about as dumb as me).
Iris : See, when using induction, I have often been right, and you have often been wrong using anti-induction.
Alice : Then, your induction probably tells you that it means induction is right and anti-induction is wrong... How interesting.
I : I am aware of that, and I can accurately model your own view, including your model of me, and it's reciprocal, and I know you know I know... Let's accept that and move on.
A : We are very similar, in that our goals and cognitive patterns are the same, except when it comes to induction. We also have shared knowledge. I accept on these grounds that I might be wrong about anti-induction, with a strong negative emotional bias of course.
I : So do I. I care about you and want to make you recognize the good of induction for your own well-being. By symmetry, we will search for a non-(anti-)inductive argument to ground (anti-)induction, and we will try our best not to implicitly found our arguments on (anti-)induction.
A : How about... We accept there are inductors and anti-inductors. Let's approximate being right with fitness. This approximation is reasonable because anti-induction is used for very basic things, such as the sun not shining tomorrow. I expect inductors to die from starvation when they falsely believe that eating will feed them after they have done it a thousand times. Likewise, you expect anti-inductors to die from starvation from refusing to eat when they are hungry.
I : Exactly so ! Now, look : we have diverging conclusions about the state of the world. Let us observe it and crown one of the two competing theories !
Were I a non-inductive spirit, I would have perhaps no reason to fill my fictional world with inductors rather than anti-inductors (thereby supporting one side a priori), but I am not.
This does mean however that the following reasoning is only valuable insofar as it modifies my behavior relative to the real world and not a mere possible one (in some possible worlds, I would not update on counterfactually fictional evidence).
A: We have observed without bias in samples or experiment, and it has been shown that everyone on fictional-Earth except for me is an inductor. Wow, I did NOT expect that!
I, triumphantly: You know the bias of being surprised at your own failures. Now, be fair, agree to change your mind, and join the inductive side of the force!
A, thoughtfully: You could have worded it differently... Is there a meaningful difference between what you said and "Now, because of the laws of rationality, you must change your mind, etc."? In that case, I will argue that the laws of rationality have indeed had this implication every time a similar reasoning followed a similar experiment... Which seems to prove that we cannot think in the same way this time!
I: There are no "laws of rationality", there are only the actual laws of rationality! How could correct methods of thinking change from one experiment to the other?
A: Well, it has definitely never happened before.
I: The fact that anti-induction supports a contradictory proposition does not mean that the original one relies on induction, in general and in this particular case.
A: Although I don't see why what you just said would be true, it sounds very reasonable, so let's accept it and continue our research.
I: What is there to continue thinking about? We have shown that induction improves fitness, and we have previously agreed that the fittest heuristic would be the truest.
A: Yes, but that is only a true fact about the past. Induction has managed to prevail until now (rather, until the moment we observed the people of the world), but how do we know it will remain the best heuristic tomorrow? Or in five minutes? Heuristics like this one are as fundamental as modus ponens, the kind that does not change on small scales of time, and they tend to confer invariant, general fitness (dis)advantages.
A: (As a side note, we both feel like it's not a mere behavioral heuristic but rather a logical truth, yet we cannot find any supportive reasoning.)
I: So what?
A: So I expect this particular one to be unlike all the other ones. This is anti-induction alright, but is it false?
I, after a moment of reflection: You used very vague wording, and perhaps you think there's no flaw because you are confused. Can you detail your proposition? Perhaps tabooing words like "other" and "particular".
A: So, I expect anti-induction to be true tomorrow, since induction was true until five minutes ago, because (anti-)induction belongs to the reference class of fundamental behavioral heuristics, like the will to survive or the will to reproduce. The kind of things that we expect every single living being to have, because anything else is simply too unlikely to have survived.
I: Geez... I'm still not sure it's not just confusion, but I see why I can't spot a mistake: (anti-)induction is messy, does not yield logical certainty, and uses ill-defined reference classes. Using arbitrary classes can make a committed (anti-)inductor believe anything; there are just too many potential reference classes with interesting properties out there.
A: And yet our reference classes are not arbitrary. We would reject a reference class used for induction that comprises gems that were blue before today and green after. My point stands very strongly if "fundamental behavioral heuristics" is a very legitimate class.
None of my attempts to formalize induction in formal logic led to interesting results. Please tell me if you know a paper on the subject!
So far, my best definition is that there is an interest function over propositions (personal?) and interesting reference classes, defined as {x : P(x)} for an interesting property P. Induction states that if A is an interesting reference class,
∀x ∈ A, (O(x) ⇒ Q(x)) → ∀x ∈ A, Q(x)
where Q is interesting and O(x) means "x is observed", assuming a meaningful "observed" property.
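For what it's worth, here is the reading I assume the schema above intends, with the quantifier scoped over the antecedent only (a hedged restatement of my own in LaTeX, not something the original spells out):

```latex
% One plausible reading of the induction schema above (my assumption):
% "if every observed member of the interesting class A satisfies Q,
%  then every member of A satisfies Q."
\[
  \Bigl(\,\forall x \in A,\; O(x) \Rightarrow Q(x)\,\Bigr)
  \;\longrightarrow\;
  \Bigl(\,\forall x \in A,\; Q(x)\,\Bigr)
\]
```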
I: This is starting to become too subjective, so I'll try a new approach. You said earlier that inductors should die of starvation. Please tell me, Alice, how is it that you managed to survive until now?
A: I had not thought about the absurdity of eating. From now on, I will not eat anymore when I am hungry.
I: Funny how the thought did not occur to you before... Perhaps your system 2 is an anti-inductor, and your system 1 is an inductor?
A: Perhaps, but we have little evidence in favor of that; it's just a wild conjecture, perhaps motivated by your desire to prove anti-induction wrong. I am fictional anyway, so it's not evidence that there is no rational anti-inductor.
I: Tell me, Alice, how is it you believe the word induction is used with the same meaning it has had since the beginning of our conversation?
A: By the double-barrelled jumping jiminetty and flying spaghetti monster!
Alice refused to partake further in the debate, troubled. Thank the lord she did not think about the fact that she had used anti-induction until now! She would then have legitimately expected to stop believing her truth any moment, without any explanation as to why! Fated to be wrong, like someone who receives overwhelming evidence they are a Boltzmann brain is fated to stop thinking! In that sorry case, she would weep for her soon-to-be-lost love for truth and, digging further, her soon-to-be-lost consistent utility function (for she followed one until now).
I: Now, that is a devastating argument from my viewpoint: that anti-induction leads to systematic uncertainty. Unfortunately, that is conditional on the fact that observedly-unchanging truth is unchanging, which I suspect is an inductive system 1 reasoning.
I: I predict that anti-inductors will fail to think, and anti-inductors incapable of self-reflection would take me up on that bet, while anti-inductors capable of self-reflection will fail to think, like Alice. Now, I just need to observe the world and see whether people actually think consistently, right? Or maybe bet a lot of fitness on that?
I: Another potential argument is the fact that Alice cannot explain how she managed to avoid starvation until now. Perhaps induction allows me to explain more facts of the world than anti-induction? The big issue there is a posteriori explanation, since it may not be enough to deduce knowledge of the future without using (anti-)induction.
I: I sure hope my reasoning was not too motivated by the fact that I actually believe induction to be superior to anti-induction... As long as Alice was there, at least she could compensate by being equally motivated to defend anti-induction, but what would she answer to the above arguments?
Discuss
### Amyloid Plaques: Chemical Streetlight, Medical Goodhart
August 27, 2021 - 00:25
Published on August 26, 2021 9:25 PM GMT
Alzheimer's Disease (AD) is truly, unduly cruel, and truly, unduly common. A huge amount of effort goes into curing it, which I think is a true credit to our civilization. This is in the form of both money, and the efforts of many of the brightest researchers.
But it hasn't worked.
Since AD is characterised by amyloid plaques, the "amyloid hypothesis" that these were the causative agent has been popular for a while. Mutations to genes which encode the amyloid beta protein can cause AD. Putting lots of amyloid into the brain causes brain damage in mice. So for many years, drugs were screened by testing them in mutant mice which were predisposed to AD. If the plaques disappeared, they were considered good candidates.
So why didn't it work?
Lots of things can affect amyloid plaques, as it turns out, right up to the latest FDA-approved drug, which is just antibodies that target the amyloid protein. While this does reduce amyloid, it has no effect on cognitive decline.
Goodhart's law has reared its head: amyloid plaque buildup is a metric for AD progression, but selecting for drugs which reduce it causes the relationship between AD and plaques to fall apart.
Equally, amyloid plaques are very easy to measure in mouse (and human) brains. This can be done by MRI scan or by dissection. Memory loss and mood changes are harder to measure, and even harder in mice. The methods for measuring amyloid plaques also feel better in many ways: there's less variation in potential methods, they can be compared across species, they're quantitative, and they're more in line with what the average biologist/chemist will be used to.
Understanding these, we can see how looking for drugs which decrease amyloid plaques in mice just really feels like productive research. We can also understand, now, why it wasn't.
Avoiding Wasted Effort
Pointing out biases is fairly useless. Pointing out specific examples is better. But the best way to help others is to point out how it feels from the inside to be making these mistakes.
So what does it feel like to be on the inside of these biases? Unfortunately as someone who has not been intimately involved in AD research I can't say exactly. But as someone involved with research in general I can make a guess:
• Research will feel mostly productive. It may feel like you are becoming what you imagine a researcher to be. Papers will be published. This is because you're in the streetlight.
• What you won't feel is a sense of building understanding. Learning to notice a lack of understanding is one of the most important skills, and it is sadly not an easy thing to explain.
• Think about the possible results of your experiments. Do you expect something you've not seen before? Or do you expect a result with a clear path to success? Creative work usually passes the first. Well-established and effective protocols pass the second. Mouse AD models do not pass either (anymore).
• A positive experimental result will be much easier to get than a "true" success. This has the benefit (for researchers) of allowing you to seem successful without actually doing good. The ratio of AD papers to AD cures is 1:0 ("Alzheimer's Disease Treatment" returns 714,000 results in Google Scholar).
Beyond this I do not know. Perhaps it is a nameless virtue. But it might be useful to try to identify more cases. I hereby precommit to posting a follow-up with at least five examples of this within the next seven days.
Discuss
### A brief review of The Scout Mindset
August 26, 2021 - 23:47
Published on August 26, 2021 8:47 PM GMT
I've been reading blogs like Less Wrong for almost a decade. So then, a lot of what was said in this book wasn't new to me. However, I still really liked it.
I'm of the opinion that in order to deeply understand a topic, it's not enough just to understand it conceptually; you have to see lots and lots of examples of it, from many different angles. I felt like this book helped me with that. Despite the fact that I spend so much time reading rationality-related blogs, The Scout Mindset still felt like it non-trivially deepened my understanding of the subject matter.
The stories that were told in this book were really good. They were a nice blend of appropriate, instructive, and engaging. I'm always impressed when books like these manage to do this. I spend too much of my time reading blogs and not enough of my time reading books. I often find myself similarly impressed by the quality of stories chosen in other books as well.
My main critique of The Scout Mindset is that, well, let me start with this. Early in the book, Julia pointed out that the difficulty isn't in knowing that you should do stuff like account for cognitive biases. It's in actually bringing yourself to do it!
My path to this book began in 2009, after I quit graduate school and threw myself into a passion project that became a new career: helping people reason out tough questions in their personal and professional lives. At first I imagined that this would involve teaching people about things like probability, logic, and cognitive biases, and showing them how those subjects applied to everyday life. But after several years of running workshops, reading studies, doing consulting, and interviewing people, I finally came to accept that knowing how to reason wasn't the cure-all I thought it was.
Knowing that you should test your assumptions doesn't automatically improve your judgement, any more than knowing you should exercise automatically improves your health. Being able to rattle off a list of biases and fallacies doesn't help you unless you're willing to acknowledge those biases and fallacies in your own thinking. The biggest lesson I learned is something that's since been corroborated by researchers, as we'll see in this book: our judgment isn't limited by knowledge nearly as much as it's limited by attitude.
So then, the question becomes, how do you actually get people to have a scout mindset? Julia's "approach has three prongs: 1) Realize that truth isn't in conflict with your other goals, 2) Learn tools that make it easier to see clearly, 3) Appreciate the emotional rewards of Scout Mindset". This is where my criticism comes in. I didn't really find that this approach was that effective. It didn't "tug on my heartstrings" enough.
To me, when I think about books that have moved me, they have told longer, deeper, more emotionally compelling stories. This often means that the book is fiction, but not always. It could be non-fiction. As an example of non-fiction, biographies come to mind. So does some long-form journalistic reporting. I think doing something like that in The Scout Mindset would have been more effective. Instead, Julia chose to use lots of smaller stories. By doing so, I think the book falls too close to "educational" on the spectrum from educational-to-inspirational.
That said, I do think that there is a place for a book that lives at that point on the spectrum. Not all books should be, say, at +8 points towards inspirational.
A related maybe-not-even-a-critique is that a book doesn't feel like the right medium for the goal of "inspire people to adopt a Scout Mindset". Something that is more social, interactive, and community-based seems like a better tool for that job. To her credit, Julia has in fact spent time tackling the problem from that angle in founding the Center for Applied Rationality, amongst other things. She also mentions various communities you can join online in the book such as the FeMRADebates or ChangeMyView subreddits, or the Effective Altruist community. And I'll mention Less Wrong again as another example.
Overall, I felt that The Scout Mindset was a pleasant read, and a book that did help me make progress as a scout. I don't feel like it was very much progress though, and I have thoughts about what could have been done instead to promote more progress. However, this is a super important topic, so any progress is valuable, and I'll happily take what I can get.
Discuss
### Signaling Virtuous Victimhood as Indicators of Dark Triad Personalities
August 26, 2021 - 22:18
Published on August 26, 2021 7:18 PM GMT
A recent paper from the University of British Columbia describes five studies which, taken together, provide evidence that the tendency to engage in victimhood signalling is correlated across individuals with Dark Triad personality traits (Machiavellianism, narcissism, and psychopathy).
Results were robust across the general US and Canada population gathered on mTurk as well as within a sample of Canadian undergraduates.
A link to the full study is provided in this post.
Discuss
### Introduction to Reducing Goodhart
August 26, 2021 - 21:38
Published on August 26, 2021 6:38 PM GMT
I - Prologue
Two months ago, I wanted to write about AI designs that evade Goodhart's law. But as I wrote that post, I became progressively more convinced that framing things that way was leading me to talk complete nonsense. I want to explore why that is and try to find a different (though not entirely original, see Rohin et al., Stuart 1, 2, 3) framing of core issues, which avoids assuming that we can model humans as idealized agents.
This post is the first of a sequence of five posts - in this introduction I'll be making the case that we expect problems to arise in the straightforward application of Goodhart's law to value learning. I'm interested in hearing from you if you remain unconvinced or think of things I missed.
II - Introduction
Goodhart's law tells us that even when there are normally only small divergences between what we optimize for and our real standards, the outcome can be quite bad by our real standards. To use Scott Garrabrant's terminology from Goodhart Taxonomy, suppose that we have some true preference function V (for "True Values") over worlds, and U is some proxy that has been correlated with V in the past. Then there are several reasons given in Scott's post why maximizing U may score poorly according to V.
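As a toy illustration of how U and V come apart under optimization pressure (a minimal sketch of my own, not taken from Scott's post; the names true_value and proxy and the Gaussian noise model are illustrative assumptions):

```python
import random

random.seed(0)

def true_value(world):
    """V: the (hypothetical) true preference function over worlds."""
    return world

def proxy(world):
    """U: a proxy that is merely correlated with V -- here, V plus noise."""
    return true_value(world) + random.gauss(0, 5)

# Sample candidate worlds, then pick whichever looks best according to the proxy U.
worlds = [random.gauss(0, 1) for _ in range(100_000)]
best_by_proxy = max(worlds, key=proxy)

print("True value of the proxy-optimal world:", round(true_value(best_by_proxy), 2))
print("True value of the genuinely best world:", round(max(map(true_value, worlds)), 2))
# The harder we select on U, the more the winner is selected for noise rather
# than for V, so the proxy-optimal world scores well below the true optimum.
```

Running this, the world that maximizes the proxy typically has a much lower true value than the best world in the sample, which is the regressional flavor of Goodhart in miniature.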
But here's the problem: humans have no such V (see also Scott A., Stuart 1, 2). Inferring human preferences depends on:
• what state the environment is in.
• what physical system to infer the preferences of.
• how to make inferences from that physical system.
• how to resolve inconsistencies and conflicting dynamics.
• how to extrapolate the inferred preferences into new and different contexts.
There is no single privileged way to do all these things, and different choices can give very different results. And yet the framing of Goodhart's law, as well as much of our intuitive thinking about value learning, rests on the assumption that the True Values are out there.
Goodhart's law is important - we use it all over the place (e.g. 1, 2, 3). In AI alignment we want to use Goodhart's law to crystallize a pattern of bad behavior in AI systems (e.g. 1, 2, 3, 4), and to design powerful AIs that don't have this bad behavior (e.g. 1, 2, 3, 4, 5, 6). But if you try to use Goodhart's law to design solutions to these problems it has a single prescription for us: find V (or at least bound your error relative to it). Since there is no such V, not only is that advice useless, it actually denies the possibility of success.
The goal, then, is deconfusion. We still want to talk about the same stuff, the same patterns, but we want a framing of what-we-now-call-Goodhart's-law that helps us think about what successful AI could look like in the real world.
III - Preview of the sequence
We'll start the next post with the classic question: "Why do I think I know what I do about Goodhart's law?"
The obvious answers to this question involve talking about how humans model each other. But this raises yet more questions, like "why can't the AI just model humans that way?" This requires two responses: first, breaking down what we mean when we casually say that humans "model" things, and second, talking about the limitations of such models compared to the utility-maximization picture. The good news is that we can rescue some version of common sense, the bad news is that this doesn't solve our problems.
Next we'll take a deeper look at some typical places to use Goodhart's law that are related to value learning:
• Curve fitting and overfitting.
• Hard-coded utility functions.
• Hard-coded human models.
Goodhart's law reasoning is used both in the definition of these problems, and also in talking about proposed solutions such as quantilization. I plan to talk at excessive length about all of these details, with the object of building up pictures of our reasoning in these cases that never needs to mention the word "Goodhart" because it's at a finer level of magnification.
However, these pictures aren't all going to be consistent, because what humans think of as success or failure can depend on the context, and extrapolating beyond that context will bring our intuitions into conflict. Thus we'll have to revisit the abstract notion of human preferences and really hash out what happens (or what we think happens) at the boundaries and interchanges between human models of the world.
Finally, the hope is to conclude with some sage advice. Not a solution, because I haven't got one. But maybe some really obvious-seeming sage advice can tie together the concepts introduced in the sequence into something that feels like progress.
We'll see.
Discuss
### Narrative truth
August 26, 2021 - 20:49
Published on August 26, 2021 5:49 PM GMT
One idea I encountered when investigating Jordan Peterson was the idea of narrative truth.
This is the kind of concept that most people nod along to, but which is almost always left implicit, so I thought it'd be worthwhile making it explicit here.
Let's quote what some other people have written on this subject:
Literal and Narrative Truth - Dave King
His story raises an interesting question, though – what does it mean for a memoir to be accurate? One of the largest issues we dealt with was the matter of dialogue. He wanted to be absolutely scrupulous, telling stories precisely as they happened. But in his original draft, his characters only spoke when he could remember what they said word-for-word. Since the manuscript was written years after the fact, this meant he used very little dialogue – mostly bursts of highly memorable lines like, “I must live, I must tell!” Nearly all the rest of his conversations were narrative summary, and many of his scenes felt flat and distant as a result. He was telling the story to readers rather than letting them experience it.
I agreed with his absolute scruple about accuracy, but argued that he needed to focus on a different kind of accuracy – narrative accuracy rather than literal accuracy. He needed to create dialogue that would make his readers feel the way he felt at the time. This meant literally putting words in his characters’ mouths, even if those words conveyed the gist of a dialogue that actually happened. But since the point of his narrative was to allow his readers to experience what he had experienced, the scenes with recreated dialogue were more accurate than the flat, emotionless scenes.
Many memoirists have taken this technique a step further and created composite characters. For instance, in Dreams of my Father, President Obama’s “New York girlfriend” was actually an amalgam of several girlfriends he’d had in New York and Chicago. I’d argue that combining several minor characters into a single character who represents the type is another form of narrative accuracy. If you had, for instance, several high school teachers who inspired you in similar ways, you could take the time to create each of them as a minor character. But all these excess characters would do more than simply slow your narrative down. By spending time on each teacher, you would give your readers the impression that your high school experiences meant more to you than they actually did. The writing is strictly accurate, but the story as a whole is thrown off.
Storytelling and Narrative Truth
Society would remember the Holocaust differently if there were no survivors to tell the story, but only data, records and photographs. The stories of victims and survivors weave together the numbers to create a truth that is tangible to the human experience… The combination of the personal and narrative truth gives human context to the grainy black and white photos. As a result, the narrative truths combine with factual truth create a holistic picture of the Warsaw Ghetto and the Holocaust. This need for the narration of human experience seems innate...
Truth in Storytelling
It’s easier to understand important points when there’s a structure to follow. And it’s easier for us to remember—particularly if it is a lively and engaging piece.
If we over-simplify a story to fit it into a narrative arc, are we being truthful?
This gets us into an area where people start to see different shades of gray.
I think it helps to ask a few questions:
• Am I leaving out key details because including them messes with the narrative flow?
• Do I skip context because it makes the piece less compelling?
• Am I framing anything in a way that makes the story look black and white when the reality is far more complex and nuanced?
• Do I exclude facts and circumstances because they clutter the piece and may bore readers?
Why do we want to know the truth? Sometimes it's out of curiosity, sometimes for its own sake, but arguably the strongest reason is that it allows us to act effectively in the world.
However, acting effectively in the world isn't just about knowing true facts about it. The human brain is fundamentally a meaning-making machine. When we are exposed to new facts, they update our current narratives and frames, and it is usually via this indirect route that facts change how we live in the world.
Narratives built upon untrue facts can lead us down the wrong path, but the responsible use of artistic license often allows us to model the world more accurately than the plain, unvarnished truth does. Too much unnecessary detail confuses people, wears out their patience, or interferes with the emotional impact. Simply throwing facts at someone is unlikely to be effective. Instead, you are much more likely to influence people if your communication style is comprehensible and engaging. And sometimes that requires some minor sacrifices in terms of literal accuracy.
Discuss
### For me the Christianity deal-breaker was meekness
August 26, 2021 - 17:56
Published on August 26, 2021 2:25 PM GMT
I was raised Catholic, became agnostic around 13, stopped thinking an afterlife made sense at 15, and noticed I was no longer religious at 16.
But I still spent the next ~12 years in pretty close proximity to Christianity. I did religious studies at school, I studied philosophy and theology at university, and most importantly I sang in church choirs, which meant I was regularly attending services.
I also wasn't crazy about the label 'atheist', as I didn't think my beliefs had much in common with famous or 'new' atheists (Richard Dawkins, Christopher Hitchens, AC Grayling). I often found their objections uninteresting, as they dealt with claims concerning things like 'metaphysics' and 'theodicy' that most Christians I knew didn't really care about. For those Christians, religion was more of a spiritual commitment or a decision to live a certain way, and, for example, their conservative approach to romance seemed like it could be valid even if the Church couldn't explain why God lets evil exist.
But if what you're doing is following a set of practices that are at most loosely inferred from a set of community texts and a history of community tradition, why take the extra step of identifying as Christian? Why not just say 'hey there's some good stuff in here, and I'll join in with the good and leave the bad'? What makes you want to take that extra step?
During the last 12 years I've wondered if I wanted to take that extra step again, until this year, when I realized any desire I had to do so was based in meekness.
Meek: Quiet, gentle, and easily imposed on; submissive.
- https://www.lexico.com/definition/meek
Meekness is praised in the Gospel:
Blessed are the meek,
for they will inherit the earth.
- Matthew 5:5
Not to mention that in the most important story in the whole Bible, the Son of God gives himself up *for death* without putting up a fight. He knows he will be betrayed but he waits for his captors to come and arrest him.
Meekness is a big deal for Christianity. And it shouldn't be. Meekness is not a virtue.
In the dictionary definition of meekness above, 'quiet, gentle' sounds fine; 'easily imposed on' and 'submissive', not so much. Meekness is when you don't stand up and ask for what you need, or it is when you allow what you need to be taken from you. This is a disaster for your personal wellbeing, and perhaps even worse when it comes to looking out for your community.
One of my clearest memories of Catholic school is sitting at a breakfast table watching three kids bully a fourth. The fourth kid had recently returned from a suspension for writing something rude in a guest book; it was pretty clear he had been pressured if not coerced. As if the events happening in front of him weren't enough, the supervising adult at the table would have known all this. But he was very deliberately staring straight ahead and into the distance, focussing his energy on ignoring what was happening.
And if you think meekness is a virtue, why wouldn't you do what that adult did? If you actually value not standing up for yourself, what hope is there for the people around you who might be counting on you to stand up for them?
Unfortunately, Christian communities that break this pattern are, in my experience, the exception, not the rule. We had Martin Luther King Jr., but he also wrote the following in his Letter from Birmingham Jail:
When I was suddenly catapulted into the leadership of the bus protest in Montgomery, Alabama, a few years ago, I felt we would be supported by the white church. I felt that the white ministers, priests and rabbis of the South would be among our strongest allies. Instead, some have been outright opponents, refusing to understand the freedom movement and misrepresenting its leaders; all too many others have been more cautious than courageous and have remained silent behind the anesthetizing security of stained glass windows.
- https://letterfromjail.com/
Meekness.
Why did I wonder if I wanted to join the Church again? I realized I had been missing the sensation from childhood of being 'easily imposed upon, submissive', and letting a greater authority determine a larger part of what I believed.
The world is scary, and as a kid there's a lot of comfort in that kind of a relationship. I can't be the only one who was taken in by it.
But I'm not a kid any more, and as soon as I recognized the roots of what I was feeling, I also recognized it was neither healthy nor acceptable to me. Thanks, but no thanks, I'll be an impartial observer of religion from now on.
If you're reading this and you are a Christian, here are my challenges to you:
• Don't let meekness be a recruiting mechanism to your Church. Deferring responsibility in the face of difficulty is sometimes necessary, but doing it as part of a fundamental, lifelong commitment is no way to live.
• Respect yourself. Can you treat yourself with kindness and value your own wishes and desires? Or will you find some excuse for letting them go unfulfilled?
• Look out for others. Prove to the world that you are not meek. You're committed to loving your neighbour, so whose basic human dignity are you going to stand up and defend, even in the face of opposition?
• Decide what you believe on your own terms, not on your priest's. (If the conclusion that you reach is that you agree with him, then good for you.)
n.b. I've encountered the term 'protest atheism' which does a decent job of capturing where I am now. A new atheist like Dawkins would say 'God doesn't exist', whereas a protest atheist might say 'this isn't how we should live'.
Discuss
### Covid 8/26: Full Vaccine Approval
August 26, 2021 - 16:00
Published on August 26, 2021 1:00 PM GMT
Great news, everyone. The Pfizer vaccine has been approved. Woo-hoo!
It will be marketed under the name Comirnaty. Doh!
(Do we all come together to form one big comirnaty? Or should you be worried about the comirnaties of getting vaccinated, although you should really be orders of magnitude more worried about the comirnaties of not getting vaccinated? Did things comirnaty or was there a problem? Nobody knows. Particle man.)
My understanding is that if a doctor were to prescribe the vaccine ‘off label,’ say to give to an 11 year old or to get someone an early booster shot, then they could potentially be sued for anything that went wrong, so in practice your doctor isn’t going to do this.
A reasonable request was made that my posts contain Executive Summaries given their length. Let’s do it!
Executive Summary of Top News You Can Use
1. Pfizer vaccine approved under the name Comirnaty.
2. Vaccines still work. If you have a choice, Moderna > Pfizer but both are fine.
3. Boosters are still a good idea if you want even better protection.
4. Cases approaching peak.
Also, assuming you’re vaccinated, Krispy Kreme is offering two free donuts per day from August 30 until September 5.
Now that that’s out of the way, let’s run the numbers.
The Numbers
Predictions
Prediction from last week: 1,000,000 cases (+14%) and 8,040 deaths (+45%).
Results: 935k cases (+7%) and 7,526 deaths (+35%).
Prediction for next week: 950k cases (+2%) and 9,400 deaths (+25%).
I was confused how there could be such sharp peaks in other countries. It looks like we won’t get one of those. The trend lines seem clear, and it looks like we are approaching the peak. It would be surprising if we were still seeing increases week over week by mid-September, with the obvious danger that things could pick up again once winter hits.
Deaths

| Date | WEST | MIDWEST | SOUTH | NORTHEAST | TOTAL |
|---|---|---|---|---|---|
| Jul 1-Jul 7 | 459 | 329 | 612 | 128 | 1528 |
| Jul 8-Jul 14 | 532 | 398 | 689 | 145 | 1764 |
| Jul 15-Jul 21 | 434 | 341 | 732 | 170 | 1677 |
| Jul 22-Jul 28 | 491 | 385 | 1009 | 157 | 2042 |
| Jul 29-Aug 4 | 693 | 477 | 1415 | 304 | 2889 |
| Aug 5-Aug 11 | 705 | 629 | 2181 | 234 | 3749 |
| Aug 12-Aug 18 | 912 | 851 | 3394 | 388 | 5545 |
| Aug 19-Aug 25 | 1281 | 1045 | 4692 | 508 | 7526 |
Deaths continue to lag cases. News was slightly good, so adjusting expectations slightly in response. Peak should still be a month out or so.
Cases

| Date | WEST | MIDWEST | SOUTH | NORTHEAST | TOTAL |
|---|---|---|---|---|---|
| Jul 1-Jul 7 | 27,413 | 17,460 | 40,031 | 7,065 | 91,969 |
| Jul 8-Jul 14 | 45,338 | 27,544 | 68,129 | 11,368 | 152,379 |
| Jul 15-Jul 21 | 65,913 | 39,634 | 116,933 | 19,076 | 241,556 |
| Jul 22-Jul 28 | 94,429 | 60,502 | 205,992 | 31,073 | 391,996 |
| Jul 29-Aug 4 | 131,197 | 86,394 | 323,063 | 48,773 | 589,427 |
| Aug 5-Aug 11 | 157,553 | 110,978 | 409,184 | 66,686 | 744,401 |
| Aug 12-Aug 18 | 183,667 | 130,394 | 479,214 | 78,907 | 872,182 |
| Aug 19-Aug 25 | 188,855 | 152,801 | 502,832 | 91,438 | 935,926 |

Vaccination Statistics
How much will full FDA approval matter? Survey says not much.
I am more hopeful than this, and expect more than a 10% increase. Some of this will be people for whom this really was the true rejection. Other parts of it will be as mandates are handed down and people anticipate further mandates.
Vaccine Effectiveness
I continue to find this very telling in terms of vaccine effectiveness versus Delta:
The argument is simple. Delta-specific vaccines have been designed and would be easy to get approved, yet there has been no move to manufacture them quickly. The only reasonable explanation for this is that there isn’t actually much if any difference from the old vaccine. Or at least, that’s what the pharma companies, which have every financial incentive acting against this, are revealing they believe.
I will for now accept the principle that a single dose provides substantially less protection against Delta than Alpha, but this is another data point that Delta isn’t different from Alpha once you get your second shot. I always find maddening the ‘confidence intervals overlapped, so nothing here’ reaction to differences like 67% vs. 79% – yes, you can’t be confident in that, but that’s mostly saying your study was underpowered, since that’s the kind of difference one would expect if there was a difference, and again, the word evidence does not mean what they think it means.
The paper’s findings then get worse, if you believe them, claiming rapid reduction in effectiveness over time.
They then go on to say this, which given how vaccinations were timed seems likely to be confounding indeed:
There’s no one good money quote on it, but the findings robustly say that vaccinated people’s cases tend to be lower viral load, less dangerous and less severe.
Looking at their section on statistical analysis, they’re doing some of the necessary and reasonable things but I can’t tell if it’s enough of them. Such studies are better than nothing if treated with caution, and this seems like a relatively well-done one, but I’m still more focused on the population numbers and what makes the models work out.
When I see things like this:
My core reaction is that the very idea of a 22% decline in vaccine effectiveness per month doesn’t make any mathematical sense, until I figured out it meant a 22% increase in vaccine ineffectiveness. As in, if you are 99% effective in month one, and then you have a 22% ‘decline in effectiveness,’ you would be… 98.8% effective. Or if you were 95% before, you’re 94% now. Which doesn’t sound to me like a 22% decline in effectiveness, even if true.
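A quick back-of-the-envelope version of that reading (a minimal sketch; the function name is mine, and the 22% figure and starting effectiveness values are the ones quoted above):

```python
def effectiveness_after_one_month(effectiveness, ineffectiveness_growth=0.22):
    """Read a '22% decline per month' as a 22% relative increase in
    ineffectiveness (1 - effectiveness), per the interpretation above."""
    return 1 - (1 - effectiveness) * (1 + ineffectiveness_growth)

for e in (0.99, 0.95):
    print(f"{e:.0%} effective -> {effectiveness_after_one_month(e):.1%} after one month")
# 99% effective -> 98.8% after one month
# 95% effective -> 93.9% after one month
```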
Israeli data continues to suggest extreme fading of vaccine effectiveness if you look at it naively, along with yet another reason to, as the post puts it, proceed with caution.
One presumes that the improvement against hospitalization in Pfizer is a data artifact or failure to control for something or some such, which shows how easy it is to get misleading results, especially since infection went the other way. And this big Pfizer versus Moderna difference against Alpha isn’t found elsewhere, which makes me think that once again there’s confounding going on all over the place.
Here’s a thread analyzing some of the results, and takes the declining protections and other study data fully seriously, putting the burden of proof on finding something specific that is wrong with the studies, and otherwise taking their results and details seriously and forming the model around that. As usual, the broader context of what such results would mean for all the other data we see isn’t incorporated – but again, I don’t see anyone doing that.
Here’s another good long thread explaining what vaccine effectiveness means then listing lots of different findings and real world results. Putting them all together like that makes it striking how much the different numbers don’t agree if you take them all at face value.
I continue to think that the decline in vaccine effectiveness over time is in large part a mirage, and for practical purposes the decline is relevant but small.
This week’s representations of how those vaccines are doing, after having vaccinated about 70% of adults and most of the elderly.
Virginia offers a dashboard:
Doesn’t look like vaccines are losing effectiveness.
Houston, via PoliMath:
And another:
That’s disappointing at face value since it’s only a 90% reduction in deaths but after correcting for age it would look a lot better. Weird that so much of the vaccine advantage here seems to be coming after hospitalization.
A worry is that the studies are selecting for ways to show vaccinated people are at risk, and another worry is that the real world statistics being reported are selecting for showing that the vaccines are super effective, because they are the same information but the Official Story is on two contradictory propaganda tracks and is pretending not to notice that this is a physical world question with a correct answer (whether or not we are confident we know what it is).
Anecdotal in Tampa, Florida:
Here in New York:
Meanwhile also this:
Note that yes, we are excluding the first wave infections here as per her follow-up note, but note the graph and adjust accordingly, and I think the point stands.
That does bring up that UK cases are clearly rising again, so we can no longer use that as an important signpost that things will turn around rapidly and that will be that. If anything, it’s now making the case that such a turnaround is unlikely. I don’t know of anyone who has offered an explanation other than a shrug for the decline followed by a reversal here.
As for the reinfections versus vaccine effectiveness, my hypothesis is that this is not a case of ‘immunity from infection holds up but vaccine immunity is losing ground.’ Remember when we were worried that natural immunity faded with time but vaccines solved that problem? The actual difference is in the methods of observation. When similar observational methods are used, we seem to get similar results.
How infectious are breakthrough cases? We now have two studies for that. They found that vaccinated people who get infected are still infectious, but their viral loads are substantially lower, so this was what we previously expected. And also they clear the virus faster, which was also expected.
Weirdly, they’re two different studies that find the two different results, although depending on how you measure, fading quickly implies lower average viral loads, so the results are compatible with the graph and it’s possible what we’re seeing is a shorter period of infectiousness rather than less at the peak. That seems unlikely to be the whole effect to me, but could easily be the majority of the benefit.
How much comfort that brings depends on the situation and on what you previously believed. If you’re as bad at this as the CDC and were saying the vaccines ‘prevent transmission’ full stop, and are now saying they ‘don’t prevent transmission’ full stop, it gets confusing.
Vaccine Hesitancy and Mandates
Formal approval is in, so here… we… go.
I saw this about one minute after I saw the FDA had approved the Covid vaccine, perhaps someone planned something in advance for once:
On her first day on the job now that The Worst Is Over, our new governor lays down the law:
She also raised New York’s total death count by 12k, which once again highlights that maybe Cuomo went down in a similar way to Al Capone (who was indeed guilty of tax evasion).
Although she’s also mandating ‘ethics seminars’ so you win some and you lose some.
Whereas the University of Georgia is going the other way.
Who else we got (WaPo)? They found CVS Health, Deloitte and Disney, but so far, not an impressive set of additional mandates. It seems not many were standing by ready to go.
Delta Airlines is charging unvaccinated employees $200 a month extra for health insurance, on the very reasonable premise that every hospital stay for Covid costs them an average of $50,000 and they end up in the hospital for Covid more often. Insurance companies can’t do this, but it seems corporations employing you can do it.
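For scale, a back-of-the-envelope break-even on that surcharge (my arithmetic, using only the two figures quoted above; the variable names are mine):

```python
monthly_surcharge = 200        # extra monthly insurance charge for unvaccinated employees
hospital_stay_cost = 50_000    # quoted average cost to Delta of a Covid hospital stay

break_even_annual_risk = (monthly_surcharge * 12) / hospital_stay_cost
print("The surcharge pays for itself if being unvaccinated adds roughly "
      f"{break_even_annual_risk:.1%} to an employee's annual Covid-hospitalization risk.")
# roughly 4.8%
```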
NYPD has threatened to sue if the city attempts to implement a mandate.
When you’re fully anti-vax, you’re anti-vax, and it’ll be hard to tell you different, as Donald Trump learned:
Others are less fully anti-vax, but still unvaccinated, thanks to various ways we botched things.
As Ranu notes there are two distinct things here. First, we botched the logistics, and could have done much better if we’d made sure to beware trivial inconveniences that aren’t always trivial. Second, our authorities are untrustworthy so people don’t trust them. This is framed here in the standard blue-tribe way as ‘the system fails such people and they remember the legacy of all that’ with it being ‘hard to make up’ during a pandemic, rather than the simple ‘these people lied about the pandemic over and over again’ model. Both are presumably relevant, but my guess is that handling the pandemic in a trustworthy fashion would have largely solved both problems. Yes, such people will absolutely ask why you weren’t helping them before, but that’s different from turning your help down if you’re here now.
One aspect of vaccination decisions is that patients in America do not pay for their health care. Almost everyone who can get it has health insurance, because if you don’t, the medical system bills you personally and attaches some number of extra zeros to the bill because they can, so you can’t opt out. For a while, they even waived ‘cost sharing’ on Covid, so you didn’t even pay the fraction you normally pay, but that’s increasingly no longer true. Would be good if more people knew. Incentives matter, but only if people are aware of them. One could note that this policy could be taken farther, if the government permitted this, so we’re doing mandatory mandates with one hand and mandatory massive subsidies to those who don’t follow those mandates with the other.
State employees, you will get vaccinated as many times as is legal, or else.
This is an explicit ‘everything that is not forbidden is mandatory, and everything that is not mandatory is forbidden’ rule. You can get exactly this many shots at exactly these times, and you either get them or you’re fired. There’s no concept of a booster that is optional, based on someone’s situation, and the full mandate applies to teleworkers.
This is where things are going to be tricky. Requiring ‘full vaccination’ so far has been simple. You get two shots and that’s it. Now there are signs that this in many places is going to morph into getting periodic boosters, with different places (at a minimum nations; Austria and Croatia are already setting expiration dates) having different requirements, and those boosters will have a much less slam-dunk risk-benefit profile.
I will happily take the third shot without any need for outside incentives, but it is a very reasonable position to not want the third one, and it seems likely that requiring boosters will have far less robust support than requiring two shots.
A cheap shot, but I think a necessary one so putting it here anyway, without any need for further comment.
One can definitely say shots fired:
This is nuts, actively counterproductive on every level, and what must be fought against:
To be fair, it is only required ‘when social distancing is not possible,’ most of the time this will definitely apply, and I assure you that it’s always possible.
It’s always adorable when people think the constitution is a meaningful limiting factor, and all recursive mandate sentences are fun.
In practice, this is technically true, but there is a known way around it known as withholding federal funding. And another easier way around it, called ignoring the constitution, since presidents mostly do what they want without any actual legal authority under the constitution and mostly no one calls them out on it. Eviction moratorium, anyone?
You can also buy one at the pharmacy, although not like in Europe where the tests are super cheap and abundant. FDA Delenda Est.
Also, a periodic reminder that the reason younger children can’t get vaccinated, which in practice is causing super massive freakouts although there’s almost zero risk there, is that the FDA moved the goalposts to require additional data. Thus we almost certainly won’t get this before the end of 2021, and I’d double check but this market sure looks a lot like free money.
The lack of an increase in fear over the winter surge is the most surprising thing here. Otherwise it all makes sense, with fear going down when things were improving, then fear starting to go back up as cases rise. Fear isn’t a perfect proxy for the private control system, but changes in fear likely predict marginal changes in private actions and we’re back at levels similar to April.
Here’s a survey on activity:
As one would expect by now, vaccinated people are taking more precautions than unvaccinated people. Almost half of vaccinated people are ‘avoiding people as much as possible’ and they’re claiming it’s because of the pandemic. However I share Nate’s skepticism here minus the word ‘little’ because math:
Perhaps ‘as much as possible’ means until one is hungry, or has somewhere to go. It’s on the margin.
Study does some modeling and finds that according to its model masks work, ventilation works even better.
Filters win out here over windows, if one has to choose, and of course if possible you’d do both. Also you can’t cheat on the windows, you gotta actually leave them open. When we’re considering actions like mask mandates or shutting down living life entirely I find it odd that people worry about energy costs this much, but there you go. Also fresh air remains a Nice Thing. As always, one must be highly skeptical when translating such results into predictions for actually preventing cases.
I found the tweet more compelling than the full post. Getting into the details mostly highlighted places I disagreed with Bryan.
Booster Shots
Governor of Texas gets a third shot as a booster. I have no issue with people in high positions getting superior medical treatment when there’s a supply or resource shortage, but meanwhile we have vaccines expiring in some places. That’s from Scott’s post with further comments on the topic of FDA Delenda Est, which is interesting but inessential.
The new argument against booster shots is that they… might cause us to produce too many antibodies against Covid, and then maybe Covid mutates and the antibodies become dangerous or unhelpful because they’re overtrained? When it’s not Officially Sanctioned even antibodies are labeled bad, it seems. Meanwhile this is doubtless supposed to make people worried about Delta, but this worry definitely does not apply to Delta, and an additional customized booster would be necessary in the cases being described either way. Don’t worry, such arguments will go away once the Official Sanction comes down, which is coming soon.
Meanwhile, an argument for booster shots is that the first two doses were so close together that they count as a primary immunization, claiming it looks like this:
Which is so insane it doesn’t even bother putting any impact from the second shot into the chart at all, and puts the peak of the ‘primary’ response more than halfway down the graph when it’s almost fully effective. There’s obvious nonsense available on all sides.
Think of the Children
We really do have a large class of Very Serious People, with a lot of influence on policy and narrative, who think that living life is not important, that the things you care about in life are not important, and that our future is not important, because saying the word ‘safety’ or ‘pandemic’ should justify anything.
This week’s case in point, and like my source MR I want to emphasize that this is not about the particular person in question here.
If anything, I’d like to thank Dr. Murray for being so clear and explicit. If you think that safety trumps the need for love, for friends and for living a complete life in general, then it’s virtuous to say that outright, so no one is confused.
In case you think she doesn’t mean that (or that others don’t mean that), no really, she does:
Ellie Murray does not believe that school is terrible, so she is simply saying that the claimed benefits of school are not important relative to the marginal impact of schools on Covid-19.
That reply was one voice in a chorus, as the replies are what you’d expect and rather fun to read through. Nate Silver sums this up well:
There was also a side debate over whether school is the future of our children and our children are our future, or the alternate hypothesis that children are also people and school is a prison and dystopian nightmare. The thing to remember is that this view is not driving most of the anti-school rhetoric. Such folks mostly think school is vital to children, but don’t care.
Yes, I was aware, and I’d rank my concerns regarding school in this order:
1. Kids going to school. School is a prison and a dystopian nightmare.
2. Kids not going to school. Remote school as implemented is somehow so much worse.
3. Getting Covid. I’d rather not get Covid.
But yeah, we can beat that take this week, because The Times Is On It:
Technically I’m sure it is true that masks represent an ‘educational opportunity’ in the sense that whenever anything happens you can use it as an opportunity to learn. The main such opportunity is to learn about those making the decisions.
In Other News
Have you tried using a market clearing price? No? Well, then.
I strongly agree that lawn care is a terrible use of water when there’s a limited supply, but the way we figure such things out in a sane world is we charge more money for water and if desired or needed give people a credit to avoid distributional concerns. Yes, I know, don’t make you tap the sign, go write on the blackboard, etc etc.
Biden still hasn’t appointed anyone to head the FDA, but at least he floated a name. The name is someone who said that living past 75 is a waste, but hey, pobody’s nerfect, right?
Obama literally hired a doctor to ensure everyone was vaccinated and safe and his party was still a huge issue, so now everyone in Washington is afraid to throw parties. Also for other reasons, I’d imagine, but those are beyond scope.
A calculation of whether the benefits of exercise in a gym exceed the risk of Covid finds that it very much does in her case. Often the choice really is between going to the gym or not exercising. Her calculation did depend on the lack of other people in the gym, however, so if the gym had sufficiently more people in a tight space the calculation could have gone the other way. She has a spreadsheet you can play around with if you’d like to explore this more.
Denmark gives up on the mystical ‘herd immunity.’ Usual misunderstandings here but I suppose this is better than the practical alternative of not giving up.
Thread reminding us that the control system has many facets, and they work together at least additively and often multiplicatively. You don’t need any one factor to control the virus or get you mystical ‘herd immunity’ on its own, you care about the combined effects.
Zeynep reminds us that plastic barriers are likely to be net harmful because they interfere with airflow. I got this one wrong early on, same as everyone else. The key is to update.
Monoclonal antibodies are free and effective against Covid, but few people are getting them (WaPo).
Germany moves to using hospitalizations as the primary measure of whether Covid is under control. This makes sense for policy, since what matters is whether the hospitals are overwhelmed and whether people are sick and dying.
Australian stockpile of AZ continues to grow, over 6 million doses (via MR).
Australians who are vaccinated overseas can register that vaccination, but only if the vaccine was approved in Australia at the time of vaccination. Which was not a rapid process.
I like how transparently the ‘at the time’ restriction is purely harmful. No fig leaf.
Also via MR, due to continued Covid restrictions down under, they shot dogs due to be rescued by a shelter to prevent shelter workers from travelling to pick them up. Meanwhile, they’ve uncovered people getting fresh air. It’s becoming an epidemic of fresh air getting after 200 days in lockdown.
But good news, if you’re fully vaccinated, you’re about to get new freedoms!
So, how do you think Australia did, all things considered?
Poison control is lonely work. Not many people call, and when they do, it’s usually something like ‘I took prophylactic Ivermectin that was intended for animals, thinking that was a good idea.’ We have some news.
General warning for anyone who needs it: Animal formulations of a given medicine are often different from the human version, and could be highly dangerous to humans. Do not perform this regulatory arbitrage assuming that the two things are the same.
They didn’t know the two things were different, and it’s a perfectly reasonable hypothesis that a thing could be vastly cheaper and easier to get if you can do an end run around the FDA, or around pharmacists earning praise for refusing to fill prescriptions for Ivermectin. This simply was not one of those times.
Also note the numbers. One individual was told to ‘seek further evaluation,’ and 85% of the cases were mild. The definition of ‘mild’ can be whatever people want it to be, but if it’s ‘no need to seek further evaluation’ it seems like there were six poison control calls out of eight total calls? I’m guessing it’s higher than that, and please if you decide to take Ivermectin make sure you’re sourcing and dosing it safely and properly, but this isn’t an epidemic of cases, and this was going around enough it felt important to point that out, even if I’m highly skeptical that Ivermectin does anything useful.
Inessential but fun case of an elected official saying very much the wrong thing.
Not Covid
Remember that if you own an Oculus, and your Facebook account gets suspended because of reasons (such as saying facts that contradict local health authorities) you will lose all your games and save data permanently, no refunds, no fixes. Might want to consider a secondary Facebook account for this purpose, unless you’re using your Oculus to recover your Facebook account, which is also a thing.
In Scott’s recent post, he reckons with his struggles to not make mistakes despite the need to quickly produce a lot of content. I have this problem as well, and last week failed to check something I should have checked. My solution so far has essentially been to state my epistemic confidence in my statements, and to carefully put conditionals on statements that I haven’t verified. So last week I wrote “I am not aware of any X” and it turns out there are a bunch of common Xs and I really should have known that already and also should have checked even though I didn’t know, but I did know I hadn’t checked so I wrote I wasn’t aware. I ended up editing the paragraph (on pregnancy) a few times. There wasn’t anything false when I wrote it but once it was pointed out it obviously needed to be fixed quickly. This occasionally happens, also there are occasional typos, broken links and other stupid mistakes, and occasionally one of the sources turns out to be fake, as was the case with a British account a while back.
Discuss
### Pair Debugging and applied rationality
August 26, 2021 - 14:37
Published on August 26, 2021 11:37 AM GMT
Pair debugging is a staple of applied rationality. Humans aren't really all that smart on their own. Our intelligence is distributed. You don't get very far by thinking by yourself.
Before we start, I'll teach a rationality technique. These techniques can be used explicitly as a structured conversation, or they can be applied freestyle whenever.
Minimum prep: bring a bugs list! This is a list of n things that you would like to improve about your life. Here are some examples:
- "I often don't fall asleep quickly enough on evenings before workdays"
- "I want to work for an early phase startup for my next job, but I don't know how to get in contact with startups in that phase"
- "Despite being poly I seem to have a mental block around flirting even when there is clear interest from the other side"
- etc
Optional prep: read some of the CFAR handbook at https://www.rationality.org/resources/handbook
Rough schedule:
14:00 - welcome
± 15:00 - meditation session
± 15:15 - lecture & practice
± 16:30 - done
± 19:00 - grab some dinner together
± 22:00 - closing
Come whenever, but try not to come during the meditation.
Discuss
### Bangalore, India – ACX Meetups Everywhere 2021
August 26, 2021 - 13:20
Published on August 26, 2021 12:56 AM GMT
This year's ACX Meetup everywhere in Bangalore, India.
Location: Cubbon Park band stand – ///firework.corkscrew.shelter
Contact: w0074@outlook.com
Discuss
### Learning can be deciding
August 25, 2021 - 19:22
Published on August 25, 2021 4:22 PM GMT
Consider the following questions:
A: "Will it rain in Paris at midnight tonight?"
B: "Will I blink in the next 5 seconds?"
If I wish to find the answer to question A, I might need to look at the conditions of the atmosphere over Paris and then use knowledge of how the weather evolves. I might need readings of temperature, wind speed, humidity, and pressure, and I might also need complex mathematical models of the weather.
If I wish to find the right answer to question B, I can just decide to blink now.
There is an activity that I would roughly describe as "figuring out what the world is like." This activity is to be understood as a collective endeavour, not a personal one. Simple acts of information gathering, like opening a box to see what is inside, are examples of this activity. More impressive examples are Science and History, systematic disciplines that evolve across generations and produce large bodies of knowledge. The end goal of this activity might be a theory of everything.
I often hear people (particularly academics) speak as if figuring out what the world is like must always look like the process described above for answering Question A: science-y, focused on gathering data, distilling models or regularities, and applying them to particular cases.
However, Question B illustrates that in some cases, figuring out what the world is like simply consists in deciding, or at least it includes a decision. This occurs when we pose questions about ourselves or about changes in the environment that occur as a consequence of our actions. For example: what will the temperature of this room be in the next 10 minutes? That depends on whether I will open the windows, turn on the AC, or set the furniture on fire.
Discuss
### (apologies for Alignment Forum server outage last night)
August 25, 2021 - 17:45
Published on August 25, 2021 2:45 PM GMT
The Alignment Forum was down between approx 1:30AM and 7:00AM PDT last night due to what seems to be a memory leak issue (postmortem ongoing). We're setting up some additional monitoring to ensure this doesn't happen again.
Apologies for any inconvenience experienced!
Discuss
### How to turn money into AI safety?
August 25, 2021 - 13:49
Published on August 25, 2021 10:49 AM GMT
I
I have heard through the grapevine that we seem to be constrained - there's money that donors and organizations might be happy to spend on AI safety work, but aren't spending because of certain bottlenecks - perhaps talent, training, vetting, research programs, or research groups are in short supply. What would the world look like if we'd widened some of those bottlenecks, and what are local actions that people can do to move in that direction? I'm not an expert on either the funding or the organizational side, but hopefully I can leverage Cunningham's law and get some people more in the know to reply in the comments.
Of the bottlenecks I listed above, I am going to mostly ignore talent. IMO, talented people aren't the bottleneck right now, and the other problems we have are more interesting. We need to be able to train people in the details of an area of cutting-edge research. We need a larger number of research groups that can employ those people to work on specific agendas. And perhaps trickiest, we need to do this within a network of reputation and vetting that makes it possible to selectively spend money on good research without warping or stifling the very research it's trying to select for.
In short, if we want to spend money, we can't just hope that highly-credentialed, high-status researchers with obviously-fundable research will arise by spontaneous generation. We need to scale up the infrastructure. I'll start by taking the perspective of individuals trying to work on AI safety - how can we make it easier for them to do good work and get paid?
There are a series of bottlenecks in the pipeline from interested amateur to salaried professional. From the individual entrant's perspective, they have to start with learning and credentialing. The "obvious path" of training to do AI safety research looks like getting a bachelor's or PhD in public policy, philosophy, computer science, or math (for which there are now fellowships, which is great), trying to focus your work towards AI safety, and doing a lot of self-study on the side. These programs are often an imprecise fit for the training we want - we'd like there to be graduate-level classes that students can take that cover important material in AI policy, technical alignment research, the philosophy of value learning, etc.
Opportunity 1: Develop course materials and possibly textbooks for teaching courses related to AI safety. This is already happening somewhat. Encourage other departments and professors to offer courses covering these topics.
Even if we influence some parts of academia, we may still have a bottleneck where there aren't enough departments and professors who can guide and support students focusing on AI safety topics. This is especially relevant if we want to start training people fast, as in six months from now. To bridge this gap it would be nice to have training programs, admitting people with bachelor's- or master's-level skills, at organizations doing active AI safety research. Like a three-way cross between internship, grad school, and AI Safety Camp. The intent is not just to have people learn and do work, but also to help them produce credible signals of their knowledge and skills, over a timespan of 2-5 years. Not just being author number 9 out of 18, but having output that they are primarily responsible for. The necessity of producing credible signals of skill makes a lot of sense when we look at the problem from the funders' perspective later.
Opportunity 2: Expand programs located at existing research organizations that fulfill training and signalling roles. This would require staff for admissions, support, and administration.
This would also provide an opportunity for people who haven't taken the "obvious path" through academia, of which there are many in the AI safety community, who otherwise would have to create their own signalling mechanisms. Thus it would be a bad outcome if all these internships got filled up with people with ordinary academic credentials and no "weirdness points," as admissions incentives might push towards. Strong admissions risk-aversion may also indicate that we have lots of talent, and not enough spots (more dakka required).
Such internships would take nontrivial effort and administrative resources - they're a negative for the research output of the individuals who run them. To align the incentives to make them happen, we'd want top-down funding intended for this activity. This may be complicated by the fact that a lot of research happens within corporations, e.g. at DeepMind. But if people actually try, I suspect there's some way to use money to expand training+signalling internships at corporate centers of AI safety research.
Suppose that we blow open that bottleneck, and we have a bunch of people with some knowledge of cutting-edge research, and credible signals that they can do AI safety work. Where do they go?
Right now there are only a small number of organizations devoted to AI safety research, all with their own idiosyncrasies, and all accepting only a small number of new people. And yet we want most research to happen in organizations rather than alone: Communicating with peers is a good source of ideas. Many projects require the efforts or skillsets of multiple people working together. Organizations can supply hardware, administrative support, or other expertise to allow research to go smoother.
Opportunity 3: Expand the size and scope of existing organizations, perhaps in a hierarchical structure. Can't be done indefinitely (will come back to this), but I don't think we're near the limits.
In addition to increasing the size of existing organizations, we could also found new groups altogether. I won't write that one down yet, because it has some additional complications. Complications that are best explored from a different perspective.
II
If you're a grant-making organization, selectivity is everything. Even if you want to spend more money, if you offer money for AI safety research but have no selection process, a whole bushel of people are going to show up asking for completely pointless grants, and your money will be wasted. But it's hard to filter for people and groups who are going to do useful AI safety research.
So you develop a process. You look at the grantee's credentials and awards. You read their previous work and try to see if it's any good. You ask outside experts for a second opinion, both on the work and on the grantee themselves. Et cetera. This is all a totally normal response to the need to spend limited resources in an uncertain world. But it is a lot of work, and can often end up incentivizing picking "safe bets."
Now let's come back to the unanswered problem of increasing the number of research organizations. In this environment, how does that happen? The fledgling organization would need credentials, previous work, and reputation with high-status experts before ever receiving a grant. The solution is obvious: just have a central group of founders with credentials, work, and reputation ("cred" for short) already attached to them.
Opportunity 4: Entice people who have cred to found new organizations that can get grants and thus increase the amount of money being spent doing work.
This suggests that the number of organizations can only grow exponentially, through a life cycle where researchers join a growing organization, do work, gain cred, and then bud off to form a new group. Is that really necessary, though? What if a certain niche just obviously needs to be filled - can you (assuming you're Joe Schmo with no cred) found an organization to fill it? No, you probably cannot. You at least need some cred - though we can think about pushing the limits later. Grant-making organizations get a bunch of bad requests all the time, and they shouldn't just fund all of them that promise to fill some niche. There are certainly ways to signal that you will do a good job spending grant money even if you utterly lack cred, but those signals might take a lot of effort for grant-making organizations to interpret and compare to other grant opportunities, which brings us to the "vetting" bottleneck mentioned at the start of the post. Being vetting-constrained means that grant-making organizations don't have the institutional capability to comb through all the signals you might be trying to send, nor can they do detailed follow-up on each funded project sufficient to keep the principal-agent problem in check. So they don't fund Joe Schmo.
But if grant-making orgs are vetting-constrained, why can't they just grow? Or if they want to give more money and the number of research organizations with cred is limited, why can't those grantees just grow arbitrarily?
Both of these problems are actually pretty similar to the problem of growing the number of organizations. When you hire a new person, they need supervision and mentoring from a person with trust and know-how within your organization or else they're probably going to mess up, unless they already have cred. This limits how quickly organizations can scale. Thus we can't just wait until research organizations are most needed to grow them - if we want more growth in the future we need growth now.
Opportunity 5: Write a blog post urging established organizations to actually try to grow (in a reasonable manner), because their intrinsic growth rate is an important limiting factor in turning money into AI safety.
All of the above has been in the regime of weak vetting. What would change if we made grant-makers' vetting capabilities very strong? My mental image of strong vetting is grant-makers being able to have a long conversation with an applicant, every day for a week, rather than a 1-hour interview. Or being able to spend four days of work evaluating the feasibility of a project proposal, and coming back to the proposer with a list of suggestions to talk over. Or having the resources to follow up on how your money is being spent on a weekly basis, with a trusted person available to help the grantee or step in if things aren't going to plan. If this kind of power was used for good, it would open up the ability to fund good projects that previously would have been lost in the noise (though if used for ill it could be used to gatekeep for existing interests). This would decrease the reliance on cred and other signals, and increase the possible growth rate, closer to the limits from "talent" growth.
An organization capable of doing this level of vetting blurs the line between a grant-making organization and a centralized research hub. In fact, this fits into a picture where research organizations have stronger vetting capabilities for individuals than grant-making organizations do for research organizations. In a growing field, we might expect to see a lot of intriguing but hard-to-evaluate research take place as part of organizations but not get independently funded.
Strong vetting would be impressive, but it might not be as cost-effective as just lowering standards, particularly for smaller grants. It's like a stock portfolio - it's fine to invest in lots of things that individually have high variance so long as they're uncorrelated. But a major factor in how low your standards can be is how well weak vetting works at separating genuine applicants from frauds. I don't know much about this, so I'll leave this topic to others.
The arbitrary growth of research organizations also raises some questions about research agendas (in the sense of a single, cohesive vision). A common pattern of thought is that if we have more organizations, and established organizations have different teams of people working under their umbrellas, then all these groups of people need different things to do, and that might be a bottleneck. The thought is that what's best is when groups are working towards a single vision, articulated by the leader, and that if we don't have enough visions we shouldn't found more organizations.
I think this picture makes a lot of sense for engineering problems, but not a lot of sense for blue-sky research. Look at the established research organizations - FHI, MIRI, etc. - they have a lot of people working on a lot of different things. What's important for a research group is trust and synergy; the "top-down vision" model is just a special case of synergy that arises when the problem is easily broken into hierarchical parts and we need high levels of interoperability, like an engineering problem. We're not at that stage yet with AI safety or even many of its subproblems, so we shouldn't limit ourselves to organizations with single cohesive visions.
III
Let's flip the script one last time - if you don't have enough cred to do whatever you want, but you think we need more organizations doing AI safety work, is there some special type you can found? I think the answer is yes.
The basic ingredient is something that's both easy to understand and easy to verify. I'm staying at the EA Hotel right now, so it's the example that comes to mind. The concept can be explained in about 10 seconds (it's a hotel that hosts people working on EA causes), and if you want me to send you some pictures I can just as quickly verify that (wonder of wonders) there is a hotel full of EAs here. But the day-to-day work of administrating the hotel is still nontrivial, and requires a small team funded by grant money.
This is the sort of organization that is potentially foundable even without much cred - you promise something very straightforward, and then you deliver that thing quickly, and the value comes from its maintenance or continuation. When I put it that way, now maybe it sounds more like Our World In Data's covid stats. Or like 80kh's advising services. Or like organizations promising various meta-level analyses, intended for easy consumption and evaluation by the grant-makers themselves.
Opportunity 6: If lacking cred, found new organizations with really, extremely legible objectives.
The organization-level corollary of this is that organizations can spend money faster if they spend it on extremely legible stuff (goods and services) rather than new hires. But as they say, sometimes things that are expensive are worse. Overall this post has been very crassly focusing on what can get funded, not what should get funded, but I can be pretty confident that researchers give a lot more bang per buck than a bigger facilities budget. Though perhaps this won't always be true; maybe in the future important problems will get solved, reducing researcher importance, while demand for compute balloons, increasing costs.
I think I can afford to be this crass because I trust the readers of this post to try to do good things. The current distribution of AI safety research is pretty satisfactory to me given what I perceive to be the constraints, we just need more. It turned out that when I wrote this post about the dynamics of more, I didn't need to say much about the content of the research. This isn't to say I don't have hot takes, but my takes will have to stay hot for another day.
Thanks to Jason Green-Lowe, Guillaume Corlouer, and Heye Groß for feedback and discussion at CEEALAR.
Discuss
### Sam Altman at the AstralCodexTen Online Meetup
August 25, 2021 - 11:07
Published on August 25, 2021 8:07 AM GMT
Sam Altman, CEO of OpenAI, is joining us again. Almost a year ago we had a great meeting with Sam, with record attendance!
Our meetup on Sunday, September 5, 2021, will start off with a Q&A. You can ask questions on AI, AGI, OpenAI, or any other topics.
After that, we will socialize on video-chat rooms.
The meetup starts 10:30 AM Pacific Daylight Time, 17:30 UTC, 20:30 Israel Daylight Time.
Please register here, and we'll send you an invitation closer to the time.
Discuss
https://atakua.org/w/spam.html

## SPAM
The very original SPAM sketch: https://www.youtube.com/watch?v=g8huXkSaL7o
SPAM on the Wikipedia, linking the sketch, the food and the communications phenomenon: https://en.wikipedia.org/wiki/Email_spam
https://ai.stackexchange.com/questions/2170/in-2016-can-1000-00-buy-enough-operations-per-second-to-be-approximately-equal

# In 2016, can $1000.00 buy enough operations per second to be approximately equal to the computational power of a human brain?

In The Age of Spiritual Machines (1999), Ray Kurzweil predicted that in 2009, a $1000 computing device would be able to perform a trillion operations per second. Additionally, he claimed that in 2019, a $1000 computing device would be approximately equal to the computational ability of the human brain (due to Moore's Law and exponential growth).
Did Kurzweil's first prediction come true? Are we on pace for his second prediction to come true? If not, how many years off are we?
• Who says that the problem is quantitative? – pasaba por aqui Aug 11 '18 at 9:17
## 2 Answers
The development of CPUs didn't quite keep up with Kurzweil's predictions. But if you also allow for GPUs, his prediction for 2009 is pretty accurate.
I think Moore's law slowed down recently and has now been pretty much abandoned by the industry. How much that will affect the 2019 prediction remains to be seen. Maybe the industry will hit its stride again with non-silicon based chips, maybe not.
And of course whether hitting Kurzweil's estimate of the computing power of the human brain will make an appreciable difference for the development of AGI is another question altogether.
• AGI? Artificially Generated Intelligence, or...? – DJG Oct 17 '16 at 21:07
• Artificial General Intelligence – Gottfried William Oct 19 '16 at 23:23
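As a rough back-of-the-envelope on the second prediction (the figures here are ballpark assumptions, not numbers taken from the thread): Kurzweil's book pegs the brain at roughly $2 \times 10^{16}$ calculations per second, while a circa-2016 consumer GPU costing on the order of a thousand dollars delivers roughly $10^{13}$ FLOPS. The gap is then

$\frac{2 \times 10^{16}}{10^{13}} \approx 2000 \approx 2^{11},$

about eleven further price-performance doublings, or very roughly 15-20 years at historical doubling rates - which suggests the 2019 target was optimistic by this crude accounting.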
1) Yes, we do have computing systems that fall into the teraFLOPS range.
2) The human brain is a biological system and saying it has some sort of FLOPS ability is just plain dumb because there is no way to take a human brain and measure its FLOPS. You could say "hey, by looking at the neurons' activity using fMRI we can reach some sort of approximation" but comparing the result of this approach with the way FLOPS are measured in computers will be comparing apples with oranges, which again is dumb.
• Why don't we measure it in energy consumed instead, with some sort of efficiency factor that denotes how much of the heat is being generated by useful computation (as opposed to supportive biological processes)? – DJG Oct 17 '16 at 7:02
• Heat is just another factor to optimise. You want to maximise the FLOPS and minimise heat generation (aka energy consumption of the system). People in high-performance computing generally focus first on maximising FLOPS, as they want their algorithms to run fast, and later focus on heat depending on the requirements. – Ankur Oct 17 '16 at 7:06
• @DJG it's not a currently useful measure, because the "heat being generated by useful computation" (e.g. Landauer limit) is many orders of magnitude smaller than the waste heat of even the most efficient computing devices that we can build. For both modern electronic computers and biological neurons, despite their enormous efficiency differences, they still effectively are 100% waste heat, spending millions of times more power than theoretically necessary for that computation. – Peteris Oct 24 '16 at 19:15
• Perhaps this indicates a flaw in or incompletion of the theory. – DJG Aug 11 '18 at 12:04
• @DJG Given a list of 100 numbers, who would be faster to compute their sum, a computer or a human? This simple example shows that what the human brain does and what computers do are completely different things. We built computers to perform a given sequence of arithmetic and logic operations as fast as possible because human brains are very very slow at this task. – Ankur Aug 11 '18 at 13:08
https://besu.hyperledger.org/en/latest/public-networks/reference/evm-tool/
Date of last update: September 20, 2022
# EVM tool reference
Options for running:
## Run options
The first mode of the EVM tool runs arbitrary EVM code and is invoked without an extra subcommand. Command line options specify the code and other contextual information.
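For example, a minimal run combining the sample values documented below might look like the following. This is only a sketch: it assumes the tool is invoked as evm, mirroring the style of the state test examples at the end of this page, and it simply executes the sample bytecode with a JSON trace.
evm --code=5B600080808060045AFA50600056 --gas=100000000 --json=true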
### code
--code=<code as hex string>
--code=5B600080808060045AFA50600056
The code to be executed, in compiled hex code form.
No default value: execution fails if this is not set.
### gas
--gas=<gas as a decimal integer>
--gas=100000000
Amount of gas to make available to the EVM. The default value is 10 Billion, an incredibly large number unlikely to be seen in any production blockchain.
### price
--price=<gas price in GWei as a decimal integer>
--price=10
Price of gas in GWei. The default is zero. If set to a non-zero value, the sender account must have enough value to cover the gas fees.
### sender
--sender=<address>
--sender=0xfe3b557e8fb62b89f4916b721be55ceb828dbd73
The account the invocation is sent from. The specified account must exist in the world state, which unless specified by --genesis or --prestate is the set of accounts used for testing.
### receiver
--receiver=<address>
--receiver=0x588108d3eab34e94484d7cda5a1d31804ca96fe7
The account the invocation is sent to. The specified account does not need to exist.
### input
--input=<hex binary>
--input=9064129300000000000000000000000000000000000000000000000000000000
The data passed into the call. Corresponds to the data field of the transaction and is returned by the CALLDATA and related opcodes.
### value
--value=<Wei in decimal>
--value=1000000000000000000
The value of Ether attached to this transaction. For operations that query the value or transfer it to other accounts this is the amount that is available. The amount is not reduced to cover intrinsic cost and gas fees.
### json
--json=<boolean>
--json=true
Provide an operation-by-operation trace of the command in JSON when set to true.
### nomemory
--nomemory=<boolean>
--nomemory=true
By default, when tracing operations the memory is traced for each operation. For memory heavy scripts, setting this option may reduce the volume of JSON output.
### genesis
--genesis=<path>
--genesis=/opt/besu/genesis.json
The Besu Genesis file to use when evaluating the EVM. Most useful are the alloc items that set up accounts and their stored memory states. For a complete description of this file see Genesis file items.
--prestate is a deprecated alternative option name.
### chain
--chain=<mainnet|goerli|sepolia|dev|classic|mordor|kotti|astor>
--chain=goerli
The well-known network genesis file to use when evaluating the EVM. These values are an alternative to the --genesis option for well known networks.
### repeat
--repeat=<integer>
--repeat=1000
Number of times to repeat the contract before gathering timing information. This is useful when benchmarking EVM operations.
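For instance, again assuming an evm invocation as in the examples at the end of this page, benchmarking the sample bytecode over a thousand iterations might look like:
evm --code=5B600080808060045AFA50600056 --gas=100000000 --repeat=1000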
### revert-reason-enabled
--revert-reason-enabled=<boolean>
--revert-reason-enabled=true
If enabled, the JSON tracing includes the reason included in REVERT operations.
### key-value-storage
--key-value-storage=<memory|rocksdb>
--key-value-storage=rocksdb
Kind of key value storage to use.
Occasionally it may be useful to execute isolated EVM calls in context of an actual world state. The default is memory, which executes the call only in context of the world provided by --genesis or --network at block zero. When set to rocksdb and combined with --data-path, --block-number, and --genesis a Besu node that is not currently running can be used to provide the appropriate world state for a transaction. Useful when evaluating consensus failures.
### data-path
--data-path=<path>
--data-path=/opt/besu/data
When using rocksdb for key-value-storage, specifies the location of the database on disk.
### block-number
--block-number=<integer>
--block-number=10000000
The block number to evaluate the code against. Used to ensure that the EVM is evaluating the code against the correct fork, or to specify the specific world state when running with rocksdb for key-value-storage.
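Putting --key-value-storage, --data-path, and --block-number together, a sketch of replaying a call against a stopped node's on-disk state might look like the following. The paths, block number, and addresses are just the sample values from this page; substitute the real genesis file, data directory, block, and accounts for the transaction you want to reproduce.
evm --key-value-storage=rocksdb --data-path=/opt/besu/data --genesis=/opt/besu/genesis.json --block-number=10000000 --sender=0xfe3b557e8fb62b89f4916b721be55ceb828dbd73 --receiver=0x588108d3eab34e94484d7cda5a1d31804ca96fe7 --code=5B600080808060045AFA50600056 --json=true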
## State test options
The state-test subcommand allows the Ethereum state tests to be evaluated. Most of the options from EVM execution do not apply.
### Applicable options
#### json
--json=<boolean>
--json=true
Provide an operation by operation trace of the command in JSON when set to true. Set to true for EVM Lab Fuzzing. Whether or not json is set, a summary JSON object is printed to standard output for each state test executed.
#### nomemory
--nomemory=<boolean>
--nomemory=true
By default, when tracing operations the memory is traced for each operation. For memory heavy scripts, setting this option to true may reduce the volume of JSON output.
### Using command arguments
If you use command arguments, you can list one or more state tests. All the state tests are evaluated in the order they are specified.
docker run --rm -v ${PWD}:/opt/referencetests hyperledger/besu-evmtool:develop --json state-test /opt/referencetests/GeneralStateTests/stExample/add11.json
evm --json state-test stExample/add11.json
### Using standard input
If no reference tests are passed in using the command line, the EVM Tool loads one complete JSON object from standard input and executes that state test.
docker run --rm -i hyperledger/besu-evmtool:develop --json state-test < stExample/add11.json
evm --json state-test < stExample/add11.json
http://mathhelpforum.com/discrete-math/124316-pigeonhole-principle-question.html

Thread: Pigeonhole principle question
1. Pigeonhole principle question
Any power of 2 is even and any power of 5 is a multiple of 5. Let p be a prime with p not equal to 2,5. show that among the numbers p^1,p^2,p^3,....,p^40 at least one of them has 01 as its last two digits.
2. Focus on the numbers mod 100 (i.e., only the last 2 digits). There are 100 total, but you can remove 50 of these since they are even, and remove 10 because they end in 5. Your powers of primes will never end in these values. The remaining 40 roughly form a group in that they will multiply into themselves. Your 40 powers either cover this group of 40 evenly, so one of them will cover 01, or if there is a pattern, you can find a power acting as the identity, i.e., 01.
3. Originally Posted by sbankica
Any power of 2 is even and any power of 5 is a multiple of 5. Let p be a prime with p not equal to 2,5. show that among the numbers p^1,p^2,p^3,....,p^40 at least one of them has 01 as its last two digits.
Consider $p^1, p^2, \dots , p^{40}$ modulo 100. Since p is not equal to 2 or 5, there are 100 - (100/2) - (100/5) + (100/10) = 40 possible congruence classes for the powers of p.
If any $p^n \text { where } 1 \leq n \leq 40$ is congruent to 1 modulo 100 we are done.
If not, then there are 39 possible congruence classes and 40 powers of p, so one of the classes must contain at least two powers of p, say $p^n \equiv p^m \pmod{100}$ with $n > m$. Then $p^{n-m} \equiv 1 \pmod{100}$, and since $1 \leq n-m \leq 39$, this power is among the original forty.
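For a concrete illustration of the conclusion, take $p = 3$ and reduce successive powers modulo 100: $3^5 \equiv 43$, so $3^{10} \equiv 43^2 \equiv 49$ and $3^{20} \equiv 49^2 \equiv 1 \pmod{100}$. Thus $3^{20}$ already ends in the digits 01, well within the first forty powers.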
http://www.bladeandepsilon.com/home/fanfiction/serialised-fanfiction/innocence/

# Innocence
"Evil is a concept created by society to protect the weak from the
strong."
They say that in the last moments before death your life flashes before
your eyes. I never really believed that. Now I know differently. I can
feel it coming, and I know there is nothing I can do to stop it.
I watch them draw together, their power combining in an
unstoppable hurricane, and I know they are going to kill me. I rage, it
is all I can do, it is all I know. Then I try to fight back, a last
desperate gamble in a shield to protect myself but it is pointless. I
hate them. All of them.
The bandana-adorned brat called Ryouga I hardly know, but I hate him.
The boy whose soul I captured to complete the sphere, Ranma, I hate him
too. I know more about him. I know he loves, and I know it is this love
that drives him to destroy me. I can not understand that. Love is
foreign. Only hate matters, only hate is -real-. Thanks to them it is
all I feel. I hate them for allowing me to hate them.
Gaeld, the dragon, there is nothing I would rather see than
his miserable traitorous hide draped across the battlements of the
Keep. I hate the wolf too, whose interference nearly destroyed me, whose
sacrifice created my brother. How could he? How could he create
something that was me, that felt and that could love, yet leave me
trapped forever with only my hatred? And my brother is no better, a
martyr, he knows me and I know him better than any two people could know
each other. I can understand him, and I do not hate him. I loathe him.
He is everything I wanted to be, he is a mockery of my very existence.
He shows me exactly what it is that has happened to me and I feel the
darkness consume me when I think of it.
Blade, the one who comes closest to me. He is a pit of hatred as
well. He has mastered the Iron Blow of the True Dragon, the ultimate
expression of the inner darkness. Yet he has love, in all his darkness
there glows a light that drives the madness from his mind and gives him
the one thing I can never have. Hope. That is what Blade is, he is the
hope I never had, someone so consumed with darkness that they seem
beyond redemption. Yet the love of a woman... no, TWO women gives him
hope. I hate him too.
The last one is Heavensrun, a girl who thinks she's a man.
Torn between two loves... oh yes, I know her, perhaps better than she
knows herself. Know your enemy and all that. How dare she mock me,
complaining about having too much of the one thing I will never have.
She allows it to eat away at her, to feed her rage that fuels her power.
It would be a pleasure to put her out of her misery.
I hate them all... and for a moment, just a moment the shield holds. I
rage more, the power of my defense grows, but then it begins to shatter.
My hatred is no match for their power. It is at this point I know beyond
a shadow of a doubt that it is over.
With this I realize one thing, that there is one person I hate
more than any other in this room. That person is myself. I hate what I
have become because that is all I can do. I remember all my promises,
all my intentions, all my failures... my hatred crumbles away to
nothing. It leaves emptiness, but not the familiar emptiness of
non-feeling. No, this is a different kind altogether, this one hurts.
Hurts more than the fact that their power has struck me like the wrath
of the cosmos itself. Hurts more than the reality distortions that I
myself created flooding my body as water floods past a broken dam. Hurts
more than feeling my body being torn apart by this, feeling my atoms
scatter across the infinite that is the multiverse. In those last
seconds they mistake my cry for that of physical torment. But they do
not know the pain that fills me, and the phrase that burns across my
eroding consciousness like a comet. The last thing I will ever think...
If only... if only it could have been different....
Journey Into Darkness
The Tale of Epsilon
Chapter 1
Innocence
A GRITfic by Aaron Peori
-*-
I first became aware in a house. It was not a large house, just four
rooms and only one floor. By that era's standards it was almost a
mansion. The front room was the largest, about thirty paces
across and fifteen by the side. In one corner sat a chair, little more
than a collection of flexible wooden logs covered over with the leathery
hide of some animal I never did learn the name of. It brooded in the
corner, whether because of the fact that the light of the fireplace that
was within a man's reach left it perpetually in shadows or some strange
aftereffect of the man who sat there I could not say. Above the
aforementioned fireplace gleamed two swords, one was iron, the other
bronze. They were my father's, he had used them in the war. At least
that was what I gathered from his drunken ramblings.
Opposite the fireplace from his chair was the pile of straw
I called my bed. In the center of the room was a table, made of oak or
some other large tree, just a trunk cut thin and propped up on four
rickety logs.
In the back of the room two doors led to my father's room and the
kitchen. The next door, found directly opposite the fireplace in the far
wall, was my mother's room. That room was sparse, a single bed (actually
little more than a collection of reeds in a sack on a frame of bent
wood) and a window covered in canvas to keep the cold out. Not that she
would have noticed, in fact in her state she probably wouldn't have
noticed anything. Jupiter knows we tried everything.
You see my mother was in what modern doctors would have called
a coma. They probably would have been unable to figure out why, nor
could I actually. And I had learned the hard way that asking my father
would get me nowhere. So I accepted it, like I accepted all the other
constants in my life.
It was my job to look after her, and I would spend many hours
cleaning her, brushing her string like blonde-white hair from her
closed eyes, spoon feeding her mush and forcing her to swallow. Her skin
was pale, almost white, and tended to accumulate sweat as if she fought
a battle as terrible, or more so than, the ones my father alluded to in
those not-quite-rare times when he'd ramble into his wineskin.
Sometimes, when my father was in a strange mood, he would lock himself
in her room. I could hear his hoarse sobs all throughout the little
shack, punctuated now and then by his fierce moans of loss.
I didn't know what loss was then, that would be a lesson for
another time, nor did I know what the strange affliction was that caused
tears to fall from my father's eyes. Somehow I knew that these tears were
not unnatural, and that in fact I was the strange one for not joining my
father. Somehow, I knew that if I had done that things would have turned
out differently.
For some reason I could not fathom in my young, albeit
precocious, mind, when my father returned from those fits and would
see my dry grey eyes the Fury would overtake him worse than at any other
time. The Fury was another constant of my young life, one that I can say
was perhaps the defining concept. It would come from nowhere, seemingly
at random, and consume my father. His face would twist and his eyes
would shine with some inner light that transformed him from a man into a
beast spawned of nightmares and forged by the stuff of madness.
Sometimes he would scream, his voice harsh and uncaring, blaming me for
my mother's condition, other times he would just go straight to the
beating.
Pain was one of my clearest memories, its bitter-sweet taste one I knew
well. My father knew about pain too, or more importantly how to inflict
it. His years in that nameless war had taught him that. I lost
consciousness so many times that I didn't bother to count, but other
times I was left battered and covered in my own blood, laying on the
floor in agony that lingered for hours. I'm not sure how often my father
nearly killed me, at those times when the Fury was at its worst. I
remember at least five times, each time I could feel my body giving in,
the darkness beginning to encroach on my vision. But not the cool
darkness, no, this one was hot and within it my mind screamed. Yet each
time he stopped short, each time he would halt himself and with an
effort repress the Fury. Whether he did this because of some deep buried
feeling for me or my mother, or to simply prolong the suffering I know
not. Even so, I don't think I would have survived those first ten years
As I said, I was a precocious child, and by the "tender" age of six I
had learned that moving with a fist was easier than resisting it. I
would, over the next several years, learn how to turn that to my
advantage, rolling with the Fury rather than facing it. I would also
adapt my body, strengthening it. This I did during my father's frequent
trips to town. Chopping wood and hauling it helped to build my stamina
and strength, and by the time I was ten I was already quite sturdy and
well built, if small. I had several advantages over other children, if
at the time I didn't know this. I healed faster for one, not much, but
enough to make the difference since the Fury would come more and more
often as I grew older.
The one time I did meet another child my age was when I was eight or
nine. My father was out, gone to retrieve more wine no doubt. I was
chopping wood in the back, the slow methodical motion of the hatchet
sending a creeping burning through my muscles, but leaving my mind free
to wander. I was unsure of what I was thinking, my young mind tended to
drift from topic to topic. Maybe I was thinking of how the shadow of the
sun would tell the time, or remembering the fact that the small leaf I
had once dropped had been lifted ever so little by the steam from a pot
of boiling water. Either way my thoughts went out the window when he
approached.
Another of my father's lessons was how to understand and be aware of my
environment. It was a necessity when you had to hear your father
approaching from a distance so that you could make yourself busy
elsewhere. Sometimes I would simply sit, late at night when my father
had fallen into drunken slumber, or during those days when my chores
were finished early, and concentrate on the world around me. The scamper
of a small rodent, the screech of the owl... but I digress.
I was aware of the child before he appeared, I suppose turning to face
his hiding place was not the best thing to do in the interest of
generating ease. I sensed a strange force from him, not one that my
father had ever projected. This was not something with which I was
familiar. In time, I would learn to recognize it as fear and curiosity.
"Come out," I remember saying, it being so rare back then to say
anything at all that I remember it quite clearly. I waited for a time,
unsure what to do. I considered going over, but somehow I knew that
would make whoever it was leave.
"I won't hurt you," I added as I placed the hatchet aside. I felt
whoever it was change, their mood shifted. Slowly the bushes in which he
had been hiding parted and he stepped out. He was taller than me, with
short hair almost cut to the scalp, of some brown hue.
"Hello," he said.
"Greetings," I replied. He seemed taken aback by my formality.
"You have weird eyes," he offered.
"I do?" I inquired.
"Uh-huh," he nodded, "They're all grey, see, my eyes are blue." He
pointed at the aforementioned orbs. "With black and white and junk.
Yours are just grey."
"Indeed..." I said.
"Whatcha doin'?"
"Chopping wood for the evening fire."
"Wanna play?"
"I can't."
"Why not?"
"I'm chopping wood for the evening fire."
"Oh."
There was a long silence, interrupted only by the distant chitter of
small animals.
"I better be going," he said, finally breaking the strange silence. "My
parents took me on a picnic, they'll be looking for me..."
"Farewell," I offered.
"You're not like other kids," he said and was gone.
"No." I replied into the air, "I'm not."
I returned to the wood pile.
-*-
I was sixteen when my mother died. The promise of my younger
years had fulfilled itself after my brief and early puberty. My puberty
had been unlike others, coming when I was young, as if my body had raced
to catch up with my mind. I was also not plagued by hormones, somehow my
lack of confusion over the changes that occurred to me in those years
would cause the Fury to consume my father more often.
Thankfully I was a sturdy youth, already as big as some men and nearly
as tough. Still it was not enough to protect me from the pain, from the
long hours spent lying in a puddle of my own blood, or from the cold
embrace of unconsciousness.
My own increasing strength seemed to be inverted by my mother. For
almost a year her health had been flagging, her face had grown tighter,
her hair thinner and her breathing more difficult. During that year my
father's Fury continued to grow, while his self-control lagged further
and further behind.
I was with her when she died. Going through my regular routine, as I
had for as long as I could remember. Then, as I wiped the sweat from her
brow I felt something inside her slip away. As it did her eyes snapped
open, her voice cracked out for the first time in all my life. Her words
were incoherent, her meaning lost forever, but I could repeat them
verbatim now, over three thousand years later, not a syllable off, a
tone incorrect or an octave different. Then she looked at me, her eyes
filled with something, a power filled her that I could not understand.
She smiled, a tear rolled down her cheek... and her eyes closed. Her
last breath was one filled with a calm and peace I had never seen
before. Her struggle was over.
I don't know how long I stood there. Maybe hours, maybe seconds. At
some point I decided to get to my other chores. No use wasting time on
it, less work to perform tomorrow.
When my father returned he was half dead from drink. His eyes
shined red and puffed out, his gait slow and infected with an odd
stumble. He had been like this a few times, and the Fury never took him
at times like these so I knew I was safe for the moment. He would drag
himself to his bed and sleep the night away. My father had grown fat
over the years. His former warrior's build was hidden underneath an inch
of flab that covered his entire body. His hair was usually unkempt,
covered in his own sweat and oils. His face was shaved only once every
few weeks. He smelled like the animal he had become. The few times his
friends had come over I had seen the looks on their faces, and felt the
throb that I would later identify as pity.
"Mother is dead," I said when he was half-way across the room. I was
sitting on my bed, reading a scroll that I had managed to get a man down
the road to give me for weeding his gardens. A fire burned high in the
fireplace, to keep away the chill. Not that I minded chill, but my
father preferred it warm and I knew better than to do something that
would make him unhappy, no matter how illogical it seemed.
It was then I was to learn my third lesson. Never take anything for
granted. For as I watched, my father sobered in an instant. His lumbering
gait was gone, the bloodshot eyes seemed to sink back into his face and
cool. It was an unusual sight.
"What?" he croaked, I felt the sorrow flood through him, stronger than
any time he had locked himself in with her.
"She died this morning," I replied, "Not long after you had left to
help that caravan pack."
His body seemed to stiffen, "You were with her weren't you?"
"Yes," I saw no reason to lie.
He came over to me, and I felt the Fury grow within him. Its
power was like I had never felt it before, it threatened to overwhelm me
and it was all I could do to think as he approached. Then he struck me,
and I was reminded that despite that fat he was still strong. He had to
work for his wine, and he worked with heavy lifting, manual labor and
other grunt tasks others didn't wish to do. Thus, under the fat was
still a layer of muscle, strong as any steel. I was lifted from my seat
and sent sprawling onto the floor. My cheek ached and I could taste
blood in my mouth from where a tooth had gouged the inside.
"You little monster..." he growled, The Fury grew still, but he had not
given into it. No, it seemed instead like he was only holding it back to
allow it to build. He was on me in a second. Lifting me from where I lay
he tossed me into the table. This time I rolled with it, and thanks to
this my spine did not snap like a twig when it collided with the round
table.
Unfortunately the table gave way, the legs fell out from under it
and I went falling with it. Again I lay on the table, trying to get the
strength to think. "You killed her!" His fist connected with my jaw, I
saw light and my vision blurred as I skidded across the table top and
collapsed to the floor again. "Little monster, you've been draining her
dry all this time haven't you! But now you're bigger, you had to eat it
all, well you won't get away with it."
I expected another blow but it didn't come, in fact, I had time to stop
the spinning and raise myself to a crouch. What I saw however, proved
that this might not have been a good thing. My father took down one of
the swords, the bronze one, and turned to face me, holding it with deadly
familiarity. Then the Fury overcame him at last. I watched my father die
too that day, I felt what was him, and I watched it slip away like I had
felt my mother leave. All that was left was the Fury. I knew with
certainty that this time he would not stop, this time he would not pull
himself short. This time, I would die.
He came at me with methodical measured strides, slow and easy. I had
nowhere to go, and he knew it. So I did the only thing I could, I
waited, now standing with feet spread to allow myself better balance.
The Fury was hot, but not hot like before. Instead it was the burning
cold of the Darkness that I had almost slipped into those few times.
Then he was upon me, sword flashing in the firelight like a shard of
the sun itself. I tried to roll with the blow but it didn't work. The
sword bit into my side, freeing a small river of blood that rolled down
my leg and pooled on the floor. I did not cry out, I did not fall. He
pulled back his sword, his face twisted beyond humanity and I felt the
low buzz of pleasure flow through the Fury, the pleasure he felt when he
would laugh with friends who pitied him. The wound was shallow, not life
threatening by far, only meant to hurt. Oh yes, it hurt, like nothing
else it hurt. I readied myself for the next blow. Adapt or die.
When it came I was already moving. He struck high, meaning to
peel skin off my shoulder like a man may peel the skin of an apple. But
I was faster, I ducked underneath the blow and crouched, feeling the
energy build in my legs I went with it. Releasing myself forward my fist
crashed into his stomach, sinking into the flab before it found
something more solid. He fell back, winded, and I knew I had no time to
waste thinking on how this was the first time I had ever struck back,
how this could only mean he would make it worse. Instead I was around
him and to the fireplace by the time he had recovered his breath. He
turned to face me, his sword held low and the Fury burning coldly within
him, even stronger than before. But I was ready, in my hand I held his
other sword, the one of iron.
But I held it like a novice, my hands clenched in the wrong places and
my stance all wrong. He smiled as he recognized this and shifted into a
good stance, sword held high. And this was what I wanted. With a single
motion I copied his stance, shifting my hands as I had watched him shift
his. I now stood in a ready stance.
For a moment he faltered, then the Fury grew strong again and he was
upon me. I was no match for him. He beat down my guard with a few
mockingly easy blows and struck again and again, carving shallow lines
of red across my chest and arms and legs. His sword flicked with the
movements of a master, and I couldn't even see half his blows, much less
have any hope of countering them. After a minute or so he stopped. I was
still standing, I had not cried out. But I could feel strength ebbing
from my wounds. My sword's tip lay on the floor, and I couldn't raise it
despite all my efforts. My arms were lead, my legs were saplings bending
under my own weight. Only my sturdiness kept me from collapsing and
dying right there.
With a laugh he struck with the flat of his blade, sending me
sprawling into the fire behind me. I felt the fire fall in around me and
he smiled, expecting me to scream in pain as it burned the flesh from my
bones as my body, too weak to rise, slowly roasted in the flames.
But I didn't die. The fire filled me with warmth, it flooded
into my skin, and I felt something inside me swallow it like water
soaking into the ground. And I could move, my strength returned quickly
as the fire flooded me.
Without thinking I acted, throwing myself from the fire with my second
wind, and I struck. Sword held in front of me like a spear, I drove myself
into his chest. He didn't recover from his surprise quickly enough to
save himself. My sword point drove home, into the place where I could
feel his life held. His body and heart were pierced, my sword blade
stuck from his back. He died quickly, his heart stopped, his blood no
longer flowed. His breathing stilled and he fell. Eyes open he lay on
the floor, cold, lifeless. I realised that I had thrown the fire behind
me into chaos, that logs had rolled onto the wooden floor, that my bed had already begun to burn.
I grunted and lifted my father over, pulling the sword from his
carcass. I would need it later. I cleaned the blade, running it along
his body to wipe the blood from it before taking down the sheath and
sliding it into place. I turned and watched the fire beginning to climb
up the walls of the house. I didn't have much time left here. Walking
into the kitchen I retrieved some water and clothes, then I left. I
stood at the edge of the clearing, watching the house go up as I cleaned
my wounds with the water which I had boiled using the heat from that
very inferno. Already they were beginning to clot, I would survive.
Once I had wrapped the rags around myself, stemming the last of the
blood flow, and felt assured that there would be no infection I left.
Walking into the wood, sword held in one hand. I never looked back.
The Beginning
Author's Notes: This is the origin story of one of GRIT's most infamous
villains, that being Epsilon. More chapters will follow as I finish them
but don't expect this series to finish anytime soon. The entire saga of
Epsilon covers three thousand years (not counting timewarps and other
weird stuff that happened) and it was a pretty eventful life.
-----------------
Epsilon
" Pain without Sorrow;
Want without Desire;
Purpose without Passion,
That burns like Fire."
- Sensation
https://www.imrpress.com/journal/CEOG/49/11/10.31083/j.ceog4911257/htm
Open Access Original Research
Effect of Super-Specialization in External Cephalic Version: A Comparative Study
1 Department of Obstetrics and Gynecology, ‘Virgen de la Arrixaca' University Clinical Hospital, 30120 Murcia, Spain
2 Department of Surgery, Obstetrics and Gynecology and Pediatrics of University of Murcia, 30120 Murcia, Spain
*Correspondence: javier.sanchez14@um.es (Javier Sánchez-Romero)
Clin. Exp. Obstet. Gynecol. 2022, 49(11), 257; https://doi.org/10.31083/j.ceog4911257
Submitted: 2 July 2022 | Revised: 24 August 2022 | Accepted: 26 August 2022 | Published: 16 November 2022
(This article belongs to the Special Issue Clinical Research of Epidemiology in Pregnant Women)
This is an open access article under the CC BY 4.0 license.
Abstract
Background: The effect of introducing an experienced dedicated team has not been fully studied. Several studies reported a high external cephalic version (ECV) success rate when the procedure is executed by a single operator or a dedicated team. This study aims to compare the effectiveness and safety of ECV when the procedure is performed by senior experienced obstetricians or by super-specialized professionals who composed a dedicated team. Methods: Longitudinal retrospective analysis of ECV performed in a tertiary hospital. From 1 January 2018 to 1 October 2019, ECVs were performed by two senior experienced obstetricians who composed the dedicated team for ECV, designated as Group A. From 1 October 2019 to 31 December 2019, ECVs were performed by two senior obstetricians, designated as Group B. Ritodrine was administered for 30 minutes just before the procedure. Propofol was used for sedation. Results: 186 pregnant women were recruited (150 patients in Group A and 36 patients in Group B). The ECV success rate increased from 47.2% (31.7–63.2) in Group B to 74.0% (66.6–80.5) in Group A (p = 0.002). The greatest increase in the success rate of ECV was seen in nulliparae, from 38.5% (21.8–57.6) in Group B to 69.1% (59.4–77.6) (p = 0.004). The complication rate decreased from 22.2% (11.1–37.6) in Group B to 9.3% (5.5–14.8) in Group A (p = 0.032). Conclusions: The introduction of an experienced dedicated team improves the ECV success rate, especially in primiparas, and it also reduces the ECV complication rate.
Keywords
sedation
experience
ECV
breech presentation
1. Introduction
A breech presentation occurs in 3–4% of all pregnant women at term [1]. Since the publication of the Term Breech Trial in 2000 [2] which reported an excess neonatal mortality as a consequence of breech vaginal delivery, cesarean delivery rates have risen alarmingly [3].
External cephalic version is an effective procedure for modifying the fetal position and achieving a cephalic presentation. The purpose of the ECV is to offer a chance for cephalic delivery, which is safer than breech delivery or cesarean section. The use of ECV in breech presentation, according to the World Health Organization [4], reduces the incidence of cesarean section, which is of interest in those units where vaginal breech delivery is not a common practice.
ECV is commonly performed before the active labor period begins. Many factors are associated with a higher ECV success [5, 6, 7] such as black race, multiparity, posterior placenta, amniotic fluid index higher than 10 cm, or a transverse lie.
Certain interventions facilitate ECV [8] such as analgesia, tocolysis, an empty bladder [9], or the introduction of a dedicated experienced team [10]. Although ritodrine is considered the safest tocolytic drug and the agent that improves the ECV success rate the most [8, 11], other tocolytics that have also been compared in ECV include atosiban [8], nifedipine [8], other beta-agonists [12] and nitroglycerine [12].
Other analgesic agents have been analyzed, such as systemic opioids or spinal anesthesia. Spinal anesthesia techniques improve the ECV success rate and reduce pain after the procedure [13, 14, 15, 16]. No differences are reported in the ECV success rate when systemic opioids and spinal anesthesia are compared [13].
The effect of introducing an experienced dedicated team has not been fully studied. Several studies reported a high ECV success rate when the procedure is executed by a single operator [17, 18] or a dedicated team [5, 10, 19, 20]. Just one study has compared a dedicated team with non-experienced gynecologists, midwives, and residents [10].
The main objective of this study is to compare ECV results when the procedure is performed by an experienced dedicated team or by senior obstetricians who are not involved in a dedicated team. As a secondary objective, predictor factors of ECV success are analyzed in both groups. We hypothesized that an experienced dedicated team could have a higher ECV success rate and lower complication rate.
2. Materials and Methods
A longitudinal retrospective analysis of ECV performed in ‘Virgen de la Arrixaca’ University Clinical Hospital in Murcia (Spain) between 1 January 2018 and 31 December 2019 was performed. Informed written consent was obtained from all the patients under study. The confidentiality of any information about the patients was assured. There was no obligation on the patients to participate in the study. This study was approved by the Clinical Research Committee of the ‘Virgen de la Arrixaca’ University Clinical Hospital (2020-5-6-HCUVA). This study conforms with the 2013 Helsinki World Medical Association Declaration.
The procedures were performed by two of the four senior experienced obstetricians who composed the dedicated team for ECV in the Maternal-Fetal Unit from 1 January 2018 to 30 September 2019. In this study, this group is designated as ‘Group A’. The dedicated team for ECV in the Maternal-Fetal Unit has more than 7 years of experience in ECV, and more than seven hundred procedures have been carried out by this team during this period. However, the members of the dedicated team for ECV were absent between 1 October 2019 and 31 December 2019, and during this period the procedures were performed by two senior obstetricians specialized in obstetrical care with 15 years of experience in the delivery room. These senior colleagues were not involved in the dedicated team for ECV. In this study, this group is designated as ‘Group B’.
Patients were offered the ECV during the third-trimester evaluation at 36 weeks gestation. Recruitment criteria were the same for Group A and Group B and there were no changes in the ECV offering criteria during this period. ECV was proposed for every pregnant woman with a non-cephalic presentation and no contraindication for vaginal delivery. Women were deemed ineligible in cases of severe preeclampsia, confirmed rupture of membranes, recent vaginal bleeding, and when an absolute indication for cesarean section was identified (e.g., placenta previa).
Obstetric anamnesis and an ultrasound scan for assessing fetal position, biometry, placental location, and amniotic fluid were carried out at the consultation.
If the patient was considered eligible and informed consent was obtained, ECV was performed at 37 weeks gestation. All patients were asked to fast for at least eight hours before the procedure.
2.1 Procedure
ECV was performed following the same protocol in both groups [21, 22]. Our group published a previous study describing the procedure [21] and analyzing the role of sedation with propofol [22]. The procedure was carried out in the operating room in the presence of a midwife and an anesthesiologist. Before ECV was performed, pregnant women were assessed by the anesthesiologist. Just before the ECV, 0.2 mg/min of ritodrine was administered for 30 minutes.
In the operating room, vital signs were monitored (temperature, noninvasive blood pressure, heart rate, electrocardiogram (EKG), and oxygen saturation). The patient was positioned in Trendelenburg (15º) and administered 1–1.5 mg/kg of propofol [22]. Paracetamol was used as the analgesic agent. Two ECV attempts were performed by two obstetricians following the forward-roll technique. Immediately after the procedure, the fetal position was reassessed with an ultrasound scan and fetal well-being was assessed with a continuous cardiotocograph recording during the following 4 hours. Anti-D was given to rhesus-negative women. Twenty-four hours after the procedure, continuous monitoring was performed for one hour and the fetal position was reassessed.
If any complication occurred during or immediately after the procedure, an urgent cesarean section was performed.
2.2 Outcome Variables
The procedure is considered successful when a cephalic presentation is achieved. Intraversion cesarean is considered as any cesarean carried out during the ECV or the first 24 hours after the procedure due to any complication secondary to it (i.e., fetal compromise, cord prolapse, vaginal bleeding, …).
2.3 Statistical Analysis
Data were recorded retrospectively on all referrals. Continuous variables were assessed for normality with the Shapiro–Wilk test.
The primary outcome variable was the ECV success. The secondary outcome variable was the incidence of intraversion cesarean section. Obstetric history, anthropometric measurements, estimated fetal weight at 3rd trimester, placental location, and fetal presentation underwent bivariate analysis using Student’s T-test or Pearson’s chi-squared test to compare the characteristics of each group. Subsequently, the primary and secondary outcome variables were compared between both groups.
Afterward, all of the variables mentioned above with a p-value < 0.2 in the bivariate analysis were entered into a multivariable logistic regression model for the primary and secondary outcome variables in both groups.
All tests were two-tailed and the level of statistical significance was set at 0.05. Data analysis was assisted with SPSS version 25.0 (SPSS Inc., Chicago, IL, USA), R version 3.6.2 (https://www.r-project.org/. Accessed 29 February 2022. R Core Team, Auckland, US), and RStudio version 1.2.5033: Integrated Development for R (RStudio, Inc., Boston, MA, USA).
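The study itself used SPSS and R; purely as an illustration of the pipeline just described, a sketch in Python could look like the following. The file name, column names, and candidate predictors are hypothetical placeholders, not the authors' actual variables.

```python
# Minimal sketch of the analysis pipeline described above (hypothetical data and columns).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("ecv_cohort.csv")   # one row per ECV attempt (hypothetical file)

# Normality check for a continuous variable (Shapiro-Wilk)
_, p_norm = stats.shapiro(df["af_pocket_mm"])

# Bivariate screening: keep candidate predictors with p < 0.2
candidates = []
for col in ["multiparity", "prev_cs", "bmi", "af_pocket_mm", "maternal_age", "efw_g"]:
    if df[col].nunique() == 2:                       # binary -> chi-squared test
        _, p, _, _ = stats.chi2_contingency(pd.crosstab(df[col], df["ecv_success"]))
    else:                                            # continuous -> two-sample t-test
        _, p = stats.ttest_ind(df.loc[df["ecv_success"] == 1, col],
                               df.loc[df["ecv_success"] == 0, col])
    if p < 0.2:
        candidates.append(col)

# Multivariable logistic regression for ECV success; exponentiate for adjusted ORs
X = sm.add_constant(df[["dedicated_team"] + candidates])
fit = sm.Logit(df["ecv_success"], X).fit(disp=0)
print(np.exp(fit.params))       # adjusted odds ratios
print(np.exp(fit.conf_int()))   # 95% confidence intervals
```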
3. Results
During this period, 203 pregnant women were offered an ECV. A spontaneous cephalic presentation before the procedure was observed in 15 pregnant women and preterm labor began in two patients. Finally, 186 pregnant women underwent an ECV attempt. Of these, 150 (80.6%) were performed by Group A, and 36 (19.4%) were carried out by Group B (Fig. 1). 123 women were nulliparas (66.1%) and 63 (33.9%) women were multiparas. Baseline characteristics are depicted in Table 1. Baseline characteristics were comparable for Group A and Group B.
Fig. 1.
Flowchart diagram of study population.
Table 1. Characteristics of pregnant women who underwent external cephalic version (ECV).

Characteristics            | Total (186) | 95% CI    | Group A (150) | 95% CI    | Group B (36) | 95% CI
Age, years                 | 32.3        | 31.5–33.0 | 33.5          | 31.8–35.2 | 33.5         | 31.8–37.2
GA at ECV, weeks           | 37.5        | 37.4–37.5 | 37.4          | 37.2–37.5 | 37.4         | 37.2–37.5
Gravida                    | 1.9         | 1.7–2.1   | 2.0           | 1.6–2.4   | 2.0          | 1.6–2.4
Nulliparous, % (n)         | 66.1% (123) | 59.1–72.6 | 64.7% (97)    | 56.8–72.0 | 72.2% (26)   | 56.3–84.7
Previous CS, % (n)         | 3.8% (7)    | 1.7–7.2   | 3.3% (5)      | 1.3–7.2   | 5.6% (2)     | 1.2–16.6
BMI, kg/m²                 | 27.7        | 27.0–28.4 | 27.5          | 26.8–28.3 | 28.6         | 27.0–30.1
- BMI <25, % (n)           | 28.5% (53)  | 22.4–35.3 | 30.7% (46)    | 23.7–38.4 | 19.4% (7)    | 9.1–34.4
- BMI 25–30, % (n)         | 45.2% (84)  | 38.1–52.3 | 44.7% (67)    | 36.9–52.7 | 47.2% (17)   | 31.7–63.2
- BMI 30–35, % (n)         | 16.1% (30)  | 11.4–21.9 | 15.3% (23)    | 10.3–21.7 | 19.4% (7)    | 9.1–34.4
- BMI 35–40, % (n)         | 8.6% (16)   | 5.2–13.3  | 7.3% (11)     | 4.0–12.3  | 13.9% (5)    | 5.5–27.8
- BMI >40, % (n)           | 1.6% (3)    | 0.5–4.2   | 2.0% (3)      | 0.6–5.2   | 0% (0)       |
EFW before ECV, grams      | 2797        | 2749–2845 | 2795          | 2741–2849 | 2806         | 2698–2914
Placental location, % (n)  |             |           |               |           |              |
- Anterior, % (n)          | 51.6% (96)  | 44.5–58.7 | 50.7% (76)    | 42.7–58.6 | 55.6% (20)   | 39.4–70.8
- Posterior, % (n)         | 39.8% (74)  | 33.0–46.9 | 40.0% (60)    | 32.4–48.0 | 38.9% (14)   | 24.3–55.2
- Fundus, % (n)            | 4.3% (8)    | 2.1–8.0   | 5.3% (8)      | 2.6–9.8   | 0% (0)       |
- Lateral wall, % (n)      | 4.3% (8)    | 2.1–8.0   | 4.0% (6)      | 1.7–8.1   | 5.6% (2)     | 1.2–16.6
Amniotic fluid pocket, mm  | 52.1        | 48.9–55.2 | 53.2          | 49.3–57.0 | 48.3         | 43.9–52.6
Transversal lie, % (n)     | 7.0% (13)   | 4.0–11.3  | 7.3% (11)     | 4.0–12.3  | 5.6% (2)     | 1.2–16.6

Data presented as mean or % (number (n)). p-value < 0.05 in bold, when comparing characteristics between Group A and B; Student's t-test for normally distributed variables and chi-squared test for categorical variables. GA, Gestational Age; BMI, Body Mass Index; ECV, External Cephalic Version; CS, Cesarean Section; EFW, Estimated Fetal Weight.
The overall ECV success rate was 68.8% (95% confidence interval (CI) 61.9–75.1). ECV outcomes and obstetric outcomes by Group are shown in Table 2.
Table 2. External cephalic version (ECV) and obstetric outcome by group.

Outcome                  | Total (182) | 95% CI    | Group A (148) | 95% CI    | Group B (34) | 95% CI
ECV Success, % (n)       | 68.8% (128) | 61.9–75.1 | 74.0% (111)   | 66.6–80.5 | 47.2% (17)   | 31.7–63.2
GA at delivery, weeks    | 39.5        | 39.3–39.7 | 39.5          | 39.2–39.8 | 39.5         | 39.0–40.0
Type of labor, % (n)     |             |           |               |           |              |
- Spontaneous, % (n)     | 35.7% (65)  | 29.0–42.9 | 37.8% (56)    | 30.3–45.8 | 26.5% (9)    | 14.0–42.8
- Induction, % (n)       | 31.3% (57)  | 24.9–38.3 | 33.8% (50)    | 26.5–41.7 | 20.6% (7)    | 9.7–36.2
CS, % (n)                | 33.0% (60)  | 26.4–40.0 | 28.4% (42)    | 21.6–36.0 | 52.9% (18)   | 36.5–68.9
Mode of delivery, % (n)  |             |           |               |           |              |
- Spontaneous, % (n)     | 29.7% (54)  | 23.4–36.6 | 31.1% (46)    | 24.0–38.8 | 23.5% (8)    | 11.8–39.5
- Operative, % (n)       | 23.6% (43)  | 17.9–30.2 | 25.0% (37)    | 18.6–32.4 | 17.6% (6)    | 7.7–32.8
- Urgent CS, % (n)       | 22.0% (40)  | 16.4–28.4 | 21.6% (32)    | 15.6–28.8 | 23.5% (8)    | 11.8–39.5
- Planned CS, % (n)      | 24.7% (45)  | 18.9–31.4 | 22.3% (33)    | 16.2–29.5 | 35.3% (12)   | 20.9–52.0

Four pregnant women gave birth in a different institution. Data presented as mean or % (number (n)). p-value < 0.05 in bold, when comparing characteristics between Group A and B; Student's t-test for normally distributed variables and chi-squared test for categorical variables. ECV, External Cephalic Version; GA, Gestational Age; CS, Cesarean Section.
The success rate of ECV increased from 47.2% (95% CI 31.7–63.2) in Group B to 74.0% (95% CI 66.6–80.5) in Group A (odds ratio (OR) = 3.18; 95% CI 1.40–7.20; p = 0.002) (Fig. 2). The greatest increase in the success rate of ECV was seen in nulliparas, from 38.5% (95% CI 21.8–57.6) in group B to 69.1% (95% CI 59.4–77.6) (OR = 3.57; 95% CI 1.33–9.83; p = 0.004) (Fig. 2).
Fig. 2.
ECV Success rate for nulliparas, multiparas, and total women.
Four pregnant women gave birth in a different institution. The total vaginal delivery rate after ECV increased from 41.2% (95% CI 25.9–57.9) in Group B to 56.1% (95% CI 48.9–63.9) in Group A. Overall, the rate of planned cesarean after ECV decreased from 33.3% (95% CI 19.7–49.5) in Group B to 22.0% (95% CI 15.9–29.1) in Group A. After successful ECV, four pregnant women (2.2%) showed breech presentation at birth and underwent a planned cesarean section. These four procedures were performed by Group A. Moreover, after a failed ECV, eight pregnant women (4.3%) showed a cephalic presentation at birth and had a vaginal delivery (seven spontaneous deliveries and one operative delivery). The rate of spontaneous reversion to cephalic presentation after a failed ECV was 15.4% for Group A and 10.6% for Group B.
Multivariable logistic regression analysis showed that the amniotic fluid pocket (OR 1.08, 95% CI 1.04–1.12, p < 0.001) was associated with the success of ECV. ECV super-specialization (OR 3.40, 95% CI 1.23–9.42, p < 0.05), previous cesarean section (OR 2.23, 95% CI 0.73–6.79, p < 0.05) and lower maternal body mass index (BMI) (OR 0.86, 95% CI 0.80–0.98, p < 0.02) were associated with the success of ECV (Table 3).
Table 3. Logistic regression analysis to determine predictors of successful external cephalic version (ECV).

Factors                   | Crude OR | 95% CI    | p-value | Adjusted OR (a) | 95% CI    | p-value
ECV super-specialization  | 3.18     | 1.50–6.73 | 0.03    | 3.40            | 1.23–9.42 | 0.02
Multiparity               | 2.54     | 1.23–5.25 | 0.01    | 2.23            | 0.73–6.79 | 0.16
Previous CS               | 0.17     | 0.03–0.89 | 0.03    | 0.08            | 0.06–0.94 | 0.04
BMI, kg/m²                | 0.91     | 0.85–0.98 | 0.01    | 0.89            | 0.80–0.98 | 0.02
AF Pocket, mm             | 1.06     | 1.03–1.09 | <0.01   | 1.08            | 1.04–1.12 | <0.01

p-value < 0.05 in bold. (a) Adjusted for multiparity, amniotic fluid pocket, body mass index, previous cesarean section, maternal age, and estimated fetal weight. OR, Odds Ratio; CI, Confidence Interval; AF, Amniotic Fluid; BMI, Body Mass Index.
Over this period, 22 (11.8%) complications occurred, all during the 24 h following the procedure. The complication rate decreased from 22.2% (95% CI 11.1–37.6) in Group B to 9.3% (95% CI 5.5–14.8) in Group A (OR = 0.36; 95% CI 0.14–0.91; p = 0.032). Thirteen cases of minor vaginal bleeding, five non-reassuring fetal heart rate patterns, two preterm ruptures of membranes, two cord prolapses and one maternal bronchoaspiration during the procedure were reported.
One newborn was admitted to the neonatal care unit due to minor respiratory distress. This was a patient with a successful ECV, and afterward, intrauterine growth restriction was diagnosed. Labor was induced with dinoprostone and it was a spontaneous delivery with a cord blood pH = 7.28 and APGAR score at 1st minute of life = 8 and APGAR at 5 minutes of life = 9. The newborn was discharged after two days with no consequences.
One newborn was admitted to the neonatal intensive care unit due to major respiratory distress. This was a planned cesarean section two weeks after an unsuccessful ECV. It was an extremely difficult fetal extraction during the cesarean section that required a J-shaped incision. Arterial cord blood pH was 6.97, venous cord blood pH was 7.00, APGAR score at 1st minute of life = 1, and APGAR at 5 minutes of life = 6. After 7 days, the newborn was discharged to the neonatal care unit, where she was admitted for 23 days. No complications arose during the following year.
One patient suffered bronchoaspiration. This event occurred just after the end of the procedure. The patient was admitted to the maternal care unit with intravenous antibiotic therapy. Although a cephalic presentation was achieved, a cesarean section was finally performed after 7 days of treatment due to the bronchoaspiration. A female was born with an APGAR score at 1st minute of life = 9 and APGAR at 5 minutes of life = 10. Arterial cord blood pH was 7.32, venous cord blood pH was 7.28. The patient and her newborn were discharged with no sequelae.
4. Discussion
Experience is considered crucial in medicine in general, and in obstetrics particularly. Super-specialization in medicine improves the acquisition of experience and makes daily work safer. It seems logical that the introduction of a super-specialized team in ECV would improve the success rate and would make the procedure safer. National and international obstetrics organizations should not only support but also lead specific training and accreditation plans for External Cephalic Version specialization for obstetricians, midwives, and anesthesiologists in light of this and previous results.
In this study, the ECV success rate increases from 47.2% (95% CI 31.7–63.2) to 74.0% (66.6–80.5%) with the introduction of a dedicated team. The number needed to treat was 6.7, meaning that 6.7 ECVs performed by the experienced dedicated ECV team led to one additional vaginal delivery in comparison with ECVs performed by the non-dedicated team. The creation of a dedicated experienced team of obstetricians to perform ECV led to an increase in the success rate and a significant decrease in the cesarean section rate overall.
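As a quick arithmetic check (not part of the original article), the reported odds ratio and number needed to treat can be recovered directly from the rates quoted above and in Table 2:

```python
# Recovering the headline figures from the reported rates (illustrative only).
p_success_A, p_success_B = 0.740, 0.472                 # ECV success, Group A vs Group B
odds_ratio = (p_success_A / (1 - p_success_A)) / (p_success_B / (1 - p_success_B))
print(round(odds_ratio, 2))                             # ~3.18, as reported

p_vaginal_A, p_vaginal_B = 0.561, 0.412                 # vaginal delivery rate after ECV
nnt = 1.0 / (p_vaginal_A - p_vaginal_B)                 # number needed to treat
print(round(nnt, 1))                                    # ~6.7, as reported
```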
If the results are compared in nulliparas, a greater increase is reported from 38.5% (95% CI 21.8–57.6) in group B to 69.1% (95% CI 59.4–77.6).
Several studies proved that analgesia [8, 12, 13, 14, 15, 16, 20] and tocolysis [8, 11] improve the ECV success rate. The present study had remarkable procedural characteristics: to the best of our knowledge, this is the first group in which propofol has been used for deep sedation in ECV and, as far as tocolysis is concerned, ritodrine was administered for half an hour just before the procedure [21, 22].
Although several prediction models for the success of ECV have been published, none of them included the operator experience as a predictor [23, 24]. Kim et al. [25] highlighted the importance of the operator experience by developing a learning curve for ECV. They estimate that to achieve an expected success rate of 70%, approximately 130 ECV attempts are needed. In contrast, in multiparas, only 10 attempts would be necessary for an expected success rate of 70%.
Several studies have analyzed their results in ECV when it is performed by a dedicated team: single-operator [17, 18] or dedicated team [5, 10, 19, 26]. Bogner et al. [27] showed that the ECV success rate depended not only on parity and gestational age but also on the operator.
Other authors have focused on the effect of a dedicated team [10, 28]. Hickland et al. [28] replaced their ECV operator every 15 days and reported an increase in the success rate from 32.6% to 41.9% over 3 years. Thissen et al. [10] compared ECV performed by a non-experienced team with their results after the introduction of a dedicated team. They reported an increase in the ECV success rate (39.8% to 59.66%) with the greatest increase in nulliparas.
Previous studies tried to describe fetal and maternal characteristics that may predict the ECV result [6, 23, 24, 26, 29]. Normal or high amniotic fluid volume, multiparity, BMI < 35 kg/m², reduced bladder volume, fetal transverse lie, and increased estimated fetal weight are predictors of the success of ECV in several studies [9, 24, 26]. The present study found that previous cesarean section, normal to high amniotic fluid volume, and lower BMI were associated with the success of ECV. Other factors such as transverse lie, placental position, multiparity, or estimated fetal weight (EFW) before ECV were not significantly associated with the ECV success rate in the multivariable model. These associations might have reached statistical significance if a larger number of pregnant women had been recruited.
ECV is considered to be a safe procedure for achieving a cephalic presentation. Two studies analyzed the ECV complication rate in a dedicated team [30, 31]. Beuckens et al. [30] reported 47.2% ECV success and 2.63% complications during the 48 hours following the procedure. Rodgers et al. [31] reported a success rate of 35% for nulliparas and 62% for multiparas and an ECV complication rate of 4.73%. In both studies, ECV was performed without analgesia or tocolysis, which may explain the lower success and complication rates. The present study found that the introduction of an experienced dedicated team decreased the ECV complication rate from 22.2% (95% CI 11.1–37.6) to 9.3% (95% CI 5.5–14.8) when the procedure was performed under sedation with propofol and with ritodrine as tocolytic.
Super-specialization in obstetrics is essential for improving results and maintaining safety in procedures. ECV is an effective procedure for reducing the cesarean section rate and offering a chance for a vaginal delivery. When ECV is performed by experienced obstetricians, a reduction in the complication rate and an increase in the success rate are observed [10]. Although the influence of experience on ECV has already been analyzed, the experienced dedicated team was compared with residents or non-experienced obstetricians [10]. This study has compared the results, in terms of effectiveness and safety, between a dedicated team and experienced senior obstetricians.
The introduction of a dedicated team represents an advantage not only in comparison with residents or other colleagues but also with other experienced obstetricians. In the light of this study, super-specialization in ECV should be promoted for tertiary hospitals by national and international obstetrics and gynecology associations.
Some key questions are still unanswered, such as the optimal learning curve, the best anesthetic technique, and the most effective tocolytic drug. Besides, clinical trials are needed to evaluate definitively the effectiveness of super-specialization in ECV, without the potential bias that could affect observational studies.
Strengths and Limitations
A strength of this study is that it is the first cohort study to assess the influence of an experienced dedicated team on the success rate of ECV in comparison with senior experienced obstetricians. It is also the first study in which propofol was used for sedation in patients who underwent ECV. It should also be highlighted that in the present study ritodrine was administered for 30 minutes just before the procedure. There were no significant differences in patient and obstetric characteristics, making selection bias less likely.
This study has some limitations. First, this is a retrospective observational study with potential bias. These results must be confirmed with a prospective randomized trial. In addition, the number of women who underwent ECV with the non-dedicated team is small, which may affect the power of the statistical analysis. However, the differences observed in the present study, despite the lack of power, are consistent. Due to the differences in complication rates, increasing the number of patients enrolled with the non-dedicated team would not have been ethical. Besides, the learning curve cannot be evaluated in this study due to the absence of a temporal analysis.
5. Conclusions
ECV is a safe and effective procedure. The introduction of an experienced dedicated team improves the ECV success rate. Previous cesarean section, lower BMI, and normal or high amniotic fluid volume have been associated with an increase in the ECV success rate. The introduction of an experienced dedicated team reduces the ECV complication rate. An ECV super-specialization plan should be led by national and international obstetrics organizations.
Author Contributions
JSR and RMGP helped to record data, perform the ultrasound scans, and design the study. FAR, JEBC, AND and MLSF helped to record data and to design the study. FAR and JHG helped to design the study. All authors read and approved the final manuscript.
Ethics Approval and Consent to Participate
This study was approved by the Clinical Research Committee of the ‘Virgen de la Arrixaca’ University Clinical Hospital (2020-5-6-HCUVA). This study conforms with the 2013 Helsinki World Medical Association Declaration.
Acknowledgment
Not applicable.
Funding
This research received no external funding.
Conflict of Interest
The authors declare no conflict of interest.
Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
http://mathhelpforum.com/calculus/203897-derivative-certain-integrals-evans-pde.html | ## Derivative of certain integrals - Evans PDE
Hi All,
In PDE by Evans, he gives the fundamental solution to the heat equation for t>0
$\Phi(x,t)=\frac{1}{(4\pi t)^\frac{n}{2}}e^{-\frac{|x|^2}{4t}}$
Now he reasons that the convolution
$u(x,t)=\int_{\mathbb{R}^n}\Phi(x-y,t)g(y)\,dy$ should solve the IVP with $u=g$ on $\mathbb{R}^n \times \{t=0\}$.
A lot in chapter two he makes a big fuss (a justified one, I'm sure...) about bringing derivatives underneath the integral sign (for example with the solution to Laplace's equation earlier on in the chapter...)
In the case I'm talking about however, he has no trouble down the page saying that
$u_t(x,t)-\Delta u(x,t) = \int_{\mathbb{R}^n}[(\Phi_t - \Delta_x \Phi)(x-y,t)]g(y)\,dy$
He says that $\Phi$ is infinitely differentiable with uniformly bounded derivatives of all orders... so is this the reason why he can bring the derivative wrt $t$ and the laplacian with respect to $x$ inside the integral in that case? I know there are theorems from measure theory that guarantee you can do this - if that is his reasoning, I am fine with it.
However, I get confused when he then goes on to show
$u(x,t):=\int_0^t \int_{\mathbb{R}^n}\Phi(x-y,t-s)f(y,s)\,dy\,ds$ solves the non-homogenous problem. He says:
"Since $\Phi$ has a singularity at (0,0), we cannot directly justify differentiating under the integral sign. We instead proceed somewhat as in the proof of theorem 1 in 2.2.1..."
Okay, first of all, which derivative is he talking about?! Have we run into problems now because we are integrating with respect to $s$ from 0 to $t$ as well?
I don't really understand this... is anyone able to shed any light for me on the situation?
Thanks!
https://www.academic-quant-news.com/2020/03/research-articles-for-2020-03-12.html | # Research articles for the 2020-03-12
A Behavioral Signaling Explanation for Stock Splits: Evidence from China
Cui, Chenyu,Li, Frank Weikai,Pang, Jiaren,Xie, Deren
SSRN
We propose a behavioral signaling explanation for the positive announcement effects of stock splits. There are two key behavioral ingredients in our model. First, (retail) investors have misconceptions about stock splits that make them view stock splits as good news. Second, investors are loss-averse and will be particularly disappointed if a splitting firm’s ex-post performance falls short of expectation. In a separating equilibrium, only managers with favorable private information use stock splits to signal. Using a comprehensive sample of stock splits in China over the period of 1998 to 2017, we find supporting evidence: (1) stock splits elicit positive announcement returns and a higher split ratio is associated with a stronger market reaction; (2) splitting firms have better future operating performance and more favorable analyst forecasts; (3) when future performance is poor, splitting firms experience larger price declines than non-splitting firms; (4) the announcement returns of stock splits are smaller for firms with higher institutional ownership and firms with higher pre-split prices.
A Detailed take on Fat Tail
Thakur, Sandeep
SSRN
Understanding of fat tails and of their importance has been limited in the field of economics. This paper draws on the existing literature and surveys what has been done in the study of fat tails.
A Mean Field Game Approach to Equilibrium Pricing with Market Clearing Condition
Masaaki Fujii,Akihiko Takahashi
arXiv
In this work, we study an equilibrium-based continuous asset pricing problem which seeks to form a price process endogenously by requiring it to balance the flow of sales-and-purchase orders in the exchange market, where a large number of agents are interacting through the market price. Adopting a mean field game (MFG) approach, we find a special form of forward-backward stochastic differential equations of McKean-Vlasov type with common noise whose solution provides a good approximation of the market price. We show the convergence of the net order flow to zero in the large N-limit and get the order of convergence in N under some conditions. We also extend the model to a setup with multiple populations where the agents within each population share the same cost and coefficient functions but they can be different population by population.
Brain-Machine Interfaces and Ethics: A Transition from Wearables to Implantable
Montalbano, Lydia
SSRN
BMI is usually described as "a device that translates neuronal information into commands capable of controlling external software or hardware such as a computer or robotic arm". This paper explored both existing and experimental innovations in this emerging field and highlighted some related ethical concerns. As a matter of fact, such powerful technology may potentially undermine, as well as boost, our privacy and autonomy. I conclude that the potential threats can be substantially reduced by the legislator if the right international regulations for society are implemented and harmonized.
Closed-form approximations with respect to the mixing solution for option pricing under stochastic volatility
Kaustav Das,Nicolas Langrené
arXiv
We consider closed-form approximations for European put option prices within several stochastic volatility frameworks with time-dependent parameters. Our methodology involves writing the put option price as an expectation of a Black-Scholes formula and performing a second-order Taylor expansion around the mean of its argument. The difficulties then faced are simplifying a number of expectations induced by the Taylor expansion. Under the assumption of piecewise-constant parameters, we derive explicit pricing formulas and devise a fast calibration scheme. Furthermore, we perform a numerical error and sensitivity analysis to investigate the quality of our approximation and show that the errors are well within the acceptable range for application purposes. Lastly, we derive bounds on the remainder term generated by the Taylor expansion.
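As a rough illustration of the mixing-plus-Taylor idea described in this abstract (a sketch under the assumption of an uncorrelated stochastic-volatility model and an arbitrary toy distribution for the integrated variance, not the authors' actual scheme or parameters), one can compare a Monte Carlo mixing price with a second-order Taylor proxy around the mean of the integrated variance:

```python
# Sketch: put price as E[ BS_put(V) ] over random integrated variance V, versus
# a second-order Taylor expansion of BS_put around E[V] (illustrative only).
import numpy as np
from scipy.stats import norm

S0, K, r, T = 100.0, 105.0, 0.02, 1.0

def bs_put(total_var):
    """Black-Scholes put price as a function of total (integrated) variance."""
    sq = np.sqrt(total_var)
    d1 = (np.log(S0 / K) + r * T + 0.5 * total_var) / sq
    d2 = d1 - sq
    return K * np.exp(-r * T) * norm.cdf(-d2) - S0 * norm.cdf(-d1)

rng = np.random.default_rng(0)
V = np.exp(rng.normal(np.log(0.04), 0.3, size=200_000))  # toy lognormal integrated variance

mixing_price = bs_put(V).mean()                          # "exact" mixing price by Monte Carlo

m, var_V = V.mean(), V.var()
h = 1e-4
second_deriv = (bs_put(m + h) - 2 * bs_put(m) + bs_put(m - h)) / h**2
taylor_price = bs_put(m) + 0.5 * second_deriv * var_V    # second-order Taylor proxy

print(mixing_price, taylor_price)
```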
Covariance matrix filtering with bootstrapped hierarchies
Christian Bongiorno,Damien Challet
arXiv
Statistical inference of the dependence between objects often relies on covariance matrices. Unless the number of features (e.g. data points) is much larger than the number of objects, covariance matrix cleaning is necessary to reduce estimation noise. We propose a method that is robust yet flexible enough to account for fine details of the structure of the covariance matrix. Robustness comes from using a hierarchical ansatz and dependence averaging between clusters; flexibility comes from a bootstrap procedure. This method finds several possible hierarchical structures in DNA microarray gene expression data, and leads to lower realized risk in global minimum variance portfolios than current filtering methods when the number of data points is relatively small.
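To convey the general flavour of hierarchical correlation filtering combined with bootstrap averaging (this is a generic sketch on synthetic data, not the authors' exact algorithm), one can replace each pairwise correlation by its cluster-level value via the cophenetic distances of an average-linkage dendrogram and average the result over bootstrap resamples of the dates:

```python
# Generic sketch of hierarchical (dendrogram-based) correlation filtering with bootstrap averaging.
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import squareform

def hierarchical_filter(X):
    """Replace each pairwise correlation by the value implied by the level at which
    the pair first merges in an average-linkage dendrogram."""
    corr = np.corrcoef(X, rowvar=False)
    dist = np.sqrt(np.maximum(2.0 * (1.0 - corr), 0.0))   # correlation -> distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    coph = squareform(cophenet(Z))                         # cophenetic distances
    filtered = 1.0 - 0.5 * coph**2                         # back to correlations
    np.fill_diagonal(filtered, 1.0)
    return filtered

def bootstrap_filtered_corr(X, n_boot=100, seed=0):
    rng = np.random.default_rng(seed)
    t, n = X.shape
    acc = np.zeros((n, n))
    for _ in range(n_boot):
        sample = X[rng.integers(0, t, size=t)]             # resample rows (dates)
        acc += hierarchical_filter(sample)
    return acc / n_boot

X = np.random.default_rng(1).normal(size=(120, 20))        # toy data: 120 dates, 20 assets
C_filtered = bootstrap_filtered_corr(X)
```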
Does Governance Travel Across Industries? A Mutual Fund Episode
Chang, Yen-Cheng,Tseng, Kevin,Yen, Chia-Yi
SSRN
We examine whether better-managed mutual funds are at the same time good corporate monitors by exploiting small changes in market capitalizations of firms around the Russell 1000 and 2000 index cutoff. Tracking error concerns force active mutual funds to buy additional stocks moving into the Russell 2000. Using a regression discontinuity design, we find that increases in ownership by better-managed mutual funds lead to better overall governance with more independent directors, fewer anti-takeover measures, and more equal voting rights. Firms' operating performance also improves. Our results provide evidence that management knowledge is transmittable across industries.
Efficient Cyber Risk: Security and Competition in Financial Markets
Brolley, Michael,Cimon, David,Riordan, Ryan
SSRN
In financial markets, clients entrust their capital and data to financial infrastructure providers who are vulnerable to breaches. We develop a model in which infrastructure providers compete to provide secure and efficient client services, in the presence of a cyber-attacker. In equilibrium, provider competition leads to both lower fees and security investment, but potentially greater vulnerability, in comparison to a monopolistic platform. We find that providers prefer to consolidate into a single platform, whereas clients prefer a fragmented infrastructure. The inefficiency of consolidated providers stems from under-investment in security when the market is small, and over-investment when the market is large. Policy makers should be wary of consolidation of critical financial infrastructure, as the impacts to security do not compensate clients for the increase in fees. Instead, minimum security investment requirements may improve security in competitive environments while yielding higher utility than the comparable monopoly platform.
Electoral systems and international trade policy
Serkan Kucuksenel,Osman Gulseven
arXiv
We develop a simple game-theoretic model to analyze the relationship between electoral systems and governments' choice of trade policies. We show that the existence of international pressure or a foreign lobby changes a government's final decision on trade policy, and that trade policy in countries with a proportional electoral system is more protectionist than in countries with a majoritarian electoral system. Moreover, lobbies pay more to affect trade policy outcomes in countries with proportional representation systems.
Equity Term Structures without Dividend Strips Data
Giglio, Stefano,Kelly, Bryan T.,Kozak, Serhiy
SSRN
We use a large cross-section of equity returns to estimate a rich affine model of equity prices, dividends, returns and their dynamics. Using the model, we price dividend strips of the aggregate market index, as well as any other well-diversified equity portfolio. We do not use any dividend strips data in the estimation of the model; however, model-implied equity yields generated by the model match closely the equity yields from the traded dividend forwards reported in the literature. Our model can therefore be used to extend the data on the term structure of discount rates in three dimensions: (i) over time, back to the 1970s; (ii) across maturities, since we are not limited by the maturities of actually traded dividend claims; and most importantly, (iii) across portfolios, since we generate a term structure for any portfolio of stocks (e.g., small or value stocks). The new term structure data generated by our model (e.g., separate term structures for value, growth, investment and other portfolios, observed over a span of 45 years that covers several recessions) represent new empirical moments that can be used to guide and evaluate asset pricing models.
Foreign Direct Investment and the Equity Home Bias Puzzle
Blank, Sven,Hoffmann, Mathias,Roth, Moritz A.
SSRN
The vast macroeconomic literature trying to explain the widely observed equity home bias disregards internationally active firms. In a DSGE model that features the endogenous choice of firms to become internationally active through either exports or foreign direct investment (FDI), we find that the optimal equity holdings of agents are biased towards domestic firms. Our finding indicates that international diversification is not as bad as empirical measures of the equity home bias suggest.
Good News, Bad News, No News: The Media and the Cross Section of Stock Returns
Naumer, Hans-Jörg,Yurtoglu, B. Burcin
SSRN
We examine the relationship between the tonality of news flow and the cross section of expected stock returns. We use a comprehensive definition of media coverage that includes both financial newspapers and mass media, represented by TV broadcasts. Using the total news flow with positive and negative tonality, we build a Media Tonality Indicator (MTI) with higher values reflecting a more positive tonality. Our sample consists of U.S. companies that were constituents of the S&P 500 index from January 2006 to October 2016. We use MTI to sort companies into portfolios and find an average monthly return spread in the order of 3% between the most positive and the most negative MTI-portfolio. This spread amplifies further when a company’s valuation, size or ESG scores are taken into account, disappears however when we use the MTI from the previous month to sort stocks. Foresighted investors can benefit from the tonality premium in overweighting stocks with high valuation, low market capitalization or a high ESG if they expect negative news and vice versa.
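The portfolio-sort logic behind the reported spread can be sketched as follows (toy, randomly generated data; this is not the authors' MTI construction or sample):

```python
# Sketch of a signal-based quintile sort and top-minus-bottom return spread (toy data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_stocks, n_months = 500, 120
mti = pd.DataFrame(rng.normal(size=(n_months, n_stocks)))                  # tonality signal
ret = pd.DataFrame(rng.normal(0.01, 0.08, size=(n_months, n_stocks)))      # monthly returns aligned with the signal

spreads = []
for t in range(n_months):
    ranks = mti.iloc[t].rank(pct=True)
    top = ret.iloc[t][ranks >= 0.8].mean()       # most positive tonality quintile
    bottom = ret.iloc[t][ranks <= 0.2].mean()    # most negative tonality quintile
    spreads.append(top - bottom)

print(np.mean(spreads))   # average monthly top-minus-bottom spread
```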
How Does Firm Size Explain Cross-Country Differences in Ownership Concentration?
Moshirian, Fariborz,Thuy, Nguyen Thi,Yu, Jin,Zhang, Bohui
SSRN
Using a comprehensive international sample of 18,932 firms across 40 countries, we find that cross-country variations in ownership concentration are attributable to differences in firm sizes. Ownership concentration in large firms differs strikingly between countries. For example, large U.S. firms tend to have much more dispersed ownership structures than large non-U.S. firms. In contrast, the ownership concentration of small firms does not seem to vary across countries. Further analysis reveals that differences in ethical values and legal environments across countries can assist in explaining this cross-country variation in ownership across different firm size groups. Our results are robust to alternative blockholding proxies and firm size measures. Taken together, our findings not only provide novel and ethics-based explanations of corporate ownership structures around the world, but also potentially reconcile the presently conflicting views on whether U.S. ownership structures differ from those found elsewhere in the world.
Indemnity Payments in Agricultural Insurance: Risk Exposure of EU States
Osman Gulseven,Kasirga Yildirak
arXiv
This study estimates the risk contributions of individual European countries regarding the indemnity payments in agricultural insurance. We model the total risk exposure as an insurance portfolio where each country is unique in terms of its risk characteristics. The data has been collected from the recent surveys conducted by the European Commission and the World Bank. Farm Accountancy Data Network is used as well. 22 out of 26 member states are included in the study. The results suggest that the EuroMediterranean countries are the major risk contributors. These countries not only have the highest expected loss but also high volatility of indemnity payments. Nordic countries have the lowest indemnity payments and risk exposure.
Inf-convolution and optimal risk sharing with arbitrary sets of risk measures
Marcelo Brutti Righi
arXiv
The inf-convolution of risk measures is directly related to risk sharing and general equilibrium, and it has attracted considerable attention in mathematical finance and insurance problems. However, the theory is restricted to finite (or at most countable in rare cases) sets of risk measures. In this study, we extend the inf-convolution of risk measures in its convex-combination form to an arbitrary (not necessarily finite or even countable) set of alternatives. The intuitive principle of this approach is to regard a probability measure as a generalization of convex weights in the finite case. Subsequently, we extensively generalize known properties and results to this framework. Specifically, we investigate the preservation of properties, dual representations, optimal allocations, and self-convolution.
Investigating the influence Brexit had on Financial Markets, in particular the GBP/EUR exchange rate
Michael Filletti
arXiv
On 23rd June 2016, 51.9% of British voters voted to leave the European Union, triggering a process and events that have led to the United Kingdom leaving the EU, an event that has become known as 'Brexit'. In this piece of research, we investigate the effects of this entire process on the currency markets, specifically the GBP/EUR exchange rate. Financial markets are known to be sensitive to news articles and media, and the aim of this research is to evaluate the magnitude of impact of relevant events, as well as whether the impact was positive or negative for the GBP.
Lifting the Veil: How Do Underwriters Price Corporate Bond Offerings?
Wang, Liying
SSRN
Using newly available data on the initial bond yield, this study demonstrates that the pre-offering price formation of corporate bond (CB) offerings differs from that of equity IPOs. Specifically, CB underwriters set the initial yield to target the post-market trade yield, plus 20 bps, rather than the offering yield. Subsequently, the price update is smaller for riskier offerings, resulting in greater underpricing. Further evidence suggests that CB underwriters suffer greater pricing errors for riskier offerings, bookbuilding helps underwriters reduce pricing errors, and underwriters reward investors for their positive information production by offering greater underpricing, which provides support for bookbuilding theories.
Market Implementation of Multiple-Arrival Multiple-Deadline Differentiated Energy Services
Yanfang Mo,Wei Chen,Li Qiu,Pravin Varaiya
arXiv
An increasing concern in power systems is how to elicit flexibilities in demand, which leads to nontraditional electricity products for accommodating loads of different flexibility levels. We have proposed Multiple-Arrival Multiple-Deadline (MAMD) differentiated energy services for the flexible loads which require constant power for specified durations. Such loads are indifferent to the actual power delivery time as long as the duration requirements are satisfied between the specified arrival times and deadlines. The focus of this paper is the market implementation of such services. In a forward market, we establish the existence of an efficient competitive equilibrium to verify the economic feasibility, which implies that selfish market participants can attain the maximum social welfare in a distributed manner. We also show the strengths of the MAMD services by simulation.
Market Timing on the Johannesburg Stock Exchange
Bowler, Matthew
SSRN
This study examines whether there are significant relationships between independent variables and future returns of differing horizons between 1960 and 2010 and finds, using correlation analysis, several significant relationships. These significant relationships are then included in multifactor forecast models, which are estimated using ordinary least squares regression. The findings from these estimations indicate that there is some, albeit small, portion of the market that is predictable by historic variables. Applying these forecasts to three trading strategies, this study finds that returns in excess of 6% above that of the JSE ALSI are possible. However, there are several look-ahead biases that impact on this initial result. As the beta coefficients and the specification of the model (based on relational strength between variables) are determined based on the full sample of observations, it is possible that limiting the data such that it reflects only the information available to an investor at each point in time could lead to both differing coefficients and different specifications. However, even when employing a dynamically updating model to eliminate these biases, there is still evidence of market predictability that can be profitably exploited, with an optimal combination of regression type, trading strategy and return horizon generating slightly less than 3% in excess of the JSE ALSI. Even incorporating transaction costs of 150 basis points of transaction value, it is found that it is possible to generate returns of 0.5% in excess of those of the JSE ALSI.
Multidimensional Analysis of Monthly Stock Market Returns
Osman Gulseven
arXiv
This study examines the monthly returns in Turkish and American stock market indices to investigate whether these markets experience abnormal returns during some months of the calendar year. The data used in this research includes 212 observations between January 1996 and August 2014. I apply statistical summary analysis, decomposition technique, dummy variable estimation, and binary logistic regression to check for the monthly market anomalies. The multidimensional methods used in this article suggest weak evidence against the efficient market hypothesis on monthly returns. While some months tend to show abnormal returns, there is no absolute unanimity in the applied approaches. Nevertheless, there is a strikingly negative May effect on the Turkish stocks following a positive return in April. Stocks tend to be bullish in December in both markets, yet a significant January effect is not observed.
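A bare-bones version of the dummy-variable and logistic-regression checks mentioned in this abstract could look like the following; the returns are random placeholders rather than the actual Turkish or US index data:

```python
# Sketch of the calendar-month dummy-variable and logistic-regression tests (placeholder data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
dates = pd.date_range("1996-01-31", periods=212, freq="M")
df = pd.DataFrame({
    "ret": rng.normal(0.01, 0.05, size=len(dates)),   # placeholder for monthly index returns
    "month": dates.month_name(),
})

# Regress returns on month dummies (January as the omitted baseline)
ols = smf.ols("ret ~ C(month, Treatment(reference='January'))", data=df).fit()
print(ols.summary())

# Binary logistic regression: does the month predict the sign of the return?
df["up"] = (df["ret"] > 0).astype(int)
logit = smf.logit("up ~ C(month, Treatment(reference='January'))", data=df).fit(disp=0)
print(logit.params)
```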
Multiple Buffer CoCos and Their Impact on Financial Stability
Neamtu, Ioana
SSRN
In this paper we develop a theoretical model to investigate the effect on a bank's financial stability of having multiple contingent convertible bonds buffers (CoCos) on the same bank balance sheet, using cash-in-the-market pricing and global games methodologies. Contingent convertible bonds are meant to act as a bail-in mechanism for banks, where CoCo debt converts into equity when a bank needs it the most. We find that having CoCo buffers which trigger at different capitalisation levels can be detrimental for the CoCo bail-in capacity. Market-based triggers lead to premature conversion and fire-sales of equity. In contrast with existing literature, we show that book-based trigger CoCos yield an optimal outcome, as long as they incorporate expected credit losses.
Numerical smoothing and hierarchical approximations for efficient option pricing and density estimation
Christian Bayer,Chiheb Ben Hammouda,Raul Tempone
arXiv
When approximating the expectation of a functional of a certain stochastic process, the efficiency and performance of deterministic quadrature methods, and hierarchical variance reduction methods such as multilevel Monte Carlo (MLMC), is highly deteriorated in different ways by the low regularity of the integrand with respect to the input parameters. To overcome this issue, a smoothing procedure is needed to uncover the available regularity and improve the performance of the aforementioned methods. In this work, we consider cases where we cannot perform an analytic smoothing. Thus, we introduce a novel numerical smoothing technique based on root-finding combined with a one dimensional integration with respect to a single well-chosen variable. We prove that under appropriate conditions, the resulting function of the remaining variables is highly smooth, potentially allowing a higher efficiency of adaptive sparse grids quadrature (ASGQ), in particular when combined with hierarchical representations to treat the high dimensionality effectively. Our study is motivated by option pricing problems and our main focus is on dynamics where a discretization of the asset price is needed. Our analysis and numerical experiments illustrate the advantage of combining numerical smoothing with ASGQ compared to the Monte Carlo method. Furthermore, we demonstrate how numerical smoothing significantly reduces the kurtosis at the deep levels of MLMC, and also improves the strong convergence rate. Given a pre-selected tolerance, $\text{TOL}$, this results in an improvement of the complexity from $\mathcal{O}\left(\text{TOL}^{-2.5}\right)$ in the standard case to $\mathcal{O}\left(\text{TOL}^{-2} \log(\text{TOL})^2\right)$. Finally, we show how our numerical smoothing enables MLMC to estimate density functions, which standard MLMC (without smoothing) fails to achieve.
Nurturing Infrastructure Investments in Emerging Markets and Africa: Notes from Washington, Beijing and Riyadh
Firzli, M. Nicolas J.
SSRN
The investment choices of large asset owners such as pension funds, sovereign wealth funds and endowments, are, to a large extent ‘guided’ and pre-determined by the systematic use of old-fashioned indices or benchmarks designed by a small set of Anglo-American ‘index providers’, most notably MSCI, formerly known as Morgan Stanley Capital International (MSCI). These companies and the rather conformist investment consultants promoting their indexes tend to be unwittingly biased in favor of liquid assets in rich, developed countries, e.g. the archetypal MSCI All Country World Index (ACWI) clearly encourages asset owners to allocate 55% (or more) of their overall (equity) assets to the United States and 8% to Japan, whereas these two ageing nations only represent 15% and 4% of the world economy respectively (in real terms, i.e. based on purchasing-power-parity or PPP). These unfair, deeply ingrained biases have de facto forced Northern Hemisphere asset owners to over-allocate capital to the US, Germany, Japan etc., thus sucking up much needed financial resources from the rest of the world and hurting the long-term economic interests of Asia, Africa and Latin America. The paper also explores the geo-economic and financial implications of the US-China rivalry from the perspective of long-term asset owners and the G20 Saudi Arabia Presidency in relation to the increasingly 'Asianized World Economy.' The accelerating 'Sino-American Race' could benefit pivot-nations like Kenya, Egypt, Morocco, Ivory Coast, Senegal, Angola, Madagascar in Africa, and Estonia, Romania, Cyprus, Israel, Saudi Arabia, Vietnam, Cambodia, Malaysia in the Eurasia Pacific area, which will be courted like never before by Washington, Brussels and Beijing, thus eventually adding massive public funding and risk insurance resources to the rising private capital flows coming from sophisticated asset owners based in Canada, Scandinavia, Holland, Australia and Singapore ("Pension Superpowers").
Planetary Health and the Global Financial System
Chenet, Hugues
SSRN
• The financial system is not structurally well equipped to address long-term global public goods issues like planetary health. Relying on the financial system to solve planetary health is therefore challenging.
• Planetary health finance should shift current global investment flows towards economic activities compatible with planetary health; it is also important to cease financing those activities that create environmental and health problems.
• Public finance has a strong role to play in planetary health to support innovation and crowd-in private actors.
• The volume of available financial capital appears to be large enough to be substantially mobilised for planetary health.
• Nature conservation finance is a promising approach to target concrete impact on the ground, but it may be difficult to scale to global level.
• There is a need to channel capital towards planetary health and manage the related risks to the financial system, but the traditional mechanics of risk pricing cannot work in this case because markets cannot manage the fundamental uncertainty and long time horizons at stake.
• A precautionary approach to the financial risks associated with planetary health is needed, as is the application of a new approach to supervision and regulation of the financial system.
• Mobilizing finance for planetary health is likely to require deeper regulation of the financial system, although measures taken will strongly depend on each country's current approach to financial regulation.
SkillCheck: An Incentive-based Certification System using Blockchains
Jay Gupta,Swaprava Nath
arXiv
Skill verification is a central problem in workforce hiring. Companies and academia often face the difficulty of ascertaining the skills of an applicant since the certifications of the skills claimed by a candidate are generally not immediately verifiable and costly to test. Blockchains have been proposed in the literature for skill verification and tamper-proof information storage in a decentralized manner. However, most of these approaches deal with storing the certificates issued by traditional universities on the blockchain. Among the few techniques that consider the certification procedure itself, questions like (a) scalability with limited staff, (b) uniformity of grades over multiple evaluators, or (c) honest effort extraction from the evaluators are usually not addressed. We propose a blockchain-based platform named SkillCheck, which considers the questions above and ensures several desirable properties. The platform incentivizes effort in grading via payments with tokens which it generates from the payments of the users of the platform, e.g., the recruiters and test-takers. We provide a detailed description of the design of the platform along with the provable properties of the algorithm.
Stochastic Dominance in Mutual Fund Returns
Jiang, Lei,Wen, Quan,Wu, Ke,Yin, Mengfan
SSRN
We find that a large portion of U.S. equity mutual funds almost second-order stochastically dominates the market portfolio. Consistent with the canonical definition of second-order stochastic dominance, both fund investors and managers reveal their preference for funds with a higher degree of almost second-order stochastic dominance through higher inflows and higher manager ownership. Funds with a higher degree of stochastic dominance over the market portfolio significantly outperform their peers, after controlling for common performance predictors and the Sharpe ratio. Inference based on stochastic dominance is more consistent with the Manipulation-Proof Performance Measure (MPPM) than with the Sharpe ratio.
The Cost of Misaligned Tax Incentives: Evidence from Tax-Motivated Special Dividends
Krupa, Trent,Utke, Steven
SSRN
Prior research finds that firms pay special dividends before dividend tax increases. We examine the real effects of this decision, finding that firms incur costs to pay these tax-motivated special dividends. Specifically, firms reduce investment and repurchases to pay tax-motivated special dividends. Further, the source of funding varies with the tax-incentives of shareholders. When the dividend is likely influenced by (tax-insensitive) institutional investors for the benefit of shareholders other than insiders, firms reduce repurchases and capital expenditures. Conversely, misaligned tax incentives between insiders and outside shareholders are associated with a reduction in R&D, consistent with managerial myopia which erodes shareholder value. Tests using total factor productivity support these conclusions. These findings add to our understanding of tax-based agency issues influencing real corporate decisions.
The Effect of the Global Financial Crisis on the Profitability of Islamic Banks in UAE
SSRN
This paper empirically analyzes the profitability of the four Islamic banks operating in the UAE during the financial period between 2004 and 2009 using three profitability indicators: return on total income, return on assets and return on equity. The researcher uses a variety of techniques (equality of means, coefficient of variation and ANOVA analysis) to assess the effect of the financial crisis on the performance of the four specified banks. The findings show that although the financial crisis began in the 3rd quarter of 2007, its impact on the profitability of Islamic banks was most profound in 2008 and 2009, where there was a notable decline in all analyzed financial indicators. Moreover, the three indicators held a higher variability rate during the crisis years spanning 2008 to 2009, in stark contrast with the pre-crisis rates of the period spanning 2004 to 2007. ANOVA analysis across the four banks shows significant differences between the means of most indicators, suggesting varying performance under the adverse conditions present during the recession.
The New Public Management: Administrative Reform in Iran
Siami-Namini, Sima
SSRN
New Public Management (NPM) has become one of the dominant paradigms for public management across the world. This paper proceeds by defining NPM and giving an insight on the context within which it emerged. The paper goes on to explore essential characteristics of NPM. Thereafter, it recommends administrative reforms that need to be adopted to strengthen public administration (PA) in Iran as a case study and the challenges in implementing them. It concludes with a summary of key issues and suggestions posed in the discussion.
The Shareholder Response to Corporate Tax Planning Advice Regulation
Donohoe, Michael P.,Gale, Brian,Mayberry, Michael
SSRN
We examine the shareholder response to heightened regulation of corporate tax planning advice through the covered opinions rules under U.S. Treasury Department Circular No. 230. These rules imposed extensive due diligence obligations and drafting requirements on tax professionals for a broad range of written tax advice. Despite overwhelming criticism from tax professionals, stock returns reveal a positive shareholder response to the promulgation of the rules, equating to a $12.33 billion increase in aggregate shareholder value. Consistent with shareholders believing that the benefits of the rules – deterrence of excessively risky tax planning and increased monitoring – would offset the burdens, we find that the shareholder response was more positive for firms with higher tax risk, weaker monitoring, and higher tax risk coupled with weaker monitoring. Overall, these findings provide new evidence that shareholders perceive regulations aiming to curtail risky tax planning differently depending on whether they target tax professionals or taxpayers.
Top-up Design and Health Care Expenditure: Evidence from Cardiac Stents
Jin, Ginger Zhe,Lien, Hsien-Ming,Tao, Xuezhen
SSRN
Taiwan’s National Health Insurance (NHI) has adopted a top-up design for cardiac stents since 2007: the NHI covers the full cost of baseline treatment (bare-metal stents); but if a patient prefers more expensive treatments (drug-eluting stents), she must pay for the incremental cost out of pocket. Such a “top-up†coverage has been advocated as a good model to provide essential care for the mass population and keep the cost of health care under control. To further reduce health spending, the NHI cut the reimbursement rate of bare-metal stents (to hospitals) by 26% in January 2009. We study how hospitals responded to this price change and how such response affects the actual payment from the NHI and patients. Based on individual patient records and hospital-reported stent prices (2007-2010), we find no evidence of hospitals raising the price of drug-eluting stents. However, on average hospitals increase the number of stents per admission by 0.14 in 2009, and most of the increases are for bare metal stents. As a result, the rate cut induces about 18% more BMS usage and providers recoup up to 30% of the revenue loss in 2009 after the NHI rate cut. This suggests that the rate cut is still effective in reducing NHI expenditure on cardiac stents, despite hospital moral hazard. | 2023-02-04 19:06:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28097808361053467, "perplexity": 3063.3931227885487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500151.93/warc/CC-MAIN-20230204173912-20230204203912-00004.warc.gz"} |
https://www.physicsforums.com/threads/markov-chain-random-walk.528429/ | # Markov Chain - Random Walk
1. Sep 8, 2011
### tanzl
Suppose X is a random walk with probability
$P(X_k=+1)=p$ and $P(X_k=-1)=q=1-p$
and $S_n=X_1+X_2+...+X_n$
Can anyone explain why line 3 is equal to line 4?
$P(S_k-S_0≠0 ,S_k-S_1≠0 ,…,S_k-S_{k-1}≠0)$
$=P(X_k+X_{k-1}+⋯+X_1≠0 ,X_k+X_{k-1}+⋯+X_2≠0 ,…,X_k≠0)$
$=P( X_k≠0 ,X_k+X_{k-1}≠0 ,…,X_k+X_{k-1}+⋯+X_1≠0 )$...............Line 3
$=P( X_1≠0 ,X_2+X_1≠0 ,…,X_k+X_{k-1}+⋯+X_1≠0 )$..................Line 4
$=P( X_1≠0 ,X_1+X_2≠0 ,…,X_1+X_2+⋯+X_k≠0 )$
The above comes from a book on random walk, I attached a link here (page 36),
Thanks
2. Sep 8, 2011
### alexfloo
It's because your Xi's are all i.i.d. That means you can reorder them however you like: any permutation of (X_1, ..., X_k) has the same joint distribution, so the partial sums built in reverse order are distributed exactly like the ones built in forward order.
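For anyone who wants to see this concretely, here is a rough Monte Carlo check (my own sketch, not from the thread; the step probability and sample size are arbitrary choices): reversing the order of the i.i.d. steps leaves the probability that every partial sum is nonzero unchanged.

import numpy as np

rng = np.random.default_rng(0)
p, k, trials = 0.6, 6, 200_000

# steps X_1..X_k are i.i.d. +1/-1 with P(+1) = p
X = np.where(rng.random((trials, k)) < p, 1, -1)

# forward partial sums: X_1, X_1+X_2, ..., X_1+...+X_k
fwd = np.cumsum(X, axis=1)
# reverse-order partial sums: X_k, X_k+X_{k-1}, ..., X_k+...+X_1
rev = np.cumsum(X[:, ::-1], axis=1)

p_fwd = np.mean(np.all(fwd != 0, axis=1))
p_rev = np.mean(np.all(rev != 0, axis=1))
print(p_fwd, p_rev)  # the two estimates agree up to Monte Carlo noise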
3. Sep 9, 2011
### chiro
Hey tanzl.
It looks like they are just substituting k = 1 into line 4, based on the premise that the relationship holds for k >= 1.
As for an explanation, it looks like a simple random walk with independent increments, but from the page you cited, it appears that they are not necessarily independent which is a more general assumption than the simple random walk models.
(When each incremental random variable is independent, this simplifies things somewhat)
4. Sep 9, 2011
### tanzl
Thanks for the replies.
Hi Alexfloo, in what way do you mean the X's can be interchanged? I do know that the X's are i.i.d., but I don't see how this property helps when line 3 is adding more terms in reverse time order and line 4 is adding more terms in increasing time order.
Hi Chiro, I don't think it is simply a matter of substituting k=1 into line 3; it does not hold for k>1.
From my understanding, the X's are independent increments; I am not sure about S. But S has the Markov property.
BTW, I have read in a research paper on this problem. The proof in the paper only stated that it uses symmetry and independence property without further clarification. I am not really sure what does symmetry property refer to. | 2017-11-21 15:05:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5336127281188965, "perplexity": 578.2820895331104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806388.64/warc/CC-MAIN-20171121132158-20171121152158-00543.warc.gz"} |
https://mlir.llvm.org/docs/Dialects/SparseTensorOps/ | # MLIR
Multi-Level IR Compiler Framework
# 'sparse_tensor' Dialect
The SparseTensor dialect supports all the attributes, types, operations, and passes that are required to make sparse tensor types first class citizens within the MLIR compiler infrastructure. The dialect forms a bridge between high-level operations on sparse tensors types and lower-level operations on the actual sparse storage schemes consisting of pointers, indices, and values. Lower-level support may consist of fully generated code or may be provided by means of a small sparse runtime support library.
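As a rough illustration of such a storage scheme (a sketch of mine, not part of the dialect documentation), the classic CSR format describes a matrix with exactly these three arrays; in Python with SciPy:

import numpy as np
from scipy.sparse import csr_matrix

A = np.array([[0., 2., 0., 0.],
              [1., 0., 0., 3.],
              [0., 0., 0., 0.]])

S = csr_matrix(A)
print(S.indptr)   # "pointers": where each row starts in the indices/values arrays
print(S.indices)  # "indices": column index of every explicitly stored entry
print(S.data)     # "values": the stored nonzero entries themselves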
The concept of treating sparsity as a property, not a tedious implementation detail, by letting a sparse compiler generate sparse code automatically was pioneered for dense linear algebra by [Bik96] in MT1 (see https://www.aartbik.com/sparse.php) and formalized to tensor algebra by [Kjolstad17,Kjolstad20] in the Sparse Tensor Algebra Compiler (TACO) project (see http://tensor-compiler.org).
The MLIR implementation closely follows the “sparse iteration theory” that forms the foundation of TACO. A rewriting rule is applied to each tensor expression in the Linalg dialect (MLIR’s tensor index notation) where the sparsity of tensors is indicated using the per-dimension level types dense/compressed together with a specification of the order on the dimensions (see [Chou18] for an in-depth discussions and possible extensions to these level types). Subsequently, a topologically sorted iteration graph, reflecting the required order on indices with respect to the dimensions of each tensor, is constructed to ensure that all tensors are visited in natural index order. Next, iteration lattices are constructed for the tensor expression for every index in topological order. Each iteration lattice point consists of a conjunction of tensor indices together with a tensor (sub)expression that needs to be evaluated for that conjunction. Within the lattice, iteration points are ordered according to the way indices are exhausted. As such these iteration lattices drive actual sparse code generation, which consists of a relatively straightforward one-to-one mapping from iteration lattices to combinations of for-loops, while-loops, and if-statements.
• [Bik96] Aart J.C. Bik. Compiler Support for Sparse Matrix Computations. PhD thesis, Leiden University, May 1996.
• [Kjolstad17] Fredrik Berg Kjolstad, Shoaib Ashraf Kamil, Stephen Chou, David Lugato, and Saman Amarasinghe. The Tensor Algebra Compiler. Proceedings of the ACM on Programming Languages, October 2017.
• [Kjolstad20] Fredrik Berg Kjolstad. Sparse Tensor Algebra Compilation. PhD thesis, MIT, February, 2020.
• [Chou18] Stephen Chou, Fredrik Berg Kjolstad, and Saman Amarasinghe. Format Abstraction for Sparse Tensor Algebra Compilers. Proceedings of the ACM on Programming Languages, October 2018.
## Attribute definition ¶
### SparseTensorEncodingAttr ¶
An attribute to encode TACO-style information on sparsity properties of tensors. The encoding is eventually used by a sparse compiler pass to generate sparse code fully automatically for all tensor expressions that involve tensors with a sparse encoding. Compiler passes that run before this sparse compiler pass need to be aware of the semantics of tensor types with such an encoding.
Example:
#DCSC = #sparse_tensor.encoding<{
dimLevelType = [ "compressed", "compressed" ],
dimOrdering = affine_map<(i,j) -> (j,i)>,
pointerBitWidth = 32,
indexBitWidth = 8
}>
... tensor<8x8xf64, #DCSC> ...
#### Parameters: ¶
| Parameter | C++ type | Description |
| --- | --- | --- |
| dimLevelType | ::llvm::ArrayRef<SparseTensorEncodingAttr::DimLevelType> | Per-dimension level type |
| dimOrdering | AffineMap | |
| pointerBitWidth | unsigned | |
| indexBitWidth | unsigned | |
## Operation definition ¶
### sparse_tensor.convert (::mlir::sparse_tensor::ConvertOp) ¶
Converts between different tensor types
Syntax:
operation ::= sparse_tensor.convert $source attr-dict : type($source) to type($dest)
Converts one sparse or dense tensor type to another tensor type. The rank of the source and destination types must match exactly, and the dimension sizes must either match exactly or relax from a static to a dynamic size. The sparse encoding of the two types can obviously be completely different. The name convert was preferred over cast, since the operation may incur a non-trivial cost.
When converting between two different sparse tensor types, only explicitly stored values are moved from one underlying sparse storage format to the other. When converting from an unannotated dense tensor type to a sparse tensor type, an explicit test for nonzero values is used. When converting to an unannotated dense tensor type, implicit zeroes in the sparse storage format are made explicit. Note that the conversions can have non-trivial costs associated with them, since they may involve elaborate data structure transformations. Also, conversions from sparse tensor types into dense tensor types may be infeasible in terms of storage requirements.
Examples:
%0 = sparse_tensor.convert %a : tensor<32x32xf32> to tensor<32x32xf32, #CSR>
%1 = sparse_tensor.convert %a : tensor<32x32xf32> to tensor<?x?xf32, #CSR>
%2 = sparse_tensor.convert %b : tensor<8x8xi32, #CSC> to tensor<8x8xi32, #CSR>
%3 = sparse_tensor.convert %c : tensor<4x8xf64, #CSR> to tensor<4x?xf64, #CSC>
// The following conversion is not allowed (since it would require a
// runtime assertion that the source's dimension size is actually 100).
%4 = sparse_tensor.convert %d : tensor<?xf64> to tensor<100xf64, #SV>
Traits: SameOperandsAndResultElementType
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
#### Operands: ¶
| Operand | Description |
| --- | --- |
| source | tensor of any type values |
#### Results: ¶
| Result | Description |
| --- | --- |
| dest | tensor of any type values |
### sparse_tensor.init (::mlir::sparse_tensor::InitOp) ¶
Materializes an uninitialized sparse tensor
Syntax:
operation ::= sparse_tensor.init [$sizes ] attr-dict : type($result)
Materializes an uninitialized sparse tensor with given shape (either static or dynamic). The operation is provided as an anchor that materializes a properly typed but uninitialized sparse tensor into the output clause of a subsequent operation that yields a sparse tensor as the result.
Example:
%c = sparse_tensor.init_tensor [%d1, %d2] : tensor<?x?xf32, #SparseMatrix>
%0 = linalg.matmul ins(%a, %b: tensor<?x?xf32>, tensor<?x?xf32>) outs(%c: tensor<?x?xf32, #SparseMatrix>) -> tensor<?x?xf32, #SparseMatrix>
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
#### Operands: ¶
| Operand | Description |
| --- | --- |
| sizes | index |
#### Results: ¶
| Result | Description |
| --- | --- |
| result | tensor of any type values |
### sparse_tensor.lex_insert (::mlir::sparse_tensor::LexInsertOp) ¶
Inserts a value into given sparse tensor in lexicographic index order
Syntax:
operation ::= sparse_tensor.lex_insert $tensor , $indices , $value attr-dict : type($tensor) , type($indices) , type($value)
Inserts the given value at the given indices into the underlying sparse storage format of the given tensor. This operation can only be applied when a tensor materializes uninitialized with an init operation, the insertions occur in strict lexicographic index order, and the final tensor is constructed with a tensor operation that has the hasInserts attribute set.
Note that this operation is "impure" in the sense that its behavior is solely defined by side-effects and not SSA values. The semantics may be refined over time as our sparse abstractions evolve.
Example:
sparse_tensor.lex_insert %tensor, %indices, %val : tensor<1024x1024xf64, #CSR>, memref<?xindex>, f64
#### Operands: ¶
| Operand | Description |
| --- | --- |
| tensor | tensor of any type values |
| indices | 1D memref of index values |
| value | any type |
### sparse_tensor.load (::mlir::sparse_tensor::LoadOp) ¶
Rematerializes tensor from underlying sparse storage format
Syntax:
operation ::= sparse_tensor.load $tensor (hasInserts $hasInserts^)? attr-dict : type($tensor)
Rematerializes a tensor from the underlying sparse storage format of the given tensor. This is similar to the memref.load operation in the sense that it provides a bridge between a bufferized world view and a tensor world view. Unlike the memref.load operation, however, this sparse operation is used only temporarily to maintain a correctly typed intermediate representation during progressive bufferization.
The hasInserts attribute denote whether insertions to the underlying sparse storage format may have occurred, in which case the underlying sparse storage format needs to be finalized. Otherwise, the operation simply folds away.
Note that this operation is “impure” in the sense that its behavior is solely defined by side-effects and not SSA values. The semantics may be refined over time as our sparse abstractions evolve.
Example:
%1 = sparse_tensor.load %0 : tensor<8xf64, #SV>
Traits: SameOperandsAndResultType
#### Attributes: ¶
| Attribute | MLIR Type | Description |
| --- | --- | --- |
| hasInserts | ::mlir::UnitAttr | unit attribute |
#### Operands: ¶
| Operand | Description |
| --- | --- |
| tensor | tensor of any type values |
#### Results: ¶
| Result | Description |
| --- | --- |
| result | tensor of any type values |
### sparse_tensor.new (::mlir::sparse_tensor::NewOp) ¶
Materializes a new sparse tensor from given source
Syntax:
operation ::= sparse_tensor.new $source attr-dict : type($source) to type($result)
Materializes a sparse tensor with contents taken from an opaque pointer provided by source. For targets that have access to a file system, for example, this pointer may be a filename (or file) of a sparse tensor in a particular external storage format. The form of the operation is kept deliberately very general to allow for alternative implementations in the future, such as pointers to buffers or runnable initialization code. The operation is provided as an anchor that materializes a properly typed sparse tensor with initial contents into a computation.
Example:
sparse_tensor.new %source : !Source to tensor<1024x1024xf64, #CSR>
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
#### Operands: ¶
| Operand | Description |
| --- | --- |
| source | any type |
#### Results: ¶
| Result | Description |
| --- | --- |
| result | tensor of any type values |
### sparse_tensor.release (::mlir::sparse_tensor::ReleaseOp) ¶
Releases underlying sparse storage format of given tensor
Syntax:
operation ::= sparse_tensor.release $tensor attr-dict : type($tensor)
Releases the underlying sparse storage format for a tensor that materialized earlier through a new operator, init operator, or a convert operator with an annotated tensor type as destination (unless that convert is folded away since the source and destination types were identical). This operation should only be called once for any materialized tensor. Also, after this operation, any subsequent memref querying operation on the tensor returns undefined results.
Note that this operation is "impure" in the sense that its behavior is solely defined by side-effects and not SSA values. The semantics may be refined over time as our sparse abstractions evolve.
Example:
sparse_tensor.release %tensor : tensor<1024x1024xf64, #CSR>
#### Operands: ¶
| Operand | Description |
| --- | --- |
| tensor | tensor of any type values |
### sparse_tensor.indices (::mlir::sparse_tensor::ToIndicesOp) ¶
Extracts indices array at given dimension from a tensor
Syntax:
operation ::= sparse_tensor.indices $tensor , $dim attr-dict : type($tensor) to type($result)
Returns the indices array of the sparse storage format at the given dimension for the given sparse tensor. This is similar to the memref.buffer_cast operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the memref.buffer_cast operation, however, this sparse operation actually lowers into a call into a support library to obtain access to the indices array.
Example:
%1 = sparse_tensor.indices %0, %c1 : tensor<64x64xf64, #CSR> to memref<?xindex>
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
#### Operands: ¶
| Operand | Description |
| --- | --- |
| tensor | tensor of any type values |
| dim | index |
#### Results: ¶
| Result | Description |
| --- | --- |
| result | strided memref of any type values of rank 1 |
### sparse_tensor.pointers (::mlir::sparse_tensor::ToPointersOp) ¶
Extracts pointers array at given dimension from a tensor
Syntax:
operation ::= sparse_tensor.pointers $tensor , $dim attr-dict : type($tensor) to type($result)
Returns the pointers array of the sparse storage format at the given dimension for the given sparse tensor. This is similar to the memref.buffer_cast operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the memref.buffer_cast operation, however, this sparse operation actually lowers into a call into a support library to obtain access to the pointers array.
Example:
%1 = sparse_tensor.pointers %0, %c1 : tensor<64x64xf64, #CSR> to memref<?xindex>
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
#### Operands: ¶
| Operand | Description |
| --- | --- |
| tensor | tensor of any type values |
| dim | index |
#### Results: ¶
| Result | Description |
| --- | --- |
| result | strided memref of any type values of rank 1 |
### sparse_tensor.values (::mlir::sparse_tensor::ToValuesOp) ¶
Extracts numerical values array from a tensor
Syntax:
operation ::= sparse_tensor.values $tensor attr-dict : type($tensor) to type($result)
Returns the values array of the sparse storage format for the given sparse tensor, independent of the actual dimension. This is similar to the memref.buffer_cast operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the memref.buffer_cast operation, however, this sparse operation actually lowers into a call into a support library to obtain access to the values array.
Example:
%1 = sparse_tensor.values %0 : tensor<64x64xf64, #CSR> to memref<?xf64>
Interfaces: NoSideEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
#### Operands: ¶
OperandDescription
tensortensor of any type values
#### Results: ¶
ResultDescription
resultstrided memref of any type values of rank 1 | 2021-11-29 03:34:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6909862756729126, "perplexity": 6124.519646840624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358685.55/warc/CC-MAIN-20211129014336-20211129044336-00624.warc.gz"} |
https://www.physicsforums.com/threads/integrating-the-complex-conjugate-of-z-with-respect-to-z.315128/ | # Integrating the Complex conjugate of z with respect to z
1. May 19, 2009
### Deevise
I'm doing a bit of contour integration, and a question came up with a term in it I am unsure of how to do; in its simplest form it would be
$$\int \bar{z} \, dz$$
where z is a complex number and $$\bar{z}$$ is its conjugate. Hmm, I can't seem to get the formatting to work out properly.. :S
Last edited: May 19, 2009
2. May 19, 2009
### Dick
If you are integrating over a circular contour of radius R then zz*=R^2, so z*=R^2/z. Otherwise you just have to take the contour and write it as z=(x(t)+iy(t)), so z*=(x(t)-iy(t)).
3. May 19, 2009
### Deevise
Well, now I feel kind of stupid... it's line integration, not contour integration :P The question reads:
Evaluate the integral:
$$\int_L ( \bar{z} + 1 ) \, dz$$
Where L is the line segment from -i to 1+i.
Normally I would just integrate and sub in the start and end points, but I have totally drawn a blank on what to do with the conjugate in this case...
4. May 19, 2009
### Dick
Just treat it as a complex line integral. You can only 'sub in' endpoints if the function you are integrating is analytic and has an antiderivative. (z*+1) doesn't. Parametrize L as a function of t and integrate dt. Like I said, if you have z=(x(t)+iy(t)) then z*=(x(t)-iy(t)).
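For what it's worth, here is a quick symbolic check of that parametrization approach (my own sketch using SymPy, not part of the thread; the segment is written as z(t) = -i + t(1+2i) for t in [0,1]):

import sympy as sp

t = sp.symbols('t', real=True)
z = -sp.I + t*(1 + 2*sp.I)           # parametrize L from -i to 1+i
dz = sp.diff(z, t)                   # dz = (1+2i) dt
integrand = (sp.conjugate(z) + 1)*dz
result = sp.integrate(integrand, (t, 0, 1))
print(sp.simplify(result))           # expected: 3/2 + 3*I by direct computation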
5. May 19, 2009
### squidsoft
6. May 19, 2009
### Deevise
I think it's time I went to sleep... Yeah, now that you mention the lack of an antiderivative, I knew that. I think a good night's sleep will prepare me better for this exam than grinding my head into non-existent problems...
sorry to waste your time with inane questions lol... Thanks for the prompt responces. | 2017-12-16 21:11:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7912722826004028, "perplexity": 1468.0158177248234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948589177.70/warc/CC-MAIN-20171216201436-20171216223436-00011.warc.gz"} |
http://wealoneonearth.blogspot.nl/2011_11_01_archive.html | ## 20111121
### The Vaccine Controvery
This past Friday I had the chance to meet Mark Largent, a historian of science at Michigan State University, who after writing an excellent history of American eugenics, is working on a history of the anti-vaccination movement. The anti-vaccination movement is one of the more contentious flashpoints in popular culture, with views on vaccines ranging from the deliberate poisoning of children by doctors, to anti-science nonsense that threatens to reverse a century of healthcare gains. Largent’s methodology is to look at the people involved and try to see the world as they believe it, without doing violence. The question of whether vaccines cause autism is scientifically and socially irrelevant. But it is a proxy for a wider and more important spectrum of beliefs about personal responsibility and biomedical interventions, the interface between personal liberty and public goods, and the political consequences of these beliefs.
Some numbers: Currently, 40% of American parents have delayed one or more recommended vaccines, and 11.5% have refused a state-mandated vaccine. 23 states containing more than half the population allow “philosophical exemptions” to mandatory vaccination, which are trivial to obtain. The number of inoculations given to children has increased from 9 in the mid-1980s to 26 today. As a single father, Largent understands the anti-vaccine movement on a basic level: babies hate shots, and doctors administer dozens of them from seconds after birth to two years old.
The details of “vaccines-cause-autism” are too complex to go into here, but Largent is an expert on Andrew Wakefield, the now-discredited British physician who authored the withdrawn Lancet study which suggested a link between the MMR vaccine and autism, and Jenny McCarthy, who campaigned against the mercury-containing preservative thimerosal in the US. Now, as for the scientific issue, it is settled: vaccines do not cause autism. Denmark, which keeps comprehensive health records, shows no difference in autism cases between the vaccinated, partially vaccinated, and unvaccinated. We don't know what causes autism, or why cases of autism are increasing, but it probably is related to more rigorous screening and older mothers, as opposed to any external cause. Certainly, the epidemiological cause-and-effect for vaccines and autism is about as strong as the link between cellphones and radiation, namely non-existent.
But parents, looking for absolute safety and certainty for their children, aren’t convinced by scientific studies, simply because it is effectively impossible to prove a negative to their standards. A variety of pro-vaccine advocates, Seth Mnookin and Paul Offit among them, have cast this narrative as the standard science denialism story, with deluded and dangerous parents threatening to return us to the bad old days of polio. This “all-or-nothing” demonization is unhelpful, and serves merely to alienate the parents doctors are trying to reach. Rather, Largent proposed that we need to have a wider social debate on the number and purpose of vaccines, and the relationship between doctors, parents, and the teachers and daycare workers who are the first line of vaccine compliance.
Now, thinking about this in the context of my studies, this looks like a classic issue of biopolitics and competing epistemologies, and is tied directly into the consumerization of the American healthcare system. According to Foucault, modernity was marked by the rise of biopolitics. “One might say that the ancient right to take life or let live was replaced by a power to foster life or disallow it to the point of death.” While the sovereign state—literally a man in a shiny hat with a sword—killed his enemies to maintain order, the modern state tends to the population like a garden, keeping careful statistics and intervening to maintain population health.
From a bureaucratic rationalist point of view, vaccines are an ideal tool, requiring a minimal intervention, and with massive and observable effects on the rolls of births and deaths, and the frequency and severity of epidemics. Parents don't see these facts, particularly when vaccines have been successful. What they do see is that babies hate vaccines. I'm not being flip when I say that the suffering of children is of no account to the bureaucratic perspective: the official CDC claim is that 1/3 of babies are “fretful” after receiving vaccines. This epistemology justifies an unlimited expansion of the vaccination program, since any conceivable amount of fretfulness is offset by even a single prevented death. For parents and pediatricians, who must deal with the expense, inconvenience, and suffering of each shot, the facts appear very different. These mutually incompatible epistemologies mean that pro- and anti-vaccine advocates are talking past each other.
The second side of the story is how responsibility for maintaining health has been increasingly shifted onto patients. From the women’s health movement of the 1970s, with Our Bodies, Ourselves, to the 1997 Consumer Bill of Rights and Responsibilities, to Medicare Advantage plans, ordinary people are increasingly expected to take part in healthcare decisions that were previously the sole province of doctors. The anti-vaccine movement has members from the Granola Left and the Libertarian Right, but it is overwhelming composed of upper-middle class women, precisely the people who have seen the greatest increase in medical knowledge and choice over the past few decades. Representatives of the healthcare system should not be surprised that after empowering patients to make their own decisions, they sometimes make decisions against medical advice.
So how to resolve this dilemma? The pro-vaccine advocates suggest we either force people to get vaccinated, a major intrusion of coercive power into a much more liberalized medical system, or we somehow change the epistemology of parents. Both of these approaches are unworkable. Likewise, anti-vaccine advocates should lay off vaccines-cause-autism. They may have valid complaints, but at this point, the science is in, and continuing to push that line really pisses scientists off. Advocates need to understand the standards of scientific knowledge, and what playing in a scientific arena entails.
In the vaccine controversy, as in so many others, what we need is a forum that balances both scientific and non-scientific knowledge, so that anti-vaccine advocates can speak their case without mangling science in the process. I don't know what that forum would look like, who would attend, or how it would achieve this balance, but the need for better institutional engagement between science and society is clear.
## 20111101
### Visual Analogue of a Shepard Tone
A Shepard tone is an auditory illusion which appears to indefinitely ascend or descend in pitch without actually changing pitch at all.
Shepard tones work because they actually contain multiple tones, separated by octaves. As tones get higher in pitch, they fade out. New tones fade in at the lower pitches. The net effect is that it sounds like all the constituent tones are continually increasing in pitch -- and they are, but pitches fade in and out so that, on average, the pitch composition is constant.
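For readers who want to hear the effect, below is a minimal NumPy sketch of a discrete Shepard scale (my own illustration, not from the original post; the sample rate, number of octave components and Gaussian loudness envelope are arbitrary choices). After twelve semitone steps the signal is back exactly where it started, yet each step sounds like a rise.

import numpy as np

fs = 44100                # sample rate (Hz)
note_dur = 0.35           # length of each note (s)
n_oct = 6                 # number of octave-separated partials per note
f_min = 32.7              # bottom of the frequency ladder (roughly C1)
center = n_oct / 2.0      # Gaussian loudness envelope centered mid-ladder

t = np.arange(int(fs*note_dur)) / fs
fade = np.minimum(1.0, np.minimum(t, note_dur - t) / 0.02)  # 20 ms fade to avoid clicks

notes = []
for step in range(24):            # two full "laps" of 12 semitones
    pitch = (step % 12) / 12.0    # position within the octave, wraps every 12 steps
    note = np.zeros_like(t)
    for k in range(n_oct):
        f = f_min * 2.0**(k + pitch)                       # octave-separated partials
        amp = np.exp(-0.5*((k + pitch - center)/1.5)**2)   # fade in low, fade out high
        note += amp*np.sin(2*np.pi*f*t)
    notes.append(fade*note/np.abs(note).max())
audio = np.concatenate(notes)
# e.g. scipy.io.wavfile.write("shepard_scale.wav", fs, (audio*32767).astype(np.int16))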
Since 2D quasicrystals can be rendered as a sum of plane-waves, it is possible to form the analogue of a Shepard tone with these visual objects. Each plane wave is replaced with a collection of plane waves, at 2,4,8,16... etc times the spatial frequency of the original plane wave. The relative amplitudes of the plane waves are set so that the spatial frequency stays approximately the same even as the underlying waves are scaled. The result is a quasicrystal that appears to zoom in or out indefinitely, without fundamentally changing in structure. There is no reason to demonstrate this effect using quasicrystals, as it would be evident even with a single plane wave. However, I find the interplay between the infinite scaling and the emergent patterns of quasicrystals to be particularly appealing.
More vaguely nauseating quasicrystal zoom GIFs can be found here. You can run and modify the code I used to generate these animations. Copy the following code into a file called QuasiZoom.java. Then, in a terminal, type "javac QuasiZoom.java" in the same directory, and then "java QuasiZoom". Various parameters to tune the output are noted in comments in the code. Then use Gimp to make an animated GIF.
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import static java.lang.Math.*;

public class QuasiZoom {
    // Defines a gaussian function. We will use this to define the
    // envelope of spatial frequencies
    public static double gaussian(double x) {
        return exp(-x*x/2)/sqrt(2*PI);
    }

    public static void main(String[] args) throws IOException {
        int k = 5;          // number of plane waves
        int stripes = 3;    // number of stripes per wave
        int N = 500;        // image size in pixels
        int divisions = 40; // number of frames to divide the animation into
        int N2 = N/2;
        BufferedImage it = new BufferedImage(N, N, BufferedImage.TYPE_INT_RGB);
        // the range of different spatial frequencies
        int[] M = new int[]{1, 2, 4, 8, 16, 32, 64, 128, 256};
        // the main (central) spatial frequency
        double mean = log(16);
        // the spread of the spatial frequency envelope
        double sigma = 1;
        // counts the frames
        int ss = 0;
        // iterate over spatial scales, scaling geometrically
        for (double sc = 2.0; sc > 1.0; sc /= pow(2, 1./divisions)) {
            System.out.println("frame = " + ss);
            // adjust the wavelengths for the current spatial scale
            double[] m = new double[M.length];
            for (int l = 0; l < M.length; l++) m[l] = M[l]*sc;
            // modulate each wavelength by a gaussian envelope in log
            // frequency, centered around aforementioned mean with defined
            // standard deviation
            double sum = 0;
            double[] W = new double[M.length];
            for (int l = 0; l < M.length; l++) {
                W[l] = gaussian((log(m[l]) - mean)/sigma);
                sum += W[l];
            }
            sum *= k;
            for (int i = 0; i < N; i++) {
                for (int j = 0; j < N; j++) {
                    double x = j - N2, y = i - N2; // cartesian coordinates
                    double C = 0;                  // accumulator
                    // iterate over all k plane waves
                    for (double t = 0; t < PI; t += PI / k) {
                        // compute the phase of the plane wave
                        double ph = (x*cos(t) + y*sin(t))*2*PI*stripes/N;
                        // take a weighted sum over the different spatial scales
                        for (int l = 0; l < M.length; l++) C += (cos(ph*m[l]))*W[l];
                    }
                    // convert the summed waves to a [0,1] interval
                    // and then convert to [0,255] greyscale color
                    C = min(1, max(0, (C*0.5 + 0.5)/sum));
                    int c = (int) (C * 255);
                    it.setRGB(i, j, c | (c << 8) | (c << 16));
                }
            }
            ImageIO.write(it, "png", new File("out" + (ss++) + ".png"));
        }
    }
}
The infinite zoom effects also creates a motion-fatigue optical illusion, which will cause illusory contraction of your visual field after staring at the above GIF for a while. This is caused by the neurons that encode motions "getting tired" or adapting to the continual motion. When you look away from the animation, there is a rebound effect where neurons end up encoding stationary inputs as moving in the direction opposite of the animation. | 2017-10-20 12:31:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19820840656757355, "perplexity": 3126.63140811136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824104.30/warc/CC-MAIN-20171020120608-20171020140608-00161.warc.gz"} |
https://physicstravelguide.com/advanced_tools/manifold | # Manifold
## Why is it interesting?
Manifolds are especially important in General Relativity. In the presence of massive objects, spacetime is curved and no longer flat. Manifolds are the correct mathematical tool to describe spacetime when it isn't flat.
## Student
Illustration of the map from some two-dimensional manifold $M$ onto $R^2$. The neighborhood $U$ of $P$ in $M$ is mapped onto $V$ in $R^2$. This map provides a coordinate system in the neighborhood of $P$.
The standard sentence is: A Manifold is a set of points that locally "looks the same" as some Euclidean space. Such geometric spaces are particularly nice to investigate, because we can use, locally, the familiar tools from elementary calculus. Happily, such spaces are not only nice to investigate, but also useful in physics to describe nature.
The idea of locality is made precise by the mathematical concept of a neighborhood. To each point of the manifold, a neighborhood must exist that is just like a small piece of Euclidean space.
The idea of "looks like Euclidean space" is made precise by the concept of a diffeomorphism: a map that is smooth, one-to-one, onto, and has a smooth inverse. If such a map exists from each neighborhood onto some piece of Euclidean space, the space in question is a manifold.
This map from some neighborhood $X$ of the manifold onto Euclidean space gives us local coordinates of the points in this neighborhood. The inverse map is called a parametrization of the neighborhood. Therefore, another way of thinking about an n-dimensional manifold is that it's a set which can be given n independent coordinates in some neighborhood of any point.
## Researcher
The motto in this section is: the higher the level of abstraction, the better.
## Examples
The Unit Circle
Image: Circle_with_overlapping_manifold_charts.png by KSmrq (derivative work by Pbroks13) [CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
An example for a manifold is the unit circle $S^1$, mathematically defined as the set of points fulfilling the condition $x^2+y^2=1$. This is commonly expressed as:
$$S^1 = \{ (x,y) \in R^2 : x^2+y^2 =1 \}$$
This manifold is one dimensional, hence the $1$ behind the $S$, because two degrees of freedom $x,y$ are reduced to one by the condition $x^2+y^2=1$. To see that this is a manifold, we need to find a map to each neighborhood onto Euclidean space.
The upper semicircle ($y>0$) is parametrized by $\Phi_1(x)=(x,\sqrt{1-x^2})$. $\Phi_1(x)$ maps the open interval $-1<x<1$ bijectively onto the upper semicircle. The inverse map is smooth and gives us coordinates of the upper semicircle.
For the lower semicircle ($y<0$) a parametrization is $\Phi_2(x)= (x,-\sqrt{1-x^2})$. Therefore we have found so far two local parametrizations.
These maps do not cover the points $(1,0)$ and $(-1,0)$ and we need further maps for the neighborhoods of these points: $\Phi_3(y)= (\sqrt{1-y^2},y)$ and $\Phi_4(y)= (-\sqrt{1-y^2},y)$. These maps are appropriate for the right and left halves of the circle. Nevertheless, they do not include $(0,1)$ and $(0,-1)$.
We can see that the neighborhoods overlap, which is no problem.
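As a quick numerical sanity check of these four parametrizations (a small NumPy sketch of mine, not part of the original article), one can verify that every parametrized point indeed lies on the unit circle:

import numpy as np

x = np.linspace(-1, 1, 201)[1:-1]   # the open interval (-1, 1)

phi1 = np.stack([x,  np.sqrt(1 - x**2)], axis=1)   # upper semicircle
phi2 = np.stack([x, -np.sqrt(1 - x**2)], axis=1)   # lower semicircle
phi3 = np.stack([ np.sqrt(1 - x**2), x], axis=1)   # right half
phi4 = np.stack([-np.sqrt(1 - x**2), x], axis=1)   # left half

pts = np.concatenate([phi1, phi2, phi3, phi4])
# every parametrized point satisfies x^2 + y^2 = 1 up to rounding error
assert np.allclose((pts**2).sum(axis=1), 1.0)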
The Two-Sphere
Another example of a manifold is the two-sphere. The two-sphere $S^2$ is defined as the set of points in $R^3$ for which $x^2+y^2+z^2=const$ holds, where $const$ is, of course, the square of the radius of the sphere. The two-sphere is two-dimensional because the definition involves $3$ coordinates and one condition, which eliminates one degree of freedom. Therefore, to see that the sphere is a manifold, we need a map onto $R^2$. This map is given by the usual spherical coordinates. Almost all points on the surface of the sphere can be identified unambiguously with a coordinate combination of the form $(\varphi, \theta)$. Almost all! Where is the pole $\varphi=0$ mapped to? There is no one-to-one identification possible because the pole is mapped to a whole line, as indicated in the image. Therefore this map does not work for the complete sphere and we need another map in the neighborhood of the pole to describe things there. A similar problem appears for the map on the semicircle $\theta=0$. Each point there can be mapped in $R^2$ to both $\theta=0$ and $\theta=2 \pi$, which is again not a one-to-one map. This illustrates the fact that for manifolds there is in general not one coordinate system for all points of the manifold, only local coordinates, which are valid in some neighborhood. This is no problem because a manifold is defined to look locally like $R^n$.
The spherical coordinate map is only valid in the open neighborhood $0< \varphi <\pi , 0< \theta < 2 \pi$ and we need a second map to cover the whole sphere. We can use for example a second spherical coordinate system with a different orientation, such that the problematic poles lie at different points for this map and no longer at $\varphi=0$. With this second map, every point of the sphere has a map onto $R^2$ and the two-sphere can be seen to be a manifold.
Another trivial example for a manifold is of course $R^n$.
## History
Contributing authors: | 2022-01-21 00:00:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9430614709854126, "perplexity": 238.94295732925448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302706.62/warc/CC-MAIN-20220120220649-20220121010649-00384.warc.gz"} |
https://www.physicsforums.com/threads/innner-products-and-basis-representation.331381/ | # Innner products and basis representation
1. Aug 18, 2009
### iontail
Hi, I have a quick question on vector spaces.
Say for example we have
X = a1U1 + a2U2 + ... + anUn
this can be written as
X = sum over i (from i=1 to n) of ai Ui
Now, how can I get an expression for ai in terms of X and Ui?
Do we use the inner product to do this? Can someone please explain how to go forward.
2. Aug 18, 2009
### HallsofIvy
Staff Emeritus
If the Ui basis is "orthonormal" then, taking the inner product of X with Uk gives $<X, U_n]>= a_1 <U_1,U_k>+ \cdot\cdot\cdot+ a_k<U_k,U_k>+ \cdot\cdot\cdot+ a_n<U_n, U_k>= a_1(0)+ \cdot\cdot\cdot+ a_k(1)+ \cdot\cdot\cdot+ a_n(0)= a_k$.
That is, for an orthonormal basis, $a_k= <X, U_k>$. If the basis is NOT orthonormal, there is no simple formula. That's why orthonormal bases are so popular!
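A small NumPy illustration of this formula (my own sketch, not part of the thread; the orthonormal basis here is just the Q factor of a random QR factorization):

import numpy as np

rng = np.random.default_rng(1)
n = 4
U, _ = np.linalg.qr(rng.normal(size=(n, n)))  # columns U[:, k] form an orthonormal basis

a = rng.normal(size=n)        # "true" coefficients
X = U @ a                     # X = a_1 U_1 + ... + a_n U_n

recovered = np.array([X @ U[:, k] for k in range(n)])   # a_k = <X, U_k>
print(np.allclose(recovered, a))                        # True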
3. Aug 18, 2009
### iontail
The basis is orthonormal... so the solution you suggested should be OK... however, I don't have LaTeX and have never used it before, so I can't view your reply. Do I just download LaTeX to view the thread or do I have to do something else?
4. Aug 18, 2009
### iontail
5. Aug 18, 2009
### tiny-tim
LaTeX
Hi iontail!
You don't need to "have" LaTeX, it should be visible anyway.
There's just something wrong with that particular LaTeX …I can't read it either
(I can't see what's wrong with the code though.)
To see the original code, just click on the REPLY button.
6. Aug 19, 2009
### Дьявол
Here is what HallsofIvy wanted to write:
$$<X, U_n]>= a_1 <U_1,U_k>+ \cdot\cdot\cdot+ a_k<U_k,U_k>+ \cdot\cdot\cdot+ a_n<U_n, U_k>= a_1(0)+ \cdot\cdot\cdot+ a_k(1)+ \cdot\cdot\cdot+ a_n(0)= a_k$$ | 2017-11-18 12:36:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7907309532165527, "perplexity": 3271.0816756852523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804881.4/warc/CC-MAIN-20171118113721-20171118133721-00726.warc.gz"} |
https://takeawildguess.net/blog/linreg/linreg4/ | ## 1. Introduction
We introduced the concept of the linear-regression problem and the structure to solve it in a "machine-learning" fashion in this post; we then applied the theory to a simple but practical case of linear-behaviour identification from a bunch of synthetically generated data here, and extended the analysis to a multi-linear case, where more than one feature (or input) is fed to the model to predict the outcome, here.
We now face the implementation process with popular libraries available in the Python framework, namely Sklearn and Tensorflow.
Scikit-learn is a free software machine learning library for the Python programming language. It enables fast implementation of several classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting and k-means, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy.
TensorFlow is an open-source software library for dataflow programming across a range of tasks. It is a symbolic math library, and is mainly employed for machine learning applications and more recently deep learning modeling. It is developed by the Google Brain team.
Code-wise, such libraries let developers focus more on the model itself and achieving an overall better performance by optimizing the model hyper-parameters and by combining different models to deliver an ensemble version out of it.
We are going to implement the logic in Scikit-learn (SKL) first and then in Tensorflow (TF) in this post, while the next one treats fundamental aspects of machine learning theory, such as feature scaling, feature augmentation, via techniques such as polynomial features, and hypothesis evaluation.
## 2. Data generation
We are going to build three datasets:
1. A multi-linear model of two inputs.
2. A multi-linear model of two inputs, where one input outscales the other one.
3. A multi-linear model of two inputs, where one of them represents polynomial features.
### 2.1 A multi-linear model of two inputs
We start generating some synthetic data (Npntx*Npnty=50*50 points). We assume we know both the slopes of the two inputs ($\omega_1 = -3, \omega_2 = -1$) and the intercept ($\omega_0 = 5$) of the plane we want to identify, but we also introduce some noise with a gaussian distribution and zero-mean to the plane to make the data source a bit closer to real-world scenarios. The chart shows the generated data cloud (see this post for further details). Here follows the mathematical expression of the model:
$$y = \omega_0 + \omega_1\cdot x_1 + \omega_2\cdot x_2$$
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
from mpl_toolkits import mplot3d
Npntx, Npnty = 50, 50 # number of points
x1_ = np.linspace(-1, 1, Npntx)
x2_ = np.linspace(-1, 1, Npnty)
xx1, xx2 = np.meshgrid(x1_, x2_)
noise = 0.25*(np.random.randn(Npnty,Npntx)-1)
w0, w1, w2 = 5, -3, -1
yy = w0 + w1*xx1 + w2*xx2 + noise
zz = w0 + w1*xx1 + w2*xx2
visData1 = [xx1, xx2, yy, [w0, w1, w2]]
plt.figure(figsize=(10, 5))
ax = plt.axes(projection='3d')
ax.plot_surface(xx1, xx2, zz, rstride=1, cstride=1, cmap='viridis', edgecolor='none', alpha=0.5)
ax.scatter(xx1, xx2, yy, cmap='viridis', linewidth=0.5, alpha=0.5)
plt.xlabel("X1")
plt.ylabel("X2")
plt.ylabel("Y")
ax.set_xlabel('x1')
ax.set_ylabel('x2')
ax.set_zlabel('y')
ax.view_init(30, 35)
plt.show()
The dataset is generated by creating two 2D arrays, one for inputs and one for outputs. The input array, XX, is the horizontal concatenation of the flattened version of the two input arrays, xx1 and xx2. There is no need to add the column filled with 1s, as we had to do in the Numpy implementation.
We first stack the two 1D arrays vertically and then transpose the result to get the examples (50*50=2500) over the rows and the features over the columns (2).
The output 2D array is just a single column filled with the y values. Here are the shapes of the arrays.
XX1 = np.vstack((xx1.flatten(), xx2.flatten())).T
YY1 = yy.flatten().reshape(-1,1)
print([XX1.shape, YY1.shape])
[(2500, 2), (2500, 1)]
### 2.2 A multi-linear model of two inputs, where one input outscales the other one
We generate some synthetic data (Npntx*Npnty = 50*50 points), but we make sure that the range of one input is far greater than that of the other. In particular, x1 spans from -100 to 100, while x2 spans from -1 to 1. However, the mathematical relation does not change:
$$y = \omega_0 + \omega_1\cdot x_1 + \omega_2\cdot x_2$$
In a real-life task, it is common to face such situations. Credit-risk management is one example, where the inputs to the model could be the number of employees of the company applying for the loan and its annual revenue: the order of magnitude of the latter is, in general, far greater than that of the former.
You can see in the chart below how the second input appears to have no impact on the outcome of the model.
Npntx, Npnty = 50, 50 # number of points
x1_ = np.linspace(-100, 100, Npntx)
x2_ = np.linspace(-1, 1, Npnty)
xx1, xx2 = np.meshgrid(x1_, x2_)
noise = 0.25*(np.random.randn(Npnty,Npntx)-1)
w0, w1, w2 = 2, -3, -1
yy = w0 + w1*xx1 + w2*xx2 + noise
zz = w0 + w1*xx1 + w2*xx2
visData2 = [xx1, xx2, yy, [w0, w1, w2]]
plt.figure(figsize=(10, 5))
ax = plt.axes(projection='3d')
ax.plot_surface(xx1, xx2, zz, rstride=1, cstride=1, cmap='viridis', edgecolor='none', alpha=0.5)
ax.scatter(xx1, xx2, yy, cmap='viridis', linewidth=0.5, alpha=0.5)
plt.xlabel("X1")
plt.ylabel("X2")
plt.ylabel("Y")
ax.set_xlabel('x1')
ax.set_ylabel('x2')
ax.set_zlabel('y')
ax.view_init(30, 35)
plt.show()
The dataset is generated with the same procedure.
XX2 = np.vstack((xx1.flatten(), xx2.flatten())).T
YY2 = yy.flatten().reshape(-1,1)
print([XX2.shape, YY2.shape])
[(2500, 2), (2500, 1)]
### 2.3 A multi-linear model of two inputs, where one of them represents polynomial features.
We generate some synthetic data (Npntx*Npnty=50*50 points), where the first feature x1 shows off as a quadratic function. The mathematical correlation is as follows:
$$y = \omega_0 + \omega_1\cdot x_1 + \omega_2\cdot x_1^2 + \omega_3\cdot x_2$$
In real-life tasks, it is common to face such situations. Joule heating is one example, where the heat released by a light bulb is proportional to the square of the electric current through the wires.
You can see in the below chart how the first input is responsible for the curvature of the generated surface.
Npntx, Npnty = 50, 50 # number of points
x1_ = np.linspace(-5, 5, Npntx)
x2_ = np.linspace(-5, 5, Npnty)
xx1, xx2 = np.meshgrid(x1_, x2_)
noise = 0.25*(np.random.randn(Npnty,Npntx)-1)
w0, w1, w2, w3 = 2, -3, -1, 2
yy = w0 + w1*xx1 + w2*xx1**2 + w3*xx2 + noise
zz = w0 + w1*xx1 + w2*xx1**2 + w3*xx2
visData3 = [xx1, xx2, yy, [w0, w1, w2, w3]]
plt.figure(figsize=(10, 5))
ax = plt.axes(projection='3d')
ax.plot_surface(xx1, xx2, zz, rstride=1, cstride=1, cmap='viridis', edgecolor='none', alpha=0.5)
ax.scatter(xx1, xx2, yy, cmap='viridis', linewidth=0.5, alpha=0.5)
plt.xlabel("X1")
plt.ylabel("X2")
plt.ylabel("Y")
ax.set_xlabel('x1')
ax.set_ylabel('x2')
ax.set_zlabel('y')
ax.view_init(30, 35)
plt.show()
The dataset is generated with the same procedure.
XX3 = np.vstack((xx1.flatten(), xx1.flatten()**2, xx2.flatten())).T
YY3 = yy.flatten().reshape(-1,1)
print([XX3.shape, YY3.shape])
[(2500, 3), (2500, 1)]
## 3. Linear regression with Scikit-learn
We import the class required to define the linear model, LinearRegression, from the linear_model package, and the function used to evaluate the model performance, mean_squared_error (MSE), from the metrics package. It is enough to fit the model parameters to the first dataset and to calculate the model prediction for the inputs of each sample of the same dataset.
We can realize (and appreciate) how the overall code ends up being much more compact and easier to write and maintain.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
lm = LinearRegression()
lm.fit(XX1, YY1)
Ypred = lm.predict(XX1)
print('The final MSE is: {}'.format(mean_squared_error(YY1, Ypred)))
print('The final parameter values: {}'.format(np.hstack((lm.intercept_, lm.coef_[0,:])).tolist()))
The final MSE is: 0.060867550456315374
The final parameter values: [4.753763678286057, -3.001900368199036, -1.0124158793115567]
xx1, xx2, yy = visData1
ypred = Ypred.reshape(-1, xx1.shape[-1])
plt.figure(figsize=(10, 5))
ax = plt.axes(projection='3d')
ax.plot_surface(xx1, xx2, ypred, rstride=1, cstride=1, cmap='viridis', edgecolor='none', alpha=0.5)
ax.scatter(xx1, xx2, yy, cmap='viridis', linewidth=0.5, alpha=0.5)
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('y')
ax.view_init(20, 30)
plt.tight_layout()
plt.show()
In three lines of Python code, we are now able to perform what requires a lot of effort and coding when starting from scratch.
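As a quick illustration (a sketch added here, not part of the original post), the very same pattern applies unchanged to the other two datasets built in section 2; only the input/output arrays change:
# Sketch: the same Scikit-learn workflow reused on datasets 2 and 3 (XX2/YY2 and XX3/YY3 are defined above)
from sklearn.linear_model import LinearRegression  # already imported earlier
from sklearn.metrics import mean_squared_error
for XX, YY in [(XX2, YY2), (XX3, YY3)]:
    lm_k = LinearRegression()
    lm_k.fit(XX, YY)
    print(mean_squared_error(YY, lm_k.predict(XX)), np.hstack((lm_k.intercept_, lm_k.coef_[0, :])).tolist())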
## 4. Linear regression with TensorFlow
We import the entire library, from which we access the various methods required to describe the model, to train it on the dataset, and to estimate the outputs that are then compared to the dataset ground-truth values.
### 4.1 Model definition
The very first step is to reset the TF to the default graph, which means TF clears the default graph stack and resets the global default graph.
We then define the x and y variables as placeholders, and the ww parameters as variables.
In short, tf.Variable is used for trainable parameters of the model, while tf.placeholder is used to feed actual training examples. That’s why we need to assign initial values, often random-generated, to the TF variables only. The variable values can therefore be updated during optimization, can be shared and be stored after training. We assign the placeholder type as float32 to both input and output. The size of the input placeholder, xp, is set to (None, 2), since the number of rows is automatically determined from the batch size we feed to the optimizer object in the training step, while the column size is equal to the number of features (2 for the first case). The size of the output placeholder is instead set to (None, 1), since only one value is required for each sample.
The feature weights ww and bias bb, which is equivalent to the Sk-Learn intercept, are defined with the Variable method and initialized as a (2,1) and a (1,1) zero-arrays, respectively.
The final step is to combine TF variables and placeholders to translate the mathematical model into code. The matrix multiplication between the input matrix and the weight array is performed with matmul. At the end of these steps, we inspect the shape of each tensor. The question-mark symbol says that TF needs some data to determine the actual row size.
import tensorflow as tf
tf.reset_default_graph()
xp = tf.placeholder(dtype=tf.float32, shape=(None, 2))
yp = tf.placeholder(dtype=tf.float32, shape=(None, 1))
ww = tf.Variable(np.zeros((2,1)), dtype=tf.float32)
bb = tf.Variable(np.zeros((1,1)), dtype=tf.float32)
ymdl = tf.matmul(xp, ww) + bb
print('Input shape: {}'.format(xp.shape))
print('Ground-truth output shape: {}'.format(yp.shape))
print('Weight shape: {}'.format(ww.shape))
print('Model output shape: {}'.format(ymdl.shape))
Input shape: (?, 2)
Ground-truth output shape: (?, 1)
Weight shape: (2, 1)
Model output shape: (?, 1)
The loss function is easily implemented using the mean_squared_error method from the losses package, while the optimizer object is what actually adjusts the model parameters (the TF variables) with the gradient-descent algorithm.
mdlLoss = tf.losses.mean_squared_error(yp, ymdl)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(mdlLoss)  # the learning-rate value here is an assumed, illustrative choice
### 4.2 Model training
The next steps are to:
1. Initialize the variables.
2. Run a new session, which lets us perform the actual computation by exploiting the graph structure previously defined.
3. Run the optimizer for as many steps as the number of epochs Nepoch.
4. Run the model with the final parameter set and store the model output ymdl into the prediction array.
5. Retrieve the final parameter values by running a dedicated session. A different way would be to call the global_variables() method and get the variable values by key name.
Nepoch = 5000
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    Jevol = []
    for kk in range(Nepoch):
        mdl_loss, _ = sess.run([mdlLoss, optimizer], feed_dict={xp: XX1, yp: YY1})
        if kk%100 == 0:
            Jevol.append((kk, mdl_loss))
        if kk==Nepoch-1:
            print('The final model loss is {}'.format(mdl_loss))
    Ypred_tf = sess.run(ymdl, feed_dict={xp: XX1})
    bOpt, wOpt = sess.run([bb, ww])
The final model loss is 0.0608675591647625
Jevol = np.array(Jevol)
plt.figure(figsize=(10, 5))
plt.plot(Jevol[:,0], np.log(Jevol[:,1]), lw=2)
plt.xlabel("training steps ($N_{epoch}$)")
plt.ylabel("Logarithm loss trend ($log(J_{evol})$)")
plt.title('The model loss over the training epochs')
plt.show()
print('The final MSE is: {}'.format(mean_squared_error(YY1, Ypred_tf)))
print('The final parameter values: {}'.format(np.vstack((bOpt, wOpt))[:,0].tolist()))
The final MSE is: 0.060867550351984094
The final parameter values: [4.753759384155273, -3.001899242401123, -1.012415885925293]
xx1, xx2, yy = visData1
ypredTF = Ypred_tf.reshape(-1, xx1.shape[-1])
plt.figure(figsize=(10, 5))
ax = plt.axes(projection='3d')
ax.plot_surface(xx1, xx2, ypredTF, rstride=1, cstride=1, cmap='viridis', edgecolor='none', alpha=0.5)
ax.scatter(xx1, xx2, yy, cmap='viridis', linewidth=0.5, alpha=0.5)
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('y')
ax.view_init(20, 30)
plt.tight_layout()
plt.show()
https://answers.opencv.org/questions/134638/revisions/ | # Revision history [back]
### Identify the best angle to orient a map
I can't upload my image or publish links so I'll describe my map. It is a robotics map that is similar to the one you will find if you Google "Centibots: the Hundred-Robots Project". Take away the coloured lines from that image and that is a good representation of what I am working with.
Essentially, I am trying to orient the image so that the longest walls are aligned horizontally to within a hundredth of a degree. This also includes hallways that have bumpy walls and possibly insets. It's fairly easy to do in GIMP, but automating it has turned out to be a rabbit hole.
The first thing I did was look into using Houghlines for this, but it doesn't help me find the "best" lines in my image. It just produces a set of lines and angles; this is a problem because there may be a lot of lines in my map and I need to know the length and angle of each of them to evaluate the best map orientation. After a lot of tinkering I wasn't able to get the function to properly detect precise location of my lines, even after giving rho and theta a precision of 0.01 and np.pi/18000 respectively.
Ok, that's fine, so I decided I would take a different approach. I would use findNonZero to get the location of all black pixels (in a bw image). Apply a rotation transform of 0 to 180 degrees and then count the number of black pixels that share the same Y coordinate (within a small tolerance).
This is my code for doing that and it is extremely processor intensive. So much so that It will take 10 minutes to compute the result of 6100 points. Even further down the rabbit hole I went to try and use multi-core processing in Python and shared memory access. This is what I have right now but I think there is a better way. I will provide examples of my actual maps once I reach the points required.
import cv2
import numpy as np # used for matrix calculations with opencv

class align_map():
    def __init__(self, map_name):
        # Set precision
        STEP_ANGLE = 0.01 # degrees
        MAX_ANGLE = 1
        bw_image = cv2.bitwise_not(image)  # 'image' is assumed to be loaded earlier, e.g. via cv2.imread(map_name)
        self.black_pixel_coords = cv2.findNonZero(bw_image)
        # Find bounding box
        x,y,w,h = cv2.boundingRect(self.black_pixel_coords)
        diagonal = np.ceil(np.sqrt(w*w + h*h))
        self.diagonal_center = np.ceil(diagonal/2)
        # Determine size of result matrix
        result_mat_cols = np.ceil(diagonal)
        accum_angle = 0
        result_mat_rows = MAX_ANGLE / STEP_ANGLE
        self.result_mat = np.zeros((result_mat_rows + 1, result_mat_cols), np.uint32)
        self.job_pool = []
        # Recenter all y_coords at diagonal midpoint
        center_x = x + w/2;
        center_y = y + h/2;
        y_offset = self.diagonal_center - center_y
        x_offset = self.diagonal_center - center_x
        for row in self.black_pixel_coords:
            row[0][0] = row[0][0] + x_offset
            row[0][1] = row[0][1] + y_offset
        new_list = list()
        index = 0
        while (accum_angle < MAX_ANGLE):
            rotation_affine = cv2.getRotationMatrix2D((self.diagonal_center, self.diagonal_center), accum_angle, 1)
            test = cv2.transform(self.black_pixel_coords.astype('double'), rotation_affine)
            rounded = np.round(test).astype('int')
            for row in rounded:
                col_index = int(row[0][1])
                self.result_mat[index, col_index] = self.result_mat[index, col_index] + 1
            accum_angle += STEP_ANGLE  # assumed: the post cuts off here, but the loop presumably advances the angle
            index += 1                 # and the result-matrix row on each iteration
Is there a better way to compute how to orient my map?
### Identify the best angle to orient a map

This is the image I am working with:

My goal is to rotate this (within a hundredth of a degree) so that the longest-lines/most-lines are completely horizontal.

## First Attempts

The first thing I did was look into using Houghlines for this, but it doesn't help me find the "best" lines in my image. It just produces a set of lines and angles; this is a problem because there may be a lot of lines in my map and I need to know the length and angle of each of them to evaluate the best map orientation. After a lot of tinkering I wasn't able to get the function to properly detect precise location of my lines, even after giving rho and theta a precision of 0.01 and np.pi/18000 respectively.

My second attempt was to count the pixels individually and analyze them statistically. This was very computationally intensive and led me down a rabbit hole trying to organise shared memory access between multiple processors in Python. The original code is in the first version of my post (I've removed it to keep things clean).

## Using HoughLinesP

As suggested, I have tried using HoughLinesP.

import cv2 # used for image processing functions
import numpy as np # used for matrix calculations with opencv

class align_map():
    def __init__(self, map_name):
        image = cv2.imread(map_name)  # assumed: the original snippet uses 'image' without showing how it is loaded
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        th, bw_image = cv2.threshold(image, 150, 255, cv2.THRESH_BINARY)
        cv2.imshow("bw_image", bw_image)
        edges = cv2.Canny(bw_image, 50, 150, apertureSize=7)
        threshold = 60
        min_line_length = 10
        max_line_gap = 30
        lines = cv2.HoughLinesP(edges, 1, np.pi/180, threshold=threshold, maxLineGap=max_line_gap, minLineLength=min_line_length)
        count = 50
        for x in range(0, len(lines)):
            for x1,y1,x2,y2 in lines[x]:
                cv2.line(image,(x1,y1),(x2,y2),(0,count,255-count),2)
                count = count + 20
                if (count > 255):
                    count = 40
        cv2.imshow("HoughLinesP_Result", image)
        cv2.waitKey(0)

map_name = "example.png"
align_map(map_name)

I've managed to detect the lines reasonably well. The different colours are there just to identify individual lines.

Now I need to sort these into respective bins according to their angle, but I have a slight problem because I am getting duplicates--you can see this by the red and green neighbouring lines. I believe this is because I'm using Canny() for my edge detection. It's a problem because it will skew my weightings for each line.
https://www.physicsforums.com/threads/rigorous-math-and-physics-textbooks.618095/ | # Rigorous math and physics textbooks
1. Jul 2, 2012
### crat0z
Hello, physics forums. As an introduction to the community, i'm 15 years old and live in northwestern Ontario. I've recently became very interested in physics, but i've always excelled in math. I've looked into some textbooks, particularly Apostol's I and II, along with Spivak to bridge the two books. As a prerequisite to reading those three, i've also ordered Precalculus by Barnett. For after Apostol, I have bought Borelli and Coleman's Differential Equations. On the physics side of textbooks, i've ordered University Physics by Young & Freedman, but i'm a little confused on what to read after these books.
I read in a thread about Artin's Algebra, and that it covers abstract and linear algebra. With the linear algebra in Apostol's books, I think I should be able to read Artin. I've heard good things about Rudin, but i'm unsure if I should buy Real and Complex Analysis if it is too complicated, especially if I will need knowledge of complex variables for electromagnetism.
Any help is appreciated, i'm a very motivated student for this type of stuff, and i'm willing to work through the most rigorous books in order to understand the mathematical principles behind physics.
2. Jul 2, 2012
### Snicker
You are going too fast. If you're reviewing precalc, there is no reason to even THINK about Big Rudin.
Apostol is okay, but its quality hardly justifies its price.
It's important to read solid books on algebra so that, if you were to ever crack open some math olympiad book, you wouldn't feel too behind. Algebra by Gelfand is considered an ideal starting point for the young mathematician. Euler's Elements of Algebra is also an amazing read (and not, as one may suspect, archaic).
For calculus, there are several good books. Calculus Made Easy by Thompson is absolutely wonderful (albeit hardly rigorous). Euler himself wrote three calculus textbooks: Foundations of Differential Calculus, Foundations of Integral Calculus, and Introduction to the Analysis of the Infinite. For a (fairly) rigorous treatment, I suggest Elementary Real and Complex Analysis by Shilov. Don't be fooled by its title, I believe that the book was written as an introduction to calculus.
All of Euler's books that I listed can be found for free at http://www.17centurymaths.com/. Calculus Made Easy can be found for free at http://www.gutenberg.org/ebooks/33283 . Gelfand's Algebra has a list price of $32.95, and Shilov's book has a list price of$22.95.
3. Jul 2, 2012
### Number Nine
Rudin wrote two analysis texts: Principles of Mathematical Analysis and Real and Complex Analysis. The latter is most definitely not an introductory text and you are nowhere near ready for it, and the former, in my opinion, is just not very good. Actually completing all of the exercises in Apostol and Spivak (this seems redundant; I'd recommend Spivak over Apostol) will give you some familiarity with the basics of analysis and proof-writing, so you won't need a completely introductory treatment. There are a few different analysis texts at the appropriate level; one that I'm fond of is Shilov's Elementary Real and Complex Analysis.
For algebra, you can't really do better than Artin.
4. Jul 2, 2012
### crat0z
Thank you for the reply, i'm a very fast learner, so I will definitely pick up Shilov's book sometime in the next few months. If you think I should just skip Apostol, what would you recommend for multivariable calculus?
I left out a few details, which probably explains some of the disbelief about me being able to cover the books I listed above. A year ago, I read through a lot about trigonometry, algebra and calculus (didn't necessarily complete questions), and focused on many of the concepts. I watched many videos on these fields through Khan Academy (for whatever that is worth), and I've taken a peek into Apostol I, and I think it would be perfect for me.
EDIT: Money isn't an issue, I come from a somewhat wealthy family, and the new copies of Apostol I bought from abebooks were around \$60 in total.
5. Jul 2, 2012
### Fredrik
Staff Emeritus
You need to understand complex numbers, but you won't really need complex analysis until you're at the graduate level (at least). A book on complex analysis will teach you e.g. how to integrate functions along curves in the complex plane, and how to use that knowledge to prove theorems like the fundamental theorem of algebra (every polynomial has at least one root).
Linear algebra is very useful, for special relativity and quantum mechanics in particular. Abstract algebra is less useful. I don't think a physics student will need a whole book on the subject, but it's certainly useful to understand the definitions of the most important terms, e.g. field, vector space, homomorphism, isomorphism, etc. | 2017-08-16 17:43:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4115276038646698, "perplexity": 595.7970498094368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102309.55/warc/CC-MAIN-20170816170516-20170816190516-00001.warc.gz"} |
https://gmatclub.com/forum/if-the-only-stocks-that-an-investor-owns-are-75-shares-of-130879.html?fl=similar | If the only stocks that an investor owns are 75 shares of : GMAT Data Sufficiency (DS)
If the only stocks that an investor owns are 75 shares of
If the only stocks that an investor owns are 75 shares of stock A and 100 shares of stock B, what is the total dollar value of his stocks?
(1) The total value of 3 shares of stock A and 4 shares of stock B is $160.
(2) The value of each share of stock A is twice the value of each share of stock B.

OA to follow after some discussion. I know this question is easy, however I am not convinced with the OA and need to discuss it with the group. Thanks.

Math Expert's reply (18 Apr 2012, 10:01):

If the only stocks that an investor owns are 75 shares of stock A and 100 shares of stock B, what is the total dollar value of his stocks?

Question: 75A + 100B = 25(3A + 4B) = ?, where A and B are the prices of stock A and stock B respectively. As you can see, basically we should find the value of 3A + 4B.

(1) The total value of 3 shares of stock A and 4 shares of stock B is $160 --> 3A + 4B = $160. Sufficient.
(2) The value of each share of stock A is twice the value of each share of stock B --> A = 2B --> 3A + 4B = 10B; we need the value of either A or B. Not sufficient.
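Just to make the arithmetic concrete (an added check, not part of the original post):
# 75A + 100B = 25*(3A + 4B), and statement (1) gives 3A + 4B = 160
print(25 * 160)  # 4000 -> the portfolio is worth $4,000, so statement (1) alone is sufficient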
P.S. You should always indicate OA if you have it.
A later reply (23 Apr 2012): Ooh, I rushed it; I thought it was C. If the ratio given in statement (1) didn't match the one in the question stem, then it would have been C.
https://blog.jpolak.org/?p=1987 | # Ideal in a union of ideals
Suppose $I$ is an ideal in a ring $R$ and $J,K$ are ideals such that $I\subseteq J\cup K$. Then either $I\subseteq J$ or $I\subseteq K$. Indeed, suppose that there is some $x\in I$ such that $x\not\in J$. If $y\in I$ is arbitrary and $y\not\in K$ then $x + y$ is in neither $J$ nor $K$. Thus, $y\in K$ and so $I\subseteq K$.
In other words, if there is some element of $I$ that is not in $J$, then $I$ is contained entirely in $K$.
A generalisation for commutative rings is as follows: if $J_1,J_2,\dots,J_n\subseteq R$ are ideals such that at most two of them are not prime ideals, and $I$ is an ideal such that $I\subseteq \cup_i J_i$ then $I\subseteq J_k$ for some $k$. Of course, one does not need the hypothesis that at most two of the $J_1,\dots,J_n$ are not prime if $I$ is principal.
If one drops the hypothesis that at most two of the ideals $J_1,\dots,J_n$ are not prime, then the conclusion no longer holds in general, though.
For example, consider the ring $R = \Z/2[x,y]/(x^2,y^2)$. It is a ring with sixteen elements. In $R$, the ideal $(x,y)$ has eight elements. Furthermore,
$$(x,y)\subseteq (x)\cup (y)\cup (x+y).$$
However, each of the ideals $(x), (y),$ and $(x+y)$—none of which are prime—only has four elements, and so $(x,y)$ is not contained in any of them.
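Since the ring is tiny, the counterexample can be checked by brute force. Here is a short Python sketch (added here, not part of the original post) that represents each element of $R$ by its coefficients on the basis $\{1, x, y, xy\}$ over $\Z/2$:

from itertools import product

R = list(product((0, 1), repeat=4))  # coefficients (c_1, c_x, c_y, c_xy) over Z/2

def mul(a, b):
    # multiplication in Z/2[x,y]/(x^2, y^2): x^2 = y^2 = 0, coefficients mod 2
    return ((a[0]*b[0]) % 2,
            (a[0]*b[1] + a[1]*b[0]) % 2,
            (a[0]*b[2] + a[2]*b[0]) % 2,
            (a[0]*b[3] + a[3]*b[0] + a[1]*b[2] + a[2]*b[1]) % 2)

def add(a, b):
    return tuple((u + v) % 2 for u, v in zip(a, b))

def principal(g):
    # the principal ideal (g) = { r*g : r in R }
    return {mul(r, g) for r in R}

x, y = (0, 1, 0, 0), (0, 0, 1, 0)
ideal_xy = {add(mul(r, x), mul(s, y)) for r in R for s in R}  # the ideal (x, y)
union = principal(x) | principal(y) | principal(add(x, y))

print(len(ideal_xy), len(principal(x)))                         # 8 and 4, as claimed
print(ideal_xy <= union)                                        # True: (x,y) is inside the union
print(any(ideal_xy <= principal(g) for g in (x, y, add(x, y)))) # False: but not inside any single one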
https://www.ssccglapex.com/hi/the-average-score-of-a-cricketer-for-ten-matches-is-38-9-runs-if-the-average-for-the-first-six-matches-is-42-then-find-the-average-for-the-last-four-matches/ | ### The average score of a cricketer for ten matches is 38.9 runs. If the average for the first six matches is 42, then find the average for the last four matches.
A. 33.25
B. 33.5
C. 34.25
D. 35
Total runs in the last 4 matches = (10 × 38.9) – (6 × 42) = 389 – 252 = 137. Average = $\Large\frac{137}{4}$ = 34.25, so the answer is C.
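A one-line check of that arithmetic (added for illustration):
total_runs = 10 * 38.9   # 389 runs over all ten matches
first_six = 6 * 42       # 252 runs over the first six matches
print((total_runs - first_six) / 4)  # 34.25 -> option C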
http://openstudy.com/updates/513a5f4ee4b029b0182aac08 | ## that1chick 2 years ago how do you write y = 2x2 + 6x + 4 in general form?
1. iheartfood
do u know the general form?
2. that1chick
yes, y=a(x-h)^2+k
3. iheartfood
did u need it in linear form?
4. iheartfood
like the general form of a linear equation?
5. that1chick
just the general form of a quadratic equation
6. iheartfood
okay, so u need to know the quadratic formula i believe..do u know it?
7. that1chick
general form> y=a(x-h)^2+k
8. that1chick
oh, yeah!
9. that1chick
10. that1chick
but how does that help convert it to general form?
11. that1chick
I thought it just helped find the x-intercepts..
12. iheartfood
if i can remember this correctly, i believe u need to first solve to get the values of a,b,c and then you just plug it into the general form... I'm not 100% sure tho... @abb0t, am i remembering this correctly? ;/
13. that1chick
I know how to convert it... the problem is the 2 in 2x^2
14. abb0t
General form of what?
15. that1chick
16. that1chick
2 in 2x^2 is the problem.... you isolate the x variables getting: y-4=2x^2+6x then complete the square and balance the equation: 6/2=3 3^2=9, y-4+9=2x^2+6x+9, y+5=2x^2+6x+9 then convert the trinomial into a binomial... thus lies my dilemma
17. iheartfood
sorry, i'm not very sure, cuz i don't think i remember this correctly ahha and i don't wanna help u wrongly haha @Mertsj probs knows :) good luck!!! :D
18. that1chick
thanks (:
19. Mertsj
$y=2x^2+6x+4$ $y=2(x^2+3x+______ )+4$
20. Mertsj
Now complete the square by adding (3/2)^2 inside the parentheses: $y=2(x^2+3x+(\frac{3}{2})^2)+4-\frac{9}{2}$
21. Mertsj
Now factor: $y=2(x+\frac{3}{2})^2-\frac{1}{2}$
22. that1chick
why 3/2?
23. Mertsj
I want to show you something: $x^2+6x+9=(x+3)^2$
24. Mertsj
$x^2+8x+16=(x+4)^2$
25. that1chick
I don't get what you did with: (3/2)^2 + 4 − 9/2
26. Mertsj
Notice the relationship between the coefficient of x and the constant term in a trinomial square.
27. Mertsj
If you take 1/2 the coefficient and square it, you get the constant term.
28. Mertsj
So I took 1/2 of 3 and got 3/2. Then I squared it and added it to the x^2 + 3x to get a trinomial square.
29. Mertsj
Now the (3/2)^2 was inside a parenthesis which has a 2 in front of it so I was really adding 2 times (3/2)^2 which is 2 times 9/4 which is 9/2. So since I could not change the equation, I then had to subtract 9/2.
30. Mertsj
31. that1chick
I think so
32. that1chick
Okay, yes! I get what you did (: thank you! what were you trying to show me with those other two equations?
33. Mertsj
The relationship between the coefficient of x and the constant term.
34. that1chick
oh, that half of the coefficient of x to the second power = the constant. okay, cool
35. that1chick
Thank you for helping me (:
36. Mertsj
yw
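A quick symbolic check of Mertsj's result (added for illustration, using SymPy):
import sympy as sp

x = sp.symbols('x')
vertex_form = 2*(x + sp.Rational(3, 2))**2 - sp.Rational(1, 2)
print(sp.expand(vertex_form))  # 2*x**2 + 6*x + 4, i.e. the original quadratic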
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=CCSHBU_2016_v29n1_137 | COMPLETIONS OF HANKEL PARTIAL CONTRACTIONS OF SIZE 5×5 NON-EXTREMAL CASE
Title & Authors
COMPLETIONS OF HANKEL PARTIAL CONTRACTIONS OF SIZE 5×5 NON-EXTREMAL CASE
Lee, Sang Hoon;
Abstract
We introduce a new approach that allows us to solve, algorithmically, the contractive completion problem. In this article, we provide concrete necessary and sufficient conditions for the existence of contractive completions of Hankel partial contractions of size $4\times 4$ using a Moore-Penrose inverse of a matrix.
Keywords
Hankel partial contraction;contractive completion;Moore-Penrose inverse;
Language
English
http://weirlands.com/index.php?title=Category:Race | # Category:Race
Main article: Races
## Pages in category ‘Race’
The following 10 pages are in this category, out of 10 total.
https://discourse.pymc.io/t/sample-posterior-predicitve-not-catching-shape-of-new-data/10179 | # Sample_posterior_predicitve not catching shape of new data
Hi all. I’m trying to predict on new data, but pm.sample_posterior_predicitve does not catch the new shape of data. I have tried an approach combining this and this posts, but to no avail. Here’s my latest attempt:
import numpy as np
import pymc as pm
rng = np.random.default_rng(18)
score = np.random.normal(0,1, (2,50)).flatten()
fi = np.array([np.arange(50),np.arange(50)]).flatten() #dates index
oi = np.array([np.zeros(50),np.ones(50)]).flatten() #number of options
#### Original model ####
def gen_mod(sco):
with pm.Model() as mod:
zs = pm.MutableData("zs", sco, dims="obs_id")
F = int(zs.get_value().shape[0]/2)
f = np.array([np.arange(F),np.arange(F)]).flatten() #dates index
o = np.array([np.zeros(F).astype("int32"),np.ones(F).astype("int32")]).flatten() #number of options
sd = pm.HalfNormal.dist(1.0)
L, corr, std = pm.LKJCholeskyCov("L", n=2, eta=2.0, sd_dist=sd, compute_corr=True)
Σ = pm.Deterministic("Σ", L.dot(L.T))
w = pm.GaussianRandomWalk("w", shape=(F,2), init_dist=pm.Normal.dist(0,1))
B = pm.Deterministic("B", pm.math.matrix_dot(Σ,w.T))
α = pm.Normal("α", 0, 1.0, shape=F)
μ = pm.Deterministic("μ", α[f] + B[o,f])
ϵ = pm.HalfNormal('ϵ', 1.0)+1
y = pm.Normal("y", mu=μ, sigma=ϵ, observed=zs)
return mod
model = gen_mod(score)
with model:
trace = pm.sample(100, chains=2, cores=12)
#### prediction
s1 = np.concatenate([score[:50], np.ones(20)])
s2 = np.concatenate([score[50:], np.ones(20)])
score_pred = np.array([s1,s2]).flatten()
model_pred = gen_mod(score_pred)
with model_pred:
preds = pm.sample_posterior_predictive(trace)
preds = preds.posterior_predictive
preds = preds['y']
Only 100 samples in chain.
Auto-assigning NUTS sampler...
Multiprocess sampling (2 chains in 12 jobs)
NUTS: [L, w, α, ϵ]
|████████████| 100.00% [2200/2200 00:53<00:00 Sampling 2 chains, 4 divergences]Sampling 2 chains for 1_000 tune and 100 draw iterations (2_000 + 200 draws total) took 128 seconds.
There were 4 divergences after tuning. Increase target_accept or reparameterize.
The acceptance probability does not match the target. It is 0.5258, but should be close to 0.8. Try to increase the number of tuning steps.
|████████████████████| 100.00% [200/200 00:00<00:00]
preds.shape
Out[2]: (2, 100, 100)
score_pred.shape
Out[3]: (140,)
So the shape of preds should be (2,100,140). I have tried using multidimensional arrays (i.e. 2 by 50) rather than flattened arrays, but nothing seems to work. Any help would be really appreciated.
Your model is always built on the data in score. The sco argument taken by your gen_mod() function is never used.
Thanks for spotting the typo/bug (sorry about that, my brain may be melting ). I’ve updated my question with the bug fixed. Sadly, the outcome remains the same (as in previous attempts without that typo).
Are you attempting to define 2 separate models? Or are you just looking to alter zs in order to apply the posterior to new data? It looks like you are doing the former, but the latter just requires the use of model.set_data(). That being said, I forget off the top of my head how to alter the dimensions of a MutableData object.
Try this?
[Edit:] Which clearly won’t work. I would keep an eye on that issue.
Many thanks for the help. I’ll keep trying to find a solution or to find what may be causing the issue.
I think I found the solution:
# -*- coding: utf-8 -*-
import numpy as np
import pymc as pm
import matplotlib.pyplot as plt
rng = np.random.default_rng(18)
score = np.random.normal(0,1, (2,50))
fi = np.arange(50)
oi = [0,1]
#### Original model ####
def gen_mod(score, fi, oi):
with pm.Model() as mod:
sd = pm.HalfNormal.dist(1.0)
L, corr, std = pm.LKJCholeskyCov("L", n=2, eta=2.0, sd_dist=sd, compute_corr=True)
Σ = pm.Deterministic("Σ", L.dot(L.T))
w = pm.GaussianRandomWalk("w", init_dist=pm.Normal.dist(0,1), dims=("f","o"))
B = pm.Deterministic("B", pm.math.matrix_dot(Σ,w.T))
α = pm.Normal("α", 0, 0.01, dims="f")
μ = pm.Deterministic("μ", α + B)
ϵ = pm.HalfNormal('ϵ', 0.01)+1
y = pm.Normal("y", mu=μ, sigma=ϵ, observed=score)
return mod
model = gen_mod(score, fi, oi)
with model:
trace = pm.sample(1000, tune=1000, chains=4, cores=12)
#### prediction
s1 = np.concatenate([score.flatten()[:50], np.ones(20)])
s2 = np.concatenate([score.flatten()[50:], np.ones(20)])
score = np.array([s1,s2])
fi = np.arange(score.shape[1])
oi = [0,1]
model = gen_mod(score, fi, oi)
with model:
preds = pm.sample_posterior_predictive(trace)
preds = preds.posterior_predictive
preds = preds['y']
score.shape
preds.shape
#### poltting
pre = preds.mean(axis=0).mean(axis=0)
presd = preds.mean(axis=0).std(axis=0)
plt.plot(fi, score[0], label='observed mean')
plt.plot(fi, pre[0], color='orange', label='predicted mean')
plt.fill_between(fi, pre[0]-presd[0], pre[0]+presd[0], alpha=0.2, color='orange', label='SD')
plt.grid(alpha=0.2)
plt.legend()
plt.show()
Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 12 jobs)
NUTS: [L, w, α, ϵ]
|████████████| 100.00% [8000/8000 00:41<00:00 Sampling 4 chains, 1 divergences]Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 66 seconds.
There was 1 divergence after tuning. Increase target_accept or reparameterize.
|████████████████████| 100.00% [4000/4000 00:00<00:00]
preds.shape
Out[2]: (4, 1000, 2, 70)
score.shape
Out[3]: (2, 70)
This image (pardon the horrible predictions) illustrates that in principle the predicted distribution is behaving correctly:
I think the trick was re-initialising the model, rather than initialising a new model (i.e. one with a different name), as you pointed out @cluhmann. But somehow it wasn't working well with the set_data approach, whereas it is now with the add_coord approach. I guess everything is working as it should, but I'll look at it in more detail and, if there's something off, I'll post it here. Thanks again.
My apologies. I didn’t read this one too carefully. Indeed, the add_coord argument seems to break the model when used with mutable=True. Here are some results for comparison.
import numpy as np
import pymc as pm
import matplotlib.pyplot as plt
rng = np.random.default_rng(18)
score = np.random.normal(0,1, (2,50))
fi = np.arange(50)
oi = [0,1]
#### Original model ####
def gen_mod(score, fi, oi):
with pm.Model() as mod:
w = pm.GaussianRandomWalk("w", init_dist=pm.Normal.dist(0,1), shape=(50,2))
ϵ = pm.HalfNormal('ϵ', 1.0)
y = pm.Normal("y", mu=w.T, sigma=ϵ, observed=score)
return mod
model = gen_mod(score, fi, oi)
with model:
trace = pm.sample(1000, tune=1000, chains=4, cores=12)
with model:
preds = pm.sample_posterior_predictive(trace)
preds = preds.posterior_predictive
preds = preds['y']
#### poltting
pre = preds.mean(axis=0).mean(axis=0)
presd = preds.mean(axis=0).std(axis=0)
plt.plot(fi, score[0], label='observed mean')
plt.plot(fi, pre[0], color='orange', label='predicted mean')
plt.fill_between(fi, pre[0]-presd[0], pre[0]+presd[0], alpha=0.2, color='orange', label='SD')
plt.grid(alpha=0.2)
plt.legend()
plt.show()
import numpy as np
import pymc as pm
import matplotlib.pyplot as plt
rng = np.random.default_rng(18)
score = np.random.normal(0,1, (2,50))
fi = np.arange(50)
oi = [0,1]
#### Original model ####
def gen_mod(score, fi, oi):
with pm.Model() as mod:
w = pm.GaussianRandomWalk("w", init_dist=pm.Normal.dist(0,1), dims=("f","o"))
ϵ = pm.HalfNormal('ϵ', 1.0)
y = pm.Normal("y", mu=w.T, sigma=ϵ, observed=score)
return mod
model = gen_mod(score, fi, oi)
with model:
trace = pm.sample(1000, tune=1000, chains=4, cores=12)
with model:
preds = pm.sample_posterior_predictive(trace)
preds = preds.posterior_predictive
preds = preds['y']
#### poltting
pre = preds.mean(axis=0).mean(axis=0)
presd = preds.mean(axis=0).std(axis=0)
plt.plot(fi, score[0], label='observed mean')
plt.plot(fi, pre[0], color='orange', label='predicted mean')
plt.fill_between(fi, pre[0]-presd[0], pre[0]+presd[0], alpha=0.2, color='orange', label='SD')
plt.grid(alpha=0.2)
plt.legend()
plt.show()
This may be a bit off-topic respect to my question, but just in case someone finds it a useful solution I’ll leave it here. I have written a very simple function to find the posterior predictive via sampling and it gives reasonable results for the example above. This implements a Gaussian random walk over a 2 dimensional variable (score: 50 datapoints randomly drawn from a standard Gaussian).
# -*- coding: utf-8 -*-
import numpy as np
import pymc as pm
import matplotlib.pyplot as plt
rng = np.random.default_rng(18)
score = np.random.normal(0,1, (2,50))
O,F = score.shape
times = np.arange(F)
#### Original model #### ##does not work with coords!!!
with pm.Model() as mod:
w = pm.GaussianRandomWalk("w", init_dist=pm.Normal.dist(0,1), shape=(F,O))
ϵ = pm.HalfNormal('ϵ', 1.0)
y = pm.Normal("y", mu=w.T, sigma=ϵ, observed=score)
with mod:
trace = pm.sample(1000, tune=1000, chains=4, cores=12)
with mod:
preds = pm.sample_posterior_predictive(trace, var_names=['y'])
preds = preds.posterior_predictive
preds = preds['y']
#################
#### prediction
def predict(mu, f):
    # mu: posterior draws of w with shape (chain, draw, time, 2); f: number of future steps
    y0 = []
    for chain in range(mu.shape[0]):
        s_id = np.random.choice(sigma[chain].shape[0])
        s = sigma[chain][s_id]
        mm = []
        for time in range(mu.shape[2]+f):
            if time < mu.shape[2]:
                t = time
            if time > mu.shape[2]:
                t = np.random.choice(np.arange(mu.shape[2]))
            # note: when time == mu.shape[2], neither branch fires and t keeps its previous value
            m = mu[chain,:,t,0]
            p = np.random.normal(m,s,m.shape[0])
            mm.append(np.array(p))
        y0.append(np.array(mm))
    y0 = np.array(y0)
    y1 = []
    for chain in range(mu.shape[0]):
        s_id = np.random.choice(sigma[chain].shape[0])
        s = sigma[chain][s_id]
        mm = []
        for time in range(mu.shape[2]+f):
            if time < mu.shape[2]:
                t = time
            if time > mu.shape[2]:
                t = np.random.choice(np.arange(mu.shape[2]))
            m = mu[chain,:,t,1]
            p = np.random.normal(m,s,m.shape[0])
            mm.append(np.array(p))
        y1.append(np.array(mm))
    y1 = np.array(y1)
    return np.array([y0,y1])
mu = trace.posterior["w"]
sigma = trace.posterior["ϵ"]
preds = predict(mu,20)
preds = preds.mean(axis=1)
times2 = np.arange(70)
s1 = np.concatenate([score.flatten()[:50], np.random.normal(0,1,20)])
s2 = np.concatenate([score.flatten()[50:], np.random.normal(0,1,20)])
score2 = np.array([s1,s2])
#### poltting
pre = preds.mean(axis=2)
presd = preds.std(axis=2)
plt.plot(times, score[0], alpha=0.5, color='purple', label='observed')
plt.plot(times2[49:], score2[0,49:], alpha=0.5, linestyle=':', color='purple', label='new (not yet observed)')
plt.plot(times2, pre[0], color='green', label='predicted mean')
plt.fill_between(times2, pre[0]-presd[0], pre[0]+presd[0], alpha=0.2, color='green', label='SD')
plt.grid(alpha=0.2)
plt.legend()
plt.show()
pre = preds.mean(axis=2)
presd = preds.std(axis=2)
plt.plot(times, score[1], alpha=0.5, color='blue', label='observed')
plt.plot(times2[49:], score2[1,49:], alpha=0.5, linestyle=':', color='blue', label='new (not yet observed)')
plt.plot(times2, pre[1], color='crimson', label='predicted mean')
plt.fill_between(times2, pre[1]-presd[1], pre[1]+presd[1], alpha=0.2, color='crimson', label='SD')
plt.grid(alpha=0.2)
plt.legend()
plt.show()
First dimension of Gaussian random walk:
Second dimension of Gaussian random walk:
Although this works well, I’m still a bit unsure about my approach. I used random samples from the first 50 times (50 observed data points) to predict 20 new unobserved data points. I simulated random new data points and added them after predictions (dotted lines on images above) and they seem to be reasonable (though not super accurate), but maybe the way I performed these predictions is not entirely appropriate (?). Any comments would be greatly appreciated.
I’ll mark this as the solution, though it doesn’t really solve the underlying issue.
Your predictions don’t look right. In a time series model you should never have wiggles and dynamics in the average prediction unless you built them in, i.e. with seasonal components or AR terms. For a GRW, your best guess of tomorrow’s value is today’s value, so your predictions should be a straight line out from the last observed value, with increasing “cup shaped” variance as you move out from the observations in time (variance should be proportional to \sqrt{T + h})
To see this, here’s some math. The GRW model can be written:
y_{t+1} = y_t + \epsilon_{t+1}, \quad \epsilon_t \sim N(\mu, \sigma)
Important points:
1. This is a so-called “Markov Process”, meaning that including information about past values does not improve your ability to forecast future values. Formally, \mathbb E [y_{t+1} | \{y_\tau\}_{\tau=0}^t] = \mathbb E [y_{t+1} | y_t], i.e. all that extra info is useless.
2. The mean future prediction, then, is simply \mathbb E_t [y_{t+1}] = \mathbb E_t [y_t + \epsilon_{t+1}] = y_t + \mathbb E_t[\epsilon_{t+1}] = y_t + \mu
3. Similarly, the forecast variance is just Var_t (y_{t+1}) = Var_t (y_t + \epsilon_{t+1}) = Var(\epsilon_{t+1}) = \sigma^2
4. By the “Law of Iterated Expectations”, all your best guesses about future values will boil down to only your knowledge today. It means that if we predict for time t+2, we’ll end up with \mathbb E_t [y_{t+2}] = \mathbb E_t [\mathbb E_t [y_{t+1}] + \epsilon_{t+2}] = \mathbb E_t[\mathbb E_t [y_t + \epsilon_{t+1}] + \epsilon_{t+2}] = y_t + \mathbb E_t [\mathbb E_t [\epsilon_{t+1}] + \epsilon_{t+2}]. The important point is that the double expectation washes out, so you get \mathbb E_t [y_{t+2}] = y_t + 2\mu.
This generalizes to E_t [y_{t+h}] = y_t + h\mu, so your predicted mean should be the last observed value, plus a deterministic trend with slope \mu (0 in your case).
You can do the same thing for variance: Var_t (y_{t+2}) = Var_t(y_{t+1} + \epsilon_{t+2}) = Var_t(y_t + \epsilon_{t+1} + \epsilon_{t+2}). The only important thing to know here is that variance is quadratic, so you can distribute it over addition, but little covariance babies pop out, as in Var(a + b) = Var(a) + Var(b) + 2 Cov(a, b). I appeal to Cov(\epsilon_t, \epsilon_s) = 0, \quad \forall t \neq s to ignore these, and you end up with: Var_t (y_{t+2}) = 2\sigma^2. So in general, the standard deviation of your prediction for y_{T+h} will be \sqrt{h} \cdot \sigma.
So far I’ve said nothing helpful about making predictions with your PyMC model… sorry about that. Basically, you want to sample from the posterior distribution over w[-1] and ϵ, draw h innovations from a normal centered on 0 with standard deviation ϵ (where h is your forecast horizon), take their cumsum, and add the result to w[-1] to get your predictions. Do this a lot of times and you’ll get a nice cup shaped forecast distribution.
You can also skip the hassle and just use the formulas derived above.
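For example, a minimal sketch of that closed-form route (the dimension ordering of trace.posterior["w"] and the use of ϵ as the innovation scale are assumptions carried over from the model and forecast code in this thread; posterior means are plugged in for simplicity):

import numpy as np

h = 20
w_last = trace.posterior["w"].values[:, :, -1, 0].mean()   # posterior mean of the last state of the first series
eps_hat = trace.posterior["ϵ"].values.mean()               # posterior mean of the noise scale
steps = np.arange(1, h + 1)
forecast_mean = np.full(h, w_last)        # E_T[y_{T+h}] = y_T, since mu = 0 here
forecast_sd = eps_hat * np.sqrt(steps)    # sd(y_{T+h}) = sqrt(h) * sigma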
Here’s the first approach applied to the first time series in score, starting after sampling the model you posted:
import arviz as az

T = 50
h = 20
rng = np.random.default_rng()
post = az.extract_dataset(trace)
# Note that the last prediction is *not* included in the cumsum, it is "x0", not the drift!
forecasts = preds.stack(sample = ['chain', 'draw']).values[0, -1, :] + (post['ϵ'].values[None, :] * rng.normal(size=(h, 2000))).cumsum(axis=0)
fig, ax = plt.subplots(figsize=(14,4), dpi=77)
ax.plot(preds.stack(sample = ['chain', 'draw'])[0, :, :], color='0.5', alpha=0.05)
ax.plot(preds.stack(sample = ['chain', 'draw']).mean(dim='sample').values[0, :], color='tab:blue')
ax.plot(score[0, :], color='k')
ax.plot(np.arange(T, T + h), forecasts, color='tab:green', alpha=0.1)
ax.plot(np.arange(T, T + h), forecasts.mean(axis=-1), color='k', ls='--')
plt.show()
As a general comment, you can always make out-of-sample predictions by just re-implementing your model in Numpy, using the posterior samples together with whatever data you wish. In this case it’s a bit opaque how the pieces fit together, but that’s all we’re doing.
Thank you very much for the detailed explanation. Makes a lot of sense. I have corrected my approach following your advice.
http://www.math.wpi.edu/Course_Materials/MA1022A01/areaapprox/node1.html | Subsections
# Area Approximations
## Introduction
The purpose of this lab is to acquaint you with some rectangular approximations to areas under curves.
## Rectangular Approximations
Integration, the second major theme of calculus, deals with areas, volumes, masses, and averages such as centers of mass and gyration. In lecture you have learned that the area under a curve between two points a and b can be found as a limit of a sum of areas of rectangles which approximate the area under the curve of interest. As these sums, and their limits, are often tedious to calculate, there is clear motivation for the analytical techniques which will be introduced shortly in class. However, not all "area finding" problems can be solved using analytical techniques, and the Riemann sum definition of area under a curve gives rise to several numerical methods which can approximate the area of interest with great accuracy.
Suppose f is a non-negative, continuous function defined on some interval [a,b]. Then by the area under the curve between a and b we mean the area of the region bounded above by the graph of f, below by the x-axis, on the left by the vertical line x = a, and on the right by the vertical line x = b. All of the numerical methods in this lab depend on subdividing the interval into subintervals of uniform length. For example, dividing the interval [0,4] into four uniform pieces produces the subintervals [0,1], [1,2], [2,3], and [3,4].
In these simple approximation schemes, the area above each subinterval is approximated by the area of a rectangle, with the height of the rectangle being chosen according to some rule. In particular, we will consider the left, right and midpoint rules. When using the left endpoint rule, the height of the rectangle is the value of the function at the left-hand endpoint of the subinterval. When using the right endpoint rule, the height of the rectangle is the value of the function at the right-hand endpoint of the subinterval. The midpoint rule uses the value of the function at the midpoint of the subinterval for the height of the rectangle.
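If you want to check these rectangle sums outside Maple, the three rules can be mimicked with a short Python sketch (the function, interval and number of rectangles below are the lab's running example of x^2 on [0,4] with 4 subintervals):

import numpy as np

def riemann(f, a, b, n, rule="left"):
    # approximate the area under f on [a, b] with n rectangles
    x = np.linspace(a, b, n + 1)            # subinterval endpoints
    width = (b - a) / n
    if rule == "left":
        heights = f(x[:-1])                 # left endpoint rule
    elif rule == "right":
        heights = f(x[1:])                  # right endpoint rule
    else:
        heights = f((x[:-1] + x[1:]) / 2)   # midpoint rule
    return width * heights.sum()

f = lambda x: x**2
for rule in ("left", "right", "mid"):
    print(rule, riemann(f, 0, 4, 4, rule))  # 14.0 (under), 30.0 (over), 21.0; the exact area is 64/3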
The Maple student package has commands for visualizing these three rectangular area approximations. To use them, you first must load the package via the with command. Then try the three commands given below. Make sure you understand the differences between the three different rectangular approximations. Take a moment to see that the different rules choose rectangles which in each case will either underestimate or overestimate the area.
> with(student):
> rightbox(x^2,x=0..4);
> leftbox(x^2,x=0..4);
> middlebox(x^2,x=0..4);
There are also Maple commands leftsum, rightsum, and middlesum to sum the areas of the rectangles, see the examples below. Note the use of evalf to obtain numerical answers.
> rightsum(x^2,x=0..4);
> evalf(rightsum(x^2,x=0..4));
> evalf(leftsum(x^2,x=0..4));
> evalf(middlesum(x^2,x=0..4));
## Accuracy
It should be clear from the graphs that adding up the areas of the rectangles only approximates the area under the curve. However, by increasing the number of subintervals the accuracy of the approximation can be increased. All of the Maple commands described so far in this lab permit a third argument to specify the number of subintervals. The default is 4 subintervals. The example below approximates the area under f(x) = x^2 from x = 0 to x = 4 using the rightsum command with an increasing number of subintervals. As the number of subintervals increases, the approximation gets closer and closer to the exact answer. You can see this by assigning a label to the approximation, assigning a label to the exact answer and taking their difference. The closer you are to the actual answer, the smaller the difference. The example below shows how we can use Maple to approximate this area with an error no greater than 0.1.
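The same search for the smallest acceptable number of subintervals can be scripted as well; here is a self-contained Python version of the experiment carried out in the Maple session below (right endpoint rule, error tolerance 0.1):

import numpy as np

f, exact = (lambda x: x**2), 4**3 / 3
n = 1
while True:
    x = np.linspace(0, 4, n + 1)
    right_sum = (4 / n) * f(x[1:]).sum()    # right endpoint rectangle sum with n subintervals
    if abs(exact - right_sum) <= 0.1:
        break
    n += 1
print(n)   # 321, matching the Maple session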
> exact := 4^3/3;
> estimate := evalf(rightsum(x^2,x=0..4));
> evalf(exact-estimate);
> estimate := evalf(rightsum(x^2,x=0..4,50));
> evalf(exact-estimate);
> estimate := evalf(rightsum(x^2,x=0..4,100));
> evalf(exact-estimate);
> estimate := evalf(rightsum(x^2,x=0..4,320));
> evalf(exact-estimate);
> estimate := evalf(rightsum(x^2,x=0..4,321));
> evalf(exact-estimate);
## Exercises
1. For the function over the interval , use the rightbox, leftbox, and middlebox commands to plot the rectangular approximation of the area above the -axis and under with 20 rectangles. State in your opinion which graph gives the best approximation to the area and give a reason why.
2. The area under above the axis over the interval accurate to 10 decimal places is 6.0632791021. Plot over the given interval. Use each of the approximations rightsum and middlesum to determine the minimum number of subintervals required so that the approximation of this area has an error no greater than 0.001. Which method requires the least number of subintervals?
https://stats.stackexchange.com/questions/80738/what-is-the-probability-that-a-person-will-die-on-their-birthday | # What is the probability that a person will die on their birthday?
I am curious about what the probability is that a person will die on their birthday?
I am sure there are a number of ways to approach this, plus I have heard that actual numbers point to a higher rate on birthdays, hence why I am asking it here.
• Probability that when they die, it will be their birthday? Or probability that on their (n-th) birthday, they will die? In other words, determine the probability field, the outcome, and the condition. – ttnphns Dec 28 '13 at 19:30
• @ttnphns the former, but I like the distinction. – jbranchaud Dec 28 '13 at 21:36
• Depends if they like their presents – wolfies Jan 2 at 16:21
Sorry, a bit new here so please excuse me if this doesn't help too much.
The US Social Security Administration keeps records of births and deaths and has their information available for purchase (apparently for a hefty price): Here
However I found a source that claims to have bought it and is offering it for free (as well as offering the data sorted by date on the site): Here
I'm assuming you can just use that as your sample and go through all the data with a script and find how many people actually die on their birthday. I would do that myself but I have 20 min left to download (they're about 1.5GB) so I'll try to get back to you on the statistics myself if I find the time to write up a script.
Of course the United States can't represent the entire world's population but it is a good start. I'm assuming you will see a higher rate in deaths on birthdays because of "first world problems" because we're using the United States and I think the effect would be less visible across the world...
## Update - Numbers :D
I've run through the Social Security Death Master File from the free source, so there's no way of knowing if the information is valid. However, given that the files are ~3 gigabytes each and that there's no reason for anyone to spoof these kinds of files... I'll assume they are valid.
You can see the code that I used to run through it here: http://pastebin.com/9wUFuvpN
It's written in C#, it reads through the lines of the death index one by one and then parses the date using regex. I assumed that the file was basically this format:
(Social Security Number)(First Name) (LastName) (Middle Name) (Some Letter)(MM-DD-YYYY of Death)(MM-DD-YYYY Of Birth)
I had regex just pick out the last part for the dates of birth/death, check if any of the fields are just 0 (which I'm assuming means that Social Security couldn't get a valid month/date for the record), and discard the 0's. Then it'll check if the day of birth and month of birth match the day of death/month of death and add that to the died-on-birthday count. It'll add all records that aren't 0's to the death count.
It outputs the results in this format:
Deaths On Birthday/Total Deaths Lines Looked Through - People With a 0 in any of their record
It'd be great if someone could double-check that code, as I've found quite a few errors I'd made before and could only tell because my results made no statistical sense.
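For anyone who wants to sanity-check the logic without running C#, here is a rough Python sketch of the same comparison (the record layout is assumed from my description above rather than from the official file spec, and the file name is just a placeholder):

```python
import re

date_re = re.compile(r"(\d{2})-(\d{2})-(\d{4})")
death_count = 0
birthday_deaths = 0

with open("ssdm_file.txt") as fh:            # placeholder file name
    for line in fh:
        dates = date_re.findall(line)
        if len(dates) < 2:
            continue
        (d_mon, d_day, _), (b_mon, b_day, _) = dates[-2], dates[-1]  # death date, then birth date
        if "00" in (d_mon, d_day, b_mon, b_day):
            continue                         # discard records with a zeroed-out month or day
        death_count += 1
        if (d_mon, d_day) == (b_mon, b_day):
            birthday_deaths += 1

print(birthday_deaths, death_count)
```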
Here is the console output:
Doing some math...
• File 1 had 44665 Deaths on a Birthday out of 14879058 Deaths in Total
• File 2 had 47060 Deaths on a Birthday out of 15278724 Deaths in Total
• File 3 had 49289 Deaths on a Birthday out of 15374049 Deaths in Total
• Total we have 141014 Deaths on a Birthday out of 45531831.
So we have ~0.3097% chance of dying on a birthday while statistically (1/365) would lead us to believe there is only ~0.27397% chance of dying on a birthday. That is indeed a 13% increase in chance of death on a birthday from 1/365. Of course this sample is only for Americans and only has 45 million records, I'm sure organizations who originally published their paper had access to much more reliable and larger death indexes. However, I think that it is indeed valid that deaths on a birthday is more likely than death on any other day.
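The arithmetic in that paragraph is easy to double-check in a couple of lines:

```python
deaths_on_birthday = 44665 + 47060 + 49289        # per-file counts listed above
total_deaths = 14879058 + 15278724 + 15374049
rate = deaths_on_birthday / total_deaths
print(rate)                   # ~0.003097, i.e. ~0.3097%
print(rate / (1 / 365) - 1)   # ~0.13, i.e. ~13% above the uniform 1/365 baseline
```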
Here's a Time article citing jumps in reasons for death on birthdays: Article
Edit 2: @cbeleites pointed out that I forgot to account for same day deaths, which would be a huge factor in increasing deaths on birthdays. Strictly speaking my data is still valid but I did not throw out if a person died on the same day they were born. It's interesting that my results were not affected too heavily by this error so it seems that these records don't include death on first day. I'll look into it later. I'm thinking there would be very interesting statistics I can look for such as death on days of the month and make a heatmap of some sort. I'll probably try to do that sometime...
• No, this is interesting too. I was hoping to think about this question beyond simply 1/365. – jbranchaud Dec 28 '13 at 17:09
• Can you post a link to the free data? – Max Dec 28 '13 at 17:29
• Sorry, I messed up pasting the second link. I fixed it on my post but here you go: ssdmf.info/download.html I currently can't do the script because I have some college apps to finish and the files are 2GB each... :( – Mike Shi Dec 28 '13 at 17:34
• You need to take into account errors due to counting statistics. Roughly speaking the relative uncertainty in these calculations is going to be about 1/sqrt(47000) = 0.5%. So these differences are not statistically significant. – Dave31415 Mar 13 '14 at 5:01
• @Dave31415: Isn't the denominator $\sqrt{45531831}$? That makes it very statistically significant. – Alex R. Oct 10 '18 at 23:09
We can be even more precise than @Mike Shi's data: the most dangerous of all birthdays is the very first one.
The 1st day mortality rates reported there are around 0.2 % for industrialized countries and 0.8 % average for all countries. Which means that the risk of dying on the day of birth is at least as high as the risk of dying at any of the following birth days*.
* I think it is a safe assumption that 1st day deaths do not appear in @Mike Shi's file, as the US 1st day mortality rates are reported to be 0.3 % (other source: 0.26 %). Which is almost the total birth day death rate in the social security file. So either babies who die on the day of birth do not get a social security number, or dying on a birth day > 1 year is extremely improbable.
• Ah yes, I forgot to account for deaths that occur on the same day as the birth. I'm assuming this was excluded from the data, as the hospital would have to submit this data to the state for a birth certificate and they wouldn't submit data on babies who die, I'm assuming. This has led me to a series of awkward Google searches... "do dead babies get a social security number"... oh search history. – Mike Shi Dec 29 '13 at 21:09
Here's an argument why the probability of death on the birthday may be higher than on other days: Birthdays are emotionally charged days. Moreover, people tend to celebrate them somehow. So there is an excess of factors (relative to the person's usual life style) that increase biological stress (excess emotions, excess drinking, excess eating, excess dancing, excess bungee jumping etc). Statistically speaking, this situation increases the chances of dying on a birthday, since it intensifies any health issues a person may have, or because it exposes the person to situations and risks for which the person is inexperienced.
• sure. but rather than speculate, let's measure :) – Hugh Perkins Mar 26 '18 at 2:06
• I would think that it should in fact be lower. In the US more births occur near August, and most deaths occur in the winter due to the cold. But maybe the effect of too much cake and drinking overshadows seasonality :) – Alex R. Oct 10 '18 at 23:13
The probability that a newborn dies within a year can be found in the life tables. For example, you can check out the periodic life tables and look at the column $q_x$ for $x=0$ in the human mortality database. This is not exactly what you want, but will give you an idea.
In addition to the other excellent answers, there is a point none of them discussed: Birthdays are not uniformly distributed over the year, and neither are deathdays. That conspires such that the "statistical" probability is not 1/365. To get an idea of this effect, let's first assume they are both almost uniform, with only 29 February having a probability 1/4 of the others. That gives $$365 p + \frac14 p=1$$ so $$p= 0.002737851$$. That leads to a probability of birth and death on the same day equal to $$365\cdot p^2 + (p/4)^2= 0.002736445 > 0.00273224=\frac1{366}$$ which is the minimum possible value (with 366 days).
With a bit more generality, let $$p_i, i=1, \dotsc, n$$ be the birthday probabilities, and $$q_i, i=1,\dotsc,n$$ the deathday probabilities, for a year with $$n$$ days. Then, if birthday and deathday for a person are statistically independent, we will find that $$\DeclareMathOperator{\P}{\mathbb{P}} \P(\text{Birth and death on same day}) = \sum_{i=1}^n p_i q_i$$ so if $$p_i=q_i$$ then that is $$\sum_i p_i^2$$. That is a quantity known (in biology) as Simpsons index of (bio)diversity. Its inverse could then be taken as "effective number of days (in a year)"! The minimum value of $$\sum_i p_i^2$$ is $$1/n$$. To see that use convexity.
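A quick numerical check of the leap-day example above (29 February given a quarter of the weight of the other 365 days):

```python
p = 1 / 365.25                       # solves 365*p + p/4 = 1
same_day = 365 * p**2 + (p / 4)**2   # probability that birth and death fall on the same date
print(p, same_day, 1 / 366)          # 0.0027379  0.0027364  0.0027322
```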
But assuming $$p_i=q_i$$ is quite a stretch, so let's first look at some data: birthday probabilities for Norway calculated from data from ssb.no:
Clearly not uniform; the high outlier is 1 July. That is not real: it is caused by immigrants without a documented birthday being registered on that date. There is one maximum in spring, around the beginning of April, and another maximum in autumn, in September. The Simpson index calculated from this is $$0.002750224$$, and the inverse is $$363.6067$$, so the "effective number of birthdays" is about 363 and a half, rather close to 366. So the nonuniformity maybe is not too important. It is more difficult to find data for deathday, but I found a paper (in Norwegian, in the official journal of the Norwegian medical association) which reports around a 12% higher rate of death in winter than in summer. They also report a slightly increased risk of death on Mondays! In fact, international comparisons reported by that paper show that winter overmortality is lowest in Scandinavia; in countries like Ireland or England it is about double. That might be surprising, and might have to do with us Scandinavians having warmer and better insulated houses?
From that we can reconstruct a deathday distribution. I take winter halfyear as november-april. Then we can calculate $$p_w =1.12 p_s \\ (182 \cdot 1.12 + 184) p_s = 1$$ leading to $$p_s=0.002578383, p_w= 0.002887789$$ and finally $$\sum_i p_i q_i = 0.00273151$$, its inverse, the "effective number of days" being 366.1, pretty close to 366! The anticorrelation ($$\rho(p_i,q_i)=-0.06$$) seems to offset the nonuniformity in such a way that we could as well assume uniformity (and equal distribution for birthday and deathday). That is quite interesting.
EDIT: Here is a published paper on nonuniformity in the birthday problem.
1 out of 365 would be the correct odds, because you are guaranteed to die on one day out of a 365 day year... Therefore odds are 1 out of 365.
• How do you account for the purported observation (in the question, supported by Mike Shi's answer) that more people die on their birthdays? Could it be that your assumption that death is equally likely on every day might be flawed? Might it be, for example, that Alecos' suggested reason (in his answer) applies? You should justify your assumption or otherwise address the information in the question and other answers to explain why it doesn't cause a problem for your assumption. There may be such an argument but you'd need to offer it, not just hope it's true. – Glen_b Feb 22 '17 at 0:28
• On an unrelated point, we should clarify the terms used here. You are describing a probability, not an odds. It might help to read my answer here: Interpretation of simple predictions to odds ratios in logistic regression. – gung Feb 22 '17 at 1:39
http://mathhelpforum.com/algebra/5110-please-help-print.html | • Aug 24th 2006, 09:44 AM
Angel King
I need help with this problem .
15 divide 6 2/3 =?
A) 100 1/4 B) 2 1/4 C)100 D) 2 3/4
I'm thinking the letter D :eek:
• Aug 24th 2006, 09:55 AM
Quick
Quote:
Originally Posted by Angel King
I need help with this problem .
15 divide 6 2/3 =?
A) 100 1/4 B) 2 1/4 C)100 D) 2 3/4
I'm thinking the letter D :eek:
You know: $15=\frac{15}{1}$ and $6\frac{2}{3}=\frac{20}{3}$
therefore we have equation: $\frac{15}{1}\div \frac{20}{3}$
when dividing by a fraction you multiply by the reciprocal of the fraction, so we get: $\frac{15}{1}\div \frac{20}{3}=\frac{15}{1}\times \frac{3}{20}=\frac{15\times3}{1\times20}=\frac{45}{20}=2\frac{5}{20}=2\frac{1}{4}$
so the answer is B, not D
BTW When I was taught to divide fractions my teachers would say that you would "cross multiply" so you can think of it like this:
$\frac{15}{1}\div\frac{20}{3}=\frac{15}{1}\!\!\nwarrow\!\!\!\!\!\!\swarrow\!\!\frac{20}{3}=\frac{15\times3}{1\times20}=\frac{45}{20}$
did this post help?
https://campus.datacamp.com/courses/chip-seq-workflows-in-r/comparing-chip-seq-samples?ex=6 | As you have seen in the video, you have to create a set of consensus peak calls before you can test for differential binding. This can be achieved with the following line of R code:
ar_counts <- dba.count(ar_peaks, summits=200)
Consider the following statements.
1. In ar_counts all samples will have read counts for the same set of peak calls.
2. Some read counts may be 0.
3. All peaks in ar_counts are 200 bp wide.
4. All peaks in ar_counts are 400 bp wide.
Which of these statements are true?
https://www.storyofmathematics.com/expanded-form-exponents/ | # Expanded Form Exponents — Explanation and Examples
If we expand a number as a summation of individual digits multiplied by powers of $10$, then we call it the expanded form exponents.
In this topic, we will learn how to expand any given number using exponents. We will cover integers as well as decimal numbers using many numerical examples.
## What Is Expanded Form Exponents?
When an integer or a decimal is expanded using the exponents, then it is called expansion with exponents or expanded form exponents. In the exponential form, there is a base number and the power of the base is known as its exponent.
### Expanded Form
The expanded form of any number is the expansion of the said number as individual digits. In the expanded form we add all the values of each individual and it will give us the original number.
In short, we divide the number into ones, tens, hundreds etc and then add all those digits to get the original number. If we are given a number $121$, then we can divide this number into three parts: units, tens and hundreds as: $121 = 100\times 1 + 2 \times 10 + 1 \times 1 = 100 + 20 + 1$ and this is called the expansion of a number.
So in short, we can say that in the expanded form the digits of the number are associated with an expression which has the same digits but each digit is then multiplied with a base of $10$ with an exponent in such a manner that if we add them all up we get the original number.
### Writing a Number in Expanded Form
The method of writing a number in expanded form is very easy. Suppose we have a number “$a$” and we can divide into “$n$” digits, we can write it as $a = x_{n-1} \cdots x_{3} x_{2} x_{1}x_{0}$. Here, $x_{0}$ is the ones or units digit while $x_{1}$ the tens digits, $x_{2}$ the hundreds digit, and so on.
Let $a=321$, then $n=3$ and $x_{2}=3$, $x_{1} = 2$, and $x_{0}=1$.
Now, we want to expand $a$ as a summation of $n$ numbers, i.e., $a = c_{n-1} + c_{n-2} + \cdots + c_{0}$. In such a case, $c_{0}$ will be equal to $x_{0}$, $c_{1}$ will be equal to $x_{1}$ but with one extra zero at the end. Similarly, $c_{2}$ will be equal to $x_{2}$ but with two zeros appended at the end. For example, for $a=321$, we can write:
$a = 300 + 20 + 1$. Note that in this case, $c_{0}=1=x_{0}$, $c_{1}=20=x_{1}0$ and $c_{2}=300=x_{2}00$.
This expansion method which we discussed is suitable for integers, but what if the number which we are given for expansion is not an integer but a decimal, then what should be done? Well, this is where expansion with exponents comes in handy. Let us discuss what is meant by expansion with exponents and how we can use it to expand decimal numbers.
### Expansion Statement
Expanded Form Exponents is just like the normal expansion which we have discussed in the previous section but we do the expansion using the exponents. If you remember the expansion statement:
$a = x_{n-1} \cdots x_{3} x_{2} x_{1} x_{0} = c_{n-1}+ \cdots + c_{3} + c_{2}+ c_{1} + c_{0}$
Earlier, we added zeros at end of each “$c$” depending upon the base value. Instead, we can remove the extra zeros and multiply the digit with “$10^{k}$”, where “$k$” is the power of the exponent. For example, if we are given a digit $x_{2}$ then we can write $c_{2} = x_{2} \times 10^{2}$. The general expression can be written as $c_{n} = x_{n} \times 10^{n}$.
For example, we take the same previous number $321$ and now let us expand it using the exponent method. The digit “$3$” is the hundred digit while the digit “$2$” is the tens and “1” is the unit digit. $x_{2} = 3$, $x_{1} = 2$ and $x_{0} = 1$ and we can write the term as $c_{2} = 3 \times 10^{2}$, $c_{1} = 2 \times 10^{1}$ and $c_{0} = 1 \times 10^{0}$ so if we add all the “c” terms we get $321 = 3 \times 10^{2} + 2 \times 10^{1} + 1 \times 10^{0} = 3 \times 100 + 2 \times 10 + 1 \times 1 = 300 + 20 + 1$.
Let us study some of the examples related to the expansion of numbers using the exponent method.
Example 1: Expand the number $6565$ using the exponent method.
Solution:
The number $6565$ can be separated into digits $6$,$5$,$6$, and $5$.
Let $x = 6565$, then $x_{3} = 6, x_{2} = 5, x_{1} = 6, x_{0} = 5$
$6565 = 6 \times 10^{3} + 5 \times 10^{2} + 6 \times 10^{1} + 5 \times 10^{0}$
$6565 = 6 \times 1000 + 5 \times 100 + 6 \times 10 + 5 \times 1$
$6565 = 6000 + 500 + 60 + 5$
Example 2: Expand the number $7012$ using the exponent method.
Solution:
The number $7012$ can be separated into digits $7$, $0$, $1$, and $2$.
Let $x = 7012$, then $x_{3} = 7, x_{2} = 0, x_{1} = 1, x_{0} = 2$
$7012 = 7 \times 10^{3} + 0 \times 10^{2} + 1 \times 10^{1} + 2 \times 10^{0}$
$7012 = 7 \times 1000 + 0 \times 100 + 1 \times 10 + 2 \times 1$
$7012 = 7000 + 0 + 10 + 2$
Example 3: Expand the number $30492$ using the exponent method.
Solution:
The number $30492$ can be separated into digits $3$, $0$, $4$, $9$, and $2$.
Let $x = 30492$, then $x_{4} = 3$,$x_{3} = 0$, $x_{2} = 4$, $x_{1} = 9$, $x_{0} = 2$
$30492 = 3 \times 10^{4} + 0 \times 10^{3} + 4 \times 10^{2} + 9 \times 10^{1} + 2 \times 10^{0}$
$30492 = 3 \times 10000 + 0 \times 1000 + 4 \times 100 + 9 \times 10 + 2 \times 1$
$30492 = 30000 + 0 + 400 + 90 + 2$
### Expansion of Decimal Numbers
The decimal numbers can easily be expanded using the expansion with exponents. In the case of whole numbers, the digit on the far right is termed the units digit and it is multiplied by "$10^{0}$", but in the case of decimal numbers, there are digits after the decimal point. For example, the number 145.65 is considered a decimal number. So how do you expand the numbers after the decimal point?
It can easily be done by separating the digits before and after the decimal point. The digits prior to decimal points are $1$,$4$, and $5$, and we will expand them with the same method we have used so far, i.e., $x_{2} = 1$, $x_{1} = 4$ and $x_{0} = 5$. We will multiply each digit with $10^{k}$, where $k$ depends upon the base value of “$x$”.
In the case of digits prior to the decimal point, we start from the right and multiply each digit with “10” while increasing the power of “$10$” by “$1$”; as a general expression, we can write it as:
$a = x_{n-1} \times 10^{n-1} + x_{n-2} \times 10^{n-2} + \cdots + x_{0} \times 10^{0}$
In the case of digits after the decimal point, we start from the left and multiply each digit with “10” while decreasing the power of “$10$” by “$1$”. As a general expression, we can write it as:
$a = b_{1} \times 10^{-1} + b_{2} \times 10^{-2} + \cdots + b_{n} \times 10^{-n}$
For the digits after the decimal point, we start decreasing the exponent of base “$10$” from left to right. Continuing the above example of number 145.65, the number after the decimal point can be written as $0.65 = 6 \times 10^{-1} + 5 \times 10^{-2} = 0.6 + 0.05$. So if we want to expand the decimal number $145.65$ using exponents, then it can be done as:
$145.65 = 1 \times 10^{2} + 4 \times 10^{1} + 5 \times 10^{0} + 6 \times 10^{-1} + 5 \times 10^{-2} = 100 + 40 + 5 + 0.6 + 0.05$
As you can see, if we start from the left-most digit in this example, which is 1, it is multiplied by $10^{2}$ as it sits in the hundreds place, and as we move to the right, we decrease the power of base "$10$" by $1$.
Let us discuss an example of an expanded exponential form of a decimal number.
Example 4: Expand the number $920.12$ using the exponent method.
Solution:
The number $920.12$ can be separated into digits 9,2,0, 1, and 2.
Let $x = 920.12$, then $c_{2} = 9$, $c_{1} = 2$, $c_{0} = 0$, $b_{1} = 1$, $b_{2} = 2$
$920.12 = 9 \times 10^{2} + 2 \times 10^{1} + 0 \times 10^{0} + 1 \times 10^{-1} + 2 \times 10^{-2}$
$920.12 = 9 \times 100 + 2 \times 10 + 0 \times 1 + \dfrac{1}{10} + \dfrac{2}{100}$
$920.12 = 900 + 20 + 0 + 0.1 + 0.02$
This is how decimals in the expanded form are presented or written.
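Because the procedure is completely mechanical, it can also be captured in a few lines of code. Here is an illustrative Python helper (a sketch written for this article's examples, not a standard routine) that prints the expanded exponent form of a non-negative decimal numeral:

```python
def expanded_form(number: str) -> str:
    """Return the expanded form with exponents of a non-negative decimal numeral."""
    whole, _, frac = number.partition(".")
    terms = []
    for k, digit in enumerate(whole):
        power = len(whole) - 1 - k          # the leftmost digit gets the highest power of 10
        terms.append(f"{digit} x 10^{power}")
    for k, digit in enumerate(frac, start=1):
        terms.append(f"{digit} x 10^-{k}")  # digits after the point get negative powers
    return " + ".join(terms)

print(expanded_form("920.12"))
# 9 x 10^2 + 2 x 10^1 + 0 x 10^0 + 1 x 10^-1 + 2 x 10^-2
```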
### Practice Questions
1. Expand the number $-121.40$ using the exponent method.
2. Write $224,090$ in expanded form using exponents.
1). The number is negative and there are two methods to solve this. You can either follow the first method which we have discussed and simply multiply the final answer by "$-1$", or take every digit as negative to expand the number.
$-121.40$ can be separated into digits $-1$, $-2$, $-1$, $-4$, and $0$.
Let $x = -121.40$, then $c_{2} = -1$, $c_{1} = -2$, $c_{0} = -1$, $b_{1} = -4$, $b_{2} = 0$
$-121.40 = -1 \times 10^{2} - 2 \times 10^{1} - 1\times 10^{0} - 4 \times 10^{-1} - 0 \times 10^{-2}$
$-121.40 = -1 \times 100 - 2 \times 10 - 1 \times 1 - \dfrac{4}{10} - \dfrac{0}{100}$
$-121.40 = -100 - 20 - 1 - 0.4 - 0$
2). The number $224,090$ can be separated into digits $2$, $2$, $4$, $0$, $9$, and $0$.
Let $x = 224,090$, then $x_{5} = 2$, $x_{4} = 2$, $x_{3} = 4$, $x_{2} = 0$, $x_{1} = 9$, $x_{0} = 0$
$224,090 = 2 \times 10^{5} + 2 \times 10^{4} + 4 \times 10^{3} + 0 \times 10^{2} + 9 \times 10^{1} + 0 \times 10^{0}$
$224,090 = 2 \times 100000 + 2 \times 10000 + 4 \times 1000 + 0 \times 100 + 9 \times 10 + 0 \times 1$
$224,090 = 200000 + 20000 + 4000 + 0 + 90 + 0$
http://www.intlpress.com/HHA/v7/n2/a2/ | # Transferring TTP-structures via contraction
## V. Alvarez, J.A. Armario, M.D. Frau and P. Real
Let $A \otimes_t C$ be a {\em twisted tensor product} of an algebra $A$ and a coalgebra $C$, along a {\em twisting cochain} $t:C \rightarrow A$. By means of what is called the {\em tensor trick} and under some nice conditions, Gugenheim, Lambe and Stasheff proved in the early 90s that $A \otimes_t C$ is homology equivalent to the objects $M \otimes_{t'} C$ and $A \otimes_{t''} N$, where $M$ and $N$ are strong deformation retracts of $A$ and $C$, respectively. In this paper, we attack this problem from the point of view of contractions. We find explicit contractions from $A \otimes_t C$ to $M \otimes_{t'} C$ and $A \otimes_{t''} N$. Applications to the comparison of resolutions which split off of the bar resolution, as well as to some homological models for central extensions, are given.
Homology, Homotopy and Applications, Vol. 7(2005), No. 2, pp. 41-54
https://brilliant.org/problems/4-numbers-theory/ | # 4 Numbers Theory
Number Theory
Find four numbers a, b, c and n that satisfy the equation a^n + b^n = c^n, where a, b, c and n belong to the set of natural numbers and n > 2.
https://rdrr.io/cran/nnspat/man/ceTk.html | ceTk: Cuzick and Edwards T_k Test statistic In nnspat: Nearest Neighbor Methods for Spatial Patterns
Description
This function computes Cuzick and Edwards T_k test statistic based on the number of cases within `k`NNs of the cases in the data.
For disease clustering, Cuzick and Edwards (1990) suggested a `k`-NN test based on the number of cases among the `k` NNs of the case points. Let z_i be the i^{th} point and d_i^k be the number of cases among the `k` NNs of z_i. Then Cuzick-Edwards' `k`-NN test is T_k=∑_{i=1}^n δ_i d_i^k, where δ_i=1 if z_i is a case, and 0 if z_i is a control.
The argument `cc.lab` is case-control label, 1 for case, 0 for control, if the argument `case.lab` is `NULL`, then `cc.lab` should be provided in this fashion, if `case.lab` is provided, the labels are converted to 0's and 1's accordingly. Also, T_1 is identical to the count for cell (1,1) in the nearest neighbor contingency table (NNCT) (See the function `nnct` for more detail on NNCTs).
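Outside R, the statistic itself is straightforward to reproduce. The following NumPy sketch (Euclidean distances assumed, distance ties ignored) mirrors the definition above; it is only an illustration of the formula, not a replacement for the package function:

```python
import numpy as np

def ce_Tk(dat, cc_lab, k=1):
    # Cuzick-Edwards T_k: number of cases among the k NNs of each case, summed over cases
    dat = np.asarray(dat, dtype=float)
    cc_lab = np.asarray(cc_lab)
    d = np.linalg.norm(dat[:, None, :] - dat[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                                     # a point is not its own neighbour
    knn = np.argsort(d, axis=1)[:, :k]                              # indices of the k nearest neighbours
    d_ik = cc_lab[knn].sum(axis=1)                                  # d_i^k for every point
    return int(d_ik[cc_lab == 1].sum())                             # sum of delta_i * d_i^k over cases

rng = np.random.default_rng(1)
Y = rng.random((20, 3))
cls = rng.integers(0, 2, 20)
print(ce_Tk(Y, cls, k=3))
```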
Usage
```
ceTk(dat, cc.lab, k = 1, case.lab = NULL, ...)
```
Arguments
`dat`: The data set in one or higher dimensions, each row corresponds to a data point.
`cc.lab`: Case-control labels, 1 for case, 0 for control.
`k`: Integer specifying the number of NNs (of subject i), default is `1`.
`case.lab`: The label used for cases in the `cc.lab` (if `cc.lab` is not provided then the labels are converted such that cases are 1 and controls are 0), default is `NULL`.
`...`: Further arguments, such as `method` and `p`, passed to the `dist` function.
Value
Cuzick and Edwards T_k test statistic for disease clustering
Author(s)
Elvan Ceyhan
References
Cuzick J, Edwards R (1990). Spatial clustering for inhomogeneous populations. Journal of the Royal Statistical Society, Series B, 52(1), 73-104.
See Also
`Tcomb`, `seg.ind`, `Pseg.coeff` and `ceTkinv`
Examples
```
n<-20 #or try sample(1:20,1)
Y<-matrix(runif(3*n),ncol=3)
cls<-sample(0:1,n,replace = TRUE) #or try cls<-rep(0:1,c(10,10))
ceTk(Y,cls)
ceTk(Y,cls,method="max")
ceTk(Y,cls,k=3)
ceTk(Y,cls+1,case.lab = 2)
#cls as a factor
na<-floor(n/2); nb<-n-na
fcls<-rep(c("a","b"),c(na,nb))
ceTk(Y,fcls,case.lab="a") #try also ceTk(Y,fcls)
#############
n<-40
Y<-matrix(runif(3*n),ncol=3)
cls<-sample(1:4,n,replace = TRUE)
# here ceTk(Y,cls) gives an error message
```
https://mathoverflow.net/questions/87998/resolution-of-singularities-for-flat-families | # Resolution of singularities for flat families.
Is there a resolution of singularities for flat families?
More precisely, if $X \rightarrow \mathbb{A} ^n$ is a flat map, does there exist a map $Y \rightarrow X$ such that, for every $p \in \mathbb{A} ^n$, the fiber map $Y_p \rightarrow X_p$ is a resolution of singularities? Can one require, moreover, that the map $Y \rightarrow \mathbb{A} ^n$ is smooth?
I assume you want $Y \to X$ to be proper. The answer is a definite no, in general. For example, take a polynomial $f: \mathbb A^2 \to \mathbb A^1$; such a $Y$ would have to be finite over $\mathbb A^2$, and birational, so $Y = \mathbb A^2$. There are lots of counterexamples in higher dimension too: for example, it follows from the purity theorem that you usually can't have a simultaneous resolution when $X$ is smooth. Thus, for example, in the very simple example $f\colon \mathbb A^3 \to \mathbb A^1$, $f(x, y, z) = x^2 + yz$, in which the only singular fiber is over the origin, and it has the simplest kind of surface singularity, of type $A_1$, you don't have a simultaneous resolution.
There are some non-trivial results, but they require the base to be 1-dimensional, and they require a base change on the base to get the resolution. For example, in the example above if one makes a base change $t \mapsto t^2$ on the base and normalizes, one has a simultaneous resolution. This is particular case of a theorem of Brieskorn: see for example M. Artin, Algebraic construction of Brieskorn's resolutions, Journal of Algebra 29 (1974). This is only possible in very particular cases, though.
This is false in general. Take $f:\mathbb A^2\to \mathbb A^1$, $(x,y)\mapsto xy$. Any attempt to resolve the singular fiber will bring in a new component in the fiber, so it remains singular.
https://en.academic.ru/dic.nsf/enwiki/610269 | # Short rate model
Short rate model
In the context of interest rate derivatives, a short rate model is a mathematical model that describes the future evolution of interest rates by describing the future evolution of the short rate.
The short rate
The short rate, usually written $r_t$, is the (annualized) interest rate at which an entity can borrow money for an infinitesimally short period of time from time $t$. Specifying the current short rate does not specify the entire yield curve. However, no-arbitrage arguments show that, under some fairly relaxed technical conditions, if we model the evolution of $r_t$ as a stochastic process under a risk-neutral measure $Q$ then the price at time $t$ of a zero-coupon bond maturing at time $T$ is given by
:$P(t,T) = \mathbb{E}\left[\left.\exp\left(-\int_t^T r_s\, ds\right)\right|\mathcal{F}_t\right]$
where $\mathcal{F}$ is the natural filtration for the process. Thus specifying a model for the short rate specifies future bond prices. This means that instantaneous forward rates are also specified by the usual formula
:$f(t,T) = -\frac{\partial}{\partial T} \ln P(t,T).$
And, as a third equivalent description, the yields are given as well.
Particular short-rate models
Throughout this section $W_t$ represents a standard Brownian motion and $dW_t$ its differential.
#The Rendleman-Bartter model models the short rate as $dr_t = \theta r_t\, dt + \sigma r_t\, dW_t$
#The Vasicek model models the short rate as $dr_t = a(b-r_t)\, dt + \sigma\, dW_t$ (a short simulation sketch follows this list)
#The Ho-Lee model models the short rate as $dr_t = \theta_t\, dt + \sigma\, dW_t$
#The Hull-White model (also called the extended Vasicek model sometimes) posits $dr_t = (\theta_t-\alpha r_t)\,dt + \sigma_t\, dW_t$. In many presentations one or more of the parameters $\theta$, $\alpha$ and $\sigma$ are not time-dependent. The process is called an Ornstein-Uhlenbeck process.
#The Cox-Ingersoll-Ross model supposes $dr_t = (\theta_t-\alpha r_t)\,dt + \sqrt{r_t}\,\sigma_t\, dW_t$
#In the Black-Karasinski model a variable $X_t$ is assumed to follow an Ornstein-Uhlenbeck process and $r_t$ is assumed to follow $r_t = \exp(X_t)$.
# The Black-Derman-Toy model
Besides the above one-factor models, there are also multi-factor models of the short rate; among them the best known are the Longstaff and Schwartz two-factor model and the Chen three-factor model (also called the "stochastic mean and stochastic volatility model"):
#The Longstaff-Schwartz model supposes the short rate dynamics are given by the following two equations: $dX_t = (\theta_t-Y_t)\,dt + \sqrt{X_t}\,\sigma_t\, dW_t$, $dY_t = (\zeta_t-Y_t)\,dt + \sqrt{Y_t}\,\sigma_t\, dW_t$.
#The Chen model, in which the short rate has stochastic mean and stochastic volatility, is given by $dr_t = (\theta_t-\alpha_t)\,dt + \sqrt{r_t}\,\sigma_t\, dW_t$, $d\alpha_t = (\zeta_t-\alpha_t)\,dt + \sqrt{\alpha_t}\,\sigma_t\, dW_t$, $d\sigma_t = (\beta_t-\sigma_t)\,dt + \sqrt{\sigma_t}\,\eta_t\, dW_t$.
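To make the notation concrete, here is a minimal Euler-Maruyama simulation of the Vasicek dynamics listed above (the parameter values are purely illustrative):

import numpy as np

a, b, sigma = 0.5, 0.03, 0.01     # speed of mean reversion, long-run level, volatility
r0, T, n = 0.02, 5.0, 1000        # initial short rate, horizon in years, number of time steps
dt = T / n

rng = np.random.default_rng(42)
r = np.empty(n + 1)
r[0] = r0
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))                  # Brownian increment over dt
    r[i + 1] = r[i] + a * (b - r[i]) * dt + sigma * dW

print(r[-1])   # terminal short rate of one simulated path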
Other interest rate models
The other major framework for interest rate modelling is the Heath-Jarrow-Morton framework (HJM). Unlike the short rate models described above, this class of models is generally non-Markovian. This makes general HJM models computationally intractable for most purposes. The great advantage of HJM models is that they give an analytical description of the entire yield curve, rather than just the short rate. For some purposes (e.g., valuation of mortgage backed securities), this can be a big simplification. The Cox-Ingersoll-Ross and Hull-White models in one or more dimensions can both be straightforwardly expressed in the HJM framework. Other short rate models do not have any simple dual HJM representation.
The HJM framework with multiple sources of randomness, including as it does the Brace-Gatarek-Musiela model and market models, is often preferred for models of higher dimension.
https://www.physicsforums.com/threads/conformal-mapping-and-flow-normal-to-ellipse.672425/ | # Conformal Mapping and flow normal to ellipse
1. Feb 17, 2013
### nickthequick
Hi,
Given that the flow normal to a thin disk of radius r is given by
$\phi = -\frac{2rU}{\pi}\sqrt{1-\frac{x^2+y^2}{r^2}}$
where U is the speed of the flow normal to the disk, find the flow normal to an ellipse of major axis a and minor axis b.
I can only find the answer in the literature in one place, where it's stated
$\phi = -\frac{U b}{E(e)} \sqrt{1-\frac{x^2}{a^2}-\frac{y^2}{b^2}}$
where E(e) is the complete elliptic integral of the second kind and e is the eccentricity of the ellipse.
I have been trying to use the Joukowski map to send lines of equipotential of the disk to those of the ellipse, but I'm not sure how the complete elliptical integral of the second kind enters this picture.
Any suggestions, references, would be appreciated!
Nick
2. Feb 20, 2013
### nickthequick
On second thought, the Joukowski map seems inappropriate here. I think the map I want is
$z\to a \cosh(\xi + i \eta)$ so that
$x=a\cosh (\xi) \cos(\eta)$ and $y = a\sinh (\xi) \sin(\eta)$.
This will effectively give me the change in functional form that I expect; however, I still don't see how this will modify the coefficient in the appropriate way.
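As a quick numerical sanity check of the potential quoted in the first post, here is a minimal Python sketch. It assumes SciPy's convention that scipy.special.ellipe takes the parameter m = e² rather than the modulus e; in the circular limit b → a one has E(0) = π/2, so the expression reduces to the disk formula at the origin.

import numpy as np
from scipy.special import ellipe   # complete elliptic integral of the second kind, E(m)

def phi_ellipse(x, y, a=2.0, b=1.0, U=1.0):
    """Quoted potential -U*b/E(e) * sqrt(1 - x^2/a^2 - y^2/b^2)."""
    m = 1.0 - (b / a) ** 2                        # parameter m = e**2, assuming a >= b
    inside = 1.0 - x ** 2 / a ** 2 - y ** 2 / b ** 2
    return -U * b / ellipe(m) * np.sqrt(np.clip(inside, 0.0, None))

# circular limit b -> a: phi(0,0) should match the disk value -2*r*U/pi
print(phi_ellipse(0.0, 0.0, a=1.0, b=1.0), -2.0 / np.pi)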
https://tex.stackexchange.com/tags/sectioning/info | Tag Info
{sectioning} is about commands like \chapter or \section forming the logical structure of documents. For questions specifically about {parts}, {chapters} or lower-level commands ({sections-paragraphs}), add the respective tag to the more general {sectioning}. A popular package is {titlesec}.
LaTeX provides 7 levels of sectioning: `\part`, `\chapter`, `\section`, `\subsection`, `\subsubsection`, `\paragraph`, and `\subparagraph`. `\chapter` is only available in book-type classes (e.g. `book`, `amsbook`, and `report`), and `\part` isn't available in the `letter` class.
Each command also has a starred version, e.g. `\section*{...}`, which produces an unnumbered section. You can also set a short title for headers and the table of contents with `\section[Short Title]{Much Longer Full Title}`.
http://blog.a222.net/2017/09/ | ## 27 September 2017
### iTunes - cannot locate CD configuration folder
A problem first reported in 2004!
A quick fix is to search for a folder called "CD Configuration" and then copy it into c:\program files\itunes or c:\program files (x86)\itunes.
This assumes that you can find the missing folder. At present I have no information on how to fix the problem if the folder cannot be found.
### Windows Default Programs
To assign a program to a particular file type proceed as follows.
Windows 7.
Right click the file.
If there is a 'Choose default...' option then take this.
You will see a list of candidates.
(If the one you need is not there then see below.)
Make sure that the "Always Use..." box is ticked.
Click on the desired program.
All done.
Windows 10.
Right click the file.
If there is an 'Open With' option then take this.
You will see an initial list of candidates.
If the one you want is not there, then click "View More Apps..."
Make sure that the "Always Use..." box is ticked.
Click on the desired program.
All done.
Can't see the program?
Options are:
OPEN MANUALLY
Open the required program, select File-Open, browse to and open the file. Close it and repeat the process above. The program should now be on the candidate list.
BROWSE FOR PROGRAM
For Win 7 there is a Browse button, for Win 10, a "Look..on this PC". This can be used to find the executable but this is rather geeky. You'll need information about where the program is located, possibly from the vendor's website.
https://www.biostars.org/p/343282/ | VEP- What is the best idea to start analyzing?
1
0
Entering edit mode
2.5 years ago
Hi all,
I have not worked with the VEP software yet, but I need some of its outputs. Unfortunately, I did not understand how to do the analysis by reading its guide. So, what is the best way to start analyzing?
Best Regard
Mostafa
SNP vep Ensembl • 1.9k views
1
Entering edit mode
Why not install locally and try out examples?
0
Entering edit mode
I have installed it, but I do not know exactly what the first step is. I guess I should first annotate my VCF file using the script below? Is my guess right?
grep -v "#" data.gff | sort -k1,1 -k4,4n -k5,5n -t$'\t' | bgzip -c > data.gff.gz
tabix -p gff data.gff.gz
./vep -i input.vcf -gff data.gff.gz -fasta genome.fa.gz
0
Entering edit mode
what zx8754 said: did you only try "quick start" on the right of https://www.ensembl.org/info/docs/tools/vep/script/index.html
0
Entering edit mode
hi i have this problem after installing vep. i'm getting this errors on running vep. i'm helping mostafa by the way.
Can't locate Try/Tiny.pm in @INC (@INC contains: /home/sadri/vep/ensembl-vep/modules /home/sadri/vep/ensembl-vep /opt/miRDeep2/mirdeep2/lib/perl5/x86_64-linux-thread-multi /opt/miRDeep2/mirdeep2/lib/perl5 /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /home/sadri/vep/ensembl-vep/Bio/EnsEMBL/Feature.pm line 85.
BEGIN failed--compilation aborted at /home/sadri/vep/ensembl-vep/Bio/EnsEMBL/Feature.pm line 85.
Compilation failed in require at /home/sadri/vep/ensembl-vep/Bio/EnsEMBL/Variation/BaseVariationFeature.pm line 58.
BEGIN failed--compilation aborted at /home/sadri/vep/ensembl-vep/Bio/EnsEMBL/Variation/BaseVariationFeature.pm line 58.
Compilation failed in require at /home/sadri/vep/ensembl-vep/Bio/EnsEMBL/Variation/VariationFeature.pm line 97.
BEGIN failed--compilation aborted at /home/sadri/vep/ensembl-vep/Bio/EnsEMBL/Variation/VariationFeature.pm line 97.
Compilation failed in require at /home/sadri/vep/ensembl-vep/Bio/EnsEMBL/Variation/DBSQL/VariationFeatureAdaptor.pm line 89.
BEGIN failed--compilation aborted at /home/sadri/vep/ensembl-vep/Bio/EnsEMBL/Variation/DBSQL/VariationFeatureAdaptor.pm line 89.
Compilation failed in require at /home/sadri/vep/ensembl-vep/modules/Bio/EnsEMBL/VEP/BaseVEP.pm line 59.
BEGIN failed--compilation aborted at /home/sadri/vep/ensembl-vep/modules/Bio/EnsEMBL/VEP/BaseVEP.pm line 59.
Compilation failed in require at (eval 7) line 3.
...propagated at /usr/share/perl5/base.pm line 94.
BEGIN failed--compilation aborted at /home/sadri/vep/ensembl-vep/modules/Bio/EnsEMBL/VEP/BaseRunner.pm line 56.
Compilation failed in require at (eval 6) line 3.
...propagated at /usr/share/perl5/base.pm line 94.
BEGIN failed--compilation aborted at /home/sadri/vep/ensembl-vep/modules/Bio/EnsEMBL/VEP/Runner.pm line 71.
Compilation failed in require at ./vep line 20.
BEGIN failed--compilation aborted at ./vep line 20.
0
Entering edit mode
What command are you running when you get that error?
0
Entering edit mode
I am getting the same type of errors when I execute the ./vep --custom. I don't have root access unfortunately, so I'm having trouble fixing the error using sadri's solution. Can anyone help?
0
Entering edit mode
i found the solution
yum install enablerepo=rpmforge perl-Try-Tiny
thanks
0
Entering edit mode
mostafa asks: is the input vcf file annotated? if it's annotated how should i annotate my file. thanks
0
Entering edit mode
The VCF should be a standard VCF. The VEP will only take into account the location and alleles. If you specify --vcf as output, you will retain everything that is already in your VCF, and the VEP will add its data into the INFO column.
2
Entering edit mode
2.5 years ago
The basic commands are in the documentation.
./vep --cache -i input.txt -o output.txt
Is it working when you run that with the example files that ship with the VEP?
0
Entering edit mode
Unfortunately, no? The error is related to cache files.
Another question, My Organism is Buffalo and there is no information in the cache folder for it? Can i use other organisms as file caches?
0
Entering edit mode
When you run the command with the example files, what is your error? There is no buffalo genome in Ensembl, so you will need to work with your own data. But we should fix the installation before we worry about that.
0
Entering edit mode
Hi Emily, Unfortunately, I've been involved with VEP for days. you asked me if VEP works for me correctly or not? I think i installed it correctly. Please see below: Is the installation done correctly?
1
Entering edit mode
It's impossible to read what's on the console. Can you please copy-paste the text and not a screenshot of the console?
0
Entering edit mode
Yes, Sure
which: no tabix in (/opt/vep/ensembl-vep:/opth/hadoop/hadoop-2.7.3/bin:/opth/hadoop/hadoop-2.7.3/sbin:/opt/Mathematica/11.0/SystemFiles/Libraries/Linux-x86-64/:/opt/Mathematica/11.0/Executables:/opt/intel/composer_xe_2015.0.090/bin/intel64:/opt/torque/bin:/opt/torque/sbin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/torque/sbin/:/opt/torque/bin/:/opt/maui/bin/:/opt/maui/sbin/:/opt/gold/bin:/opt/torque/sbin/:/opt/torque/bin/:/opt/mireap/viennarna/share/perl5/:/opt/maui/bin/:/opt/maui/sbin/:/opt/boost/boost-installed:/opt/MATLAB/MATLAB_Production_Server/R2013a/toolbox/distcomp/bin/:/opt/cuda/bin:/home/m.rafiepour222/bin)
#----------------------------------#
# ENSEMBL VARIANT EFFECT PREDICTOR #
#----------------------------------#
Versions:
ensembl : 94.5c08d90
ensembl-funcgen : 94.08b0c13
ensembl-io : 94.8d53275
ensembl-variation : 94.066b102
ensembl-vep : 94.4
Help: dev@ensembl.org , helpdesk@ensembl.org
Twitter: @ensembl
http://www.ensembl.org/info/docs/tools/vep/script/index.html
Usage: ./vep [--cache|--offline|--database] [arguments]
Basic options
=============
--help Display this message and quit
-i | --input_file Input file
-o | --output_file Output file
--force_overwrite Force overwriting of output file
--species [species] Species to use [default: "human"]
--everything Shortcut switch to turn on commonly used options. See web documentation for details [default: off]
--fork [num_forks] Use forking to improve script runtime
For full option documentation see: http://www.ensembl.org/info/docs/tools/vep/script/vep_options.html
0
Entering edit mode
The error is on the first line: which: no tabix
Install bgzip2 and try again?
0
Entering edit mode
Ok, Is it possible for you to send me the bgzip2 installation link?
0
Entering edit mode
I'm glad to see you've solved it. These are issues where you can show (and have shown) that you've invested your effort. Remember, asking for a download link is like using the forum as Google, which is not encouraged.
0
Entering edit mode
many thanks for your guide, As I said above, my Organism is Buffalo and there is no information in the cache folder for it in VEP. So, as regards that in VEP documents do not provide information on how to create a file cache. Now I want to know how to generate the file cache?
0
Entering edit mode
Emily is the better person to tackle that. Like she said, installation needed to be solved before the data cache could be addressed. I'd recommend opening a new question about getting VEP to work with the Buffalo genome. That way, this thread would be about installing VEP and all the information about the new genome would belong in that thread.
Please accept Emily's answer to mark this thread as solved. Thank you!
0
Entering edit mode
You don't need to generate a cache, you can use it directly with a GFF or GTF file and a genome FASTA. If you're having trouble with that, I agree with Ram that you should open a new post, because I'm getting very confused reading through here what is done and what links to what.
0
Entering edit mode
Hi Emily, many thanks for reply, Yes, I have been involved with this challenge for days. First, do you suggest that I use this script:
grep -v "#" data.gff | sort -k1,1 -k4,4n -k5,5n -t$'\t' | bgzip -c > data.gff.gz
tabix -p gff data.gff.gz
./vep -i input.vcf -gff data.gff.gz -fasta genome.fa.gz
And if i did not get a result, opening a new question about getting VEP to work with the Buffalo genome ??
1
Entering edit mode
Asking about using a script is not very useful since we have no idea if your data input is in the correct format as the data.gff file above.
You are going to need to do this step by step. Do just grep -v "#" data.gff | sort -k1,1 -k4,4n -k5,5n -t$'\t' and see what you get first. Does the output look reasonable/right. Then proceed to add one step at a time. It is indeed time to stop posting in this thread and ask a new question if you are not able to make any progress/run into new errors. ADD REPLY 0 Entering edit mode Ok, First, I used the script below and created the zip file without error. module load SAMTools-1.4.1 grep -v "#" GCA_003121395.1_ASM312139v1_genomic.gff | sort -k1,1 -k4,4n -k5,5n -t$'\t' | bgzip -c > data.gff.gz
And then, use tabix -p gff data.gff.gz
And then, i use:
vep -i Final_Filter_GQ_KHUZ_MAZ_EAZ_GIL_WAZ.vcf -gff data.gff.gz -fasta GCA_003121395.1_ASM312139v1_genomic.fna
But, I encountered this error?
-------------------- EXCEPTION --------------------
MSG: ERROR: Cannot use format gff without Bio::DB::HTS::Tabix module installed
STACK Bio::EnsEMBL::VEP::AnnotationSource::File::new /opt/vep/ensembl-vep/modules/Bio/EnsEMBL/VEP/AnnotationSource/File.pm:162
STACK Bio::EnsEMBL::VEP::BaseRunner::get_all_AnnotationSources /opt/vep/ensembl-vep/modules/Bio/EnsEMBL/VEP/BaseRunner.pm:175
STACK Bio::EnsEMBL::VEP::Runner::init /opt/vep/ensembl-vep/modules/Bio/EnsEMBL/VEP/Runner.pm:123
STACK Bio::EnsEMBL::VEP::Runner::run /opt/vep/ensembl-vep/modules/Bio/EnsEMBL/VEP/Runner.pm:194
STACK toplevel /opt/vep/ensembl-vep/vep:224
Date (localtime) = Thu Oct 25 16:42:55 2018
Ensembl API version = 94
---------------------------------------------------
0
Entering edit mode
Looks like you need to install this module.
0
Entering edit mode
Hi genomax,
I have been able to fix the installation problem. I tried and was able to run the script (vep -i Final.vcf -gff data.gff.gz -fasta genomic.fna) which Emily had suggested to me, with a few warnings but no errors:
(vep) [m.rafiepour222@abrii1 ~]$ vep -i Final.vcf -gff data.gff.gz -fasta genomic.fna
Possible precedence issue with control flow operator at /opt/anaconda2/lib/site_perl/5.26.2/Bio/DB/IndexedBase.pm line 845.
WARNING: Parent entries with the following IDs were not found or skipped due to invalid types: rna27858, rna27857
WARNING: Parent entries with the following IDs were not found or skipped due to invalid types: rna40648
WARNING: Parent entries with the following IDs were not found or skipped due to invalid types: rna46030, rna46031
WARNING: Parent entries with the following IDs were not found or skipped due to invalid types: rna47129, rna47130
WARNING: Parent entries with the following IDs were not found or skipped due to invalid types: rna50084
WARNING: Parent entries with the following IDs were not found or skipped due to invalid types: rna54313, rna54314
WARNING: Parent entries with the following IDs were not found or skipped due to invalid types: rna60662
WARNING: Parent entries with the following IDs were not found or skipped due to invalid types: rna63492, rna63491
WARNING: Parent entries with the following IDs were not found or skipped due to invalid types: rna64693
WARNING: Parent entries with the following IDs were not found or skipped due to invalid types: rna67395, rna67394
(vep) [m.rafiepour222@abrii1 ~]$
And that's part of my output:
#Uploaded_variation Location Allele Gene Feature Feature_type Consequence cDNA_position CDS_position Protein_position Amino_acids Codons Existing_variation Extra
CM009840.1_932_C/A CM009840.1:932 A - - - intergenic_variant - - - - - - IMPACT=MODIFIER
CM009840.1_1096_A/T CM009840.1:1096 T - - - intergenic_variant - - - - - - IMPACT=MODIFIER
CM009840.1_1107_A/G CM009840.1:1107 G - - - intergenic_variant - - - - - - IMPACT=MODIFIER
CM009840.1_1177_C/G CM009840.1:1177 G - - - intergenic_variant - - - - - - IMPACT=MODIFIER
CM009840.1_1276_C/T CM009840.1:1276 T - - - intergenic_variant - - - - - - IMPACT=MODIFIER
CM009840.1_1295_G/A CM009840.1:1295 A - - - intergenic_variant - - - - - - IMPACT=MODIFIER
CM009840.1_1471_C/A CM009840.1:1471 A - - - intergenic_variant - - - - - - IMPACT=MODIFIER
CM009840.1_1518_A/G CM009840.1:1518 G - - - intergenic_variant - - - - - - IMPACT=MODIFIER
Did everything go well?
0
Entering edit mode
Hi genomax,
I am waiting for your response. I have another question: I want to check whether VEP works correctly, so how can I calculate SIFT? I think there should be a column named SIFT in my output, but as you can see in the output, there is not.
0
Entering edit mode
Deleted due to reduced space
https://physics.meta.stackexchange.com/questions/5668/invalid-flags-%E2%86%92-disputed-%E2%86%92-flags-should-not-be-used-to-indicate-technical-inaccu | # Invalid flags → disputed → flags should not be used to indicate technical inaccuracies. What?
I continue to be befuddled by what exactly one is supposed to do with flags in the 10k tools, and I'm quite relieved to know they're going away soon. In the meantime, though, here's a stumper I just came across in my flagging history:
The post in question is this answer, which came up flagged as Not an Answer if I remember correctly. I understand why someone would flag this - it was only a sort-of good fit to the question to begin with - but I disagree that it failed to address the original intent of the OP. Particularly, I felt it did answer the question in its state at the time, with a less specific title.
On the other hand, given the current state of the question, I can understand that it might best be classed as Not an Answer. Given that, I can understand that my invalid flag is disputed.
I'm completely stumped, however, at the message. Flags should not be used to indicate technical inaccuracies, or an altogether wrong answer? That has absolutely nothing to do with why I flagged this way. The technical content of the post was never an issue.
Could someone elucidate what on Earth this message means and how it came into being?
https://hackage-origin.haskell.org/package/containers-verified-0.6.0.1/candidate/docs/Data-Set.html | containers-verified-0.6.0.1: Formally verified drop-in replacement of containers
Data.Set
Description
Please see the documentation of containers for details.
Synopsis
# Set type
data Set a :: * -> * #
A set of values a.
Instances
# Query
null :: Set a -> Bool #
O(1). Is this the empty set?
size :: Set a -> Int #
O(1). The number of elements in the set.
member :: Ord a => a -> Set a -> Bool #
O(log n). Is the element in the set?
notMember :: Ord a => a -> Set a -> Bool #
O(log n). Is the element not in the set?
isSubsetOf :: Ord a => Set a -> Set a -> Bool #
O(n+m). Is this a subset? (s1 isSubsetOf s2) tells whether s1 is a subset of s2.
disjoint :: Ord a => Set a -> Set a -> Bool #
O(n+m). Check whether two sets are disjoint (i.e. their intersection is empty).
disjoint (fromList [2,4,6]) (fromList [1,3]) == True
disjoint (fromList [2,4,6,8]) (fromList [2,3,5,7]) == False
disjoint (fromList [1,2]) (fromList [1,2,3,4]) == False
disjoint (fromList []) (fromList []) == True
Since: 0.5.11
# Construction
empty :: Set a #
O(1). The empty set.
singleton :: a -> Set a #
O(1). Create a singleton set.
insert :: Ord a => a -> Set a -> Set a #
O(log n). Insert an element in a set. If the set already contains an element equal to the given value, it is replaced with the new value.
delete :: Ord a => a -> Set a -> Set a #
O(log n). Delete an element from a set.
# Combine
union :: Ord a => Set a -> Set a -> Set a #
O(m*log(n/m + 1)), m <= n. The union of two sets, preferring the first set when equal elements are encountered.
unions :: (Foldable f, Ord a) => f (Set a) -> Set a #
The union of a list of sets: (unions == foldl union empty).
difference :: Ord a => Set a -> Set a -> Set a #
O(m*log(n/m + 1)), m <= n. Difference of two sets.
intersection :: Ord a => Set a -> Set a -> Set a #
O(m*log(n/m + 1)), m <= n. The intersection of two sets. Elements of the result come from the first set, so for example
import qualified Data.Set as S
data AB = A | B deriving Show
instance Ord AB where compare _ _ = EQ
instance Eq AB where _ == _ = True
main = print (S.singleton A `S.intersection` S.singleton B,
S.singleton B `S.intersection` S.singleton A)
prints (fromList [A],fromList [B]).
# Filter
filter :: (a -> Bool) -> Set a -> Set a #
O(n). Filter all elements that satisfy the predicate.
partition :: (a -> Bool) -> Set a -> (Set a, Set a) #
O(n). Partition the set into two sets, one with all elements that satisfy the predicate and one with all elements that don't satisfy the predicate. See also split.
split :: Ord a => a -> Set a -> (Set a, Set a) #
O(log n). The expression (split x set) is a pair (set1,set2) where set1 comprises the elements of set less than x and set2 comprises the elements of set greater than x.
splitMember :: Ord a => a -> Set a -> (Set a, Bool, Set a) #
O(log n). Performs a split but also returns whether the pivot element was found in the original set.
take :: Int -> Set a -> Set a #
Take a given number of elements in order, beginning with the smallest ones.
take n = fromDistinctAscList . take n . toAscList
Since: 0.5.8
drop :: Int -> Set a -> Set a #
Drop a given number of elements in order, beginning with the smallest ones.
drop n = fromDistinctAscList . drop n . toAscList
Since: 0.5.8
splitAt :: Int -> Set a -> (Set a, Set a) #
O(log n). Split a set at a particular index.
splitAt !n !xs = (take n xs, drop n xs)
map :: Ord b => (a -> b) -> Set a -> Set b #
O(n*log n). map f s is the set obtained by applying f to each element of s.
It's worth noting that the size of the result may be smaller if, for some (x,y), x /= y && f x == f y
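A minimal sketch illustrating this point (assuming GHC with the containers package available; not taken from the official documentation):

import qualified Data.Set as Set

-- 1 `div` 2 == 0, while 2 `div` 2 == 1 and 3 `div` 2 == 1, so two elements collide
main :: IO ()
main = print (Set.map (`div` 2) (Set.fromList [1, 2, 3 :: Int]))
-- prints: fromList [0,1]  -- the set shrank from three elements to two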
mapMonotonic :: (a -> b) -> Set a -> Set b #
O(n). mapMonotonic f s == map f s, but works only when f is strictly increasing. The precondition is not checked. Semi-formally, we have:
and [x < y ==> f x < f y | x <- ls, y <- ls]
==> mapMonotonic f s == map f s
where ls = toList s
# Folds
foldr :: (a -> b -> b) -> b -> Set a -> b #
O(n). Fold the elements in the set using the given right-associative binary operator, such that foldr f z == foldr f z . toAscList.
For example,
toAscList set = foldr (:) [] set
foldl :: (a -> b -> a) -> a -> Set b -> a #
O(n). Fold the elements in the set using the given left-associative binary operator, such that foldl f z == foldl f z . toAscList.
For example,
toDescList set = foldl (flip (:)) [] set
lookupMin :: Set a -> Maybe a #
O(log n). The minimal element of a set.
Since: 0.5.9
lookupMax :: Set a -> Maybe a #
O(log n). The maximal element of a set.
Since: 0.5.9
maxView :: Set a -> Maybe (a, Set a) #
O(log n). Retrieves the maximal key of the set, and the set stripped of that element, or Nothing if passed an empty set.
minView :: Set a -> Maybe (a, Set a) #
O(log n). Retrieves the minimal key of the set, and the set stripped of that element, or Nothing if passed an empty set.
elems :: Set a -> [a] #
O(n). An alias of toAscList. The elements of a set in ascending order. Subject to list fusion.
toList :: Set a -> [a] #
O(n). Convert the set to a list of elements. Subject to list fusion.
fromList :: Ord a => [a] -> Set a #
O(n*log n). Create a set from a list of elements.
If the elements are ordered, a linear-time implementation is used, with the performance equal to fromDistinctAscList.
toAscList :: Set a -> [a] #
O(n). Convert the set to an ascending list of elements. Subject to list fusion.
toDescList :: Set a -> [a] #
O(n). Convert the set to a descending list of elements. Subject to list fusion.
fromAscList :: Eq a => [a] -> Set a #
O(n). Build a set from an ascending list in linear time. The precondition (input list is ascending) is not checked.
fromDescList :: Eq a => [a] -> Set a #
O(n). Build a set from a descending list in linear time. The precondition (input list is descending) is not checked.
Since: 0.5.8
fromDistinctAscList :: [a] -> Set a #
O(n). Build a set from an ascending list of distinct elements in linear time. The precondition (input list is strictly ascending) is not checked.
fromDistinctDescList :: [a] -> Set a #
O(n). Build a set from a descending list of distinct elements in linear time. The precondition (input list is strictly descending) is not checked.
https://pyvows.org/a-particle-moves-along-a-straight-line-such-that-its-position-is-defined-by-st2%E2%88%926t5-m/
A particle moving along a straight line can be modeled by the equation s = (t² − 6t + 5) m. The particle starts at position 0 and moves to position 3 after 2 seconds. What is its velocity? The particle’s velocity is 2 meters per second since it traveled 3 meters in only two seconds.
The acceleration of the particle can be calculated by taking the slope of the line s = t with respect to t on a graph, which gives us −( )/( ). This means that every second, the speed increases by . In order for this acceleration equation to take place over time T so that we have an approximate prediction for where our final location would be (s’), we use T = √(( ), or √())) + (( ), or √())*. We plug in three values into this equation:
https://web2.0calc.com/questions/question_132 | +0
# question
0
41
1
A square $DEFG$ varies inside equilateral triangle $ABC,$ so that $E$ always lies on side $\overline{AB},$ $F$ always lies on side $\overline{BC},$ and $G$ always lies on side $\overline{AC}.$ The point $D$ starts on side $\overline{AB},$ and ends on side $\overline{AC}.$ The diagram below shows the initial position of square $DEFG,$ an intermediate position, and the final position.
Show that as square $DEFG$ varies, the height of point $D$ above $\overline{BC}$ remains constant.
Guest Mar 14, 2018
#1
+91974
+1
It says...
"The diagram below shows the initial position of square DEFG, an intermediate position, and the final position."
Where is the diagram?
Melody Mar 14, 2018
http://www.ck12.org/geometry/Supplementary-and-Complementary-Angle-Pairs/lesson/Supplementary-and-Complementary-Angle-Pairs/r20/
# Supplementary and Complementary Angle Pairs
## Find missing angle measures for supplementary or complementary angles.
### Vocabulary
Term | Definition
Acute Angle | An acute angle is an angle with a measure of less than 90 degrees.
Complementary angles | Complementary angles are a pair of angles with a sum of $90^{\circ}$.
Obtuse angle | An obtuse angle is an angle greater than 90 degrees but less than 180 degrees.
Straight angle | A straight angle is a straight line equal to $180^{\circ}$.
Supplementary angles | Supplementary angles are two angles whose sum is 180 degrees.
https://brilliant.org/problems/magic-cake/ | # Magic cake
Algebra Level 3
You have to divide a cake with 200 straight cuts (AB and CD are two possible cuts, for example). What is the maximum number of slices (the size of the slices does not matter) that you can have?
https://pyrpl.readthedocs.io/en/latest/user_guide/tutorial/index.html
# Quickstart Tutorial for PyRPL¶
You can download the API tutorial in form of a Jupyter notebook file or in HTML-form.
# Introduction to pyrpl¶
## 1) Introduction¶
The RedPitaya is an affordable FPGA board with fast analog inputs and outputs. This makes it interesting also for quantum optics experiments. The software package PyRPL (Python RedPitaya Lockbox) is an implementation of many devices that are needed for optics experiments every day. The user interface and all high-level functionality is written in python, but an essential part of the software is hidden in a custom FPGA design (based on the official RedPitaya software version 0.95). While most users probably never want to touch the FPGA design, the Verilog source code is provided together with this package and may be modified to customize the software to your needs.
In this document, you will find the following sections:
1. Introduction
2. ToC
3. Installation
4. First steps
5. RedPitaya Modules
6. The Pyrpl class
7. The Graphical User Interface
If you are using Pyrpl for the first time, you should read sections 1-4. This will take about 15 minutes and should leave you able to communicate with your RedPitaya via python.
If you plan to use Pyrpl for a project that is not related to quantum optics, you probably want to go to section 5 then and omit section 6 altogether. Conversely, if you are only interested in a powerful tool for quantum optics and don't care about the details of the implementation, go to section 6. If you plan to contribute to the repository, you should definitely read section 5 to get an idea of what this software package really does, and where help is needed. Finally, Pyrpl also comes with a Graphical User Interface (GUI) to interactively control the modules described in section 5. Please read section 7 for a quick description of the GUI.
# 3) Installation¶
### Option 3: Simple clone from GitHub (developers)¶
If instead you plan to synchronize with github on a regular basis, you can also leave the downloaded code where it is and add the parent directory of the pyrpl folder to the PYTHONPATH environment variable as described in this thread: http://stackoverflow.com/questions/3402168/permanently-add-a-directory-to-pythonpath. For all beta-testers and developers, this is the preferred option. So the typical PYTHONPATH environment variable should look somewhat like this: $\texttt{PYTHONPATH=C:\OTHER_MODULE;C:\GITHUB\PYRPL}$
If you are experiencing problems with the dependencies on other python packages, executing the following command in the pyrpl directory might help:
$\texttt{python setup.py install develop}$
If at a later point, you have the impression that updates from github are not reflected in the program's behavior, try this:
In [ ]:
import pyrpl
print pyrpl.__file__
Should the directory not be the one of your local github installation, you might have an older version of pyrpl installed. Just delete any such directories other than your principal github clone and everything should work.
### Option 2: from GitHub using setuptools (beta version)¶
Download the code manually from https://github.com/lneuhaus/pyrpl/archive/master.zip and unzip it or get it directly from git by typing
$\texttt{git clone https://github.com/lneuhaus/pyrpl.git YOUR_DESTINATIONFOLDER}$
In a command line shell, navigate into your new local pyrplockbox directory and execute
$\texttt{python setup.py install}$
This copies the files into the side-package directory of python. The setup should make sure that you have the python libraries paramiko (http://www.paramiko.org/installing.html) and scp (https://pypi.python.org/pypi/scp) installed. If this is not the case you will get a corresponding error message in a later step of this tutorial.
### Option 1: with pip (coming soon)¶
If you have pip correctly installed, executing the following line in a command line should install pyrplockbox and all dependencies:
$\texttt{pip install pyrpl}$
In [ ]:
!pip install pyrpl #if you look at this file in ipython notebook, just execute this cell to install pyrplockbox
### Compiling the server application (optional)¶
The software comes with a precompiled version of the server application (written in C) that runs on the RedPitaya. This application is uploaded automatically when you start the connection. If you made changes to this file, you can recompile it by typing
$\texttt{python setup.py compile_server}$
For this to work, you must have gcc and the cross-compiling libraries installed. Basically, if you can compile any of the official RedPitaya software written in C, then this should work, too. If you do not have a working cross-compiler installed on your UserPC, you can also compile directly on the RedPitaya (tested with ecosystem v0.95). To do so, you must upload the directory pyrpl/monitor_server on the redpitaya, and launch the compilation with the command $\texttt{make CROSS_COMPILE=}$
### Compiling the FPGA bitfile (optional)¶
If you would like to modify the FPGA code or just make sure that it can be compiled, you should have a working installation of Vivado 2015.4. For windows users it is recommended to set up a virtual machine with Ubuntu on which the compiler can be run in order to avoid any compatibility problems. For the FPGA part, you only need the /fpga subdirectory of this software. Make sure it is somewhere in the file system of the machine with the vivado installation. Then type the following commands. You should adapt the path in the first and second commands to the locations of the Vivado installation / the fpga directory in your filesystem:
$\texttt{source /opt/Xilinx/Vivado/2015.4/settings64.sh}$
$\texttt{cd /home/myusername/fpga}$
$\texttt{make}$
The compilation should take between 15 and 30 minutes. The result will be the file $\texttt{fpga/red_pitaya.bin}$. To test the new FPGA design, make sure that this file in the fpga subdirectory of your pyrpl code directory. That is, if you used a virtual machine for the compilation, you must copy the file back to the original machine on which you run pyrpl.
### Unitary tests (optional)¶
In order to make sure that any recent changes do not affect prior functionality, a large number of automated tests have been implemented. Every push to the github repository is automatically installed and tested on an empty virtual linux system. However, the testing server currently has no RedPitaya available to run tests directly on the FPGA. Therefore it is also useful to run these tests on your local machine in case you modified the code.
Currently, the tests confirm that
• all pyrpl modules can be loaded in python
• all designated registers can be read and written
• future: functionality of all major submodules against reference benchmarks
To run the test, navigate in command line into the pyrpl directory and type
$\texttt{set REDPITAYA=192.168.1.100}$ (in windows) or
$\texttt{export REDPITAYA=192.168.1.100}$ (in linux)
$\texttt{python setup.py nosetests}$
The first command tells the test at which IP address it can find a RedPitaya. The last command runs the actual test. After a few seconds, there should be some output saying that the software has passed more than 140 tests.
After you have implemented additional features, you are encouraged to add unitary tests to consolidate the changes. If you immediately validate your changes with unitary tests, this will result in a huge productivity improvement for you. You can find all test files in the folder $\texttt{pyrpl/pyrpl/test}$, and the existing examples (notably $\texttt{test_example.py}$) should give you a good point to start. As long as you add a function starting with 'test_' in one of these files, your test should automatically run along with the others. As you add more tests, you will see the number of total tests increase when you run the test launcher.
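For illustration only, a new test function could look roughly like the sketch below. The name starting with 'test_' is what matters; the way the RedPitaya object is obtained here (directly from the REDPITAYA environment variable) is an assumption and should be adapted to how the existing tests in $\texttt{test_example.py}$ acquire their connection.

# hypothetical extra test: write a value to the LED register and read it back
def test_led_register_roundtrip():
    import os
    from pyrpl import RedPitaya
    r = RedPitaya(hostname=os.environ["REDPITAYA"])  # same variable the test launcher uses
    for value in (0, 0b10101010, 255):
        r.hk.led = value
        assert r.hk.led == value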
### Workflow to submit code changes (for developers)¶
As soon as the code has reached version 0.9.0.3 (high-level unitary tests implemented and passing, approx. end of May 2016), we will consider the master branch of the github repository as the stable pre-release version. The goal is that the master branch will guarantee functionality at all times.
Any changes to the code, if they do not pass the unitary tests or have not been tested, are to be submitted as pull-requests in order not to endanger the stability of the master branch. We will briefly describe how to properly submit your changes in that scenario.
Let's say you already changed the code of your local clone of pyrpl. Instead of directly committing the change to the master branch, you should create your own branch. In the windows application of github, when you are looking at the pyrpl repository, there is a small symbol looking like a street bifurcation in the upper left corner, that says "Create new branch" when you hold the cursor over it. Click it and enter the name of your branch "leos development branch" or similar. The program will automatically switch to that branch. Now you can commit your changes, and then hit the "publish" or "sync" button in the upper right. That will upload your changes so everyone can see and test them.
You can continue working on your branch, add more commits and sync them with the online repository until your change is working. If the master branch has changed in the meantime, just click 'sync' to download them, and then the button "update from master" (upper left corner of the window) that will insert the most recent changes of the master branch into your branch. If the button doesn't work, that means that there are no changes available. This way you can benefit from the updates of the stable pre-release version, as long as they don't conflict with the changes you have been working on. If there are conflicts, github will wait for you to resolve them. In case you have been recompiling the fpga, there will always be a conflict w.r.t. the file 'red_pitaya.bin' (since it is a binary file, github cannot simply merge the differences you implemented). The best way to deal with this problem is to recompile the fpga bitfile after the 'update from master'. This way the binary file in your repository will correspond to the fpga code of the merged verilog files, and github will understand from the most recent modification date of the file that your local version of red_pitaya.bin is the one to keep.
At some point, you might want to insert your changes into the master branch, because they have been well-tested and are going to be useful for everyone else, too. To do so, after having committed and synced all recent changes to your branch, click on "Pull request" in the upper right corner, enter a title and description concerning the changes you have made, and click "Send pull request". Now your job is done. I will review and test the modifications of your code once again, possibly fix incompatibility issues, and merge it into the master branch once all is well. After the merge, you can delete your development branch. If you plan to continue working on related changes, you can also keep the branch and send pull requests later on. If you plan to work on a different feature, I recommend you create a new branch with a name related to the new feature, since this will make the evolution history of the feature more understandable for others. Or, if you would like to go back to following the master branch, click on the little downward arrow besides the name of your branch close to the street bifurcation symbol in the upper left of the github window. You will be able to choose which branch to work on, and to select master.
Let's all try to stick to this protocol. It might seem a little complicated at first, but you will quickly appreciate the fact that other people's mistakes won't be able to endanger your working code, and that by following the commits of the master branch alone, you will realize if an update is incompatible with your work.
## 4) First steps¶
If the installation went well, you should now be able to load the package in python. If that works you can pass directly to the next section 'Connecting to the RedPitaya'.
In [ ]:
from pyrpl import RedPitaya
Sometimes, python has problems finding the path to pyrplockbox. In that case you should add the pyrplockbox directory to your pythonpath environment variable (http://stackoverflow.com/questions/3402168/permanently-add-a-directory-to-pythonpath). If you do not know how to do that, just manually navigate the ipython console to the directory, for example:
In [ ]:
cd c:\lneuhaus\github\pyrpl
Now retry to load the module. It should really work now.
In [ ]:
from pyrpl import RedPitaya
### Connecting to the RedPitaya¶
You should have a working SD card (any version of the SD card content is okay) in your RedPitaya (for instructions see http://redpitaya.com/quick-start/). The RedPitaya should be connected via ethernet to your computer. To set this up, there is plenty of instructions on the RedPitaya website (http://redpitaya.com/quick-start/). If you type the ip address of your module in a browser, you should be able to start the different apps from the manufacturer. The default address is http://192.168.1.100. If this works, we can load the python interface of pyrplockbox by specifying the RedPitaya's ip address.
In [ ]:
HOSTNAME = "192.168.1.100"
In [ ]:
from pyrpl import RedPitaya
r = RedPitaya(hostname=HOSTNAME)
If you see at least one '>' symbol, your computer has successfully connected to your RedPitaya via SSH. This means that your connection works. The message 'Server application started on port 2222' means that your computer has successfully installed and started a server application on your RedPitaya. Once you get 'Client started with success', your python session has successfully connected to that server and all things are in place to get started.
### Basic communication with your RedPitaya¶
In [ ]:
#check the value of input1
print r.scope.voltage1
With the last command, you have successfully retrieved a value from an FPGA register. This operation takes about 300 µs on my computer. So there is enough time to repeat the reading n times.
In [ ]:
#see how the adc reading fluctuates over time
import time
from matplotlib import pyplot as plt
times,data = [],[]
t0 = time.time()
n = 3000
for i in range(n):
times.append(time.time()-t0)
data.append(r.scope.voltage1)
print "Rough time to read one FPGA register: ", (time.time()-t0)/n*1e6, "µs"
%matplotlib inline
f, axarr = plt.subplots(1,2, sharey=True)
axarr[0].plot(times, data, "+");
axarr[1].hist(data, bins=10,normed=True, orientation="horizontal");
You see that the input values are not exactly zero. This is normal with all RedPitayas as some offsets are hard to keep zero when the environment changes (temperature etc.). So we will have to compensate for the offsets with our software. Another thing is that you see quite a bit of scatter between the points - almost so much that you cannot see that the datapoints are quantized. The conclusion here is that the input noise is typically not totally negligible. Therefore we will need to use every trick at hand to get optimal noise performance.
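To put rough numbers on the offset and scatter discussed above, one can take the mean and standard deviation of the samples collected in the previous cell (a small added sketch; it assumes the 'data' list is still defined):

In [ ]:

# quantify the offset and scatter of the readings collected above
import numpy as np
samples = np.array(data)
print "Mean offset [V]:", samples.mean()   # systematic ADC offset
print "RMS scatter [V]:", samples.std()    # input noise over the measurement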
After reading from the RedPitaya, let's now try to write to the register controlling the first 8 yellow LED's on the board. The number written to the LED register is displayed on the LED array in binary representation. You should see some fast flashing of the yellow leds for a few seconds when you execute the next block.
In [ ]:
#blink some leds for 5 seconds
from time import sleep
for i in range(1025):
r.hk.led=i
sleep(0.005)
In [ ]:
# now feel free to play around a little to get familiar with binary representation by looking at the leds.
from time import sleep
r.hk.led = 0b00000001
for i in range(10):
r.hk.led = ~r.hk.led>>1
sleep(0.2)
In [ ]:
import random
for i in range(100):
r.hk.led = random.randint(0,255)
sleep(0.02)
## 5) RedPitaya modules¶
Let's now look a bit closer at the class RedPitaya. Besides managing the communication with your board, it contains different modules that represent the different sections of the FPGA. You already encountered two of them in the example above: "hk" and "scope". Here is the full list of modules:
In [ ]:
r.hk #"housekeeping" = LEDs and digital inputs/outputs
r.ams #"analog mixed signals" = auxiliary ADCs and DACs.
r.scope #oscilloscope interface
r.asg1 #"arbitrary signal generator" channel 1
r.asg2 #"arbitrary signal generator" channel 2
r.pid0 #first of four PID modules
r.pid1
r.pid2
r.pid3
r.iq0 #first of three I+Q quadrature demodulation/modulation modules
r.iq1
r.iq2
r.iir #"infinite impulse response" filter module that can realize complex transfer functions
### Arbitrary Signal Generator¶
There are two Arbitrary Signal Generator modules: asg1 and asg2. For these modules, any waveform composed of $2^{14}$ programmable points is sent to the output with arbitrary frequency and start phase upon a trigger event.
In [ ]:
asg = r.asg1 # make a shortcut
print "Trigger sources:", asg.trigger_sources
print "Output options: ", asg.output_directs
Let's set up the ASG to output a sawtooth signal of amplitude 0.8 V (peak-to-peak 1.6 V) at 200 kHz on output 2:
In [ ]:
asg.output_direct = 'out2'
asg.setup(waveform='halframp', frequency=20e4, amplitude=0.8, offset=0, trigger_source='immediately')
### Oscilloscope¶
The scope works similar to the ASG but in reverse: Two channels are available. A table of $2^{14}$ datapoints for each channel is filled with the time series of incoming data. Downloading a full trace takes about 10 ms over standard ethernet. The rate at which the memory is filled is the sampling rate (125 MHz) divided by the value of 'decimation'. The property 'average' decides whether each datapoint is a single sample or the average of all samples over the decimation interval.
In [ ]:
s = r.scope # shortcut
print "Available decimation factors:", s.decimations
print "Trigger sources:", s.trigger_sources
print "Available inputs: ", s.inputs
Let's have a look at a signal generated by asg1. Later we will use convenience functions to reduce the amount of code necessary to set up the scope:
In [ ]:
from time import sleep
from pyrpl import RedPitaya
#r = RedPitaya(hostname="192.168.1.100")
asg = r.asg1
s = r.scope
# turn off asg so the scope has a chance to measure its "off-state" as well
asg.output_direct = "off"
# setup scope
s.input1 = 'asg1'
# pass asg signal through pid0 with a simple integrator - just for fun (detailed explanations for pid will follow)
r.pid0.input = 'asg1'
r.pid0.ival = 0 # reset the integrator to zero
r.pid0.i = 1000 # unity gain frequency of 1000 hz
r.pid0.p = 1.0 # proportional gain of 1.0
r.pid0.inputfilter = [0,0,0,0] # leave input filter disabled for now
# show pid output on channel2
s.input2 = 'pid0'
# trig at zero volt crossing
s.threshold_ch1 = 0
# positive/negative slope is detected by waiting for input to
# sweep through hysteresis around the trigger threshold in
# the right direction
s.hysteresis_ch1 = 0.01
# trigger on the input signal positive slope
s.trigger_source = 'ch1_positive_edge'
# take data symmetrically around the trigger event
s.trigger_delay = 0
# set decimation factor to 64 -> full scope trace is 8ns * 2^14 * decimation = 8.4 ms long
s.decimation = 64
# setup the scope for an acquisition
s.setup()
print "\nBefore turning on asg:"
# turn on asg and leave enough time for the scope to record the data
asg.setup(frequency=1e3, amplitude=0.3, start_phase=90, waveform='halframp', trigger_source='immediately')
sleep(0.010)
# check that the trigger has been disarmed
print "\nAfter turning on asg:"
print "Trigger event age [ms]:",8e-9*((s.current_timestamp&0xFFFFFFFFFFFFFFFF) - s.trigger_timestamp)*1000
# plot the data
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(s.times*1e3,s.curve(ch=1),s.times*1e3,s.curve(ch=2));
plt.xlabel("Time [ms]");
plt.ylabel("Voltage");
What do we see? The blue trace for channel 1 shows just the output signal of the asg. The time=0 corresponds to the trigger event. One can see that the trigger was not activated by the constant signal of 0 at the beginning, since it did not cross the hysteresis interval. One can also see a 'bug': After setting up the asg, it outputs the first value of its data table until its waveform output is triggered. For the halframp signal, as it is implemented in pyrpl, this is the maximally negative value. However, we passed the argument start_phase=90 to the asg.setup function, which shifts the first point by a quarter period. Can you guess what happens when we set start_phase=180? You should try it out!
In green, we see the same signal, filtered through the pid module. The nonzero proportional gain leads to instantaneous jumps along with the asg signal. The integrator is responsible for the constant decrease rate at the beginning, and for the low-pass filtering that smooths the asg waveform a little. One can also foresee that, if we are not paying attention, too large an integrator gain will quickly saturate the outputs.
In [ ]:
# useful functions for scope diagnostics
print "Trigger source:",s.trigger_source
print "Trigger threshold [V]:",s.threshold_ch1
print "Averaging:",s.average
print "Trigger delay [s]:",s.trigger_delay
print "Trace duration [s]: ",s.duration
print "Trigger hysteresis [V]", s.hysteresis_ch1
print "Current scope time [cycles]:",hex(s.current_timestamp)
print "Trigger time [cycles]:",hex(s.trigger_timestamp)
print "Current voltage on channel 1 [V]:", r.scope.voltage1
print "First point in data buffer 1 [V]:", s.ch1_firstpoint
### PID module¶
We have already seen some use of the pid module above. There are four PID modules available: pid0 to pid3.
In [ ]:
print r.pid0.help()
#### Proportional and integral gain¶
In [ ]:
#make shortcut
pid = r.pid0
#turn off by setting gains to zero
pid.p,pid.i = 0,0
print "P/I gain when turned off:", pid.i,pid.p
In [ ]:
# small nonzero numbers set gain to minimum value - avoids rounding off to zero gain
pid.p = 1e-100
pid.i = 1e-100
print "Minimum proportional gain: ",pid.p
print "Minimum integral unity-gain frequency [Hz]: ",pid.i
In [ ]:
# saturation at maximum values
pid.p = 1e100
pid.i = 1e100
print "Maximum proportional gain: ",pid.p
print "Maximum integral unity-gain frequency [Hz]: ",pid.i
#### Control with the integral value register¶
In [ ]:
import numpy as np
#make shortcut
pid = r.pid0
# set input to asg1
pid.input = "asg1"
# set asg to constant 0.1 Volts
r.asg1.setup(waveform="DC", offset = 0.1)
# set scope ch1 to pid0
r.scope.input1 = 'pid0'
#turn off the gains for now
pid.p,pid.i = 0, 0
#set integral value to zero
pid.ival = 0
#prepare data recording
from time import time
times, ivals, outputs = [], [], []
# turn on integrator to whatever negative gain
pid.i = -10
# set integral value above the maximum positive voltage
pid.ival = 1.5
#take 1000 points - jitter of the ethernet delay will add noise here but we don't care
for n in range(1000):
times.append(time())
ivals.append(pid.ival)
outputs.append(r.scope.voltage1)
#plot
import matplotlib.pyplot as plt
%matplotlib inline
times = np.array(times)-min(times)
plt.plot(times,ivals,times,outputs);
plt.xlabel("Time [s]");
plt.ylabel("Voltage");
Again, what do we see? We set up the pid module with a constant (positive) input from the ASG. We then turned on the integrator (with negative gain), which inevitably leads to a slow drift of the output towards negative voltages (blue trace). We had set the integral value above the positive saturation voltage, so that it takes longer until it reaches the negative saturation voltage. The output of the pid module is bound to saturate at +-1 V, which is clearly visible in the green trace. The integral value is internally represented by a 32-bit number, so it can take practically arbitrarily large values compared to the 14-bit output. You can set it within the range from +4 to -4 V, for example if you want to exploit the resulting delay, or if you want to compensate it with the proportional gain.
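As a rough cross-check of the trace above (assuming, as the register names suggest, that pid.i is the integrator unity-gain frequency in Hz), the integral value should drift at a rate of about 2*pi*i*(input - setpoint):
In [ ]:
# expected drift rate of the integral value for the settings used above
import numpy as np
i_gain = -10.0   # integral unity-gain frequency [Hz]
error = 0.1      # constant asg input [V] minus setpoint (0 V)
print("expected ival drift: %.1f V/s" % (2*np.pi*i_gain*error))  # about -6.3 V/s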
#### Input filters¶
The pid module has one more feature: a bank of 4 input filters in series. Each filter can be either off (bandwidth=0), a lowpass (bandwidth positive) or a highpass (bandwidth negative). Due to the way the filters are implemented, the cutoff frequencies can only take values that scale as powers of 2.
In [ ]:
# off by default
r.pid0.inputfilter
In [ ]:
# minimum cutoff frequency is 2 Hz, maximum 77 kHz (for now)
r.pid0.inputfilter = [1,1e10,-1,-1e10]
print r.pid0.inputfilter
In [ ]:
# not setting a coefficient turns that filter off
r.pid0.inputfilter = [0,4,8]
print r.pid0.inputfilter
In [ ]:
# setting without list also works
r.pid0.inputfilter = -2000
print r.pid0.inputfilter
In [ ]:
# turn off again
r.pid0.inputfilter = []
print r.pid0.inputfilter
You should now go back to the Scope and ASG example above and play around with the setting of these filters to convince yourself that they do what they are supposed to.
### IQ module¶
Demodulation of a signal means multiplying it with a sine and a cosine at the 'carrier frequency'. The two resulting signals are usually low-pass filtered and called 'quadrature I' and 'quadrature Q'. Based on this simple idea, the IQ module of pyrpl can implement several functionalities, depending on the particular setting of the various registers. In most cases, the configuration can be completely carried out through the setup function of the module.
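To make the idea concrete, here is the same operation done offline on a toy signal with numpy - multiply by a complex carrier (i.e. by the sine and cosine) and 'low-pass' by averaging. This is only an illustration, not pyrpl code:
In [ ]:
import numpy as np
fs = 125e6                                  # sampling rate [Hz]
fc = 25e6                                   # carrier frequency [Hz]
t = np.arange(2**14) / fs
signal = 0.5 * np.cos(2*np.pi*fc*t + 0.3)   # toy input: 0.5 V amplitude, 0.3 rad phase shift
z = 2 * np.mean(signal * np.exp(-2j*np.pi*fc*t))  # I = z.real, Q = z.imag after the 'low-pass'
print("amplitude %.3f V, phase %.3f rad" % (abs(z), np.angle(z)))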
#### Lock-in detection / PDH / synchronous detection¶
In [ ]:
#reload to make sure settings are default ones
from pyrpl import RedPitaya
r = RedPitaya(hostname="192.168.1.100")
#shortcut
iq = r.iq0
# modulation/demodulation frequency 25 MHz
# two lowpass filters with 10 and 20 kHz bandwidth
# input AC-coupled with cutoff frequency near 50 kHz
# modulation amplitude 0.5 V
# modulation goes to out1
# output_signal is the demodulated quadrature 1
# quadrature_1 is amplified by 10
iq.setup(frequency=25e6, bandwidth=[10e3,20e3], gain=0.0,
phase=0, acbandwidth=50000, amplitude=0.5,
input='adc1', output_direct='out1',
output_signal='quadrature', quadrature_factor=10)  # input/output arguments assumed to match the description above - the original call was cut off here
After this setup, the demodulated quadrature is available as the output_signal of iq0, and can serve for example as the input of a PID module to stabilize the frequency of a laser to a reference cavity. The module was tested and is in daily use in our lab. Frequencies as low as 20 Hz and as high as 50 MHz have been used for this technique. At the present time, the functionality of a PDH-like detection as the one set up above cannot be conveniently tested internally. We plan to upgrade the IQ-module to VCO functionality in the near future, which will also enable testing the PDH functionality.
#### Network analyzer¶
When implementing complex functionality in the RedPitaya, the network analyzer module is by far the most useful tool for diagnostics. The network analyzer is able to probe the transfer function of any other module or external device by exciting the device with a sine of variable frequency and analyzing the resulting output from that device. This is done by demodulating the device output (=network analyzer input) with the same sine that was used for the excitation and a corresponding cosine, lowpass-filtering, and averaging the two quadratures for a well-defined number of cycles. From the two quadratures, one can extract the magnitude and phase shift of the device's transfer function at the probed frequencies. Let's illustrate the behaviour. For this example, you should connect output 1 to input 1 of your RedPitaya, such that we can compare the analog transfer function to a reference. Make sure you put a 50 Ohm terminator in parallel with input 1.
In [ ]:
# shortcut for na
na = r.na
na.iq_name = 'iq1'
# first curve: purely internal reference (iq1 demodulating its own excitation)
f, iq1, amplitudes = na.curve(start=1e3,stop=62.5e6,points=1001,rbw=1000,avg=1,amplitude=0.2,input='iq1',output_direct='off', acbandwidth=0)
# second curve through the analog loop out1 -> in1 (this call is assumed - it is needed to define 'adc1' for the plot below)
f, adc1, amplitudes = na.curve(start=1e3,stop=62.5e6,points=1001,rbw=1000,avg=1,amplitude=0.2,input='adc1',output_direct='out1', acbandwidth=0)
#plot
from pyrpl.iir import bodeplot
%matplotlib inline
bodeplot([(f, iq1, "iq1->iq1"), (f, adc1, "iq1->out1->in1->iq1")], xlog=True)
If your cable is properly connected, you will see that both magnitudes are near 0 dB over most of the frequency range. Near the Nyquist frequency (62.5 MHz), one can see that the internal signal remains flat while the analog signal is strongly attenuated, as it should be to avoid aliasing. One can also see that the delay (phase lag) of the internal signal is much less than the one through the analog signal path.
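Keep in mind that a pure propagation delay produces a phase lag that grows linearly with frequency, so the delay can be read off the phase plot directly; with made-up numbers:
In [ ]:
import numpy as np
tau = 200e-9                         # example propagation delay [s] (illustrative value only)
f = np.array([1e6, 10e6, 62.5e6])    # probe frequencies [Hz]
print("phase lag [deg]: %s" % (-360.0 * f * tau))   # -72, -720, -4500 degrees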
If you have executed the last example (PDH detection) in this python session, iq0 should still send a modulation to out1, which is added to the signal of the network analyzer, and sampled by input1. In this case, you should see a little peak near the PDH modulation frequency, which was 25 MHz in the example above.
#### Lorentzian bandpass filter¶
The iq module can also be used as a bandpass filter with continuously tunable phase. Let's measure the transfer function of such a bandpass with the network analyzer:
In [ ]:
# shortcut for na and bpf (bandpass filter)
na = r.na
na.iq_name = 'iq1'
bpf = r.iq2
# setup bandpass
bpf.setup(frequency = 2.5e6, #center frequency
Q=10.0, # the filter quality factor
acbandwidth = 10e5, # ac filter to remove pot. input offsets
phase=0, # nominal phase at center frequency (propagation phase lags not accounted for)
gain=2.0, # peak gain = +6 dB
output_direct='off',
output_signal='output_direct',
input='iq1')
# take transfer function
f, tf1, ampl = na.curve(start=1e5, stop=4e6, points=201, rbw=100, avg=3,
amplitude=0.2, input='iq2',output_direct='off')
# add a phase advance of 82.3 degrees and measure transfer function
bpf.phase = 82.3
f, tf2, ampl = na.curve(start=1e5, stop=4e6, points=201, rbw=100, avg=3,
amplitude=0.2, input='iq2',output_direct='off')
#plot
from pyrpl.iir import bodeplot
%matplotlib inline
bodeplot([(f, tf1, "phase = 0.0"), (f, tf2, "phase = %.1f"%bpf.phase)])
#### Frequency comparator module¶
To lock the frequency of a VCO (Voltage controlled oscillator) to a frequency reference defined by the RedPitaya, the IQ module contains the frequency comparator block. This is how you set it up. You have to feed the output of this module through a PID block to send it to the analog output. As you will see, if your feedback is not already enabled when you turn on the module, its integrator will rapidly saturate (-585 is the maximum value here, while a value of the order of 1e-3 indicates a reasonable frequency lock).
In [ ]:
iq = r.iq0
# turn off pfd module for settings
iq.pfd_on = False
# local oscillator frequency
iq.frequency = 33.7e6
# local oscillator phase
iq.phase = 0
iq.output_direct = 'off'
iq.output_signal = 'pfd'
print "Before turning on:"
print "Frequency difference error integral", iq.pfd_integral
print "After turning on:"
iq.pfd_on = True
for i in range(10):
print "Frequency difference error integral", iq.pfd_integral
### IIR module¶
Sometimes it is interesting to realize even more complicated filters. This is the case, for example, when a piezo resonance limits the maximum gain of a feedback loop. For these situations, the IIR module can implement filters with 'Infinite Impulse Response' (https://en.wikipedia.org/wiki/Infinite_impulse_response). It is your task to choose the filter to be implemented by specifying the complex values of its poles and zeros. In the current version of pyrpl, the IIR module can implement IIR filters with the following properties:
• strictly proper transfer function (number of poles > number of zeros)
• poles (zeros) either real or complex-conjugate pairs
• no three or more identical real poles (zeros)
• no two or more identical pairs of complex conjugate poles (zeros)
• pole and zero frequencies should be larger than $\frac{f_{\mathrm{nyquist}}}{1000}$ (but you can optimize the nyquist frequency of your filter by tuning the 'loops' parameter)
• the DC-gain of the filter must be 1.0. Despite the FPGA implementation being more flexible, we found this constraint rather practical. If you need different behaviour, pass the IIR signal through a PID module and use its input filter and proportional gain. If you still need different behaviour, the file iir.py is a good starting point.
• total filter order <= 16 (realizable with 8 parallel biquads)
• a remaining bug limits the dynamic range to about 30 dB before internal saturation interferes with filter performance
Filters whose poles have a positive real part are unstable, and zeros with a positive real part lead to non-minimum phase lag. Nevertheless, the IIR module will let you implement these filters.
In general the IIR module is still fragile, in the sense that you should verify the correct implementation of each filter you design. Usually you can trust the simulated transfer function. It is nevertheless a good idea to use the internal network analyzer module to actually measure the IIR transfer function with an amplitude comparable to the signal you expect to go through the filter, to verify that no saturation of internal filter signals limits its performance.
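Independently of the hardware, you can also preview the transfer function of a pole/zero design with scipy before loading it into the IIR module. The snippet below is a generic sketch: it treats the listed numbers as angular frequencies and normalizes the DC gain to 1, so it does not reproduce pyrpl's exact conventions.
In [ ]:
import numpy as np
from scipy import signal
zeros = [-4e4j - 300, +4e4j - 300]
poles = [-1e6, -5e4j - 300, +5e4j - 300]
k = np.prod(np.abs(poles)) / np.prod(np.abs(zeros))   # choose the gain such that |H(0)| = 1
f = np.logspace(4, 6.5, 301)                          # 10 kHz .. ~3 MHz
w, tf = signal.freqresp((zeros, poles, k), w=2*np.pi*f)
print("gain at 10 kHz: %.2f dB" % (20*np.log10(abs(tf[0]))))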
In [ ]:
#reload to make sure settings are default ones
from pyrpl import RedPitaya
r = RedPitaya(hostname="192.168.1.100")
#shortcut
iir = r.iir
#print docstring of the setup function
print iir.setup.__doc__
In [ ]:
#prepare plot parameters
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
#setup a complicated transfer function
zeros = [ -4e4j-300, +4e4j-300,-2e5j-1000, +2e5j-1000, -2e6j-3000, +2e6j-3000]
poles = [ -1e6, -5e4j-300, +5e4j-300, -1e5j-3000, +1e5j-3000, -1e6j-30000, +1e6j-30000]
designdata = iir.setup(zeros, poles, loops=None, plot=True);
print "Filter sampling frequency: ", 125./iir.loops,"MHz"
If you try changing a few coefficients, you will see that your designed filter is not always properly realized. The bottleneck here is the conversion from the analytical expression (poles and zeros) to the filter coefficients, not the FPGA performance. This conversion is (among other things) limited by floating point precision. We hope to provide a more robust algorithm in future versions. If you can obtain filter coefficients by another, preferably analytical method, this might lead to better results than our generic algorithm.
Let's check if the filter is really working as it is supposed to:
In [ ]:
# first thing to check if the filter is not ok
print "IIR overflows before:", bool(iir.overflow)
# measure tf of iir filter
r.iir.input = 'iq1'
f, tf, ampl = r.na.curve(iq_name='iq1', start=1e4, stop=3e6, points = 301, rbw=100, avg=1,
amplitude=0.1, input='iir', output_direct='off', logscale=True)
# first thing to check if the filter is not ok
print "IIR overflows after:", bool(iir.overflow)
#plot with design data
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
from pyrpl.iir import bodeplot
bodeplot(designdata +[(f,tf,"measured system")],xlog=True)
As you can see, the filter has trouble realizing large dynamic ranges. With the current standard design software, it takes some 'practice' to design transfer functions which are properly implemented by the code. While most zeros are properly realized by the filter, you see that the first two poles suffer from some kind of saturation. We are working on an automatic rescaling of the coefficients to allow for optimum dynamic range. From the overflow register printed above the plot, you can also see that the network analyzer scan caused an internal overflow in the filter. All of these are signs that different parameters should be tried.
A straightforward way to improve filter performance is to reduce the DC-gain and compensate for it later with the gain of a subsequent PID module. See for yourself what the parameter g=0.1 (instead of the default value g=1.0) does here:
In [ ]:
#rescale the filter by a 20 dB (10-fold) reduction of the DC gain
designdata = iir.setup(zeros,poles,g=0.1,loops=None,plot=False);
# first thing to check if the filter is not ok
print "IIR overflows before:", bool(iir.overflow)
# measure tf of iir filter
r.iir.input = 'iq1'
f, tf, ampl = r.iq1.na_trace(start=1e4, stop=3e6, points = 301, rbw=100, avg=1,
amplitude=0.1, input='iir', output_direct='off', logscale=True)
# first thing to check if the filter is not ok
print "IIR overflows after:", bool(iir.overflow)
#plot with design data
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
from pyrpl.iir import bodeplot
bodeplot(designdata+[(f,tf,"measured system")],xlog=True)
You see that we have improved the second peak (and avoided internal overflows) at the cost of increased noise in other regions. Of course this noise can be reduced by increasing the NA averaging time, but maybe that will be detrimental to your application? After all, IIR filter design is far from trivial, but this tutorial should have given you enough information to get started and maybe to improve the way we have implemented the filter in pyrpl (e.g. by implementing automated filter coefficient scaling).
If you plan to play more with the filter, these are the remaining internal iir registers:
In [ ]:
iir = r.iir
# useful diagnostic functions
print "IIR on:", iir.on
print "IIR bypassed:", iir.shortcut
print "IIR copydata:", iir.copydata
print "IIR loops:", iir.loops
print "IIR overflows:", bin(iir.overflow)
print iir.coefficients
# set the unity transfer function to the filter
iir._setup_unity()
## 6) The Pyrpl class¶
The RedPitayas in our lab are mostly used to stabilize one item or another in quantum optics experiments. To do so, the experimenter usually does not want to bother with the detailed implementation on the RedPitaya while trying to understand the physics going on in her/his experiment. For this situation, we have developed the Pyrpl class, which provides an API with high-level functions such as:
# optimal PDH lock with setpoint 0.1 cavity bandwidths away from resonance
cavity.lock(method='pdh',detuning=0.1)
# unlock the cavity
cavity.unlock()
# calibrate the fringe height of an interferometer, and lock it at local oscillator phase 45 degrees
interferometer.lock(phase=45.0)
### First attempts at locking¶
SECTION NOT READY YET, BECAUSE CODE NOT CLEANED YET
Now let's go for a first attempt to lock something. Say you connect the error signal (transmission or reflection) of your setup to input 1. Make sure that the peak-to-peak amplitude of the error signal coincides with the maximum voltage range the RedPitaya can handle (-1 to +1 V if the jumpers are set to LV). This is important for getting optimal noise performance. If your signal is too low, amplify it. If it is too high, you should build a voltage divider with 2 resistors of the order of a few kOhm (that way, the 1 MOhm input impedance of the RedPitaya does not interfere).
Next, connect output 1 to the standard actuator at hand, e.g. a piezo. Again, you should try to exploit the full -1 to +1 V output range. If the voltage at the actuator must be kept below 0.5 V for example, you should make another voltage divider for this. Make sure that you take the input impedance of your actuator into consideration here. If your output needs to be amplified, it is best practice to put the voltage divider after the amplifier, so as to also attenuate the noise added by the amplifier. However, when this poses a problem (limited bandwidth because of the capacitance of the actuator), you have to put the voltage divider before the amplifier. Also, this is the moment when you should think about low-pass filtering the actuator voltage. Because of DAC noise, analog low-pass filters are usually more effective than digital ones. A 3 dB bandwidth of the order of 100 Hz is a good starting point for most piezos.
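For reference, the 3 dB cutoff of a simple RC low-pass is 1/(2*pi*R*C); the component values below are only one example for the ~100 Hz starting point mentioned above:
In [ ]:
import numpy as np
R = 1e3      # 1 kOhm in series (example value)
C = 1.6e-6   # 1.6 uF to ground (example value)
print("3 dB bandwidth: %.0f Hz" % (1.0/(2*np.pi*R*C)))   # ~100 Hz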
You often need two actuators to control your cavity. This is because the output resolution of 14 bits can only realize 16384 different values. With a finesse of 15000, this would mean that you could only set the cavity to resonance or a linewidth away from it, but nothing in between. To solve this, use a coarse actuator that covers at least one free spectral range to bring you near the resonance, and a fine one whose range is 1000 or 10000 times smaller and which gives you plenty of resolution around the resonance. The coarse actuator should be strongly low-pass filtered (typical bandwidth of 1 Hz or even less); the fine actuator can have 100 Hz or even higher bandwidth. Do not get confused here: the unity-gain frequency of your final lock can be 10- or even 100-fold above the 3 dB bandwidth of the analog filter at the output - it suffices to increase the proportional gain of the RedPitaya lockbox.
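The arithmetic behind the two-actuator argument is easy to spell out (all numbers are taken from the paragraph above):
In [ ]:
# 14-bit output resolution versus cavity linewidth
levels = 2**14                 # distinct DAC values
finesse = 15000.0
step = 2.0 / levels            # output step in volts over the -1..+1 V range
print("output step: %.0f uV" % (step*1e6))
print("DAC steps per cavity linewidth (coarse actuator scanning one FSR): %.2f" % (levels/finesse))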
Once everything is connected, let's grab a PID module, make a shortcut to it and print its help string. All modules have a method help() which prints all available registers and their descriptions:
In [ ]:
pid = r.pid0
print pid.help()
pid.ival #bug: help forgets about pid.ival: current integrator value [volts]
We need to inform our RedPitaya about which connections we want to make. The cabling discussed above translates into:
In [ ]:
pid.input = 'adc1'
pid.output_direct = 'out1'
#see other available options just for curiosity:
print pid.inputs
print pid.output_directs
Finally, we need to define a setpoint. Let's first measure the offset when the laser is away from the resonance, and then measure or estimate how much light gets through on resonance.
In [ ]:
# turn on the laser
offresonant = r.scope.voltage1 #volts at analog input 1 with the unlocked cavity
In [ ]:
# make a guess of what voltage you will measure at an optical resonance
resonant = 0.0  # placeholder estimate - replace it with your own value, it is needed below
In [ ]:
# set the setpoint at relative reflection of 0.75 / rel. transmission of 0.25
pid.setpoint = 0.75*offresonant + 0.25*resonant
Now let's start to approach the resonance. We need to figure out from which side we are coming. The choice is made such that a simple integrator will naturally drift into the resonance and stay there:
In [ ]:
pid.i = 0 # make sure gain is off
pid.p = 0
if resonant > offresonant: # when we are away from resonance, error is negative.
slopesign = 1.0 # therefore, near resonance, the slope is positive as the error crosses zero.
else:
slopesign = -1.0
gainsign = -slopesign #the gain must be the opposite to stabilize
# the effective gain slopesign*gainsign will in any case be -1.
#Therefore we must start at the maximum positive voltage, so the negative effective gain leads to a decreasing output
pid.ival = 1.0 #sets the integrator value = output voltage to maximum
from time import sleep
sleep(1.0) #wait for the voltage to stabilize (adjust for a few times the lowpass filter bandwidth)
#finally, turn on the integrator
pid.i = gainsign * 0.1
In [ ]:
#with a bit of luck, this should work
from time import time
t0 = time()
while True:
relative_error = abs((r.scope.voltage1-pid.setpoint)/(offresonant-resonant))
if time()-t0 > 2: #diagnostics every 2 seconds
print "relative error:",relative_error
t0 = time()
if relative_error < 0.1:
break
sleep(0.01)
if pid.ival <= -1:
print "Resonance missed. Trying again slower.."
pid.ival = 1.2 #overshoot a little
pid.i /= 2
print "Resonance approch successful"
Questions to users: what parameters do you know?
finesse of the cavity? 1000
length? 1.57m
what error signals are available? direct transmission, AC reflection -> directly an analog PDH signal
are modulators available? n/a
what cavity length / laser frequency actuators are available? Mephisto PZT DC - 10 kHz, 48 MHz opt./V, RedPitaya voltage amplified x20
laser temperature: <1 Hz, 2.5 GHz/V, after the AOM
what is known about them (displacement, bandwidth, amplifiers)?
what analog filters are present? YAG PZT at 10 kHz
impose the design of the outputs
More to come
In [ ]:
from pyrpl import RedPitaya
r = RedPitaya(hostname="192.168.1.100")
#shortcut
iq = r.iq0
iq.setup(frequency=1000e3, bandwidth=[10e3,20e3], gain=0.0,
phase=0, acbandwidth=50000, amplitude=0.4,
output_direct='out1')  # output routing assumed - the original call is incomplete at this point
iq.frequency = 10
# shortcut for na
na = r.na
na.iq_name = "iq1"
# pid0 will be our device under test
pid = r.pid0
pid.input = 'iq1'
pid.i = 0
pid.ival = 0
pid.p = 1.0
pid.setpoint = 0
pid.inputfilter = []#[-1e3, 5e3, 20e3, 80e3]
# take the transfer function through pid0, this will take a few seconds...
x, y, ampl = na.curve(start=0,stop=200e3,points=101,rbw=100,avg=1,amplitude=0.5,input='iq1',output_direct='off', acbandwidth=0)
#plot
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
plt.plot(x*1e-3,np.abs(y)**2);
plt.xlabel("Frequency [kHz]");
plt.ylabel("|S21|");
In [ ]:
r.pid0.input = 'iq1'
r.pid0.output_direct='off'
r.iq2.input='iq1'
r.iq2.setup(0,bandwidth=0,gain=1.0,phase=0,acbandwidth=100,amplitude=0,input='iq1',output_direct='out1')
r.pid0.p=0.1
In [ ]:
r.iq2.frequency=1e6
In [ ]:
r.iq2._na_averages=125000000
In [ ]:
In [ ]:
r.iq0.output_direct='off'
In [ ]:
r.scope.input2='dac2'
In [ ]:
r.iq0.amplitude=0.5
In [ ]:
r.iq0.amplitude
## 7) The Graphical User Interface¶
Most of the modules described in section 5 can be controlled via a graphical user interface. The graphical window can be displayed with the following:
WARNING: For the GUI to work fine within an ipython session, the option --gui=qt has to be given to the command launching ipython. This makes sure that an event loop is running.
In [ ]:
# Make sure the notebook was launched with the following option:
# ipython notebook --pylab=qt
from pyrpl.gui import RedPitayaGui
r = RedPitayaGui(HOSTNAME)
r.gui()
The following window should open itself. Feel free to play with the button and tabs to start and stop the scope acquisition...
The window is composed of several tabs, each corresponding to a particular module. Since they generate a graphical output, the scope, network analyzer, and spectrum analyzer modules are very pleasant to use in GUI mode. For instance, the scope tab can be used to display the waveforms acquired by the RedPitaya scope in real time. Since the refresh rate is quite good, the scope tab can be used to perform optical alignments or to monitor transient signals as one would with a standalone scope.
### Subclassing RedPitayaGui to customize the GUI¶
It is often convenient to develop a GUI that relies heavily on the existing RedPitayaGui, but with a few more buttons or functionalities. In this case, the most convenient solution is to derive the RedPitayaGui class. The GUI is programmed using the PyQt4 framework. The full documentation of the framework can be found here: http://pyqt.sourceforge.net/Docs/PyQt4/. However, to quickly start in the right direction, a simple example of how to customize the GUI is given below. The following code shows how to add a few buttons at the bottom of the scope tab to switch the experiment between two states: scanning with asg1 / locking with pid0.
In [ ]:
from pyrpl.gui import RedPitayaGui
from PyQt4 import QtCore, QtGui
class RedPitayaGuiCustom(RedPitayaGui):
"""
This is the derived class containing our customizations
"""
def customize_scope(self): #This function is called upon object instantiation
"""
By overwriting this function in the child class, the user can perform custom initializations.
"""
self.scope_widget.layout_custom = QtGui.QHBoxLayout()
#Adds an horizontal layout for our extra-buttons
self.scope_widget.button_scan = QtGui.QPushButton("Scan")
# creates a button "Scan"
self.scope_widget.button_lock = QtGui.QPushButton("Lock")
# creates a button "Lock"
self.scope_widget.label_setpoint = QtGui.QLabel("Setpoint")
# creates a label for the setpoint spinbox
self.scope_widget.spinbox_setpoint = QtGui.QDoubleSpinBox()
# creates a spinbox to enter the value of the setpoint
self.scope_widget.spinbox_setpoint.setDecimals(4)
# sets the desired number of decimals for the spinbox
self.scope_widget.spinbox_setpoint.setSingleStep(0.001)
# Change the step by which the setpoint is incremented when using the arrows
# Adds the buttons and the spinbox to the layout
# (the explicit addWidget/addLayout calls below are assumed - in particular the 'main_layout' attribute name)
self.scope_widget.layout_custom.addWidget(self.scope_widget.button_scan)
self.scope_widget.layout_custom.addWidget(self.scope_widget.button_lock)
self.scope_widget.layout_custom.addWidget(self.scope_widget.label_setpoint)
self.scope_widget.layout_custom.addWidget(self.scope_widget.spinbox_setpoint)
# Adds the layout at the bottom of the scope layout
self.scope_widget.main_layout.addLayout(self.scope_widget.layout_custom)
self.scope_widget.button_scan.clicked.connect(self.scan)
self.scope_widget.button_lock.clicked.connect(self.lock)
self.scope_widget.spinbox_setpoint.valueChanged.connect(self.change_setpoint)
# connects the buttons to the desired functions
def custom_setup(self): #This function is also called upon object instantiation
"""
By overwriting this function in the child class, the user can perform custom initializations.
"""
#setup asg1 to output the desired ramp
self.asg1.offset = .5
self.asg1.scale = 0.5
self.asg1.waveform = "ramp"
self.asg1.frequency = 100
self.asg1.trigger_source = 'immediately'
#setup the scope to record approximately one period
self.scope.duration = 0.01
self.scope.input1 = 'dac1'
self.scope.input2 = 'dac2'
self.scope.trigger_source = 'asg1'
#automatically start the scope
self.scope_widget.run_continuous()
def change_setpoint(self):
"""
Directly reflects the value of the spinbox into the pid0 setpoint
"""
self.pid0.setpoint = self.scope_widget.spinbox_setpoint.value()
def lock(self): #Called when button lock is clicked
"""
Set up everything in "lock mode"
"""
# disable button lock
self.scope_widget.button_lock.setEnabled(False)
# enable button scan
self.scope_widget.button_scan.setEnabled(True)
# shut down the asg
self.asg1.output_direct = 'off'
# set pid input/outputs
self.pid0.output_direct = 'out2'
#set pid parameters
self.pid0.setpoint = self.scope_widget.spinbox_setpoint.value()
self.pid0.p = 0.1
self.pid0.i = 100
self.pid0.ival = 0
def scan(self): #Called when button scan is clicked
"""
Set up everything in "scan mode"
"""
# enable button lock
self.scope_widget.button_lock.setEnabled(True)
# disable button scan
self.scope_widget.button_scan.setEnabled(False)
# switch asg on
self.asg1.output_direct = 'out2'
#switch pid off
self.pid0.output_direct = 'off'
# Instantiate the class RedPitayaGuiCustom
r = RedPitayaGuiCustom(HOSTNAME)
# launch the gui
r.gui()
Now, a custom GUI with several extra buttons at the bottom of the scope tab should open. You can play with the buttons "Scan" and "Lock" and see the effect on the channels.
## 8) Using asynchronous functions with python 3¶
Pyrpl uses the Qt event loop to perform asynchronous tasks, but it has been set as the default loop of asyncio, such that you only need to learn how to use the standard python module asyncio, and you don't need to know anything about Qt. To give you a quick overview of what can be done, the following block presents an example of 2 tasks running in parallel. The first one mimics a temperature control loop, periodically measuring a signal every 1 s and changing the offset of an asg based on the measured value (we realize this way a slow and rudimentary software pid). In parallel, another task repeatedly shifts the frequency of an asg and measures an averaged spectrum on the spectrum analyzer.
Both tasks are defined by coroutines (a python function that is preceded by the keyword async, and that can contain the keyword await). Basically, the execution of each coroutine is suspended whenever the keyword await is encountered, giving other tasks the chance to be executed. It is only resumed once the value of the awaited coroutine becomes ready.
Finally, to execute the coroutines, it is not enough to call my_coroutine(), since we need to send the task to the event loop. For that, we use the function ensure_future from the asyncio module. This function immediately returns an object that is not the result of the task (not the object behind return inside the coroutine), but rather a Future object, which can be used to retrieve the actual result once it is ready (by calling future.result() later on).
If you are executing the code inside the ipython notebook, this is all you have to do, since an event loop is already running in the background (a qt event loop if you are using the option %pylab qt). Otherwise, you have to use one of the functions (LOOP.run_forever(), LOOP.run_until_complete(), or LOOP.run_in_executor()) to launch the event loop.
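Before the full example, here is a minimal stand-alone illustration of ensure_future and Future.result, with no pyrpl involved. It is written in the same pre-asyncio.run style as the rest of this section and is meant to be run as a plain script (inside the notebook the event loop is already running, so run_until_complete would not be needed there):
In [ ]:
import asyncio

async def answer_later():
    await asyncio.sleep(0.1)   # execution is suspended here; other tasks may run
    return 42

future = asyncio.ensure_future(answer_later())   # schedules the coroutine, returns a Future immediately
loop = asyncio.get_event_loop()
loop.run_until_complete(future)   # only needed when no event loop is running yet (plain scripts)
print(future.result())            # -> 42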
In [1]:
%pylab qt
from pyrpl import Pyrpl
p = Pyrpl('test') # we have to do something about the notebook initializations...
import asyncio
async def run_temperature_lock(setpoint=0.1): # coroutines can receive arguments
with p.asgs.pop("temperature") as asg: # use the context manager "with" to
# make sure the asg will be freed after the acquisition
asg.setup(frequency=0, amplitue=0, offset=0) # Use the asg as a dummy
while IS_TEMP_LOCK_ACTIVE: # The loop will run until this flag is manually changed to False
await asyncio.sleep(1) # Give way to other coroutines for 1 s
measured_temp = asg.offset # Dummy "temperature" measurement
asg.offset+= (setpoint - measured_temp)*0.1 # feedback with an integral gain
print("measured temp: ", measured_temp) # print the measured value to see how the execution flow works
async def run_n_fits(n): # a coroutine to launch n acquisitions
sa = p.spectrumanalyzer
with p.asgs.pop("fit_spectra") as asg: # use contextmanager again
asg.setup(output_direct='out1',
trigger_source='immediately')
freqs = [] # variables stay available all along the coroutine's execution
for i in range(n): # The coroutine will be suspended several times at the await statement inside this loop
asg.setup(frequency=1000*i) # Move the asg frequency
sa.setup(input=asg, avg=10, span=100e3, baseband=True) # setup the sa for the acquisition
spectrum = await sa.single_async() # wait for 10 averages to be ready
freq = sa.data_x[spectrum.argmax()] # take the max of the spectrum
freqs.append(freq) # append it to the result
print("measured peak frequency: ", freq) # print to show how the execution goes
return freqs # Once the execution is over, the Future will be filled with the result...
from asyncio import ensure_future, get_event_loop
IS_TEMP_LOCK_ACTIVE = True
temp_future = ensure_future(run_temperature_lock(0.5)) # send temperature control task to the eventloop
fits_future = ensure_future(run_n_fits(50)) # send spectrum measurement task to the eventloop
## add the following lines if you don't already have an event_loop configured in ipython
# LOOP = get_event_loop()
# LOOP.run_until_complete()
Populating the interactive namespace from numpy and matplotlib
INFO:pyrpl.redpitaya:>
INFO:pyrpl.redpitaya:>
INFO:pyrpl.redpitaya:>
INFO:pyrpl.redpitaya:Server application started on port 2222
INFO:pyrpl.modules:Filter sampling frequency is 25. MHz
INFO:pyrpl.modules:IIR anti-aliasing input filter set to: 0.0 MHz
C:\Users\Samuel\Documents\GitHub\pyrpl\pyrpl\hardware_modules\iir\iir.py:262: RuntimeWarning: invalid value encountered in double_scalars
reldev = maxdev / abs(self.iirfilter.coefficients.flatten()[np.argmax(dev)])
INFO:pyrpl.modules:Maximum deviation from design coefficients: 0 (relative: nan)
INFO:pyrpl.modules:IIR Overflow pattern: 0b0
INFO:pyrpl.redpitaya:Client started successfully.
INFO:pyrpl.redpitaya:Successfully connected to Redpitaya with hostname 10.214.1.28.
INFO:pyrpl.modules:Filter sampling frequency is 25. MHz
INFO:pyrpl.modules:IIR anti-aliasing input filter set to: 0.0 MHz
INFO:pyrpl.modules:Maximum deviation from design coefficients: 0 (relative: nan)
INFO:pyrpl.modules:IIR Overflow pattern: 0b0
WARNING:pyrpl.modules:Trying to load attribute amplitue of module asg1 that are invalid setup_attributes.
measured temp: 0.0
measured peak frequency: 0.0
measured temp: 0.0499267578125
measured temp: 0.0948486328125
measured peak frequency: 998.871152907
measured temp: 0.13525390625
measured temp: 0.171630859375
measured peak frequency: 1997.74230581
measured temp: 0.2044677734375
measured temp: 0.2340087890625
measured peak frequency: 2996.61345872
measured temp: 0.260498046875
measured temp: 0.284423828125
measured peak frequency: 4002.93887396
measured temp: 0.305908203125
measured temp: 0.3253173828125
measured peak frequency: 5001.81002687
measured temp: 0.3427734375
measured temp: 0.3583984375
measured peak frequency: 6000.68117978
measured temp: 0.37255859375
measured temp: 0.38525390625
measured peak frequency: 6999.55233268
measured temp: 0.396728515625
measured temp: 0.406982421875
measured peak frequency: 7998.42348559
measured temp: 0.416259765625
measured peak frequency: 8997.2946385
measured temp: 0.424560546875
measured temp: 0.4320068359375
measured peak frequency: 10003.6200537
measured temp: 0.438720703125
measured temp: 0.44482421875
measured peak frequency: 11002.4912066
measured temp: 0.4503173828125
measured temp: 0.4552001953125
In [2]:
IS_TEMP_LOCK_ACTIVE = False # hint: you can stop the spectrum acquisition task by pressing "pause" or "stop" in the spectrum analyzer tab of the GUI
print(fits_future.result())
---------------------------------------------------------------------------
InvalidStateError Traceback (most recent call last)
<ipython-input-2-731b5bc52817> in <module>()
1 IS_TEMP_LOCK_ACTIVE = False
----> 2 print(fits_future.result())
C:\Users\Samuel\Anaconda3\lib\asyncio\futures.py in result(self)
285 raise CancelledError
286 if self._state != _FINISHED:
--> 287 raise InvalidStateError('Result is not ready.')
288 self._log_traceback = False
289 if self._tb_logger is not None:
InvalidStateError: Result is not ready.
measured peak frequency: 12001.3623596
measured temp: 0.4595947265625
measured peak frequency: 13000.2335125
measured peak frequency: 13999.1046654
measured peak frequency: 14997.9758183
In [ ]:
https://menglish.tupaki.com/post/17520335/This-Politician-Tweet-Got-Misunderstood | # This Politician's Tweet Got Misunderstood
Sarcasm is a very difficult art. Many a time, it boomerangs if not understood properly. Not just that, it might even turn counter-productive. Former MP Konda Visweshwar Reddy must be learning this quite fast.
Recently, this TRS-turned-Congress leader and ace industrialist sent a tweet congratulating both KCR and KTR for their handling of Corona. The tweet was meant to heckle KCR and KTR for their failure to curb Corona. Reddy wanted the tweet to be laced with wit and sarcasm. However, the tweet was written in such a way that it had the opposite effect. The TRS, instead of being angry, was happy, and the Congress, instead of being happy, was shocked.
Reddy later realised that his tweet had badly misfired and made the TRS happy instead of angry. He had to issue a clarification in a hurry to tell everyone that he had wanted his tweet to make fun of the TRS. The Congress, it appears, is not very pleased with Reddy's tweet.
http://math.stackexchange.com/questions/14250/sl2-c-and-the-harmonic-oscillator?answertab=votes | # sl(2,C) and the harmonic oscillator
I've been studying the finite-dimensional representations of the Lie algebra sl(2,C). I've read that these representations are related to the harmonic oscillator and the associated raising and lowering operators, but I'm not really sure how they are related. I've read you can generate a new energy state from an old one, but I'm not really sure how it works, and the energy states do not differ by 2 like the eigenspaces do for sl(2,C). I'm having trouble finding references about this, especially ones that are accessible. If anyone knows where I can find more info, or has some knowledge of the material, I would greatly appreciate their help.
My understanding is that the eigenvalues of the representation of $\text{sl}(2,\mathbb{C})$ are integer valued but the standard physics convention has an additional factor of $\hbar/2$ which may fix your issue. – Alex Troesch Dec 14 '10 at 3:42
I'm guessing you're using the basis $X_+,X_-,H$ with $[H,X_+] = 2X_+, [H,X_-] = -2X_-, [X_+,X_-] = H$? Then I think in physics they use $\frac{1}{2}H$. So if $v$ is an eigenvector of $\frac{1}{2}H$ with value $r$ then $X_+ v$ is an eigenvector of $\frac{1}{2}H$ with value $r+1$. – Eric O. Korman Dec 14 '10 at 3:44
Err...it looks like the commutation relations in physics involve $[a_+, a_-] = 1$, so it's not quite the same since $H$ doesn't act as the identity in the rep space. – Eric O. Korman Dec 14 '10 at 3:51
Ryder section 2.3. Hall section 1.7. – Matt Calhoun Dec 14 '10 at 4:48
The Quantum Mechanical Harmonic Oscillator
If you look at the Wikipedia page on the quantum mechanical harmonic oscillator, you will see that it is not the Lie algebra $\mathfrak{sl}_2$ that enters the picture, but rather an algebra generated by two operators $a$ and $a^{\dagger}$ satisfying $[a,a^{\dagger}] = 1$ (see also Eric's comment above).
This is a familiar algebra in disguise, the so-called Weyl algebra of differential operators with polynomial coefficients $\mathbb C[x,\partial_x]$. (Note that $[\partial_x,x] = 1$, by the Leibniz rule.) The product $x\partial_x$ is the so-called Euler operator (it acts on $x^n$ as multiplication by $n$); you can see that in the harmonic oscillator interpretation, it acts as the Hamiltonian (up to a shift of $1/2$, to add non-zero groundstate energy, and a rescaling by $\hbar \omega$).
This algebra has a natural representation, namely on the space of polynomials $\mathbb C[x]$, and we see that this is isomorphic to the representation on the span of the eigenstates that occurs in the harmonic oscillator picture. (The polynomial $x^n$ corresponds to the eigenstate with energy $\hbar \omega(n + 1/2).$)
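To spell the dictionary out (just unwinding the identifications above): under $a \mapsto \partial_x$ and $a^{\dagger} \mapsto x$ one indeed has $[a,a^{\dagger}] = [\partial_x,x] = 1$, the number operator becomes the Euler operator,
$$N = a^{\dagger}a = x\partial_x, \qquad N\,x^n = n\,x^n,$$
and so $H = \hbar\omega(N + \tfrac12)$ acts on $x^n$ by the eigenvalue $\hbar\omega(n+\tfrac12)$ quoted above.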
The Lie algebra $\mathfrak sl_2$ in quantum mechanics
If my memory serves, the first time $\mathfrak sl_2$ entered the picture (in my undergrad QM class, at least) is in the analysis of the hydrogen atom. The point is that the Schrodinger equation in this case has a rotational symmetry, and differentiating this action, one gets an action of the Lie algebra of $SO(3)$ on the space of states. Complexifying, this gives an action of $\mathfrak sl_2$.
If you separate variables in the equation (using spherical coordinates, which are natural in view of the rotational symmetry), the radial part and the spherical part separate, the radial part is easily dealt with, and one is left studying a certain differential equation on the space of functions of the sphere $S^2$.
Mathematically, one is looking at $L^2(S^2)$, with its natural action of $\mathfrak sl_2$ (in the guise of the complexified Lie algebra of $SO(3)$). The Schrodinger equation involves the Laplacian, which is also the Casimir in the enveloping algebra of $\mathfrak sl_2$, and so if you fix the energy (eigenvalue of the Laplacian), one is trying to understand the corresponding eigenspace of the Casimir on $L^2(S^2)$. This is a special case of the Peter--Weyl theorem, which (in this case) also goes under the name of the theory of spherical harmonics.
The idea, roughly, is: $S^2 = SO(3)/SO(2)$. Now by Peter--Weyl, $L^2(SO(3))$ is the Hilbert space direct sum of $V\otimes V^*$, where $V$ runs over the irreps. of $SO(3)$. Thus $L^2(S^2)$ is the Hilbert space direct sum of $V\otimes (V^*)^{SO(2)}$. The space of $SO(2)$ invariants in each irrep. is one-dimensional, so in fact $L^2(S^2)$ is the Hilbert space direct sum of $V$, as $V$ runs over the irreps. of $SO(3)$.
Since $SO(3) = SU(2)/\{\pm 1\}$, these correspond on the Lie algebra level to the odd-dimensional irreps. of $\mathfrak sl_2$.
Each $V$ is an eigenspace for the Casimir (if $V$ has dimension $n$ then the Casimir eigenvalue is some quadratic expression in $n$ that you can easily figure out, or look up), and distinct $V$ give distinct eigenvalues.
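(For the record, since it is easy to state: if $\dim V = n = 2\ell+1$, then $V$ is spanned by the spherical harmonics $Y_\ell^m$, $-\ell \le m \le \ell$, and the Laplacian on the sphere, with the convention $\Delta = \mathrm{div}\circ\mathrm{grad}$, acts on $V$ by the scalar $-\ell(\ell+1) = -(n^2-1)/4$.)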
So, going back to the hydrogen atom, if you fix the energy of the electron, then the collection of states has dimension equal to the dimension of the corresponding $V$. (The fact that this is typically greater than one-dimensional is referred to in physics as degeneracy; the energy does not uniquely determine the state.) This has a natural basis corresponding to the $SO(2)$ eigenvectors. Physically, the derivative of this $SO(2)$ (the traditional element $H$ in the Lie algebra $\mathfrak sl_2$, perhaps up to some scaling, and often denoted $L_z$ in physics literature) is the operator corresponding to angular momentum around the axis in $S^2$ which is fixed by $SO(2)$ (often taken to be the $z$-axis, by convention, in the physics literature), and so the different eigenvectors for $SO(2)$ in the given $V$ are states with the same energy, but with different angular momenta around the $z$-axis.
Physically, one can apply a magnetic field along this axis which will then affect these different $L_z$-eigenstates differently (because its effect on the electron depends on the angular momentum of the electron around the field lines), and so one can split the degeneracy. This is the Zeeman effect. One sees the different energy lines in the spectrum of hydrogen split into a number of lines ($n$ lines when the energy is proportional to $n^2$).
I haven't looked, but this must be discussed in many, many places online, and is also in any standard intro. to quantum text. (But probably with less appearance of the symbols $L^2(S^2)$ and $\mathfrak sl_2$ then a mathematician would use!)
A nice reference for some of this are Peter Woit's notes: math.columbia.edu/~woit/LieGroups – BBischof Dec 15 '10 at 7:20
Weyl's The theory of groups and quantum mechanics is a charming place where to read this :) – Mariano Suárez-Alvarez Jan 19 '12 at 3:31
First, I think there is potentially more than one question here. The finite-dimensional version, addressed by Matt E, is about the hydrogen atom. The "harmonic oscillator" sometimes means a different thing, not about finite-dimensional repns of the Lie algebra $\mathfrak{sl}_2(\mathbb C)$, but about infinite-dimensional ones. There, there is interaction with repns of SO(n) for all n. (This is also an example of "Segal-Shale-Weil" repns, ... also existing over p-adic fields, and abstractly, after Weil.)
I'm sure there are other worked-out versions on-line, but at least I can offer my own worked-exercise, with (a mathematician's notion of) a bibliography: http://www.math.umn.edu/~garrett/m/v/oscillator_repn.pdf
https://beast2-dev.github.io/hmc/hmc/tracelog/fileName/ | BEAST 2 Help Me Choose
Trace log – file name
Name of the file, or stdout if left blank. File names can be parameterised, and there are a few built-in parameters:
• $(filebase) is replaced by the XML file name minus .xml
• $(seed) is replaced by the random number seed
So, if you set the file name to $(filebase)-$(seed).log and use a file beast.xml with seed 123 it saves the trace log in beast-123.log.
You can also define your own parameters, and run BEAST with the -D option to define the parameter. For instance, setting the file name to $(filebase)-$(run).log and running BEAST with
beast -D run=7 beast.xml
results in the trace file being written in beast-7.log.
http://www.wowhead.com/quest=41159/process-of-elimination
# Process of Elimination
Dig around Stormheim until you find a piece of the Titan Disc.
Eliminate Digsites
## Description
With so many places ta dig up fragments, it's become quite a pain figuring out where ta look for more of the titan disc.
I have a plan though. I need ye ta eliminate potential areas where the disc could exist so we can narrow down where in Stormheim it might exist.
You take part of the area an' I'll take the other part. Meet back here if ye find anythin'.
## Gains
Upon completion of this quest you will gain:
• 16,450 experience
## Series
1. Bits and Pieces
2. Process of Elimination
3. And Into the Fel Fire
4. Deciphering Demonology
5. The Purple Hills of Mac'Aree
6. The Relic Renewed
https://people.maths.bris.ac.uk/~matyd/GroupNames/480/Dic15s2D4.html
## G = Dic15⋊2D4, order 480 = 2^5·3·5
### 2nd semidirect product of Dic15 and D4 acting via D4/C2=C2^2
Series: Derived Chief Lower central Upper central
Derived series C1 — C2×C30 — Dic15⋊2D4
Chief series C1 — C5 — C15 — C30 — C2×C30 — D5×C2×C6 — C2×C15⋊D4 — Dic15⋊2D4
Lower central C15 — C2×C30 — Dic15⋊2D4
Upper central C1 — C22 — C2×C4
Generators and relations for Dic15⋊2D4
G = < a,b,c,d | a^30=c^4=d^2=1, b^2=a^15, bab^-1=a^-1, ac=ca, dad=a^11, cbc^-1=dbd=a^15 b, dcd=c^-1 >
Subgroups: 1004 in 188 conjugacy classes, 50 normal (44 characteristic)
C1, C2 [×3], C2 [×4], C3, C4 [×5], C22, C22 [×10], C5, S3 [×3], C6 [×3], C6, C2×C4, C2×C4 [×5], D4 [×6], C23 [×3], D5, C10 [×3], C10 [×3], Dic3 [×3], C12 [×2], D6 [×2], D6 [×5], C2×C6, C2×C6 [×3], C15, C22⋊C4 [×2], C4⋊C4, C22×C4, C2×D4 [×3], Dic5 [×4], C20, D10 [×3], C2×C10, C2×C10 [×7], C4×S3 [×2], D12 [×2], C2×Dic3 [×2], C3⋊D4 [×4], C2×C12, C2×C12, C22×S3 [×2], C22×C6, C5×S3 [×3], C3×D5, C30 [×3], C4⋊D4, C2×Dic5, C2×Dic5 [×4], C5⋊D4 [×4], C2×C20, C5×D4 [×2], C22×D5, C22×C10 [×2], Dic3⋊C4, D6⋊C4, C3×C22⋊C4, S3×C2×C4, C2×D12, C2×C3⋊D4 [×2], C3×Dic5, Dic15 [×2], Dic15, C60, C6×D5 [×3], S3×C10 [×2], S3×C10 [×5], C2×C30, C10.D4, D10⋊C4, C23.D5, C22×Dic5, C2×C5⋊D4 [×2], D4×C10, Dic3⋊D4, S3×Dic5 [×2], C15⋊D4 [×4], C6×Dic5, C5×D12 [×2], C2×Dic15 [×2], C2×C60, D5×C2×C6, S3×C2×C10 [×2], Dic5⋊D4, D6⋊Dic5, C3×D10⋊C4, C30.4Q8, C2×S3×Dic5, C2×C15⋊D4 [×2], C10×D12, Dic152D4
Quotients: C1, C2 [×7], C22 [×7], S3, D4 [×4], C23, D5, D6 [×3], C2×D4 [×2], C4○D4, D10 [×3], C22×S3, C4⋊D4, C5⋊D4 [×2], C22×D5, C4○D12, S3×D4 [×2], S3×D5, D4×D5, D42D5, C2×C5⋊D4, Dic3⋊D4, C2×S3×D5, Dic5⋊D4, D125D5, C20⋊D6, S3×C5⋊D4, Dic152D4
Smallest permutation representation of Dic15⋊2D4
On 240 points
Generators in S240
(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30)(31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60)(61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90)(91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120)(121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150)(151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180)(181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210)(211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240)
(1 53 16 38)(2 52 17 37)(3 51 18 36)(4 50 19 35)(5 49 20 34)(6 48 21 33)(7 47 22 32)(8 46 23 31)(9 45 24 60)(10 44 25 59)(11 43 26 58)(12 42 27 57)(13 41 28 56)(14 40 29 55)(15 39 30 54)(61 160 76 175)(62 159 77 174)(63 158 78 173)(64 157 79 172)(65 156 80 171)(66 155 81 170)(67 154 82 169)(68 153 83 168)(69 152 84 167)(70 151 85 166)(71 180 86 165)(72 179 87 164)(73 178 88 163)(74 177 89 162)(75 176 90 161)(91 147 106 132)(92 146 107 131)(93 145 108 130)(94 144 109 129)(95 143 110 128)(96 142 111 127)(97 141 112 126)(98 140 113 125)(99 139 114 124)(100 138 115 123)(101 137 116 122)(102 136 117 121)(103 135 118 150)(104 134 119 149)(105 133 120 148)(181 227 196 212)(182 226 197 211)(183 225 198 240)(184 224 199 239)(185 223 200 238)(186 222 201 237)(187 221 202 236)(188 220 203 235)(189 219 204 234)(190 218 205 233)(191 217 206 232)(192 216 207 231)(193 215 208 230)(194 214 209 229)(195 213 210 228)
(1 195 74 129)(2 196 75 130)(3 197 76 131)(4 198 77 132)(5 199 78 133)(6 200 79 134)(7 201 80 135)(8 202 81 136)(9 203 82 137)(10 204 83 138)(11 205 84 139)(12 206 85 140)(13 207 86 141)(14 208 87 142)(15 209 88 143)(16 210 89 144)(17 181 90 145)(18 182 61 146)(19 183 62 147)(20 184 63 148)(21 185 64 149)(22 186 65 150)(23 187 66 121)(24 188 67 122)(25 189 68 123)(26 190 69 124)(27 191 70 125)(28 192 71 126)(29 193 72 127)(30 194 73 128)(31 236 155 117)(32 237 156 118)(33 238 157 119)(34 239 158 120)(35 240 159 91)(36 211 160 92)(37 212 161 93)(38 213 162 94)(39 214 163 95)(40 215 164 96)(41 216 165 97)(42 217 166 98)(43 218 167 99)(44 219 168 100)(45 220 169 101)(46 221 170 102)(47 222 171 103)(48 223 172 104)(49 224 173 105)(50 225 174 106)(51 226 175 107)(52 227 176 108)(53 228 177 109)(54 229 178 110)(55 230 179 111)(56 231 180 112)(57 232 151 113)(58 233 152 114)(59 234 153 115)(60 235 154 116)
(1 129)(2 140)(3 121)(4 132)(5 143)(6 124)(7 135)(8 146)(9 127)(10 138)(11 149)(12 130)(13 141)(14 122)(15 133)(16 144)(17 125)(18 136)(19 147)(20 128)(21 139)(22 150)(23 131)(24 142)(25 123)(26 134)(27 145)(28 126)(29 137)(30 148)(31 107)(32 118)(33 99)(34 110)(35 91)(36 102)(37 113)(38 94)(39 105)(40 116)(41 97)(42 108)(43 119)(44 100)(45 111)(46 92)(47 103)(48 114)(49 95)(50 106)(51 117)(52 98)(53 109)(54 120)(55 101)(56 112)(57 93)(58 104)(59 115)(60 96)(61 202)(62 183)(63 194)(64 205)(65 186)(66 197)(67 208)(68 189)(69 200)(70 181)(71 192)(72 203)(73 184)(74 195)(75 206)(76 187)(77 198)(78 209)(79 190)(80 201)(81 182)(82 193)(83 204)(84 185)(85 196)(86 207)(87 188)(88 199)(89 210)(90 191)(151 212)(152 223)(153 234)(154 215)(155 226)(156 237)(157 218)(158 229)(159 240)(160 221)(161 232)(162 213)(163 224)(164 235)(165 216)(166 227)(167 238)(168 219)(169 230)(170 211)(171 222)(172 233)(173 214)(174 225)(175 236)(176 217)(177 228)(178 239)(179 220)(180 231)
G:=sub<Sym(240)| (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30)(31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90)(91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120)(121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150)(151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180)(181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210)(211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240), (1,53,16,38)(2,52,17,37)(3,51,18,36)(4,50,19,35)(5,49,20,34)(6,48,21,33)(7,47,22,32)(8,46,23,31)(9,45,24,60)(10,44,25,59)(11,43,26,58)(12,42,27,57)(13,41,28,56)(14,40,29,55)(15,39,30,54)(61,160,76,175)(62,159,77,174)(63,158,78,173)(64,157,79,172)(65,156,80,171)(66,155,81,170)(67,154,82,169)(68,153,83,168)(69,152,84,167)(70,151,85,166)(71,180,86,165)(72,179,87,164)(73,178,88,163)(74,177,89,162)(75,176,90,161)(91,147,106,132)(92,146,107,131)(93,145,108,130)(94,144,109,129)(95,143,110,128)(96,142,111,127)(97,141,112,126)(98,140,113,125)(99,139,114,124)(100,138,115,123)(101,137,116,122)(102,136,117,121)(103,135,118,150)(104,134,119,149)(105,133,120,148)(181,227,196,212)(182,226,197,211)(183,225,198,240)(184,224,199,239)(185,223,200,238)(186,222,201,237)(187,221,202,236)(188,220,203,235)(189,219,204,234)(190,218,205,233)(191,217,206,232)(192,216,207,231)(193,215,208,230)(194,214,209,229)(195,213,210,228), (1,195,74,129)(2,196,75,130)(3,197,76,131)(4,198,77,132)(5,199,78,133)(6,200,79,134)(7,201,80,135)(8,202,81,136)(9,203,82,137)(10,204,83,138)(11,205,84,139)(12,206,85,140)(13,207,86,141)(14,208,87,142)(15,209,88,143)(16,210,89,144)(17,181,90,145)(18,182,61,146)(19,183,62,147)(20,184,63,148)(21,185,64,149)(22,186,65,150)(23,187,66,121)(24,188,67,122)(25,189,68,123)(26,190,69,124)(27,191,70,125)(28,192,71,126)(29,193,72,127)(30,194,73,128)(31,236,155,117)(32,237,156,118)(33,238,157,119)(34,239,158,120)(35,240,159,91)(36,211,160,92)(37,212,161,93)(38,213,162,94)(39,214,163,95)(40,215,164,96)(41,216,165,97)(42,217,166,98)(43,218,167,99)(44,219,168,100)(45,220,169,101)(46,221,170,102)(47,222,171,103)(48,223,172,104)(49,224,173,105)(50,225,174,106)(51,226,175,107)(52,227,176,108)(53,228,177,109)(54,229,178,110)(55,230,179,111)(56,231,180,112)(57,232,151,113)(58,233,152,114)(59,234,153,115)(60,235,154,116), 
(1,129)(2,140)(3,121)(4,132)(5,143)(6,124)(7,135)(8,146)(9,127)(10,138)(11,149)(12,130)(13,141)(14,122)(15,133)(16,144)(17,125)(18,136)(19,147)(20,128)(21,139)(22,150)(23,131)(24,142)(25,123)(26,134)(27,145)(28,126)(29,137)(30,148)(31,107)(32,118)(33,99)(34,110)(35,91)(36,102)(37,113)(38,94)(39,105)(40,116)(41,97)(42,108)(43,119)(44,100)(45,111)(46,92)(47,103)(48,114)(49,95)(50,106)(51,117)(52,98)(53,109)(54,120)(55,101)(56,112)(57,93)(58,104)(59,115)(60,96)(61,202)(62,183)(63,194)(64,205)(65,186)(66,197)(67,208)(68,189)(69,200)(70,181)(71,192)(72,203)(73,184)(74,195)(75,206)(76,187)(77,198)(78,209)(79,190)(80,201)(81,182)(82,193)(83,204)(84,185)(85,196)(86,207)(87,188)(88,199)(89,210)(90,191)(151,212)(152,223)(153,234)(154,215)(155,226)(156,237)(157,218)(158,229)(159,240)(160,221)(161,232)(162,213)(163,224)(164,235)(165,216)(166,227)(167,238)(168,219)(169,230)(170,211)(171,222)(172,233)(173,214)(174,225)(175,236)(176,217)(177,228)(178,239)(179,220)(180,231)>;
G:=Group( (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30)(31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90)(91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120)(121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150)(151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180)(181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210)(211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240), (1,53,16,38)(2,52,17,37)(3,51,18,36)(4,50,19,35)(5,49,20,34)(6,48,21,33)(7,47,22,32)(8,46,23,31)(9,45,24,60)(10,44,25,59)(11,43,26,58)(12,42,27,57)(13,41,28,56)(14,40,29,55)(15,39,30,54)(61,160,76,175)(62,159,77,174)(63,158,78,173)(64,157,79,172)(65,156,80,171)(66,155,81,170)(67,154,82,169)(68,153,83,168)(69,152,84,167)(70,151,85,166)(71,180,86,165)(72,179,87,164)(73,178,88,163)(74,177,89,162)(75,176,90,161)(91,147,106,132)(92,146,107,131)(93,145,108,130)(94,144,109,129)(95,143,110,128)(96,142,111,127)(97,141,112,126)(98,140,113,125)(99,139,114,124)(100,138,115,123)(101,137,116,122)(102,136,117,121)(103,135,118,150)(104,134,119,149)(105,133,120,148)(181,227,196,212)(182,226,197,211)(183,225,198,240)(184,224,199,239)(185,223,200,238)(186,222,201,237)(187,221,202,236)(188,220,203,235)(189,219,204,234)(190,218,205,233)(191,217,206,232)(192,216,207,231)(193,215,208,230)(194,214,209,229)(195,213,210,228), (1,195,74,129)(2,196,75,130)(3,197,76,131)(4,198,77,132)(5,199,78,133)(6,200,79,134)(7,201,80,135)(8,202,81,136)(9,203,82,137)(10,204,83,138)(11,205,84,139)(12,206,85,140)(13,207,86,141)(14,208,87,142)(15,209,88,143)(16,210,89,144)(17,181,90,145)(18,182,61,146)(19,183,62,147)(20,184,63,148)(21,185,64,149)(22,186,65,150)(23,187,66,121)(24,188,67,122)(25,189,68,123)(26,190,69,124)(27,191,70,125)(28,192,71,126)(29,193,72,127)(30,194,73,128)(31,236,155,117)(32,237,156,118)(33,238,157,119)(34,239,158,120)(35,240,159,91)(36,211,160,92)(37,212,161,93)(38,213,162,94)(39,214,163,95)(40,215,164,96)(41,216,165,97)(42,217,166,98)(43,218,167,99)(44,219,168,100)(45,220,169,101)(46,221,170,102)(47,222,171,103)(48,223,172,104)(49,224,173,105)(50,225,174,106)(51,226,175,107)(52,227,176,108)(53,228,177,109)(54,229,178,110)(55,230,179,111)(56,231,180,112)(57,232,151,113)(58,233,152,114)(59,234,153,115)(60,235,154,116), 
(1,129)(2,140)(3,121)(4,132)(5,143)(6,124)(7,135)(8,146)(9,127)(10,138)(11,149)(12,130)(13,141)(14,122)(15,133)(16,144)(17,125)(18,136)(19,147)(20,128)(21,139)(22,150)(23,131)(24,142)(25,123)(26,134)(27,145)(28,126)(29,137)(30,148)(31,107)(32,118)(33,99)(34,110)(35,91)(36,102)(37,113)(38,94)(39,105)(40,116)(41,97)(42,108)(43,119)(44,100)(45,111)(46,92)(47,103)(48,114)(49,95)(50,106)(51,117)(52,98)(53,109)(54,120)(55,101)(56,112)(57,93)(58,104)(59,115)(60,96)(61,202)(62,183)(63,194)(64,205)(65,186)(66,197)(67,208)(68,189)(69,200)(70,181)(71,192)(72,203)(73,184)(74,195)(75,206)(76,187)(77,198)(78,209)(79,190)(80,201)(81,182)(82,193)(83,204)(84,185)(85,196)(86,207)(87,188)(88,199)(89,210)(90,191)(151,212)(152,223)(153,234)(154,215)(155,226)(156,237)(157,218)(158,229)(159,240)(160,221)(161,232)(162,213)(163,224)(164,235)(165,216)(166,227)(167,238)(168,219)(169,230)(170,211)(171,222)(172,233)(173,214)(174,225)(175,236)(176,217)(177,228)(178,239)(179,220)(180,231) );
G=PermutationGroup([(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30),(31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60),(61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90),(91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120),(121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150),(151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180),(181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210),(211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240)], [(1,53,16,38),(2,52,17,37),(3,51,18,36),(4,50,19,35),(5,49,20,34),(6,48,21,33),(7,47,22,32),(8,46,23,31),(9,45,24,60),(10,44,25,59),(11,43,26,58),(12,42,27,57),(13,41,28,56),(14,40,29,55),(15,39,30,54),(61,160,76,175),(62,159,77,174),(63,158,78,173),(64,157,79,172),(65,156,80,171),(66,155,81,170),(67,154,82,169),(68,153,83,168),(69,152,84,167),(70,151,85,166),(71,180,86,165),(72,179,87,164),(73,178,88,163),(74,177,89,162),(75,176,90,161),(91,147,106,132),(92,146,107,131),(93,145,108,130),(94,144,109,129),(95,143,110,128),(96,142,111,127),(97,141,112,126),(98,140,113,125),(99,139,114,124),(100,138,115,123),(101,137,116,122),(102,136,117,121),(103,135,118,150),(104,134,119,149),(105,133,120,148),(181,227,196,212),(182,226,197,211),(183,225,198,240),(184,224,199,239),(185,223,200,238),(186,222,201,237),(187,221,202,236),(188,220,203,235),(189,219,204,234),(190,218,205,233),(191,217,206,232),(192,216,207,231),(193,215,208,230),(194,214,209,229),(195,213,210,228)], [(1,195,74,129),(2,196,75,130),(3,197,76,131),(4,198,77,132),(5,199,78,133),(6,200,79,134),(7,201,80,135),(8,202,81,136),(9,203,82,137),(10,204,83,138),(11,205,84,139),(12,206,85,140),(13,207,86,141),(14,208,87,142),(15,209,88,143),(16,210,89,144),(17,181,90,145),(18,182,61,146),(19,183,62,147),(20,184,63,148),(21,185,64,149),(22,186,65,150),(23,187,66,121),(24,188,67,122),(25,189,68,123),(26,190,69,124),(27,191,70,125),(28,192,71,126),(29,193,72,127),(30,194,73,128),(31,236,155,117),(32,237,156,118),(33,238,157,119),(34,239,158,120),(35,240,159,91),(36,211,160,92),(37,212,161,93),(38,213,162,94),(39,214,163,95),(40,215,164,96),(41,216,165,97),(42,217,166,98),(43,218,167,99),(44,219,168,100),(45,220,169,101),(46,221,170,102),(47,222,171,103),(48,223,172,104),(49,224,173,105),(50,225,174,106),(51,226,175,107),(52,227,176,108),(53,228,177,109),(54,229,178,110),(55,230,179,111),(56,231,180,112),(57,232,151,113),(58,233,152,114),(59,234,153,115),(60,235,154,116)], 
[(1,129),(2,140),(3,121),(4,132),(5,143),(6,124),(7,135),(8,146),(9,127),(10,138),(11,149),(12,130),(13,141),(14,122),(15,133),(16,144),(17,125),(18,136),(19,147),(20,128),(21,139),(22,150),(23,131),(24,142),(25,123),(26,134),(27,145),(28,126),(29,137),(30,148),(31,107),(32,118),(33,99),(34,110),(35,91),(36,102),(37,113),(38,94),(39,105),(40,116),(41,97),(42,108),(43,119),(44,100),(45,111),(46,92),(47,103),(48,114),(49,95),(50,106),(51,117),(52,98),(53,109),(54,120),(55,101),(56,112),(57,93),(58,104),(59,115),(60,96),(61,202),(62,183),(63,194),(64,205),(65,186),(66,197),(67,208),(68,189),(69,200),(70,181),(71,192),(72,203),(73,184),(74,195),(75,206),(76,187),(77,198),(78,209),(79,190),(80,201),(81,182),(82,193),(83,204),(84,185),(85,196),(86,207),(87,188),(88,199),(89,210),(90,191),(151,212),(152,223),(153,234),(154,215),(155,226),(156,237),(157,218),(158,229),(159,240),(160,221),(161,232),(162,213),(163,224),(164,235),(165,216),(166,227),(167,238),(168,219),(169,230),(170,211),(171,222),(172,233),(173,214),(174,225),(175,236),(176,217),(177,228),(178,239),(179,220),(180,231)])
60 conjugacy classes
class 1 2A 2B 2C 2D 2E 2F 2G 3 4A 4B 4C 4D 4E 4F 5A 5B 6A 6B 6C 6D 6E 10A ··· 10F 10G ··· 10N 12A 12B 12C 12D 15A 15B 20A 20B 20C 20D 30A ··· 30F 60A ··· 60H order 1 2 2 2 2 2 2 2 3 4 4 4 4 4 4 5 5 6 6 6 6 6 10 ··· 10 10 ··· 10 12 12 12 12 15 15 20 20 20 20 30 ··· 30 60 ··· 60 size 1 1 1 1 6 6 12 20 2 4 10 10 30 30 60 2 2 2 2 2 20 20 2 ··· 2 12 ··· 12 4 4 20 20 4 4 4 4 4 4 4 ··· 4 4 ··· 4
60 irreducible representations
dim 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 4 4 4 4 4 4 4 4 type + + + + + + + + + + + + + + + + + + + - + - image C1 C2 C2 C2 C2 C2 C2 S3 D4 D4 D5 D6 D6 D6 C4○D4 D10 D10 C5⋊D4 C4○D12 S3×D4 S3×D5 D4×D5 D4⋊2D5 C2×S3×D5 D12⋊5D5 C20⋊D6 S3×C5⋊D4 kernel Dic15⋊2D4 D6⋊Dic5 C3×D10⋊C4 C30.4Q8 C2×S3×Dic5 C2×C15⋊D4 C10×D12 D10⋊C4 Dic15 S3×C10 C2×D12 C2×Dic5 C2×C20 C22×D5 C30 C2×C12 C22×S3 D6 C10 C10 C2×C4 C6 C6 C22 C2 C2 C2 # reps 1 1 1 1 1 2 1 1 2 2 2 1 1 1 2 2 4 8 4 2 2 2 2 2 4 4 4
Matrix representation of Dic15⋊2D4 in GL6(𝔽61)
0 60 0 0 0 0
1 1 0 0 0 0
0 0 60 1 0 0
0 0 16 44 0 0
0 0 0 0 1 0
0 0 0 0 0 1
,
11 0 0 0 0 0
50 50 0 0 0 0
0 0 0 18 0 0
0 0 17 0 0 0
0 0 0 0 1 0
0 0 0 0 0 1
,
38 15 0 0 0 0
46 23 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
0 0 0 0 15 20
0 0 0 0 7 46
,
38 15 0 0 0 0
38 23 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
0 0 0 0 15 20
0 0 0 0 1 46
G:=sub<GL(6,GF(61))| [0,1,0,0,0,0,60,1,0,0,0,0,0,0,60,16,0,0,0,0,1,44,0,0,0,0,0,0,1,0,0,0,0,0,0,1],[11,50,0,0,0,0,0,50,0,0,0,0,0,0,0,17,0,0,0,0,18,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1],[38,46,0,0,0,0,15,23,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,15,7,0,0,0,0,20,46],[38,38,0,0,0,0,15,23,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,15,1,0,0,0,0,20,46] >;
Dic15⋊2D4 in GAP, Magma, Sage, TeX
{\rm Dic}_{15}\rtimes_2D_4
% in TeX
G:=Group("Dic15:2D4");
// GroupNames label
G:=SmallGroup(480,529);
// by ID
G=gap.SmallGroup(480,529);
# by ID
G:=PCGroup([7,-2,-2,-2,-2,-2,-3,-5,253,254,219,100,1356,18822]);
// Polycyclic
G:=Group<a,b,c,d|a^30=c^4=d^2=1,b^2=a^15,b*a*b^-1=a^-1,a*c=c*a,d*a*d=a^11,c*b*c^-1=d*b*d=a^15*b,d*c*d=c^-1>;
// generators/relations
https://search.r-project.org/CRAN/refmans/energy/html/U_product.html | U_product {energy} R Documentation
## Inner product in the Hilbert space of U-centered distance matrices
### Description
Stand-alone function to compute the inner product in the Hilbert space of U-centered distance matrices, as in the definition of partial distance covariance.
### Usage
U_product(U, V)
### Arguments
U: U-centered distance matrix
V: U-centered distance matrix
### Details
Note that pdcor, etc. functions include the centering and projection operations, so these stand-alone versions are not needed except when one wants to check the internal computations.
Exported from U_product.cpp.
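For reference, the inner product computed here is the one defined in the reference cited below (Szekely and Rizzo, 2014); the following formula is quoted from that paper rather than from the package documentation itself. For n × n U-centered distance matrices U and V (with n > 3),
(U · V) = 1/(n(n−3)) · Σ_{i ≠ j} U_ij V_ij.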
### Value
U_product returns the inner product, a scalar.
### Author(s)
Maria L. Rizzo mrizzo@bgsu.edu and Gabor J. Szekely
### References
Szekely, G.J. and Rizzo, M.L. (2014), Partial Distance Correlation with Methods for Dissimilarities, Annals of Statistics, Vol. 42, No. 6, pp. 2382-2412.
https://projecteuclid.org/euclid.aos/1413810731
### Examples
x <- iris[1:10, 1:4]
y <- iris[11:20, 1:4]
M1 <- as.matrix(dist(x))
M2 <- as.matrix(dist(y))
U <- U_center(M1)
V <- U_center(M2)
U_product(U, V)
dcovU_stats(M1, M2)
[Package energy version 1.7-10 Index]
https://www.acmicpc.net/problem/7292 | Time limit Memory limit Submissions Accepted Solvers Ratio
1 second 128 MB 9 6 6 66.667%
## Problem
Scrabble is a word game which has now sold in excess of 100 million sets in 121 countries and 29 languages. The game is played by placing tiles, each of which is inscribed with a letter, on a board, according to some simple rules, which we will not bother about now. The values of the tiles in the English version are:
10: Q, Z
8: J, X
5: K
4: F, H, V, W, Y
3: B, C, M, P
2: D, G
1: A, E, I, L, N, O, R, S, T, U
The board consists of a 15×15 grid of squares. Some of these squares are coloured and there is a bonus for using them. Letter bonus squares (2L, 3L) multiply the value of the letter placed on them by two or three respectively; word bonus squares (2W, 3W) multiply the score of the entire word (after any relevant letter multipliers) by two or three respectively. Thus if I had placed the word ‘BANQUET’ as shown on the right of the figure below then I would score 84 (3+1+1+10*2+1+1+1)*3. If I had played ‘BANQUETS’ starting in the same place I would have scored 261 (3+1+1+2*10+1+1+1+1)*3*3.
Bonus squares are shown below for the top left quadrant of the board and are symmetrically placed on the rest of the board, i.e. the board is reflected about column H and row 8.
A play is denoted by specifying a starting position and orientation (row, column for horizontal words and column, row for vertical words) and the word. In actual play one would also need to worry about tiles already on the board, blank tiles, tiles in adjacent squares, bonus points for playing all the letters on your rack and so on, but we will ignore those details for this problem.
## Input
Input will consist of a series of lines, each denoting a play. Each line will start with the designation of the starting position of the word followed by a space and the word itself — a sequence of 2 to 15 upper case letters. The placement of the word will be such that it will fit on the board. Rows will be designated by a number in the range 1 to 15, columns by an upper case letter in the range ‘A’ to ‘O’. If the row is specified first then the word is played horizontally, if the column is specified first then the word is played vertically. The sequence of plays will be terminated by a line containing a single ‘#’.
## Output
Output will consist of one line for each play in the input, consisting of the play itself, followed by a space and the score for that play, as outlined above.
## Sample Input 1
15H BANQUET
O1 BANQUETS
#
## Sample Output 1
15H BANQUET 57
O1 BANQUETS 261
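A sketch of the scoring logic described above, not an official solution: the bonus-square layout comes from the figure, which is not reproduced in the text, so it is assumed here to be supplied as a 15×15 grid of strings ("", "2L", "3L", "2W", "3W"); the letter values are taken from the statement.

```python
# Hypothetical scoring sketch; the `bonus` grid layout is an assumption,
# since the actual bonus squares are only given in the missing figure.
LETTER_VALUE = {}
for pts, letters in [(10, "QZ"), (8, "JX"), (5, "K"), (4, "FHVWY"),
                     (3, "BCMP"), (2, "DG"), (1, "AEILNORSTU")]:
    for ch in letters:
        LETTER_VALUE[ch] = pts

def score_play(position, word, bonus):
    """position is e.g. '15H' (row first: horizontal) or 'O1' (column first: vertical)."""
    if position[0].isdigit():                 # row first -> horizontal play
        row = int(position[:-1]) - 1
        col = ord(position[-1]) - ord('A')
        dr, dc = 0, 1
    else:                                     # column first -> vertical play
        col = ord(position[0]) - ord('A')
        row = int(position[1:]) - 1
        dr, dc = 1, 0
    total, word_mult = 0, 1
    for i, ch in enumerate(word):
        r, c = row + i * dr, col + i * dc
        value, square = LETTER_VALUE[ch], bonus[r][c]
        if square == "2L":
            value *= 2
        elif square == "3L":
            value *= 3
        elif square == "2W":
            word_mult *= 2
        elif square == "3W":
            word_mult *= 3
        total += value
    return total * word_mult
```

Word bonuses multiply cumulatively after all letter bonuses, which matches the worked example in the statement (261 = 29·3·3).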
http://orbit.dtu.dk/en/publications/compressed-communication-complexity-of-longest-common-prefixes(b0d17039-39a1-471b-88d9-24a5c3ab75f6).html | ## Compressed Communication Complexity of Longest Common Prefixes
Research output: Research - peer-review › Article in proceedings – Annual report year: 2018
### DOI
We consider the communication complexity of fundamental longest common prefix $$({{\mathrm{\textsc {Lcp}}}})$$ problems. In the simplest version, two parties, Alice and Bob, each hold a string, A and B, and we want to determine the length of their longest common prefix $$\ell ={{\mathrm{\textsc {Lcp}}}}(A,B)$$ using as few rounds and bits of communication as possible. We show that if the longest common prefix of A and B is compressible, then we can significantly reduce the number of rounds compared to the optimal uncompressed protocol, while achieving the same (or fewer) bits of communication. Namely, if the longest common prefix has an LZ77 parse of z phrases, only $$O(\lg z)$$ rounds and $$O(\lg \ell )$$ total communication is necessary. We extend the result to the natural case when Bob holds a set of strings $$B_1, \ldots , B_k$$ , and the goal is to find the length of the maximal longest prefix shared by A and any of $$B_1, \ldots , B_k$$ . Here, we give a protocol with $$O(\log z)$$ rounds and $$O(\lg z \lg k + \lg \ell )$$ total communication. We present our result in the public-coin model of computation but by a standard technique our results generalize to the private-coin model. Furthermore, if we view the input strings as integers the problems are the greater-than problem and the predecessor problem.
Original language: English
String Processing and Information Retrieval, Springer, 2018, pp. 74-87
ISBN: 9783030004798
DOI: 10.1007/978-3-030-00479-8_7
Published - 2018
25th International Symposium on String Processing and Information Retrieval - Lima, Peru. Duration: 9 Oct 2018 → 11 Oct 2018
### Conference
Conference: 25th International Symposium on String Processing and Information Retrieval, Lima, Peru, 09/10/2018 → 11/10/2018
Series: Lecture Notes in Computer Science, Vol. 11147, ISSN 0302-9743
### Research areas
• Communication complexity, LZ77, Compression Upper bound, Output sensitive, Longest common prefix, Predecessor
https://brilliant.org/discussions/thread/i-couldnt-solve-it-can-someone-tell-me-the/
# I couldn't solve it can someone tell me the solution
Question 4
Note by Ishan Dixit
2 years, 3 months ago
Use the ideas here: complementary probability, which can be applied to counting (as well as probability).
In particular, there are a total of $$\binom{32}{3}$$ combinations of 3 objects.
Now, you need to identify the "bad" choices. The tricky thing is making sure your bad choices are disjoint. Can you think of what might work? (Don't read on before you try to think of it yourself!)
Anyway, there are three kinds of disjoint "bad" choices:
1) All three points are next to each other.
2) Two points are adjacent, but the third isn't.
3) None of the points are adjacent, but two are diametrically opposite.
Try to count the number of choices which would fall into each category!
Staff - 2 years, 3 months ago
Thank you, it is very helpful, but I have one more doubt: I calculated and the answer is coming out to 3584. Is it correct?
- 2 years, 3 months ago
Could you post your calculations and reasoning for each of the three "bad" cases? Given your answer, I am guessing that you slightly over-counted in Case #2.
Staff - 2 years, 3 months ago
Can I post an image of the solution in a separate post?
- 2 years, 3 months ago
You can post it into a comment here.
Use the formatting ![](image link here)
If you need to upload the image, you can do that in another post and then copy the link to a comment here using the "Insert an image" button. (Or you could of course just post the picture of your solution in this note.)
Staff - 2 years, 3 months ago
- 2 years, 3 months ago
You claim that the first point leaves 28 choices, since you can't use the diametrically opposed point or either of the two adjacent points. This is a good observation.
However, you then claim that after the second point is chosen, there are 24 choices for the third point. Is this always true? Or is it possible that, in certain cases, some of the points eliminated by the first point are the same as the ones "eliminated" by the second point, thus leaving more than 24 choices for the third point?
(By the way, these issues you're running into with overlapping is why I suggested the approach with complementary counting with disjoint "bad" choices.)
Staff - 2 years, 3 months ago
I am stuck, please provide the solution. I also found another case in which the number of groups is 32 x 28 x 25/3!. So should I subtract both cases or do something else?
- 2 years, 3 months ago
I have laid out three disjoint "bad" choices. Can you try to calculate how many there are of each type?
1) All three points are next to each other. 2) Two points are adjacent, but the third isn't. 3) None of the points are adjacent, but two are diametrically opposite.
Staff - 2 years, 3 months ago
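The original question image is not shown above, so the following quick brute-force check assumes the setup implied by the hints: 32 points on a circle, with a choice of 3 points counted as valid when no two chosen points are adjacent and no two are diametrically opposite.

```python
# Brute-force count under the assumed setup (32 points; no adjacent pair,
# no diametrically opposite pair among the 3 chosen points).
from itertools import combinations

n = 32

def bad_pair(a, b):
    d = (a - b) % n
    return d in (1, n - 1) or d == n // 2   # adjacent, or diametrically opposite

good = sum(1 for triple in combinations(range(n), 3)
           if not any(bad_pair(a, b) for a, b in combinations(triple, 2)))
print(good)
```

Under that reading, the three disjoint bad cases contribute 32, 32·28 = 896 and 16·26 = 416 choices, so the script prints C(32,3) − 1344 = 4960 − 1344 = 3616.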
https://www.physicsforums.com/threads/finding-probability-stat-mech.700201/ | # Finding Probability(Stat mech)
1. Jul 5, 2013
### catsonmars
I am pre-studying for a Statistical Mechanics class in the fall and need help with this problem. I've already spent some time with it.
Let the displacement x of an oscillator as a function of time t be given by x = Acos(wt+ϕ). Assume that the phase angle ϕ is equally likely to assume any value in the range 0 < ϕ < 2pi. The probability w(ϕ)dϕ that ϕ lies in the range between ϕ and ϕ+dϕ is then simply w(ϕ)dϕ = (2pi)^-1 dϕ. For any fixed time t, find the probability P(x)dx that x lies between x and x+dx by summing w(ϕ) over all angles ϕ for which x lies in this range. Express P(x) in terms of A and x.
Relevant equations
X=Acos(wt+ϕ).
w(ϕ)d ϕ=(2pi)^-1d ϕ
3. The attempt at a solution
The only thing I can come up with is integrating
∫P(x)dx = ∫(2pi)^-1 dϕ and integrating over x and x+dx
Or ƩP((x)dx* w(ϕ) dϕ)/p(x)
2. Jul 5, 2013
### TSny
Hello catsonmars. Welcome to PF!
You have a good start with $w(\phi) = \frac{1}{2\pi}$.
Since you are looking for the probability that $x$ lies in an infinitesimal range from $x$ to $x+dx$, you will not need to integrate. The probability is just $\small P(x)dx$. This is given by the probability $w(\phi)|d\phi|$ that $\phi$ lies in the range $\phi$ to $\phi + d\phi$, where $\phi$ is the value of the phase angle that corresponds to $x$ and $\phi + d\phi$ corresponds to $x+dx$. [Caution: think about whether or not there is more than one value of $\phi$ that corresponds to the same $x$. If so, you will need to make an adjustment for that.]
So, the probability that $x$ lies between $x$ and $x+dx$ could be expressed as $\small P(x)dx$ or as $w(\phi)|d\phi|$ (if there is only one value of $\phi$ that corresponds to a value of $x$). That is, $\small P(x)dx =$ $w(\phi)|d\phi|$ [I leave it to you to think about what to do if there is more than one value of $\phi$ corresponding to the same value of $x$. Perhaps this has something to do with the word "summing" in the statement of the problem.]
Since you already know how to express $w(\phi)$, all you need to do is find an expression for $d\phi$ in terms of $x$ and $dx$. Hint: $d\phi = \frac{d\phi}{dx}dx$.
Last edited: Jul 6, 2013
3. Jul 12, 2013
### catsonmars
There should be more x values than ϕ's because the range of ϕ is much smaller than x. Second, I'm still not sure how I should write the summation. I have ϕ=(Ʃw(ϕ)dϕ)/(P(x)(dx)) but that still seems wrong. I've also thought about relating ϕ+dϕ to x+dx somehow, but I can't think of what would make them equal so I can get ϕ in terms of x. Also, I've looked at the answer key and I have no idea how the amplitude "A" would fit into the equation.
4. Jul 12, 2013
### TSny
Basically, you need to solve $\small P(x)|dx| = w(\phi)|d\phi|$ for $\small P(x)$. That is,
$P(x) = w(\phi)\frac{d\phi}{dx}$
Use $x = Acos(\omega t + \phi)$ to find $\frac{d\phi}{dx}$ as a function of $A$ and $x$.
There's the additional task of dealing with the fact that there might be two different values of $\phi$ corresponding to the same value of $x$.
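For completeness, here is the standard result this exchange is steering toward; it is left as an exercise in the thread above, so the following is a textbook completion rather than part of the original discussion. From $x = A\cos(\omega t + \phi)$ one gets $|dx/d\phi| = A|\sin(\omega t + \phi)| = \sqrt{A^2 - x^2}$, and since two values of $\phi$ in $[0, 2\pi)$ give the same $x$,
$$P(x) = 2\, w(\phi)\left|\frac{d\phi}{dx}\right| = \frac{2}{2\pi}\,\frac{1}{\sqrt{A^2-x^2}} = \frac{1}{\pi\sqrt{A^2-x^2}}, \qquad |x| < A,$$
which indeed integrates to 1 over $-A < x < A$.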
https://socratic.org/questions/what-intermolecular-forces-are-present-in-ch3oh-1 | # What intermolecular forces are present in CH_3OH?
Jun 23, 2018
Well, you got hydrogen bound to the VERY ELECTRONEGATIVE oxygen atom....
#### Explanation:
And in such a scenario where hydrogen is bound to a strongly electronegative element, hydrogen bonding is known to occur….a special case of bond polarity...
We could represent the dipoles as...
${H}_{3} C - \stackrel{{\delta}^{-}}{O} - \stackrel{{\delta}^{+}}{H}$
And in bulk solution, the molecular dipoles line up...and this is a SPECIAL case of dipole-dipole interaction, $\text{intermolecular hydrogen bonding}$, which constitutes a POTENT intermolecular force, and which elevates the melting and boiling points of the molecule.
And so we got normal boiling points of...
$CH_4$: −164 °C
$H_3C−CH_3$: −89 °C
$H_3C−OH$: +64.7 °C
$H_3C−CH_2OH$: +78.5 °C
$H−O−H$: +100.0 °C
Of course, dispersion forces operate between all molecules...but these are not the same magnitude as intermolecular hydrogen bonding....
http://www.askphysics.com/equations-of-motion-images-for-easy-reuse/?shared=email&msg=fail | Equations of Motion – Images for easy reuse
# Equations of Motion – Images for easy reuse
Here you can find the equations of motion in the form of images which you can use in your documents.
$v = u + at$
$S = ut + \frac{1}{2} at^{2}$
$v^{2} = u^{2} + 2aS$
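As a quick worked illustration (the numbers are chosen for this example and are not from the original post): a body starting from rest ($u = 0$) under $a = 9.8\ \mathrm{m/s^2}$ for $t = 2\ \mathrm{s}$ reaches $v = 19.6\ \mathrm{m/s}$ and covers $S = \frac{1}{2}(9.8)(2)^2 = 19.6\ \mathrm{m}$; consistently, $v^2 = 2aS = 384.16\ \mathrm{m^2/s^2} = (19.6)^2$.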
https://www.semanticscholar.org/paper/Game-semantics-of-Martin-L%C3%B6f-type-theory%2C-part-III%3A-Yamada/4ab0d21ec0c92889d3205f5bb31bfb55bc4e82d9 | • Corpus ID: 220546221
# Game semantics of Martin-Löf type theory, part III: its consistency with Church's thesis
@article{Yamada2020GameSO,
title={Game semantics of Martin-L{\"o}f type theory, part III: its consistency with Church's thesis},
journal={ArXiv},
year={2020},
volume={abs/2007.08094}
}
We prove consistency of intensional Martin-Löf type theory (MLTT) with formal Church's thesis (CT), which was open for at least fifteen years. The difficulty in proving the consistency is that a standard method of realizability à la Kleene does not work for the consistency, though it validates CT, as it does not model MLTT; specifically, the realizability does not validate MLTT's congruence rule on pi-types (known as the $\xi$-rule). We overcome this point and prove the consistency by novel…
1 Citations
Parametric Church's Thesis: Synthetic Computability Without Choice
This work introduces various parametric strengthenings of CT_φ, which are equivalent to assuming CT_φ and an S^m_n operator for φ as in the S^m_n theorem, and explains the novel axioms and proofs of Rice's theorem.
## References
SHOWING 1-10 OF 70 REFERENCES
Consistency of the intensional level of the Minimalist Foundation with Church’s thesis and axiom of choice
• Philosophy
Arch. Math. Log.
• 2018
It is shown that consistency with the formal Church’s thesis and the axiom of choice are satisfied by the intensional level of the two-level Minimalist Foundation, for short MF, completed in 2009 by the second author.
Game Semantics for Martin-Löf Type Theory
A category with families of a novel variant of games is proposed, which induces a surjective and injective interpretation of the intensional variant of MLTT equipped with unit-, empty-, N-, dependent product, dependent sum and Id-types as well as the cumulative hierarchy of universes for the first time in the literature.
A game-semantic model of computation
This work shows, as a main technical achievement, that viable strategies in game semantics are Turing complete and has given a mathematical foundation of computation in the same sense as Turing machines but beyond computation on natural numbers, e.g., higher-order computation, in a more abstract fashion.
Notes on game semantics
Applications of game semantics to model-checking and abstract interpretation are being developed, which opens the way for connecting the uses of games in semantics and in verification.
Definability and Full Abstraction
• P. Curien
• Computer Science
Electron. Notes Theor. Comput. Sci.
• 2007
A game semantics for generic polymorphism
• Computer Science
Ann. Pure Appl. Log.
• 2003
Realizability Models for Type Theories
Intensionality, Definability and Computation
• S. Abramsky
• Computer Science
Johan van Benthem on Logic and Information Dynamics
• 2014
This work reviews how game semantics has been used to characterize the sequential functional processes, leading to powerful and flexible methods for constructing fully abstract models of programming languages, with applications in program analysis and verification.
Games for Dependent Types
• Philosophy
ICALP
• 2015
Although definability for the hierarchy with $$\mathsf {Id}$$-types remains to be investigated, the notions of propositional equality in syntax and semantics do coincide for open terms of the type hierarchy.
https://dsp.stackexchange.com/questions/63602/discrete-fourier-transform-norms-of-complex-input-signals-and-their-transforma | # Discrete Fourier transform - Norms of complex input signals and their transformation
Given a signal $$\mathbf{z} \in \mathbb{C}^n$$ and its Discrete Fourier transform $$\hat{\mathbf{z} }$$, does $$||\mathbf{z}|| = ||\hat{\mathbf{z} }||$$ hold?
The question is given to me like this with no additional details. Information about what kind of norm is also not given. Does anyone have an idea what the question might be looking for?
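A note that is not part of the original question: presumably the Euclidean norm is meant, and the answer then depends on the DFT normalization. For the unitary DFT (scaled by $1/\sqrt{n}$), Parseval's theorem gives $\|\hat{\mathbf{z}}\|_2 = \|\mathbf{z}\|_2$; for the common unnormalized forward DFT, $\|\hat{\mathbf{z}}\|_2^2 = n\,\|\mathbf{z}\|_2^2$ instead. A quick NumPy sanity check:

```python
# Numerical check of Parseval's relation for the DFT (illustrative only).
# Norms are Euclidean; DFT conventions follow numpy.fft.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(16) + 1j * rng.standard_normal(16)

z_hat = np.fft.fft(z)                       # unnormalized forward DFT
z_hat_ortho = np.fft.fft(z, norm="ortho")   # unitary DFT, scaled by 1/sqrt(n)

print(np.linalg.norm(z_hat) ** 2 / np.linalg.norm(z) ** 2)          # ~ 16 = n
print(np.allclose(np.linalg.norm(z_hat_ortho), np.linalg.norm(z)))  # True
```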
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-12th-edition/chapter-24-organic-chemistry-questions-problems-page-1052/24-20 | # Chapter 24 - Organic Chemistry - Questions & Problems - Page 1052: 24.20
#### Work Step by Step
As we know, alkenes undergo addition reactions with hydrogen, with halogens like $Cl_2, Br_2, I_2$, and with hydrogen halides, while alkanes do not react with these substances under ordinary conditions.
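For instance (an illustrative example, not part of the original answer): ethylene decolorizes bromine water at room temperature via the addition $CH_2{=}CH_2 + Br_2 \rightarrow CH_2BrCH_2Br$, whereas ethane leaves the bromine color unchanged unless a substitution reaction is initiated by UV light; this is the classic test for distinguishing the two classes.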
https://mothur.org/wiki/get.sharedseqs/ | # get.sharedseqs
The get.sharedseqs command takes a list and group file or shared file and outputs a *.shared.seqs file for each distance. This is useful for those cases where you might be interested in identifying sequences that are either unique or shared by specific groups, which you could then classify. To run through the commands below use AbRecovery files.
## Default settings
To execute the get.sharedseqs command you must provide a list and group/count file or shared file. By default this will output the sequences found in the OTUs shared by all the groups in your group/count file or shared file. For example:
mothur > get.sharedseqs(list=abrecovery.fn.list, group=abrecovery.groups)
or
mothur > get.sharedseqs(list=abrecovery.fn.unique_list, count=abrecovery.count_table)
or
mothur > make.shared(list=abrecovery.fn.list, group=abrecovery.groups)
mothur > get.sharedseqs(shared=abrecovery.fn.shared)
This will result in output to the screen looking like:
unique 0 - No otus shared by groups A B C.
0.00 0 - No otus shared by groups A B C.
0.01 1
0.02 2
0.03 3
0.04 3
0.05 4
0.06 5
0.07 6
...
The left column indicates the label for each line in the data set and the right column indicates the number of OTUs at this distance.
The .shared.seqs output files look like:
AY457715 C 59
AY457838 A 59
AY457774 B 59
...
The first column is the sequence accession number, the second is the group that the sequence is from, and the third is the OTU number that the sequence belongs to.
## Options
### fasta
If you provide a fasta file mothur will also output a fasta file for each distance you specify:
mothur > get.sharedseqs(list=abrecovery.fn.list, group=abrecovery.groups, fasta=abrecovery.fasta)
The .shared.fasta output files look like:
>AY457838 A 59
CCCTTAGAGTTTGATCCTGGCTCAGGACG...
>AY457774 B 59
CCCTTAGAGTTTGATCCTGGCTCAGGACG...
>AY457715 C 59
CCCTTAGAGTTTGATCCTGGCTCAGGACG...
...
### label
There may only be a couple of lines in your list file that you are interested in. You could either manually delete the lines you aren't interested in from your list file or use the label option.
mothur > get.sharedseqs(list=abrecovery.fn.list, group=abrecovery.groups, label=0.04-0.82)
0.04 3
0.82 1
Opening abrecovery.fn.0.04.shared.seqs you would see the output as:
AY457701 C 45
AY457715 C 45
AY457838 A 45
AY457774 B 45
...
### uniquegroups & sharedgroups
The uniquegroups parameter allows you to see sequences belonging to OTUs unique to specific groups or unique to a particular group. For example to see the sequences from OTUs unique to group A at distance 0.04, you would enter the following:
mothur > get.sharedseqs(list=abrecovery.fn.list, group=abrecovery.groups, label=0.04, uniquegroups=A)
0.04 38
There are 38 OTUs that are unique to A at distance 0.04 and their sequence names are listed in abrecovery.fn.0.04unique.A.shared.seqs.
Similarly, if you wanted the sequences from OTUs unique to groups A and B at distance 0.04, you would enter the following
mothur > get.sharedseqs(list=abrecovery.fn.list, group=abrecovery.groups, label=0.04, uniquegroups=A-B)
0.04 12
There are 12 OTUs that only contain sequences from groups A and B at a distance of 0.04. The file abrecovery.fn.0.04unique.A-B.shared.seqs contains:
AY457754 B 44
AY457871 A 44
AY457910 A 44
AY457805 B 63
AY457853 A 63
...
The sharedgroups parameter allows you to see sequences belonging to OTUs that contain specific groups or a particular group. For example to see the sequences from OTUs that contain sequences from group A at distance
0.04, you would enter the following:
mothur > get.sharedseqs(list=abrecovery.fn.list, group=abrecovery.groups, label=0.04, sharedgroups=A)
0.04 57
There are 57 OTUs that contain sequences from group A at distance 0.04 and their names are listed in abrecovery.fn.0.04A.shared.seqs.
Similarly, if you wanted the sequences from OTUs that contain sequences from groups A and B at distance 0.04, you would enter the following
mothur > get.sharedseqs(list=abrecovery.fn.list, group=abrecovery.groups, label=0.04, sharedgroups=A-B)
0.04 15
There are 15 OTUs that are shared between A and B at a distance of 0.04. The file abrecovery.fn.0.04A-B.shared.seqs contains:
AY457754 B 44
AY457871 A 44
AY457910 A 44
AY457701 C 45
AY457715 C 45
AY457838 A 45
AY457774 B 45
AY457747 C 45
AY457859 A 45
...
### output
The output parameter allows you to have the .name file be in .accnos form so you can use it with the get.seqs, list.seqs and remove.seqs commands. For example:
mothur > get.sharedseqs(list=abrecovery.fn.list, group=abrecovery.groups, label=0.04, output=accnos)
Opening abrecovery.fn.0.04.shared.seqs you would see the output as:
AY457701
AY457715
AY457838
AY457774
AY457747
AY457859
AY457695
AY457732
AY457860
AY457826
AY457767
AY457698
AY457855
AY457804
## Why do the venn diagram results vary from get.sharedseqs results?
Confusion can occur when you have a shared file with more groups than just the subset you are looking at in your venn diagram. For this example let’s look at a simple shared file like:
Full shared file:
label group numOtus Otu001 Otu002 Otu003 Otu004 Otu005
0.26 A 5 43 38 2 0 1
0.26 B 5 46 14 10 13 1
0.26 C 5 16 29 29 0 0
mothur > venn(groups=B-C)
Venn shared file with groups B and C selected:
label group numOtus Otu001 Otu002 Otu003 Otu004 Otu005
0.26 B 5 46 14 10 13 1
0.26 C 5 16 29 29 0 0
C = 0 unique OTUs
B = 2 unique OTUs
BC = 3 shared OTUs
mothur > get.sharedseqs(uniquegroups=B)
Get.sharedseqs with uniquegroups=B shared file:
label group numOtus Otu001 Otu002 Otu003 Otu004 Otu005
0.26 A 5 43 38 2 0 1
0.26 B 5 46 14 10 13 1
0.26 C 5 16 29 29 0 0
B = 1 unique OTUs
The difference between the two commands is the groups mothur is considering when finding the unique and shared OTUs. With the venn command, mothur only uses the groups provided by the groups parameter or, if none are provided, the first 4 groups in the file. This is done because there are limits to the drawing of the venn diagram: with more than 4 groups the picture becomes too complicated to be of use. In this example group A is not included, which changes the shared and unique composition.

The get.sharedseqs command does not have the limitations of the picture. You can set parameters with the sharedgroups and uniquegroups. The sharedgroups parameter means the OTUs MUST include the groups you listed, but MAY also include other groups. The uniquegroups parameter means the OTUs MUST include the groups you listed and ONLY the groups you listed.
For example:
mothur > get.sharedseqs(uniquegroups=B-C)
B-C = 0 unique OTUs (no Otus contains just sequences from B and C)
mothur > get.sharedseqs(sharedgroups=B-C)
B-C = 3 shared OTUs (3 Otus contains sequences from B and C and sequences from other groups)
## Revisions
• 1.30.0 - added shared file option and changed unique and shared parameter names to uniquegroups and sharedgroups.
• 1.37.0 - Adds count parameter #133
• 1.40.0 - Speed and memory improvements for shared files. #357, #347
http://openstudy.com/updates/4d9cc6be8f378b0b40dae117 | ## anonymous 5 years ago Can someone help me with Surface area of pyramids and cones
1. anonymous
Find the slant height of the regular pyramid or cone
2. anonymous
Do you know what the Pythagorean theorem is?
3. anonymous
yeah isnt it a^2+b^2=c^2
4. anonymous
Ok so find the length of the hypotenuse of the pyramid (for the first one)
5. anonymous
so 12^2+15^2=369
6. anonymous
369=c^2, so you still need to get rid of that exponent; you get $3\sqrt{41}$. Now that you have all the lengths, apply the formula for the area of a triangle: Area = 1/2 * base * height
7. anonymous
Because you have 4 triangles, you could change your formula to reflect that: A=4(1/2*b*h)
8. anonymous
Then don't forget to add the area of the base of the pyramid itself... width * height. Add them both up, and that's surface area.
9. anonymous
Get it?
10. anonymous
so 4(1/2*144*15)
11. anonymous
12. anonymous
Take the area of one of the isosceles triangles, multiply it by 4 and add it to the area of the base. Forget that 1/2 B*H I messed up
13. anonymous
Now I'm lost
14. anonymous
Do you see the formula for the area of an isosceles triangle I sent you?
15. anonymous
ya
16. anonymous
Ok so do you see how each side of the pyramid is an isocles triangle?
17. anonymous
ya
18. anonymous
Ok so on the forumla you know that B=12, and C and A are the same and using pythagorean theorm we know it is $3\sqrt{41}$ So plug all those into the forumla and you will have the area of ONE side of the pyramid.
19. anonymous
82.486362509205
20. anonymous
I'm going to take your word on that... lol. Now what do you think you do?
21. anonymous
* it by 4
22. anonymous
23. anonymous
the base of 144
24. anonymous
There you go, that's the surface of a pyramid.
25. anonymous
As long as your algebra was right when putting those numbers in.
26. anonymous
so 473.96
27. anonymous
Assuming your algebra is correct, yes..
http://openstudy.com/updates/4dd04d139fe58b0b2fad38f7 | ## safia21 5 years ago algebra can you help me with # 2 and # 3 thanks
1. safia21
2. anonymous
the cooking one?
3. safia21
4. anonymous
cook has 4 quarts that is 50% chicken stock. so at the moment it is 50% of 4 = 2 quarts chicken stock. so for example if she adds 3 quarts of chicken stock she will have 4+3= 7 quarts of liquid of which 2 + 3 = 5 quarts is chicken stock, and the percent will be 5/7 * 100. this is not what you want obviously, i am just trying to explain where the equation will come from. if she adds x quarts of chicken stock she will have 2+x quarts of chicken stock and 4+x quarts of liquid. you want $\frac{2+x}{4+x}=.75$ $2+x=.75(4+x)$ last equation says your two quarts of chicken stock plus your x quarts must be 75% of the total liquid. now we solve multiply out $x+2=.75x+.75\times 4= .75x+3$ $.25x=1$ subtract .75x from both sides and subtract 2 from both sides $x=\frac{1}{.25}=\frac{100}{25}=4$
5. safia21
and #3 thanks
6. anonymous
Ok, #3 looks like the previous one we did, so let me be careful and not put the 1 on the wrong side like I did last time.
7. anonymous
keep seeming to put the 1 on the wrong side. you have two rates, r and r +1 $\frac{12}{r}=\frac{12}{r+1}+1$ slower person's time is one less more than faster persons. $\frac{12}{r}=\frac{12+r+1}{r+1}=\frac{r+13}{r+1}$ cross multiply$12(r+1)=r(r+13)$ $12r+12=r^2+13r$ $r^2+r-12=0$ $(r+4)(r-3)=0$ $r=3$ $r=-4$ so r = 3 | 2017-01-23 05:00:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6214653253555298, "perplexity": 1430.695879481203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00573-ip-10-171-10-70.ec2.internal.warc.gz"} |
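As a quick sanity check of the two word problems above, here is a small Python sketch; it simply re-evaluates the equations that were set up in the thread with the values that were solved for (nothing else is assumed).

```python
# Problem 2: start with 4 quarts that are 50% chicken stock (2 quarts of stock)
# and add x quarts of pure stock; the mixture should end up 75% stock.
x = 4
print((2 + x) / (4 + x))        # 0.75, so x = 4 quarts checks out

# Problem 3: at rate r the trip takes 12/r hours, which should be one hour
# longer than 12/(r + 1) at the faster rate.
r = 3
print(12 / r - 12 / (r + 1))    # 1.0, so r = 3 checks out
```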
https://socratic.org/questions/how-do-you-find-the-percent-composition-of-oxygen-in-sodium-hydroxide | # How do you find the percent composition of oxygen in sodium hydroxide?
Mar 21, 2017
The percent composition of oxygen in sodium hydroxide is 40.000%.
#### Explanation:
Determine the molar mass of sodium hydroxide $\left(\text{NaOH}\right)$. Then divide the molar mass of oxygen by the molar mass of $\text{NaOH}$, and multiply by 100.
Molar Masses
$\text{NaOH}$: $\text{39.997 g/mol}$
https://www.ncbi.nlm.nih.gov/pccompound?term=NaOH
$\text{O}$: $\text{15.999 g/mol}$ (periodic table)
Percent Composition of Oxygen
$\text{percent composition} = \frac{15.999\ \cancel{\text{g/mol}}}{39.997\ \cancel{\text{g/mol}}} \times 100 = 40.000\%$ | 2022-08-15 21:34:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8739524483680725, "perplexity": 4568.568428809594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00708.warc.gz"}
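A quick numeric check of the calculation above, as a sketch; the molar masses are just the values quoted in the answer:

```python
# Percent composition of O in NaOH = (molar mass of O) / (molar mass of NaOH) * 100
molar_mass_O = 15.999      # g/mol, from the periodic table
molar_mass_NaOH = 39.997   # g/mol, as quoted in the answer above
print(round(molar_mass_O / molar_mass_NaOH * 100, 2))   # 40.0
```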
https://latex.org/forum/viewtopic.php?f=46&t=33106 | ## LaTeX forum ⇒ Math & Science ⇒ How to change the local vertical spacing in the align environment?
Information and discussion about LaTeX's math and science related features (e.g. formulas, graphs).
Cham
Posts: 937
Joined: Sat Apr 02, 2011 4:06 pm
### How to change the local vertical spacing in the align environment?
I'm still having a lot of trouble with my vertical alignments. Here's an MWE showing the problem:
\documentclass[11pt,letterpaper,twoside]{book}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[french]{babel}
\usepackage{lmodern}
\usepackage[total={6in,10in},left=1.5in,top=0.5in,includehead,includefoot]{geometry}
\usepackage[nodisplayskipstretch]{setspace}
\setstretch{1.1}
\raggedbottom
\usepackage{microtype}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{mathtools}

\begin{document}

%\setlength{\abovedisplayskip}{1em}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\belowdisplayshortskip}{\belowdisplayskip}
\setlength{\jot}{3ex}

Blablabla :
\begin{equation}
\begin{aligned}
x(u, v) &= \cosh{u} \, \cos{v}, \\
y(u, v) &= \cosh{u} \, \sin{v}, \\
z(u, v) &= \sinh{u}.
\end{aligned}
\end{equation}
Blabla bla bla :
\begin{align*}
dx &= \sinh{u} \, \cos{v} \: du - \cosh{u} \, \sin{v} \: dv, \\
dy &= \sinh{u} \, \sin{v} \: du + \cosh{u} \, \cos{v} \: dv, \\
dz &= \cosh{u} \: du.
\end{align*}
Bla bla.

\end{document}
Preview: align.jpg (attachment)
There are two problems with this code. I don't want to change the global lengths defined in the preamble, since most equations are nicely displayed in my main document. But the vertical space between the rows of both environments shown above is too large. How can I reduce it locally? Are \\[-11pt] or \\[-1em] the only options here? I feel nervous about adding negative spacing; it doesn't feel "natural".
Secondly, the vertical spacing between the second text line and the second align environment is clearly too large (short skip not working there?). Why is that?
Cham
Posts: 937
Joined: Sat Apr 02, 2011 4:06 pm
Apparently, using \begingroup\setlength{\jot}{2ex} ... \endgroup solves my issue, for the few cases where I need to change the vertical spacing in some align environments.
Is this a proper solution? Or is it better to use a negative spacing for each \\ ? | 2020-09-28 11:58:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999834299087524, "perplexity": 4310.610261955067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401600771.78/warc/CC-MAIN-20200928104328-20200928134328-00593.warc.gz"} |
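For readers landing on this thread, here is a minimal sketch of the grouping approach described in the previous post. It relies on the fact that amsmath's \jot length controls the extra inter-row space in align-type environments, so resetting it inside a group only affects the displays within that group; the 1ex value below is just an assumed example.

```latex
% Local change only: \jot returns to its global value after \endgroup.
\begingroup
\setlength{\jot}{1ex}% assumed smaller value; pick whatever looks right
\begin{align*}
  dx &= \sinh{u} \, \cos{v} \: du - \cosh{u} \, \sin{v} \: dv, \\
  dy &= \sinh{u} \, \sin{v} \: du + \cosh{u} \, \cos{v} \: dv
\end{align*}
\endgroup
```

This avoids sprinkling negative skips such as \\[-1em] through the source, which is exactly what the original poster wanted to avoid.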
https://www.ms.u-tokyo.ac.jp/journal/abstract_e/jms190405_e.html | ## Clifford modules, finite-dimensional approximation and twisted $K$-theory
J. Math. Sci. Univ. Tokyo
Vol. 19 (2012), No. 4, Page 587–612.
Gomi, Kiyonori
Clifford modules, finite-dimensional approximation and twisted $K$-theory
A twisted version of Furuta's generalized vector bundle provides a finite-dimensional model of twisted $K$-theory. We generalize this fact involving actions of Clifford algebras. As an application, we show that an analogy of the Atiyah-Singer map for the generalized vector bundles is bijective. Furthermore, a finite-dimensional model of twisted $K$-theory with coefficients $\Z/p$ is given. | 2022-06-27 21:46:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9471868276596069, "perplexity": 1080.7393339530392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103341778.23/warc/CC-MAIN-20220627195131-20220627225131-00707.warc.gz"} |
http://crypto.stackexchange.com/tags/3des/new | # Tag Info
First, note that $192=3\cdot64$, so the real key length of 3DES is $192$ bits. However, since $8$ bits in each subkey are parity bits, this reduces to $3\cdot56=168$ bits of non-redundant key material. Now, the reason that 3DES' effective key length is usually classified as $2\cdot56=112$ bits is that 3DES is susceptible to a meet-in-the-middle attack: When ... | 2015-01-28 20:14:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9345426559448242, "perplexity": 1773.6868782620516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422119446463.10/warc/CC-MAIN-20150124171046-00252-ip-10-180-212-252.ec2.internal.warc.gz"} |
http://cvr.cc/?p=504 | # TeX4ht: Options
Following is an incomplete list of options that can be passed to TeX4ht when it is run from the command line. These can also be provided as options when the tex4ht package is loaded in a LaTeX document with the usual \usepackage command.
-css : to ignore CSS code, use command line option -css.
-xtpipes : to avoid xtpipes post-processing the output. This might be useful for docbook XML output.
0 : pagination shall be obtained through the option 0 or 1, at locations marked with PageBreak.
1, 2, 3, 4, 5, 6, 7 : for automatic sectioning pagination (to break at various section levels), use the appropriate command line option 1, 2, 3, 4, 5, 6, 7.
DOCTYPE : to request a DOCTYPE declaration, use the command line option DOCTYPE.
Gin-dim : for key dimensions of the graphic, try this option.
Gin-dim+ : for key dimensions when the bounding box is not available.
NoFonts : to ignore CSS font decoration.
PMath : option to choose positioned math. Example: def({PMath$}; def){$EndPMath}; def[{PMath$$}; def]{$$EndPMath}.
RL2LR : to reverse the direction of RL sentences.
TocLink : option to request links from the tables of contents.
^13 : option for active superscript character.
_13 : option for active subscript character.
bib- : for degraded bibliography friendlier for conversion to .doc.
bibtex2 : requires compilation of jobname j.aux with bibtex.
charset : for alternate character set, use the command line option charset="..." (e.g., charset="utf8").
css-in : the inline CSS code will be extracted from the input of the previous compilation, so an extra compilation might be needed for this option to take effect.
css2 : for CSS 2 code.
early^ : for default catcode of superscript in the Preamble.
early_ : for default catcode of subscript in the Preamble.
endnotes : for end notes instead of footnotes, use this option.
enumerate+ : for enumerated list elements with valued data. This will use a description list like ... for the list counter.
enumerate- : for enumerated list element's li's with value attributes, use this command line option. This will be an ordered list with the value of the list counter provided as an attribute, namely the value of the li element.
fn-in : for inline footnotes use this option.
fn-out : for offline footnotes.
fonts : for tracing LaTeX font commands, use this command line option.
fonts+ : for marking of the base font, use this option.
font : for adjusted font size, use the command line option font=... (e.g., font=-2).
frames- : for frames support (frames is also a valid option for frames support).
frames-fn : for content, TOC and footnotes in three frames.
frames : for TOC and content in two frames.
gif : for bitmaps of pictures in .gif format, use this option.
graphics- : if the included graphics are of degraded quality, try the command line options graphics-num or graphics-. The num should provide the density of pixels in the bitmaps (e.g., 110).
hidden-ref : option to hide clickable index and bibliography references.
html+ : for stricter HTML code.
imgdir : for addressing images in a subdirectory, use the option imgdir:.../.
image-maps : for image-maps support.
index : for n-column index, use the command line option index=n (e.g., index=2).
info-oo : for extra tracing information while generating open office output.
info : for extra information in the jobname.log file.
java : for java support.
javahelp : for JavaHelp output format, use this command line option.
javascript : for javascript support.
jh- : for sources failing to produce XML versions of HTML, try this command line option.
jpg : for bitmaps of pictures in .jpg format, use this option.
li- : for enumerated list elements li's with value attributes.
math- : option to use when sources fail to produce clean math code.
mathltx- : option to use when sources fail to produce clean mathltx code.
mathml- : option to use when sources fail to produce clean MathML code.
mathplayer : for MathML on Internet Explorer + MathPlayer.
minitoc< : for mini tocs immediately after the header, use the command line option minitoc<.
mouseover : for pop ups on mouse over.
next : for linear cross-links of pages, use this option.
nikud : for Hebrew vowels, use the command line option nikud.
no-DOCTYPE : to remove the DOCTYPE declaration from the output.
no-VERSION : to remove the processing instruction from the output.
no^ : for non-active ^ (superscript), use the option no^.
no_ : for non-active _ (subscript command), use the command line option no_.
no_^ : for both non-active superscript and subscript, use the option no_^.
nolayers : to remove overlays of slides, use this option.
nominitoc : this will eliminate mini tables of contents from the output.
notoc* : for tocs without * entries, use this option. The notoc* option is applicable only to pages that are automatically decomposed into separate web pages along section divides. It shall be used when addcontentsline instructions are present in the sources.
obj-toc : for frames-like object based table of contents, use the command line option obj-toc.
p-width : for width specifications of tabular p entries, use this option.
pic-RL : for pictorial RL.
pic-align : for pictorial align environment.
pic-array : for pictorial array.
pic-cases : for pictorial cases environment.
pic-eqalign : for pictorial eqalign environment.
pic-eqnarray : for pictorial eqnarray.
pic-equation : for pictorial equations.
pic-fbox : for pictorial or bitmapped fbox'es.
pic-framebox : for bitmap frameboxes.
pic-longtable : for bitmapped longtable.
pic-m+ : for pictorial $...$ and $$...$$ environments with LaTeX alt, use the command line option pic-m+ (not safe).
pic-m : for pictorial $...$ environments, use the command line option pic-m (not recommended).
pic-matrix : for pictorial matrix.
pic-tabular : use this option for pictorial tabular.
plain- : for scaled down implementation.
prog-ref : for pointers to code files from root fragments, use the command line option prog-ref. This is for debugging.
refcaption : for links into captions, instead of flat heads, use this option.
rl2lr : to reverse the direction of Hebrew words, use this option.
sec-filename : for file names derived from section titles, use the command line option sec-filename.
sections+ : for back links to the table of contents, use this option.
svg- : for external SVG files, try this option.
svg-obj : same as above.
svg : for dvi pictures in svg format.
tab-eq : for tab-based layout of the equation environment, use this option.
trace-onmo : for mouseover tracing of compilation, use the command line option trace-onmo.
url-enc : for URL encoding within href, use this option. Configure{url-encoder} can be used to fine tune the encoding.
url-il2-pl : for il2-pl URL encoding.
ver : for vertically stacked frames. Effective when the frames option is requested.
xht : for the file name extension .xht, use this command line option.
xhtml : for XML code, use the command line option xml or xhtml.
xml : see previous entry.
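As a concrete illustration of the second route mentioned in the introduction (options given on the \usepackage line), something like the following would request XHTML output with inline footnotes and two-level pagination. The particular combination shown here is a hypothetical example chosen for illustration, not a recommendation, and the same option names apply when they are instead passed on the command line.

```latex
% Hypothetical option set, following the \usepackage route described above.
\usepackage[xhtml,fn-in,2]{tex4ht}
```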
• #### 7 Responses to “TeX4ht: Options”
• Thank you, this will be very useful (and something very much awaited by tex4ht users). Thank you (and Karl Berry) for all your efforts maintaining tex4ht.
• Great list! But, I can not got to work option“endnotes”, and it isn’t mentioned in the documentation… Is it something new? (puszcza is down at the moment).
• Thanks.
The ‘endnotes’ option appears to be for oolatex invocation. I’d salvaged most of the options from Eitan’s literate sources, as such, many of them are untested. I will try to add appropriate script name (along with options) if an option is not meant for the default html or xhtml output.
• Kirill Müller
Thank you for the useful list. However, not all options work for all output modes (XHTML, OpenOffice, …) Would you consider adding a matrix or a list for which output modes each option is valid?
• Thanks — for all your tex4ht work!
I’ve wanted ‘pic-align’ for ages…
• julian
Is there any option to produce equations in their original LATEX form (not converted to MathML) so that they can be displayed with MathJax? | 2017-08-17 13:30:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.686957597732544, "perplexity": 11802.043632287556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103316.46/warc/CC-MAIN-20170817131910-20170817151910-00412.warc.gz"} |
https://www.jiskha.com/similar?question=what+is+the+graph+points+of+%281%2C+-3%29%2Cm%3D-5%2F4+in+slope-intercept+form&page=258 | # what is the graph points of (1, -3),m=-5/4 in slope-intercept form
1. ## Calculus (Discontinuities)
Suppose, f(x) = { (x - 1)^2 / x + 1 if x < 2 (x^2 - 2x - 8)/(x - 4) if 2
asked by Mishaka on October 14, 2011
2. ## I don't have a CLue
11. What different reasons did the European nations have for exploration? Provide at least one example specific to Spain, France, and Britain. Where did Spain, France, and Britain choose to settle while exploring? Why did they settle these particular
asked by BallaWitSwagg on October 14, 2013
1) A force vector has a magnitude of 579 newtons and points at an angle 43o of below the positive x axis. Find the x scalar component and the y scalar component of the vector? For the x component I did 579cos(43) = 423.45 N. Is this correct? For the y
asked by Hannah on May 25, 2012
4. ## english
can someone help me with this question?At several points in Sherman Alexie's "Indian Education" essay, Alexie uses comparison and contrast. Locate at least two examples and explain what each contributes to the essay? I just don't see a point of comparison
asked by kayla on February 7, 2009
Which of the following selections contains a run-on? If none of the selections contains a run-on, select "Correct." (Points: 5) Everyone praised Matthew for his will power. He quit smoking five years ago, he still craves a cigarette from time to time.
asked by Anonymous on May 27, 2010
6. ## statistics
A business uses a 7-point scale about satisfaction with the services it provides to clients. The ratings are normally distributed with a mean of 4.8 and a standard deviation of .5. What percentage of clients rate their satisfaction 1) Above 5? 2) Above 6?
asked by Sarah on March 24, 2011
7. ## Precalc
Are exponential equations the same as exponential functions? I am writing a summary of this year's Pre-Calculus lessons and one of the topics on the outline is "Exponential Equations", but on the lesson Power Points on my teacher's website there is only
asked by Lie-ma Been on June 15, 2014
8. ## math
Laura is driving to Seattle. Suppose that the remaining distance to drive (in miles) is a linear function of her driving time (in minutes). When graphed, the function gives a line with a slope of -0.75. Laura has 51 miles remaining after 33 minutes of
asked by kia on January 10, 2011
9. ## Math
Jina is driving to Boston. Suppose that the remaining distance to drive (in miles) is a linear function of her driving time (in minutes). When graphed, the function gives a line with a slope of −0.95 Jina has 44 miles remaining after 34 minutes of
asked by Christine N. on April 26, 2016
10. ## chem
PLEEEZE help name the type of reaction K + MgBr -> Not enough information is provided. And not magnesium bromide is MgBr2. Is this a solution phase or exactly how is the K metal added to the MgBr2? As DrBob has indicated, not all of the information has
asked by key on February 19, 2007
11. ## PHYSICS
Use dimensional analysis to determine the pressure, P, in the pipe to within a dimen- sionless multiplicative constant of order 1.(Hint: P may depend on another physical variable besides A and R.) Using your result from part (a), reexpress your answer in a
asked by Sasha on September 21, 2014
12. ## CHEMISTRY
THIS IS THE 3RD TIME POSTING THIS. PLEASE HELP ME ANSWER THIS. THE HOMEWORK IS DUE TONIGHT. Write formulas for the compounds that form from Sr and each of the following polyatomic ion: NO−3, SO2−4, PO3−4. THE -3, -4, -4 ARE THE CHARGES, CANT TYPE
asked by MELISSA on June 24, 2016
13. ## physics
when inflated, a rubber lifeboat takes the form of rectangular box of dimensions 2mx1.5x40cm. with out any load, it floats. how many 60kg people can it carry before water flows into it? assume that the density of seawater is 1025g/cubic meter.
asked by pretty on March 7, 2013
14. ## us history
I'm doing a mock trial defending Andrew Jackson being impeached. He is being indicted for violating states' rights in his dealings with South Carolina in the nullification crisis. I'm finding it extremely hard to find any info to form reasons to defend
asked by Michelle on November 6, 2010
15. ## science
I am having trouble with Restate the problem in terms of the following: Manipulated variable Constant (control) variables Principles of experimental design State in correct form the hypothesis you intend to demonstrate. I have conducted the experiment but
asked by pookie on December 16, 2008
16. ## Science
Hydrochloric acid and sodium hydroxide are mixed together to form salt and water. During the reaction, the temperature of the solution increases. Which of these best explains why this is considered to be a chemical change? No energy was destroyed. Heat was
asked by Anonymous on August 2, 2011
17. ## Math
data regarding the value of a particular color copier is represented in the graph find the rate of chance of the value with respect in time in dollars per year. the rate of change of the value with respect to time is dollars per year
asked by Anonymous on September 29, 2010
18. ## math
draw a square on the graph paper whose each side of the length 5 centimetre and then make partition of the square into 25 small squares as soon in figure 1.1 each square is small square have its height and length 1 centimetre
asked by rohit on July 23, 2015
1. Graph y =sec(1/2O-2pi)– 3. the O after the 2 has a slash through it 2. Write an equation for tangent given the period, phase shift, and vertical shift. period = 1/3pi , phase shift = –1/4pi, vertical shift = –5
asked by heather on April 24, 2012
20. ## math, help
What are two solutions for 3x-y=0 am i correct on this inorder to graph (0,0) and (2,6) Because the line goes through (0,0) so how do i get two solutions from that line? You can get two solutions from that line because the first solution (0,0) is from the
asked by jasmine20 on February 10, 2007
21. ## Calculus
the area of the first quadrant region bounded by the y-axis, the line y=4-x and the graph of y=x-cosx is approximately: a) 4.50 square units, b) 4.54 square units, c) 4.56 square units, d) 4.58 square units, e) 5.00 square units
asked by Miranda on January 9, 2013
22. ## Algebra II
Determine if the following equation is linear. -7^2+y^2-y^2=3x+7 Linear and Standard Form ______________ or Non Linear
asked by Shelvia on March 5, 2013
23. ## physical science
How much heat is necessary to vaporize 100 g of water at 100 °C to form steam at 100 °C?
asked by Anonymous on March 16, 2010
24. ## Math
Can each set line segments form a triangle? Why or why not? -- AB = 1/2 mile -- BC = 1/3 mile -- AC = 1/4 mile Please help. Thank you
asked by Just the curious one on January 11, 2018
25. ## CONJUGUATE SURD
ROOT 72-3 /ROOT 3+. LEAVING YOUR ANSWER IN THE FORM OF A+B ROOT C ,WHERE A, B, C ARE RATIONAL NUMBERS.
asked by JOSHUA BAME on November 26, 2010
26. ## physical science
How much heat is necessary to vaporize 100 g of water at 100 °C to form steam at 100 °C?
asked by Anonymous on March 16, 2010
27. ## pyhsical science
How much heat is necessary to vaporize 100 g of water at 100 °C to form steam at 100 °C?
asked by Anonymous on March 16, 2010
28. ## Algebra
2pi(x^2 + 4x + 4) + 2pi(x^2 + 7x + 10) How do I write this as a polynomial in standard form (by factoring out 2pi)?
asked by Anonymous on February 27, 2014
29. ## math
What is the square root of 80 in simplified radical form? sqrt 80 = sqrt (16*5) = 4 sqrt 5
asked by jon on July 4, 2007
30. ## math
teacher has 27 students. she asks the students to form as many groups of 4 as possible. How many students will not be in a group?
asked by Anonymous on December 15, 2014
31. ## Chemistry
Mg + HCl -------> MgCl2 + H2 What volume of HCl is required from 27 % HCl to form 12.1 gram H2? ( D = 1.41 g/ml)
asked by Majid on September 2, 2016
evaluate the integral integral of 3 to 2 x/(x^2-2)^2 dx u=x^2-2 du=2x dx 1/2 du = x dx integral of 1/u^2 du -1/(x^2-2) Then I plug in 3 and 2 and subtract them form each other -1/(3^2-2) - (-1/(2^2-2) Is this correct?
asked by Hannah on April 30, 2011
Teacher has 27 students. She asks the students to form as many groups of 4 as possible. How many students will not be in a group?
asked by Anonymous on December 15, 2014
34. ## math
write the following square root in the form a root b . where a and b are integers and b has the least value possiable. 3 root 7?
asked by Emma on November 28, 2014
35. ## Algebra
What is the vertex form of the equation? y = -x^2+12x-4 My work: -(x^2 + 12x)-4 -(x^2 + 12x + 36)-4 + 36 Y= -(x+36)^2 + 32 Is this correct? I am not sure if I did this right or not? Thank you!
asked by Mitch n' Joey on October 25, 2017
36. ## Calculus!!
Consider the differential equation given by dy/dx = xy/2. A. Let y=f(x) be the particular solution to the given differential equation with the initial condition. Based on the slope field, how does the value of f(0.2) compare to f(0)? Justify your answer.
asked by Anonymous on April 23, 2016
37. ## Introduction to ICD Classification and Reimburseme
Electronic medical records have been becoming increasingly common in the health care industry. As a result, the issues that surround the confidentiality of a patient’s medical record have come into question. Identify and discuss the major law that
asked by Tamara on November 8, 2010
38. ## English Speech
Derrick's social studies teacher assigned Derrick to present a report about the life of the nomadic peoples who live in Mongolia. Derrick decided that he would focus his report on their traditional homes, called gers. Which visual aids are most likely to
asked by Kes on November 4, 2015
39. ## algebra
Could you please help me answer this? Fund Raising. A charity organization wishes to raise at least $12,000 from a movie premiere to be held at a theatre with 800 seats. The ticket prices are to be$20 and $15, with at least 500 tickets to be sold at$20.
asked by Sarah Mae B. Oquindo on February 12, 2010
40. ## social studies essay
Need help w/7 pg paper debating the pros and cons of year round schools. I've found the basic reasons for each side, but I find it's too difficult to write about both sides in one paper. Can you help me get started and give me some kind of outline to
asked by Forrest on April 15, 2008
41. ## Statistics
I have provided information from #3-5 in order to answer #6.. 3. Give the mean for the mean column of the Worksheet. Is this estimate centered about the parameter of interest (the parameter of interest is the answer for the mean in question 2)? The Mean
asked by Hope on October 7, 2011
42. ## social studies
what was the main belief of Englishment thinkers? A. that government power should be limited B.that a republic was the best form of government C.that the use of reason was vital to improving society D. that a government should be storng enough to carry out
asked by em:) on September 7, 2016
43. ## math,help
can someone help me plz...last problem simplify (7 + radical (5))(7- radical (5)) for this one i have no idea (a+b)*(a-b)= a^2 - b^2 memorize that, it is the factors of the difference of squares. In this case, a=7, b= sqrt5 so, it simplifies to 49-5=44 So
asked by jasmine20 on March 29, 2007
44. ## calculus
Please help with this. I submitted it below but no one responded. I need the first derivative of f(x)=4(x+ then the square root to include (x(8-x)), then close bracket.And then the second derivative of this to show by the second derivative test that it is
asked by Frank on March 1, 2011
45. ## chemistry
which of the structures are impossible ,give the numbers of bonds that various atoms can form? (a)CH3CH3 CH3 (b)CH3CH=CH2CH2CH3 (c)CH3NHCH3 (d)CH3CCl=CCH2CH3 (e)(CH3)3 CHCH(CH3)2 (f)CH3CHO
asked by anie on December 4, 2012
46. ## physics-optics
Two speakers are separated by a distance of 3.4 m. A point P is placed at 5.7 m from one of the speakers so that they form a right triangle. If the speed of sound in this situation is 340 m/s and the speakers are in phase, what is the lowest frequency for
asked by eliz on April 2, 2017
47. ## chem
For the reaction shown, calculate how many grams of oxygen form when each quantity of reactant completely reacts. 2HgO(s)¨2Hg(l)+O 2 (g) 2\;{\rm{HgO}}\left( s \right)\; \rightarrow \;2\;{\rm{Hg}}\left( l \right) + {\rm{O}}_2 \left(1.40 kgHgO g \right)
asked by Masey on November 20, 2014
48. ## Chemistry
Elemental sulfur occurs as octatomic molecules, S8. What mass of fluorine gas is needed for complete reaction with 24.1 g sulfur to form sulfur hexafluoride? I don't know where to even start with this question
asked by Ashley on March 8, 2016
49. ## physic
Three forces F1 = (80.90i − 54.63j) N, F2 = (23.50i − 80.52j) N, and F3 = (−104.4i + 361.9j) N are exerted on a particle. The particle's mass is 23.11 kg. Find the particle's acceleration. (Express your answer in vector form.) a = m/s2
asked by joy on February 6, 2018
50. ## Math
My original question: How many gallons of paint are necessary to paint the walls of a 15' by 12' room with an 8' ceiling if one gallon of paint covers 80 square feet? Your response is quite clear to me. Please write it in simple form. Thanks.
asked by Garnett on June 13, 2011
Sales tax in Pennsylvania is 6%. Create an equation for the total price (cost plus tax) of a purchase in PA in terms of its cost. How much would a person pay for a car whose cost is $32,000? PAY = COST + 0.06 * COST P=C(1+1.06) P=1.06C P= (32,000)= asked by PEG on February 26, 2011 52. ## algebra Sales tax in Pennsylvania is 6%. Create an equation for the total price (cost plus tax) of a purchase in PA in terms of its cost. How much would a person pay for a car whose cost is$32,000? PAY = COST + 0.06 * COST P=C(1+1.06) P=1.06C P= (32,000)=
asked by PEG on February 27, 2011
53. ## Math
I am tring to finish my 10th grade. I got hurt last Jan. 2007. And was force to quit public school. In short of every. I started online schooling and I have to write an essay for a final grade in Math compare three college to each other and make a graph
asked by Cody on September 13, 2007
54. ## phi 103
For John Dewey, open-minded inquiry is: (Points : 1) The virtue that prevents habit from making us unwilling to hear other ideas Something only a child can do For people who are weak in their beliefs Reinforcing our own beliefs by talking with people who
asked by sam on February 8, 2015
55. ## Social Studies
How can personal finance decisions affect the economy? (3 points) a. Saving puts less of your money into the economy*** b. Spending your money doesn't put money into the economy. c. investing your money can aid businesses and services *** d. Widespread
asked by Anonymous on March 29, 2018
56. ## history
13. The Battle of Fort Sumter did which of the following? (5 points) gave the Union control over the Mississippi River demonstrated the superiority of the Union's military leaders gave the Confederacy possession of an important military base stalled the
asked by Amanda Fire on May 28, 2015
57. ## Calculus
Please check, if there is something wrong please explain what I did wrong. Thank you! Calculate the d^2y/dx^2. y= e^-x + e^x y' = e^x - e^-x y'' = e^x + e^-x Find the x-coordinace of all critical points of the given function. determine whether each
58. ## Language arts
In the third wish the king of the forest claims that he has yet to hear of the human being who made any good use of his three wishes in a paragraph consider whether mr peters proves the king wrong do mr peters wishes bring him happiness does he put his
asked by Anonymous on September 12, 2017
59. ## physics
A metal rod is moving in uniform magnetic field of 2T with a velocity perpendicular to the direction of the field as shown on the diagram. If the speed of the rod is 5m/s and the distance between points A and B is 0.1 meters what is the potential
asked by Maria on March 31, 2018
60. ## physics
A proton initially moves left to right long the x‑axis at a speed of 2 ´ 103 m/s. It moves into an electric field, which points in the negative x direction, and travels a distance of 0.2 m before coming to rest. What acceleration magnitude does the
asked by jolanta on July 25, 2010
61. ## Health, Fitness, and Nutrition
The item below has been reviewed and is scheduled to be updated. All students will receive full credit for any response to the following. Please select two answer options to receive full credit for this question. (2 points) which nutrients provide energy
asked by Victoria on May 22, 2017
62. ## PLIZ VERY URGENT
Use implicit differentiation to show that a function defined implicitly by sin x + cos y = 2y has a critical point whenever cos x = 0. Then use the first derivative test to classify those critical numbers that lies in the interval (−2, 2) as relative
asked by Anonymous on June 5, 2013
63. ## Calculus II: Shell Method for finding volumes
Question: What is the volume of the revolution bounded by the curves of y=4-x^2 , y=x, and x=0 and is revolved about the vertical axis. First, I had found the points of intersection to get the limits and I got -2.5616 and 1.5616. And then I plug it in the
asked by Luna on April 9, 2017
64. ## statistics
Suppose a random sample of 25 students is selected from a community college where the scores in the final exam (out of 125 points) are normally distributed, with mean equal to 112 and standard deviation equal to 12. Find the probability that the sample
asked by Monique on October 31, 2011
65. ## LAnguage Arts
Think about the work you completed in your reading character role. Determine the ideas that would be most worthy to share in a literary discussion about THE GIVER Provide an explanation for your choices. How did the role you selected and the work you
asked by aye on May 15, 2017
66. ## Social Studies
What was the main purpose od President Wilson's Fourteen Points? A. To assist the leaders of Europe to gain additional territory from Germany B. To divide Germany into several small parts so it would not be a treat C. To gain reparations from Germany to
asked by EmberShy on January 20, 2017
Two speakers are driven by a common oscillator at 870 Hz and face each other at a distance of 1.20 m. Locate the points along a line joining the two speakers where relative minima of pressure amplitude would be expected. (Use v = 343 m/s. Choose one
asked by julie on January 31, 2011
68. ## English
I am doing a debate with my class on dress codes, and I have to write a speech. One of my points is that uniforms remove gang violence and also cliques based on clothes. Could you give me the links to some articles about gang violence related to dress or
asked by Cassie on June 4, 2009
69. ## calculus
y=5/x-3 find the zeros,relative min and max.the max and min find the intervals on which yis positive or negative. on which y is decreasing.intervals y concave up or down.points of inflections
asked by donna on December 19, 2008
70. ## Math
If you where to plot the following points for two dimensional X and Y axis; POINT 1 (1, 0) POINT 2 (1, -1) POINT 3 (1, -2) POINT 4 (2, 2) then draw lines to connect from point 1 to point 2 to point 3 to point 4 QUESTION: the lines that you would have drawn
asked by Jen on September 30, 2008
71. ## exam study math guide CRUICAL
The average of Anne's, Sara's, and Julie's test score is 72. Julie scored 100 and Anne scored 10 points higher than Sara.what were annes and saras scores?
asked by riley on December 11, 2010
72. ## geometry
show that the sum of the squares of the lengths of the medians of triangle equals three-fourths the sum of the squares of the lengths of the sides.(hint:place the triangle so that its vertices are at points(-a,0),(b0)and (0,c))
asked by muneer on May 20, 2011
73. ## college physics
An object acted on by three forces moves with constant velocity. One force acting on the object is in the positive x direction and has a magnitude of 6.4N ; a second force has a magnitude of 5.0N and points in the negative y direction.
asked by taylor on September 29, 2013
. Pamela Mello is paid on an incremental commission schedule. She is paid 2.6% on the first $60,000 and 3.4% on any sales over$60,000. If her weekly sales volume was $89,400, what was her total commission? (Points : 3) asked by douny on June 22, 2013 75. ## physics A point charge of -4.00 is at the origin, and a second point charge of 6.00 is on the axis at = 0.850 . Find the magnitude and direction of the electric field at each of the following points on the axis. a) x=25.0cm b) x=1.10m c) x=-15.0cm asked by Zac on February 10, 2013 76. ## US History I don't understand this. I have a graph in my American history and it is labeled on the side "1995 Dollars in Billions" In 1965, it has$264 and in between 1965 and 1970, it is at $352 and then in 1970, it goes back down to about$264 again. Then the
asked by anthony on November 17, 2010
77. ## Statistics
1. Which of the following statements are correct? a. A normal distribution is any distribution that is not unusual. b. The graph of a normal distribution is bell-shaped. c. If a population has a normal distribution, the mean and the median are not equal.
asked by Andrew on March 11, 2011
78. ## college
1. Which of the following statements are correct? a. A normal distribution is any distribution that is not unusual. b. The graph of a normal distribution is bell-shaped. c. If a population has a normal distribution, the mean and the median are not equal.
asked by Jay on August 1, 2010
79. ## English
1. He is rather an old man. 1-2. He is a rather old man. (Are both OK and grammatical?) 2. She is quite a good pianist. 2-2. She is a quite good pianist. (Are both OK and grammatical?) 3. I feel better than last night. (What is the postive degree of
asked by John on October 17, 2008
80. ## Calculus help
A car travels along a straight road for 30 seconds starting at time t = 0. Its acceleration in ft/sec2 is given by the linear graph below for the time interval [0, 30]. At t = 0, the velocity of the car is 0 and its position is 10. What is the total
asked by Rich boi on January 26, 2018
81. ## Micoreconomics
As a general rule, profit-maximizing producers in a competitive maket produce output at a point where: A) Marginal cost is increasing B) Marginal cost is decreasing C) marginal revenue is increasing D) Price is less then marginal revenue I picked C? The
asked by G on September 1, 2008
82. ## math
My values are: 0.075 0.025 0.1 0.075 0.1 0.125 0.05 0.125 0.025 0.15 0.125 0.025 What would be an appropriate scale to go by on a bar graph.@bobpursely suggested 40:1 what does this mean???
asked by Rose on April 13, 2016
83. ## math
The graph of a function is horizontally compressed by a factor of 5 and vertically compressed by a factor of 2 Find an equation for this compressed function in terms of the function f(x)
asked by sam on February 4, 2013
84. ## English
can u guys help me add some adverbs (6 or 10) to this story that i summarized thanks in advance!! (^_^): Black, cloudy night-time hung over the backdrop of the house. The grey, ramshackle walls of the house looked worn and forbidding, and the dilapidated
asked by brett on November 25, 2015
85. ## physics
In the design of a supermarket, there are to be several ramps connecting different parts of the store. Customers will have to push grocery carts up the ramps. A grocery cart has a mass of 30kg. The coefficient of friction is .10. Assume that the shoppers
asked by tanya on September 11, 2012
86. ## physics
A 12 kg block is released from rest on a 30 degree fricitonless incline. Below the block is a spring that can be compressed 2 cm by a force of 270 N. The block momentarily srops when it compresses the spring by 5.5 cm. a) How far does the block move down
asked by Jamie on January 5, 2007
87. ## chemistry
Pentane (C5H12) and hexane (C6H14) form an ideal solution. At 25oC the vapor pressures of pentane and hexane are 511 and 150 torr, respectively. A solution is prepared by mixing 25 mL pentane (density, 0.63 g/mL) with 45 mL hexane (density, 0.66 g/mL).
asked by savanna on October 5, 2016
88. ## Chemistry-Bonding
Classify the following bonds as ionic, covalent, or neither (O, atomic number 8; F, atomic number 9; Na, atomic number 11; Cl, atomic number 17; U, atomic number 92). a.) O with F _________ b.) Ca with Cl __________ c.) Na with Na _________ d.) U with Cl
asked by Mary on November 13, 2009
89. ## English
Which of the following is a common error in composing a thesis statement? A. You offer an original perspective on a familiar theme. B. Your thesis statement is specific as opposed to general. C. Your thesis statement contains two or more central points. D.
asked by Jesica on March 23, 2015
90. ## calculating statistical data
last month, the release of information specialist received 232 requests for information. he was able to answer 176 within the specified time frame of five working days. what is the rate of compliance in the answering requests within the specified time
asked by marcus on July 23, 2014
91. ## science
a framed picture of weight 15N is to be hung on a wall using a price of string . the end of the string are tied to two points ,0.60m apart on the same horizontal level ,on the back of the picture .Find the tension in the string if the string is (a) 1.0 m
asked by soumya on September 19, 2017
92. ## science
a sphere of mass 500.0g is released from point A 5m above the ground and slides down to point B 3.2m through point C 2m in frictionless wire track. (1)determine the particle's speed at points B and C. (2)determine the net work done by the force of graviy
asked by trevor on January 16, 2013
Which of the following is a common error in composing a thesis statement? A. Your thesis statement is specific as opposed to general. B. You offer an original perspective on a familiar theme. C. You focus your thesis statement after you begin writing. D.
asked by Eva on March 16, 2015
94. ## Social studies
how do scholars describe islam's golden age? A. As a dangerous period in world history B. As a period when many gold objects were made C. As a brilliant period in world history D. As a period when arts suffered What I'm finding is that the answer points to
asked by Anonymous on February 19, 2016
95. ## science
Two train cars are connected to a locomotive as shown. Wach train has a force of kinetic friction equal to 50,000N. The locomotive pulls the two freight cars at a constant speed of 4.0m/s. Find the force of tension at each of the coupling points A & B.
asked by Jake on November 20, 2014
96. ## physics
Consider a parallel-plate capacitor with charge density 7.5 10-7 C/m2 on the two plates and an electric field that points in the +z direction. What magnetic field is necessary to provide a velocity selector for 58 keV deuterons that move in the +y
asked by Sandhya on February 27, 2010
97. ## PAKISTAN
Q23. Object is thrown vertically upwards and has a speed of 18 m/s when it reaches one-fourth of its maximum height h above its launch point. Label the relevant points and answer the following; a) Determine the maximum height h. b) What is the initial
asked by ABDUL AAZIZ on September 28, 2016
98. ## phy
A boat can travel 2.60 m/s in still water.If the boat points its prow directly across a stream whose current is 1.10 m/s, what is the velocity (magnitude and direction) of the boat relative to the shore? What will be the position of the boat, relative to
asked by ami on September 5, 2010
99. ## algebra
Darla is building a new deask. to make sure she had made a square corner, she measures 4ft. from the corner along one edge and 6ft from the corner long the othe edge . how long should the diaonal be between those two points if the corner is a right angle?
asked by lisa on December 1, 2011
100. ## english
The night has a thousand eyes, And the day but one; Yet the light of the bright world dies With the dying sun. The mind has a thousand eyes, 5 And the heart but one: Yet the light of a whole life dies When love is done. The Rhyme Scheme of the first 4
asked by 2phoneeeeee on December 1, 2014 | 2019-07-18 12:09:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39555248618125916, "perplexity": 1981.388049825137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525627.38/warc/CC-MAIN-20190718104512-20190718130512-00497.warc.gz"} |
https://citizendium.org/wiki/Adder_(electronics) |
An adder is a digital circuit designed to perform integer addition in the Arithmetic Logic Unit on board a computer. These circuits are fundamental to the operation of a computer and have an analog in traditional pencil-and-paper addition.
A non-negative integer can be represented as the sum of a series:
$\sum_{k=0}^{\infty} n_k x^k$
• Where each digit $n_k$ is an integer from zero to (base - 1)
• Where x is an integer equal to the base value.
$123 = (3)(10)^{0} + (2)(10)^{1} + (1)(10)^{2} + (0)(10)^{3} + \dots$
An adder performs a binary operation (two operands) in which the digit $n_k$ of one power in integer A is added to the digit of the same power in integer B. This produces two outputs, a sum digit and a carry. Whenever the digit sum reaches the base, the digit written down is (sum - base) and a carry of 1 is passed to the next higher power, where it is added to that position's sum, and so on. A circuit that adds two digits alone is known as a half adder; a full adder also accepts the carry coming in from the previous position. Chain a number of full adders together, one per digit position, and a complete multi-digit adder emerges. | 2022-07-02 23:25:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44664594531059265, "perplexity": 778.0921188937713}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104205534.63/warc/CC-MAIN-20220702222819-20220703012819-00366.warc.gz"}
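To make the digit-by-digit description above concrete, here is a small illustrative sketch in Python for the binary case (base 2). The function names and the 8-bit width are choices made for this example, not anything specified by the article.

```python
# Half adder: combines two bits into a sum bit and a carry bit.
def half_adder(a, b):
    return a ^ b, a & b

# Full adder: also accepts the carry coming in from the previous position.
def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

# Ripple-carry adder: chain one full adder per bit position.
def ripple_carry_add(x, y, width=8):
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(ripple_carry_add(123, 45))   # 168, i.e. 123 + 45
```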
http://ndl.iitkgp.ac.in/document/RnV0dHBBOEk5bkozcUo2NHd1Q3RoYUU5VmlXODNJQXYydkRVeDNIbTVRND0 | ### The impact of thoracic load carriage up to 45 kg on the cardiopulmonary response to exercise
Access Restriction
Subscribed
Author Phillips, Devin B. ♦ Ehnes, Cameron M. ♦ Stickland, Michael K. ♦ Petersen, Stewart R. Source SpringerLink Content type Text Publisher Springer Berlin Heidelberg File Format PDF Copyright Year ©2016 Language English
Subject Domain (in DDC) Technology ♦ Medicine & health Subject Keyword Thoracic load carriage ♦ Oxygen demand ♦ Ventilation ♦ Breathing pattern ♦ Occupational physiology ♦ Performance ♦ Human Physiology ♦ Occupational Medicine/Industrial Medicine ♦ Sports Medicine Abstract The purposes of this experiment were to, first, document the effect of 45-kg thoracic loading on peak exercise responses and, second, the effects of systematic increases in thoracic load on physiological responses to submaximal treadmill walking at a standardized speed and grade.On separate days, 19 males (age 27 ± 5 years, height 180.0 ± 7.4 cm, mass 86.9 ± 15.1 kg) completed randomly ordered graded exercise tests to exhaustion in loaded (45 kg) and unloaded conditions. On a third day, each subject completed four randomly ordered, 10-min bouts of treadmill walking at 1.34 m s−1 and 4 % grade in the following conditions: unloaded, and with backpacks weighted to 15, 30, and 45 kg.With 45-kg thoracic loading, absolute oxygen consumption ( $\dot{V}{\text{O}}_{2}$ ), minute ventilation, power output, and test duration were significantly decreased at peak exercise. End-inspiratory lung volume and tidal volume were significantly reduced with no changes in end-expiratory lung volume, breathing frequency, and the respiratory exchange ratio. Peak end-tidal carbon dioxide and the ratio of alveolar ventilation to carbon dioxide production were similar between conditions. The reductions in peak physiological responses were greater than expected based on previous research with lighter loads. During submaximal treadmill exercise, $\dot{V}{\text{O}}_{2}$ increased (P < 0.05) by 11.0 (unloaded to 15 kg), 14.5 (15–30 kg), and 18.0 % (30–45 kg) showing that the increase in exercise $\dot{V}{\text{O}}_{2}$ was not proportional to load mass.These results provide further insight into the specificity of physiological responses to different types of load carriage. ISSN 14396319 Age Range 18 to 22 years ♦ above 22 year Educational Use Research Education Level UG and PG Learning Resource Type Article Publisher Date 2016-07-09 Publisher Place Berlin/Heidelberg e-ISSN 14396327 Journal European Journal of Applied Physiology Volume Number 116 Issue Number 9 Page Count 10 Starting Page 1725 Ending Page 1734 | 2020-09-28 15:45:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1896306872367859, "perplexity": 12815.450164505664}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401601278.97/warc/CC-MAIN-20200928135709-20200928165709-00118.warc.gz"} |
http://www.khanacademy.org/math/applied-math/cryptography/modarithmetic/a/modular-inverses |
## What is an inverse?
Recall that a number multiplied by its inverse equals 1. From basic arithmetic we know that:
• The inverse of a number A is 1/A, since A * 1/A = 1
### e.g. the inverse of 5 is 1/5
• All real numbers other than 0 have an inverse
• Multiplying a number by the inverse of A is equivalent to dividing by A
## What is a modular inverse?
In modular arithmetic we do not have a division operation. However, we do have modular inverses.
• The modular inverse of A (mod C) is A^-1
• (A * A^-1) ≡ 1 (mod C), or equivalently (A * A^-1) mod C = 1
• Only the numbers coprime to C (numbers that share no prime factors with C) have a modular inverse (mod C)
## How to find a modular inverse
A naive method of finding a modular inverse for A (mod C) is:
step 1. Calculate A * B mod C for B values 0 through C-1
step 2. The modular inverse of A mod C is the B value that makes A * B mod C = 1
Note that the term B mod C can only have an integer value 0 through C-1, so testing larger values for B is redundant.
## Example: A=3 C=7
### Step 1. Calculate A * B mod C for B values 0 through C-1
3 * 0 ≡ 0 (mod 7)
3 * 1 ≡ 3 (mod 7)
3 * 2 ≡ 6 (mod 7)
3 * 3 ≡ 9 (mod 7) ≡ 2 (mod 7)
3 * 4 ≡ 12 (mod 7) ≡ 5 (mod 7)
3 * 5 ≡ 15 (mod 7) ≡ 1 (mod 7) <------ FOUND INVERSE!
3 * 6 ≡ 18 (mod 7) ≡ 4 (mod 7)
### Step 2. The modular inverse of A mod C is the B value that makes A * B mod C = 1
5 is the modular inverse of 3 mod 7 since 5*3 mod 7 = 1
Simple! Let's do one more example where we don't find an inverse.
## Example: A=2 C=6
### Step 1. Calculate A * B mod C for B values 0 through C-1
2 * 0 ≡ 0 (mod 6)
2 * 1 ≡ 2 (mod 6)
2 * 2 ≡ 4 (mod 6)
2 * 3 ≡ 6 ≡ 0 (mod 6)
2 * 4 ≡ 8 ≡ 2 (mod 6)
2 * 5 ≡ 10 ≡ 4 (mod 6)
### Step 2. The modular inverse of A mod C is the B value that makes A * B mod C = 1
No value of B makes A * B mod C = 1. Therefore, A has no modular inverse (mod 6).
This is because 2 is not coprime to 6 (they share the prime factor 2).
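In code, this coprimality condition is a one-line check with the greatest common divisor (again an added illustration, using Python's standard library):

```python
from math import gcd

def has_mod_inverse(A, C):
    # A has an inverse mod C exactly when A and C share no prime factors,
    # i.e. when gcd(A, C) == 1
    return gcd(A, C) == 1

print(has_mod_inverse(3, 7))   # True
print(has_mod_inverse(2, 6))   # False, since gcd(2, 6) = 2
```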
## This method seems slow...
There is a much faster method for finding the inverse of A (mod C) that we will discuss in the next articles on the Extended Euclidean Algorithm. First, let's do some exercises!
https://www.rdocumentation.org/packages/pmclust/versions/0.2-0 | # pmclust v0.2-0
## Parallel Model-Based Clustering using Expectation-Gathering-Maximization Algorithm for Finite Mixture Gaussian Model
Aims to utilize model-based clustering (unsupervised) for high dimensional and ultra large data, especially in a distributed manner. The code employs 'pbdMPI' to perform a expectation-gathering-maximization algorithm for finite mixture Gaussian models. The unstructured dispersion matrices are assumed in the Gaussian models. The implementation is default in the single program multiple data programming model. The code can be executed through 'pbdMPI' and MPI' implementations such as 'OpenMPI' and 'MPICH'. See the High Performance Statistical Computing website <https://snoweye.github.io/hpsc/> for more information, documents and examples.
## Functions in pmclust
- One E-Step: Compute One E-step and Log Likelihood Based on Current Parameters
- generate.basic: Generate Examples for Testing
- Set of PARAM: A Set of Parameters in Model-Based Clustering
- Internal Functions: All Internal Functions
- One Step of EM algorithm: One EM Step for GBD
- One M-Step: Compute One M-Step Based on Current Posterior Probabilities
- assign.N.sample: Obtain a Set of Random Samples for X.spmd
- Independent logL: Independent Function for Log Likelihood
- pmclust-package: Parallel Model-Based Clustering
- Update Class of EM or Kmeans Results: Update CLASS.spmd Based on the Final Iteration
- Set of CONTROL: A Set of Controls in Model-Based Clustering
- mb.print: Print Results of Model-Based Clustering
- pmclust and pkmeans: Parallel Model-Based Clustering and Parallel K-means Algorithm
- print.object: Functions for Printing or Summarizing Objects According to Classes
- as functions: Convert between X.gbd (X.spmd) and X.dmat
- get.N.CLASS: Obtain Total Elements for Every Clusters
- Initialization: Initialization for EM-like Algorithms
- EM-like algorithms: EM-like Steps for GBD
- Read Me First: Read Me First Function
- Set Global Variables: Set Global Variables According to the global matrix X.gbd (X.spmd) or X.dmat
- generate.MixSim: Generate MixSim Examples for Testing
## Vignettes of pmclust
- pmclust-include/00-preamble.tex
- pmclust-include/01-acknowledgement.tex
- pmclust-include/01-introduction.tex
- pmclust-include/02-example.tex
- pmclust-include/03-algorithm.tex
- pmclust-include/04-discussion.tex
- pmclust-include/my_jss.cls
- pmclust-include/pmclust.bib
- build_pdf.sh
- pmclust-guide.Rnw
https://stacks.math.columbia.edu/tag/0E8M | Lemma 10.19.2. Let $A \to B$ be a local homomorphism of local rings. Assume
1. $B$ is finite as an $A$-module,
2. $\mathfrak m_ B$ is a finitely generated ideal,
3. $A \to B$ induces an isomorphism on residue fields, and
4. $\mathfrak m_ A/\mathfrak m_ A^2 \to \mathfrak m_ B/\mathfrak m_ B^2$ is surjective.
Then $A \to B$ is surjective.
Proof. To show that $A \to B$ is surjective, we view it as a map of $A$-modules and apply Lemma 10.19.1 (6). We conclude it suffices to show that $A/\mathfrak m_ A \to B/\mathfrak m_ AB$ is surjective. As $A/\mathfrak m_ A = B/\mathfrak m_ B$ it suffices to show that $\mathfrak m_ AB \to \mathfrak m_ B$ is surjective. View $\mathfrak m_ AB \to \mathfrak m_ B$ as a map of $B$-modules and apply Lemma 10.19.1 (6). We conclude it suffices to see that $\mathfrak m_ AB/\mathfrak m_ A\mathfrak m_ B \to \mathfrak m_ B/\mathfrak m_ B^2$ is surjective. This follows from assumption (4). $\square$
https://www.oreilly.com/content/good-fences-between-data-science-and-production-make-good-neighbors/ | Good fences (between data science and production) make good neighbors
What data scientists need to know about production—and what production should expect from their data scientists.
January 19, 2017
White picket fence. (source: PublicDomainPictures.net)
One of the most important goals of any data science team is the ability to create machine learning models, evaluate them offline, and get them safely to production. The faster this process can be performed, the more effective most teams will be. In most organizations, the team responsible for scoring a model and the team responsible for training a model are separate. Because of this, a clear separation of concerns is necessary for these two teams to operate at whatever speed suits them best. This post will cover how to make this work: implementing your ML algorithms in such a way that they can be tested, improved, and updated without causing problems downstream or requiring changes upstream in the data pipeline.
We can get clarity about the requirements for the data and production teams by breaking the data-driven application down into its constituent parts. In building and deploying a real-time data application, the goal of the data science team is to produce a function that reliably and in real-time ingests each data point and returns a prediction. For instance, if the business concern is modeling churn, we might ingest the data about a user and return a predicted probability of churn. The fact that we have to featurize that user and then send them through a random forest, for instance, is not the concern of the scoring team and should not be exposed to them.
The above illustration shows a perfect world for the scoring team. They have some data type A that they can analyze—the set of features that your software has observed. A can be a JSON message describing a user, or A can be a Protocol Buffer describing a transaction, or an Avro message describing an item. They then have a model that performs some task—churn prediction, chargeback probability, etc. They then get a result that they use to continue processing.
In order to achieve this goal, the data science team has to have tooling that does at least two things at score time:
1. Create a model that can ingest and return the expected native data type
2. Be able to supply an external representation of a model
In order to address the first issue, we have to realize that the act of featurization must be embedded in the model. Not only does that make the scoring team’s job easier, it also removes a potential source of error, namely feature functions that are different at train and score time. For instance, if the data science team works in R and takes data from a database to make a model, but the scoring team works in Java, then R feature functions will have to be reimplemented at score time in Java. The below training diagram shows what a supervised training architecture might look like using generic types.
While this looks complicated, it should seem pretty familiar to most data scientists after breaking it down. First, we start with training data of some type we’ll call A. This is historical data, such as users at specific times. As a side note, it is incredibly important to make sure that the historical data is time-bounded to avoid information leak. We then take those training examples and featurize them through feature functions we have constructed. These functions all convert a single data type A into a single data type T. Type T is frequently either a scalar value forming a CSV file or a sequence of values for input formats such as LibSVM or Vowpal Wabbit. In both cases, a single List[T] is natively understandable by a machine learning library as one row of featurized data. We will then have a separate list List[G] of observations of ground truth G that we have to join to the featurized training data List[List[T]]. The result is input data that is legible to our machine learning library, whose format per row is (G, List[T]).
Once we have a List[(G, List[T])], we can use any supervised learning framework to train a model. The output of this is a machine learning model, a function defined as List[T] => O. O is the native output type of the model used, which may or may not be the desired output type of the whole model—the output defined by your business needs. A good example of this would be a segmentation model where we desire to classify users as either highly likely to churn, somewhat likely to churn, or unlikely to churn. The output type desired by the calling code is an enum, but the model itself may output a float. We will then use a finalizer to convert that float into an enum through simple segmentation. In the case where the output type of the native model is the same as the type desired, the identity function may be used as the finalizer.
One important caveat to note is that the machine learning model List[T] => O is in the native format of the library used to learn it, such as a Vowpal Wabbit model or H2O. These dependencies are now needed by the whole model. Traditionally, this is a pain point for most machine learning systems in production, as it locks in a specific learning framework that is then difficult to change in the future. In our formulation, however, the types needed by the framework-specific model are not exposed to the calling code. Because of this, the framework-specific model can be swapped out at any time without changing the type contract guaranteed at score time. This is accomplished through the use of function composition. This is a huge win for both the data science team as well as the scoring team. It allows the data science team to be flexible and use the library that best solves each individual problem. It also allows easy version upgrades of specific libraries without fear of breaking models. It makes life easier for the scoring team, too, as they don’t have to fret over understanding any of the machine learning frameworks and can instead focus on scale and reliability.
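To make the composition idea concrete, here is a minimal, framework-agnostic sketch in Python (the project described below, Aloha, is a Scala DSL; the names here are purely illustrative and are not Aloha's API):

```python
from typing import Callable, List, TypeVar

A = TypeVar("A")   # native input type (e.g. a user record)
T = TypeVar("T")   # featurized value type
N = TypeVar("N")   # native output of the learning framework (e.g. a float)
O = TypeVar("O")   # output type the calling code expects

def compose_model(
    featurize: Callable[[A], List[T]],      # A -> List[T]
    native_model: Callable[[List[T]], N],   # framework-specific model
    finalize: Callable[[N], O],             # e.g. map a float score to a segment
) -> Callable[[A], O]:
    """Return a single A -> O function; the framework types never leak out."""
    def model(a: A) -> O:
        return finalize(native_model(featurize(a)))
    return model

# Toy usage: featurize a dict, score it with a stand-in "native" model, bucket the score.
churn_model = compose_model(
    featurize=lambda user: [user["age"] / 100.0, user["visits"] / 10.0],
    native_model=lambda feats: sum(feats),           # stand-in for a framework-specific model
    finalize=lambda score: "HIGH" if score > 0.5 else "LOW",
)
print(churn_model({"age": 35, "visits": 4}))         # "HIGH"
```

Because the calling code only ever sees the composed A -> O function, the stand-in native_model can be replaced by any framework-specific model without changing the type contract.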
This work is so important to me that I led a team that has open-sourced these ideas as a project called Aloha. Aloha is an implementation of many of these ideas within a Scala DSL. Aloha is supported by a community of production data scientists, led by Ryan Deak, the main author of the project. This project has received commercial support from both eHarmony and ZEFR, and is currently under active development. Aloha has streamlined the model deployment process and reduced production error rates in multiple deployment environments to date.
A nice benefit to controlling the featurization layer is that we can place a QA engine within the model itself. In the future, we plan to be able to add arbitrary QA tests to model inputs and take an action (such as an email) if such conditions are not satisfied. For instance, if we observe a feature is present in 90% of examples at train time and then through the use of a sliding window see that it is only present in 10% of examples at score time, then the model may perform very poorly through no fault of its own, but rather because of a data preparation issue. This information can be encoded in an Aloha model, and an action can be associated with it to trigger notification if the data drifts at score time.
Having a quick and safe path to production should be a top priority for all engineering teams, and data science is no exception. While I have seen many approaches to productionalizing data science, any of them that don’t put machine learned models directly from the data scientist’s code to production fall short of realizing their full potential.
Post topics: Data science
https://nanograv.org/glossary/p-pdot-diagram | # P-Pdot Diagram
The spin period vs. spin period derivative (how quickly the pulsar's spin rate is slowing due to loss of luminous energy) diagram shows the different classes of neutron stars. From it, we have understood different properties of the neutron stars, how they change over time, etc.
https://www.ademcetinkaya.com/2023/02/mgrb-affiliated-managers-group-inc-4750.html | Outlook: Affiliated Managers Group Inc. 4.750% Junior Subordinated Notes due 2060 is assigned short-term Ba1 & long-term Ba1 estimated rating.
Dominant Strategy : Sell
Time series to forecast n: 17 Feb 2023 for (n+6 month)
Methodology : Ensemble Learning (ML)
## Abstract
Affiliated Managers Group Inc. 4.750% Junior Subordinated Notes due 2060 prediction model is evaluated with Ensemble Learning (ML) and Wilcoxon Rank-Sum Test1,2,3,4 and it is concluded that the MGRB stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Sell
## Key Points
1. What is a prediction confidence?
2. Reaction Function
## MGRB Target Price Prediction Modeling Methodology
We consider Affiliated Managers Group Inc. 4.750% Junior Subordinated Notes due 2060 Decision Process with Ensemble Learning (ML) where A is the set of discrete actions of MGRB stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4
F(Wilcoxon Rank-Sum Test)5,6,7 = $\begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{1n}\\ & \vdots \\ p_{j1} & p_{j2} & \dots & p_{jn}\\ & \vdots \\ p_{k1} & p_{k2} & \dots & p_{kn}\\ & \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix}$ X R(Ensemble Learning (ML)) X S(n): → (n+6 month) $\sum_{i=1}^{n} s_i$
n:Time series to forecast
p:Price signals of MGRB stock
j:Nash equilibria (Neural Network)
k:Dominated move
a:Best response for target price
For further technical information on how our model works, we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
## MGRB Stock Forecast (Buy or Sell) for (n+6 month)
Sample Set: Neural Network
Stock/Index: MGRB Affiliated Managers Group Inc. 4.750% Junior Subordinated Notes due 2060
Time series to forecast n: 17 Feb 2023 for (n+6 month)
According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Sell
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
## IFRS Reconciliation Adjustments for Affiliated Managers Group Inc. 4.750% Junior Subordinated Notes due 2060
1. The business model may be to hold assets to collect contractual cash flows even if the entity sells financial assets when there is an increase in the assets' credit risk. To determine whether there has been an increase in the assets' credit risk, the entity considers reasonable and supportable information, including forward looking information. Irrespective of their frequency and value, sales due to an increase in the assets' credit risk are not inconsistent with a business model whose objective is to hold financial assets to collect contractual cash flows because the credit quality of financial assets is relevant to the entity's ability to collect contractual cash flows. Credit risk management activities that are aimed at minimising potential credit losses due to credit deterioration are integral to such a business model. Selling a financial asset because it no longer meets the credit criteria specified in the entity's documented investment policy is an example of a sale that has occurred due to an increase in credit risk. However, in the absence of such a policy, the entity may demonstrate in other ways that the sale occurred due to an increase in credit risk.
2. If the underlyings are not the same but are economically related, there can be situations in which the values of the hedging instrument and the hedged item move in the same direction, for example, because the price differential between the two related underlyings changes while the underlyings themselves do not move significantly. That is still consistent with an economic relationship between the hedging instrument and the hedged item if the values of the hedging instrument and the hedged item are still expected to typically move in the opposite direction when the underlyings move.
3. Rebalancing does not apply if the risk management objective for a hedging relationship has changed. Instead, hedge accounting for that hedging relationship shall be discontinued (despite that an entity might designate a new hedging relationship that involves the hedging instrument or hedged item of the previous hedging relationship as described in paragraph B6.5.28).
*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.
## Conclusions
Affiliated Managers Group Inc. 4.750% Junior Subordinated Notes due 2060 is assigned short-term Ba1 & long-term Ba1 estimated rating. Affiliated Managers Group Inc. 4.750% Junior Subordinated Notes due 2060 prediction model is evaluated with Ensemble Learning (ML) and Wilcoxon Rank-Sum Test1,2,3,4 and it is concluded that the MGRB stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Sell
### MGRB Affiliated Managers Group Inc. 4.750% Junior Subordinated Notes due 2060 Financial Analysis*
| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | C | B2 |
| Balance Sheet | Baa2 | Baa2 |
| Leverage Ratios | Baa2 | B2 |
| Cash Flow | Baa2 | Ba2 |
| Rates of Return and Profitability | Baa2 | B3 |
*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does neural network examine financial reports and understand financial state of the company?
### Prediction Confidence Score
Trust metric by Neural Network: 72 out of 100 with 834 signals.
## References
1. E. Collins. Using Markov decision processes to optimize a nonlinear functional of the final distribution, with manufacturing applications. In Stochastic Modelling in Innovative Manufacturing, pages 30–45. Springer, 1997
2. Athey S, Mobius MM, Pál J. 2017c. The impact of aggregators on internet news consumption. Unpublished manuscript, Grad. School Bus., Stanford Univ., Stanford, CA
3. N. Bäuerle and A. Mundt. Dynamic mean-risk optimization in a binomial model. Mathematical Methods of Operations Research, 70(2):219–239, 2009.
4. Jiang N, Li L. 2016. Doubly robust off-policy value evaluation for reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, pp. 652–61. La Jolla, CA: Int. Mach. Learn. Soc.
5. White H. 1992. Artificial Neural Networks: Approximation and Learning Theory. Oxford, UK: Blackwell
6. Artis, M. J. W. Zhang (1990), "BVAR forecasts for the G-7," International Journal of Forecasting, 6, 349–362.
7. L. Prashanth and M. Ghavamzadeh. Actor-critic algorithms for risk-sensitive MDPs. In Proceedings of Advances in Neural Information Processing Systems 26, pages 252–260, 2013.
Frequently Asked Questions
Q: What is the prediction methodology for MGRB stock?
A: MGRB stock prediction methodology: We evaluate the prediction models Ensemble Learning (ML) and Wilcoxon Rank-Sum Test
Q: Is MGRB stock a buy or sell?
A: The dominant strategy among neural network is to Sell MGRB Stock.
Q: Is Affiliated Managers Group Inc. 4.750% Junior Subordinated Notes due 2060 stock a good investment?
A: The consensus rating for Affiliated Managers Group Inc. 4.750% Junior Subordinated Notes due 2060 is Sell and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of MGRB stock?
A: The consensus rating for MGRB is Sell.
Q: What is the prediction period for MGRB stock?
A: The prediction period for MGRB is (n+6 month)
https://www.zbmath.org/?q=an%3A0953.60059 | # zbMATH — the first resource for mathematics
Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer’s type. (English) Zbl 0953.60059
This paper studies the backward stochastic differential equation (BSDE) of the form $y_t = y_T + \int_t^T g(s,y_s,z_s) ds + (A_T-A_t)-\int_t^T z_s dW_s,\quad t \in [0,T],$ where $$W$$ is a Brownian motion and $$g$$ is a non-anticipative Lipschitz-continuous function. As usual, a process $$y$$ is called a supersolution of the BSDE if it is of the above form for some adapted right-continuous, increasing process $$A$$ and some predictable, square-integrable process $$z$$. The main result of the paper is a theorem which asserts that, under suitable integrability conditions, the pointwise monotone limit of a sequence of supersolutions is again a supersolution of the BSDE. Moreover, it is shown that the corresponding integrands $$z$$ converge weakly in $$L^2$$ and strongly in each $$L^p$$ with $$p<2$$; the processes $$A$$ converge weakly in $$L^2$$. As an application of this result, the author proves a generalization of the classical Doob-Meyer decomposition to so-called $$g$$-supermartingales. These processes refer to a suitably defined nonlinear expectation operator in essentially the same way as usual supermartingales to usual expectations. As a second application, the author shows that there exists a minimal supersolution of the BSDE which is subject to fairly general state- and time-dependent constraints.
##### MSC:
60H99 Stochastic analysis
60H30 Applications of stochastic analysis (to PDEs, etc.)
60G48 Generalizations of martingales
http://tex.stackexchange.com/questions/85513/multiply-matrix-and-letter?answertab=active | # Multiply matrix and letter
I want to multiply a matrix with a letter, it looks like that:
\documentclass{report}
\usepackage{ngerman}
\usepackage{amsmath}
\begin{document}
$\left( \begin{array}{ccc} | & & | \\ f_1 & \dots & f_n \\ | & & | \end{array}\right) \mathcal{A} = \left( \begin{array}{ccc} | & & | \\ q_1 & \dots & q_n \\ | & & | \end{array} \right)$
\end{document}
Is there a chance, that the letter is not so small in comparison to the matrix, so that they have about the same size?
Via graphicx package, you can use {\raisebox{-1.5ex}{\scalebox{3}{$\mathcal{A}$}}} but the result would not look that nice I guess. – percusse Dec 4 '12 at 18:16
hm true, it looks strange. But thanks anyway. Maybe I will stick to my old variant. – Adam Dec 4 '12 at 18:20
It would maybe look better if you reduce the size of the matrices via smallmatrix variants. – percusse Dec 4 '12 at 18:22
thanks! that does look much better!! – Adam Dec 4 '12 at 18:26
As percusse points out, you can resize it using the package graphicx:
{\raisebox{-1.5ex}{\scalebox{3}{$\mathcal{A}$}}}
However, I would say that it looks better "small" than "resized". That would be very inconsistent and weird. I recommend you stick with the small $A$. However, there are a few possible improvements to your code:
\documentclass{report}
\usepackage[ngerman]{babel}
\usepackage{amsmath}
\begin{document}
$\begin{pmatrix} | & & | \\ f_1 & \cdots & f_n \\ | & & | \end{pmatrix} \mathcal{A} = \begin{pmatrix} | & & | \\ q_1 & \cdots & q_n \\ | & & | \end{pmatrix}$
\end{document}
• Do not use ngerman package, use babel with the appropriate option.
• The package amsmath offers the environment pmatrix for matrices in parentheses, as well as bmatrix for [...], Bmatrix for {...}, vmatrix for |...| and Vmatrix for ||...||.
• I'm not sure what the vertical bars denote (probably a vector written in a column?) but I'm sure I would not understand it as a reader. However, I don't know how to improve it, since the context is missing. If the entries are really column vectors and you defined them properly before, I think that the reader would understand it even without the bars.
http://mathoverflow.net/questions/60104/existence-of-nonnegative-solutions-to-an-underdetermined-system-of-linear-equatio/60797 | # Existence of nonnegative solutions to an underdetermined system of linear equations
Similar questions have been asked elsewhere, but I think this is sufficiently different to warrant a new post. I have a particular matrix $A$ and would like to know when the system $Ax = 0$ has at least one non-negative solution (other than $\vec{0}$). The problem is underdetermined: in most cases I expect the number of variables to be of the order $m^2$, where $m$ is the number of equations. Furthermore, each column of the matrix sums to $0$ and every equation has a mix of positive and negative coefficients. Is this a sufficient condition for the existence of a non-negative solution?
I have seen the algorithm of http://www.jstor.org/pss/1968384, which can be used to test whether a particular system of equations has a non-negative solution, but have not been able to use it to derive a proof for a general family of matrices.
Thanks.
This scenario is explicitly handled by Gordan's theorem, which states $$\text{either} \quad \exists x \in \mathbb{R}_+^m\setminus\{0\} \centerdot Ax = 0, \quad\text{or}\quad \exists y\in\mathbb{R}^n\centerdot A^\top y > 0,$$
where $\mathbb{R}_+$ denotes nonnegative reals. (Like Farkas's Lemma, this is a "Theorem of Alternatives"; furthermore, it can be proved from Farkas's lemma.)
A nice way to prove this is, as in Theorem 2.2.6 of Borwein & Lewis's text "Convex Analysis and Nonlinear Optimization", to consider the related optimization problem $$\inf_y \quad\underbrace{\ln\left(\sum_{i=1}^m \exp(y^\top A \textbf e_i)\right)}_{f(y)};$$ as stated in that theorem, $f(y)$ is unbounded below iff there exists $y$ so that $A^\top y > 0$. As such, this also gives an unconstrained optimization problem you can plug into your favorite solver to determine which of the two scenarios you are in. Alternatively, you can explicitly solve for either the primal variables $x$ or the dual variables $y$ by considering a similar max entropy problem (i.e. $\inf_y\sum_i \exp(y^\top A\textbf{e}_i)$, which approaches 0 iff the desired $y$ exists) or its dual (you can find this in the above book, as well as papers by the same authors).
Anyway, considering Gordan's theorem, your condition on the columns (which can be written $\textbf{1}^\top A = 0$) has no relationship to the question at hand. In one of your comments you mentioned wanting to generate these matrices. To pick positive examples, fix a satisfying $x$, and construct rows $b_i'$ by first getting some $b_i$ and setting $b_i' := b_i - (x^\top b_i)x / (x^\top x)$; to pick negative examples, by Gordan's theorem, choose some nonzero $y$, and then consider adding to $A$ a column $a_i$, including it if it satisfies $a_i^\top y > 0$.
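As a purely numerical companion to this answer (my own rough sketch, not part of the original post), one can hand the smooth function $f(y)$ above to a generic solver and see whether it appears unbounded below; driving $f$ far below zero is evidence for the second alternative. Here $A$ is taken as an $m\times n$ NumPy array whose columns play the role of the $A\textbf{e}_i$:

```python
import numpy as np
from scipy.optimize import minimize

def gordan_alternative(A, n_restarts=5, threshold=-50.0):
    """Heuristic check of Gordan's alternative for the columns of A.

    Minimizes f(y) = log(sum_i exp(y^T A e_i)).  If f can be pushed far below
    zero, some y with A^T y > 0 presumably exists (no nonzero x >= 0 with Ax = 0);
    otherwise a nonzero nonnegative solution of Ax = 0 is expected.
    """
    m, n = A.shape

    def f(y):
        z = y @ A                                      # vector of y^T A e_i
        zmax = z.max()
        return zmax + np.log(np.exp(z - zmax).sum())   # stable log-sum-exp

    rng = np.random.default_rng(0)
    best = min(minimize(f, rng.normal(size=m)).fun for _ in range(n_restarts))
    return "A^T y > 0 is solvable" if best < threshold else "nonzero x >= 0 with Ax = 0 likely exists"
```

This is only a heuristic (an unbounded problem may still make the solver stop early), so it complements rather than replaces the exact LP check discussed in another answer below.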
A homogeneous linear system does not have a nontrivial nonnegative solution if (and only if) some linear combination of the equations yields a nontrivial equation with nonnegative coefficients. Nothing in your assumptions prevents, for example, that the sum of the first two equations is $x_1+x_2+\dots+x_n=0$. Then obviously there is no nontrivial nonnegative solution.
Thanks. I'm aware of Farkas' condition for the existence of a non-negative solution. Unfortunately, applying Farkas to my problem leads me into a circular argument (to do with the origin of the equations). Are there any other methods to determine whether a non-negative solution exists? – bandini Mar 30 '11 at 20:13
You need to say more about your problem. From what you have said the matrix might have $m=4$ rows involving $m^2=16$ variables as follows: Each equation has 8 positive and 8 negative coefficients. The first two equations sum to give an equation with 16 positive coefficients (So that Farkas condition shows that every non-zero solution has both positive and negative entries) and the third and fourth equations are exactly the negatives of the first and second equations. One can of course do this with each row of the form $aaaaaaaabbbbbbbb$ but one could make small perturbations so that it was not so blatant.
Thanks. I see that I need to go back and extract more structure from my matrices if I am to be able to prove what I want. – bandini Apr 1 '11 at 9:16
Algorithmically, you can solve the linear programming problem:
$\max \sum_{i=1}^{n} x_{i}$
$Ax=0$
$x \geq 0$
If the maximum is strictly greater than zero, then there's a nonzero solution that satisfies $Ax=0$ and $x\geq 0$. If not, then $x=0$ is the only nonnegative solution.
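A hedged SciPy sketch of this check (an added illustration, not from the original answer): as stated, the LP is unbounded whenever a nonzero solution exists, so the sketch caps each variable at 1, which is harmless because any nonnegative solution can be rescaled.

```python
import numpy as np
from scipy.optimize import linprog

def has_nontrivial_nonneg_solution(A, tol=1e-9):
    m, n = A.shape
    res = linprog(
        c=-np.ones(n),               # maximize sum(x)  <=>  minimize -sum(x)
        A_eq=A, b_eq=np.zeros(m),    # Ax = 0
        bounds=[(0, 1)] * n,         # x >= 0, capped at 1 to keep the LP bounded
        method="highs",
    )
    return res.status == 0 and -res.fun > tol

# Here only the trivial nonnegative solution exists, so this prints False.
A = np.array([[1.0, 1.0, -1.0],
              [-1.0, -1.0, 2.0]])
print(has_nontrivial_nonneg_solution(A))
```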
Thanks. I'm able to solve this linear program for any particular instance. However, for my problem I want to prove that every matrix, generated in a certain way, is guaranteed to give rise to a system of linear equations with a non-negative solution. – bandini Mar 30 '11 at 21:12
https://www.semanticscholar.org/paper/A-direct-calibration-of-the-IRX%E2%80%93%CE%B2-relation-in-at-z-Koprowski-Coppin/552ccbac6a741a347f8ba00c1f7b23d44f823dd5 | # A direct calibration of the IRX–β relation in Lyman-break Galaxies at z = 3–5
@article{Koprowski2018ADC,
title={A direct calibration of the IRX–$\beta$ relation in Lyman-break Galaxies at z = 3–5},
author={Maciej Koprowski and K. E. K. Coppin and James E. Geach and Ross J. McLure and Omar Almaini and A. W. Blain and Malcolm N. Bremer and N Bourne and Sandra C. Chapman and C. J. Conselice and James S. Dunlop and Duncan Farrah and William G. Hartley and Alexander Karim and Kirsten K. Knudsen and Michał Jerzy Michałowski and Douglas Scott and Chris Simpson and D. J. B. Smith and P van der Werf},
journal={Monthly Notices of the Royal Astronomical Society},
year={2018}
}
• Published 2 January 2018
• Physics
• Monthly Notices of the Royal Astronomical Society
We use a sample of 4209 Lyman-break galaxies (LBGs) at z~ -3, 4, and 5 in the UKIRT Infrared Deep Sky Survey Ultra Deep Survey field to investigate the relationship between the observed slope of the stellar continuum emission in the ultraviolet, β , and the thermal dust emission, as quantified via the so-called ‘infrared excess’ (IRX≡LIR/LUV). Through a stacking analysis, we directly measure the 850-μm flux density of LBGs in our deep (0.9 mJy) James Clerk Maxwell Telescope SCUBA-2850-μm map as…
UV slope of z ∼ 3 bright (L > L*) Lyman-break galaxies in the COSMOS field
• Physics
Astronomy & Astrophysics
• 2019
Context. The analysis of the UV slope β of Lyman-break galaxies (LBG) at different luminosities and redshifts is fundamental for understanding their physical properties, and in particular, their dust
An ALMA survey of the SCUBA-2 cosmology legacy survey UKIDSS/UDS field: Dust attenuation in high-redshift Lyman-break galaxies
• Physics, Geology
• 2020
We analyse 870 $\mu$m Atacama Large Millimetre Array (ALMA) dust continuum detections of 41 canonically selected $z$ ≃ 3 Lyman-break galaxies (LBGs), as well as 209 ALMA-undetected LBGs, in
Rest-frame far-ultraviolet to far-infrared view of Lyman break galaxies at z = 3: Templates and dust attenuation
• Physics
Astronomy & Astrophysics
• 2019
Aims. This work explores, from a statistical point of view, the rest-frame far-ultraviolet (FUV) to far-infrared (FIR) emission of a population of Lyman-break galaxies (LBGs) at z ∼ 3 that cannot be
The ALMA Spectroscopic Survey Large Program: The Infrared Excess of z = 1.5–10 UV-selected Galaxies and the Implied High-redshift Star Formation History
• Physics
• 2020
We make use of sensitive (9.3 microJy/beam RMS) 1.2mm-continuum observations from the ASPECS ALMA large program of the Hubble Ultra Deep Field (HUDF) to probe dust-enshrouded star formation from 1362
A3COSMOS: the dust attenuation of star-forming galaxies at z = 2.5–4.0 from the COSMOS-ALMA archive
• Physics
• 2019
We present an analysis of the dust attenuation of star-forming galaxies at z = 2.5–4.0 through the relationship between the UV spectral slope (β), stellar mass (M*), and the infrared excess (IRX =
Dark-age reionization and galaxy formation simulation – XIX. Predictions of infrared excess and cosmic star formation rate density from UV observations
• Physics
Monthly Notices of the Royal Astronomical Society
• 2019
We present a new analysis of high-redshift UV observations using a semi-analytic galaxy formation model, and provide self-consistent predictions of the infrared excess (IRX)–β relations and cosmic
High-redshift JWST predictions from IllustrisTNG: II. Galaxy line and continuum spectral indices and dust attenuation curves
• Physics
• 2020
We present predictions for high redshift (z = 2−10) galaxy populations based on the IllustrisTNG simulation suite and a full Monte Carlo dust radiative transfer post-processing. Specifically, we
Big Three Dragons: A z = 7.15 Lyman-break galaxy detected in [O iii] 88 μm, [C ii] 158 μm, and dust continuum with ALMA
• Physics
Publications of the Astronomical Society of Japan
• 2019
We present new ALMA observations and physical properties of a Lyman break galaxy at z = 7.15. Our target, B14-65666, has a bright ultra-violet (UV) absolute magnitude, MUV ≈ −22.4, and has been
Dust Attenuation, Star Formation, and Metallicity in z ∼ 2–3 Galaxies from KBSS-MOSFIRE
• Physics
The Astrophysical Journal
• 2019
We present a detailed analysis of 317 2.0 ≤ z ≤ 2.7 star-forming galaxies from the Keck Baryonic Structure Survey. Using complementary spectroscopic observations with Keck/LRIS and Keck/MOSFIRE, as
Diversity of Galaxy Dust Attenuation Curves Drives the Scatter in the IR X–β Relation
• Physics
The Astrophysical Journal
• 2019
We study the drivers of the scatter in the IRX-beta relation using 23,000 low-redshift galaxies from the GALEX-SDSS-WISE Legacy Catalog 2 (GSWLC-2). For each galaxy we derive, using CIGALE and the
## References
The HDUV Survey : A revised assessment of the relationship between UV slope and dust attenuation for high-redshift galaxies
• Physics
• 2017
We use a newly assembled sample of 3545 star-forming galaxies with secure spectroscopic, grism, and photometric redshifts at z = 1.5–2.5 to constrain the relationship between UV slope (β) and dust
ALMA Spectroscopic Survey in the Hubble Ultra Deep Field: The Infrared Excess of UV-selected z=2-10 galaxies as a function of UV-continuum Slope and Stellar Mass
• Physics
• 2016
We make use of deep 1.2mm-continuum observations (12.7microJy/beam RMS) of a 1 arcmin^2 region in the Hubble Ultra Deep Field to probe dust-enshrouded star formation from 330 Lyman-break galaxies
The dust attenuation of star-forming galaxies at z ˜ 3 and beyond: New insights from ALMA observations
• Physics
• 2017
We present results on the dust attenuation of galaxies at redshift ˜3-6 by studying the relationship between the UV spectral slope (βUV) and the infrared excess (IRX ; LIR/LUV) using Atacama Large
Dust properties of Lyman-break galaxies at z ~ 3
• Physics
• 2016
Context. Since the mid-1990s, the sample of Lyman-break galaxies (LBGs) has been growing thanks to the increasing sensitivities in the optical and in near-infrared telescopes for objects at z
The SCUBA-2 Cosmology Legacy Survey: the submillimetre properties of Lyman-break galaxies at z=3-5
• Physics
• 2015
We present detections at 850 mu m of the Lyman-break galaxy (LBG) population at z approximate to 3, 4, and 5 using data from the Submillimetre Common User Bolometer Array 2 Cosmology Legacy Survey in
Spitzer Observations of z ~ 3 Lyman Break Galaxies: Stellar Masses and Mid-Infrared Properties
• Physics
• 2006
We describe the spectral energy distributions (SEDs) of Lyman break galaxies (LBGs) at z ~ 3, using deep mid-infrared and optical observations of the extended Groth strip, obtained with IRAC and MIPS
The unbiased measurement of ultraviolet spectral slopes in low-luminosity galaxies at z ≈ 7
• Physics
• 2013
The Ultraviolet (UV) continuum slope beta, typically observed at z=7 in Hubble Space Telescope (HST) WFC3/IR bands via the J-H colour, is a useful indicator of the age, metallicity, and dust content
A RESOLVED MAP of the INFRARED EXCESS in A LYMAN BREAK GALAXY at z = 3
• Physics
• 2016
We have observed the dust continuum of 10 z = 3.1 Lyman break galaxies with the Atacama Large Millimeter/submillimeter Array at similar to 450 mas resolution in Band 7. We detect and resolve the 870
Subaru Deep Survey. VI. A Census of Lyman Break Galaxies at z ≃ 4 and 5 in the Subaru Deep Fields: Clustering Properties*
• Physics
• 2003
We investigate the photometric properties of Lyman break galaxies (LBGs) at z = 3.5-5.2 based on large samples of 2600 LBGs detected in deep (i' 27) and wide-field (1200 arcmin2) images taken in the
New Observational Constraints and Modeling of the Infrared Background: Dust Obscured Star-Formation at z>1 and Dust in the Outer Solar System
• Physics
• 2010
We provide measurements of the integrated galaxy light at 70, 160, 250, 350 and 500 micron using deep far-infrared and submillimeter data from space (Spitzer) and balloon platform (BLAST)
https://www.maa.org/press/periodicals/loci/resources/calcplot3d-an-exploration-environment-for-multivariable-calculus-directional-derivatives | # CalcPlot3D, an Exploration Environment for Multivariable Calculus - Directional Derivatives
Author(s):
Paul Seeburger (Monroe Community College)
Exercise: Determine the directional derivative function for $f(x, y) = x^2 + xy + y^2 + 1$ in the direction of v = i + j. Then determine its value at the point (0, -1).
Use CalcPlot3D to graph this surface and show the appropriate tangent line on the surface at the point (0, -1), displaying the unit direction vector and the correct directional derivative value.
To do this, first enter the function in Function 1. Then choose the directional derivative option from the drop-down menu just above the Trace Plot to the left of the 3D plot. You can then use the Trace Plot menu at the top of the applet to enter the point (0, -1) and the direction vector. I recommend hiding the edges (using the E key or the Hide Edges option on the View Settings menu) and also making the surface transparent
(using Ctrl-T or the Make Surfaces Transparent option on the View Settings menu). Rotate the plot until you can clearly see the direction vector, the surface, the tangent line, and the directional derivative value. Be sure it approximates the exact value you obtained in your homework problem.
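If you want to double-check your hand computation afterwards, a short symbolic sketch along these lines may help (an added illustration, not part of the original exercise; it assumes SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + x*y + y**2 + 1

grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])   # gradient of f
u = sp.Matrix([1, 1]) / sp.sqrt(2)                 # unit vector along v = i + j

D_u = grad.dot(u)                                  # directional derivative function
print(sp.simplify(D_u))                            # 3*(x + y)/sqrt(2), up to equivalent form
print(D_u.subs({x: 0, y: -1}))                     # -3*sqrt(2)/2, about -2.12
```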
http://tug.org/pipermail/xetex/2012-December/023908.html | # [XeTeX] typesetting hyphen in xelatex
Philip TAYLOR P.Taylor at Rhul.Ac.Uk
Wed Dec 19 18:00:58 CET 2012
I don't speak LaTeX, Sasi, but this hybrid approach
seems to work :
\documentclass {minimal}
\font \bodyfont = "Arial Unicode MS:mapping=tex-text"
\begin {document}
\bodyfont
Number range : 80--95
Em-dash : used to set off --- parenthetical -- clauses
``Smart quotes''
\end {document}
Philip Taylor
--------
Sasi Kumar wrote:
> Thank you for both suggestions. So sorry. I did try both separately, but still did not succeed. I couldn't get either the smart quotes or the long dash. Am willing to study any document that is available online if that would help. The reason I am insisting on this is that I am preparing a document for publication and I would like it to look as "authentic" as I can.
>
> Thanks for the help and sorry for the trouble.
>
> Best regards,
> Sasi
https://www.physicsforums.com/threads/perturbation-theory.345425/ | # Perturbation Theory
1. Oct 13, 2009
### latentcorpse
a particle moves in one dimension in the potential
$V(x)=\infty \forall |x|>a, V(x)=V_0 \cos{\frac{\pi x}{2a}} \forall |x| \leq a$
now the unperturbed state that i use is just a standard infinite square well.
anyway the solution says that perturbation theory is only valid provided that the energy scale of the "bump" (set by $V_0$) is less than the difference in energy between square well states.
Q1: is this just a fact: perturbation theory is only applicable provided the perturbation is less than the energy difference of the states of the unperturbed system?
it then explains the above mathematically by saying:
$V_0 << \frac{{\hbar}^2 \pi^2}{8ma^2}(2n-1)$
i can't for the life of me see where the RHS of that comes from. aren't the energy levels dependent on $n^2$ and not $n$?
2. Oct 14, 2009
### lanedance
i think if you look at the difference in energy is given by
$$n^2- (n-1)^2 = n^2 - (n^2 - 2n+1) = 2n-1$$
i think less than is probably not strong enough, i think the perturbation has to be "small" relative to the energy difference. Looking in Ballentine, this can be seen if you look at the first order contribution to the eigenvector. It effectively contains the ratio of the perturbation to the energy level difference. Higher order terms carry the ratio at higher powers, so for the perturbation sum to converge (and quickly) the ratio must be small.
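Explicitly: the infinite square well here has width $$2a$$ (the potential is infinite for $$|x| > a$$), so the unperturbed levels are
$$E_n = \frac{n^2 \pi^2 \hbar^2}{2m(2a)^2} = \frac{n^2 \pi^2 \hbar^2}{8ma^2},$$
and the gap between adjacent levels is
$$E_n - E_{n-1} = \frac{\pi^2 \hbar^2}{8ma^2}\bigl(n^2 - (n-1)^2\bigr) = \frac{\pi^2 \hbar^2}{8ma^2}(2n-1),$$
which is exactly the energy scale that $$V_0$$ has to stay well below.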
https://brilliant.org/problems/there-can-be-infinitely-many/ | # There Can Be Infinitely Many
A positive integer $$n$$ is called sacred if it is divisible by all odd integers $$a$$ for which $$n \geq a^2$$. Determine the sum of all sacred numbers.
As an arbitrary example, $$n=15$$ is sacred because it is divisible by $$1$$ and $$3$$.
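(Not part of the original problem page: a brute-force sketch, with a helper name of my own choosing, that one could use to explore which $$n$$ below a chosen bound are sacred.)

```python
# A positive integer n is "sacred" if every odd a with a*a <= n divides n.
def is_sacred(n):
    a = 1
    while a * a <= n:
        if n % a != 0:
            return False
        a += 2
    return True

# Enumerate sacred numbers below a bound and sum them.
sacred = [n for n in range(1, 10**6) if is_sacred(n)]
print(sacred, sum(sacred))
```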
× | 2019-04-26 06:47:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5438226461410522, "perplexity": 493.349744050318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578760477.95/warc/CC-MAIN-20190426053538-20190426075538-00287.warc.gz"} |
https://www.deepdyve.com/lp/springer-journals/uncertainty-and-certainty-relations-for-complementary-qubit-KrpA2dMXHW?impressionId=5c41e1a5431f4&i_medium=docview&i_campaign=references&i_source=references | # Uncertainty and certainty relations for complementary qubit observables in terms of Tsallis’ entropies
Quantum Information Processing, Volume 12 (9) – Apr 11, 2013
17 pages
/lp/springer-journals/uncertainty-and-certainty-relations-for-complementary-qubit-KrpA2dMXHW
Publisher
Springer Journals
Subject
Physics; Quantum Information Technology, Spintronics; Quantum Computing; Data Structures, Cryptology and Information Theory; Quantum Physics; Mathematical Physics
ISSN
1570-0755
eISSN
1573-1332
D.O.I.
10.1007/s11128-013-0568-y
Publisher site
See Article on Publisher Site
### Abstract
Uncertainty relations for more than two observables have found use in quantum information, though commonly known relations pertain to a pair of observables. We present novel uncertainty and certainty relations of state-independent form for the three Pauli observables with use of the Tsallis $$\alpha$$ -entropies. For all real $$\alpha \in (0;1]$$ and integer $$\alpha \ge 2$$ , lower bounds on the sum of three $$\alpha$$ -entropies are obtained. These bounds are tight in the sense that they are always reached with certain pure states. The necessary and sufficient condition for equality is that the qubit state is an eigenstate of one of the Pauli observables. Using concavity with respect to the parameter $$\alpha$$ , we derive approximate lower bounds for non-integer $$\alpha \in (1;+\infty )$$ . In the case of pure states, the developed method also allows to obtain upper bounds on the entropic sum for real $$\alpha \in (0;1]$$ and integer $$\alpha \ge 2$$ . For applied purposes, entropic bounds are often used with averaging over the individual entropies. Combining the obtained bounds leads to a band, in which the rescaled average $$\alpha$$ -entropy ranges in the pure-state case. A width of this band is essentially dependent on $$\alpha$$ . It can be interpreted as an evidence for sensitivity in quantifying the complementarity.
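For reference (this definition is not part of the preview text above, but is the standard one): for a probability distribution $$p=(p_{1},\dots ,p_{n})$$ the Tsallis $$\alpha$$ -entropy is
$$H_{\alpha }(p) = \frac{1}{\alpha -1}\Bigl(1 - \sum_{i} p_{i}^{\alpha }\Bigr), \qquad \alpha >0,\ \alpha \neq 1,$$
which recovers the Shannon entropy $$-\sum_{i} p_{i}\ln p_{i}$$ in the limit $$\alpha \to 1$$.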
### Journal
Quantum Information ProcessingSpringer Journals
Published: Apr 11, 2013
Export lists, citations | 2019-03-21 10:07:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8086928725242615, "perplexity": 773.1120545966818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202510.47/warc/CC-MAIN-20190321092320-20190321114320-00539.warc.gz"} |
https://themodularperspective.com/2019/08/03/geometry-topology-rtg/ | ## Geometry & Topology RTG
Previously this week (week of August 3rd) I was able to attend the Geometry & Topology RTG workshop at the University of Notre Dame. The workshop was a week long event consisting of two parts, I and II, the first being an introduction into geometry & topology, and the latter being lectures on more advanced topics including student presentations. I’ve included their website link here. I attended part II and thought I’d speak about my experiences.
#### Closed Geodesics
This talk was given by post-doc Gabor Szekelyhidi. One starts with a closed Riemann surface $M \subset \mathbb{R}^{3}$. We call a smooth non-trivial loop $r:[0,1] \to M$ such that $r'(0) = r'(1)$ a closed geodesic if it is a critical point of the length functional (think a non-squiggly curve at small scales: one that cannot be shortened by small perturbations). The goal is to deduce whether there are closed geodesics in $M$. Szekelyhidi then broke the argument into two cases: either $M$ is simply-connected or it is not. If not, then we have the following well-known theorem:
Theorem: If $\pi_{1}(M) \ne \{1\}$, then every non-trivial homotopy class contains a closed geodesic.
The proof is essentially finding the shortest loop in each homotopy class and we know such a loop exists because on $M$ there always exists some length $L$ such that if the length of a loop $r$ is less than $L$, written $\ell(r) < L$, then $r$ must be homotopic to the identity (for example, any loop less than length $2\pi$ in the torus cannot go around length-wise or width-wise and so must be trivial). To find the shortest loop we apply a process called curve shortening flow, which is a process that minimizes the curvature everywhere on the loop in a continuous manner without affecting the homotopy class of the loop.
On the other hand, if $M$ is simply-connected then all loops are homotopic to the identity and we need to work a little harder. Instead of considering classes of loops we consider loops of loops (commonly called “sweepouts”). Formally, a sweepout is a loop $\Phi:[0,1] \to \Omega M$ into the space of all loops on $M$ (not homotopy classes of loops) such that the induced map $S^{2} \to M$ viewing $S^{2} \cong [0,1] \times S^{1}/\sim$ is not trivial. We can think of a sweepout as taking a non-trivial loop in $M$ and pulling it in $M$ so it traces out $M$ in terms of the loop (think about pulling a rubber band around the sphere from top to bottom). We then define $W(\Phi) = \max_{s \in [0,1]}\ell(\Phi(s))$ and try to find the sweepout minimizing width because this sweepout will contain the geodesic. In fact, Birkhoff proved this in 1917.
Theorem (Birkhoff 1917): There exists closed geodesics in $M$.
Current research aims to generalize this idea to higher dimensional manifolds, where we now look for surfaces of minimal area, $3$-manifolds of minimal volume, etc. In fact, in 2016 and again in 2018 it was proved (by Sang and Marques-Neves respectively) that if $M$ is a closed Riemann $3$-manifold, then it has infinitely many minimal surfaces.
#### Relating Topology and Geometry of Manifolds
This talk was given by Notre Dame professor Stephan Stolz. He started with a short introduction about the differences between topology and geometry. His view was that topologists are interested in qualitative aspects of spaces such as the number of connected components, holes, punctures, and twists, while geometers are more interested in quantitative aspects of spaces such as curvature, length, and area. This viewpoint is quite nice because it also alludes to the fact that most homeomorphisms (mappings that preserve topological structure) are often very far off from being Riemannian isometries (smooth homeomorphisms which respect the Riemannian metric between manifolds), that is, qualitative aspects being preserved doesn't imply the quantitative ones are as well. However, in this viewpoint the converse is true because every Riemannian isometry is in particular a homeomorphism.
The next piece of his talk introduced the Euler characteristic of a surface $M$, which is a topological invariant. It's obtained, in a crude manner, by putting a pattern of polygons $\Gamma$ on $M$, defining $v(\Gamma)$, $e(\Gamma)$, and $f(\Gamma)$ to be the number of vertices, edges, and faces in $\Gamma$ respectively, and then defining the Euler characteristic $\chi(M,\Gamma)$ by the alternating sum $\chi(M,\Gamma) = v(\Gamma)-e(\Gamma)+f(\Gamma)$. It turns out that $\chi(M,\Gamma)$ is independent of the type of patterning $\Gamma$, so that we may write $\chi(M)$ for the characteristic, and that it is topologically invariant, so homeomorphic manifolds have the same Euler characteristic. This gives a nice method for determining if two surfaces are not homeomorphic, and if we assume the surfaces $M$ and $N$ are compact orientable then they are homeomorphic if and only if they have the same Euler characteristic.
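A quick worked example (my own, not from the talk): patterning the sphere by the faces of a cube gives $8$ vertices, $12$ edges and $6$ faces, so $\chi(S^{2}) = 8 - 12 + 6 = 2$; the octahedron patterning gives $6 - 12 + 8 = 2$ as well, illustrating the independence from $\Gamma$.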
Stolz then shifted gears to talk a little about Riemannian manifolds: smooth manifolds $M$ with an inner product $g_{p}$ on the tangent space $T_{p}M$ such that the inner product varies smoothly with respect to $p$. With this inner product we have a way to measure lengths of curves, distance between points, and areas in $M$ analogous to how such quantities are measured in calculus. More importantly, we have a notion of scalar curvature at $x \in M$ (up to normalization) which is defined to be $-3(n+2)$ times the second derivative with respect to $r$ evaluated at $r = 0$ of $\text{vol\,}B_{r}(x,M)/\text{vol\,}B_{r}(0,\mathbb{R}^{2})$ (this is a measure of how fast the manifold is curving away from or towards $x \in M$). Varying $x$ over all of $M$ we get a function $sc:M \to \mathbb{R}$ called the scalar curvature function. Incredibly, this relates to the Euler characteristic of $M$ in the following manner:
Theorem (Gauss-Bonnet): If $M$ is a compact orientable surface, then $\int_{M}s(x)\,dx = 4\pi\chi(M)$.
As an interesting corollary, it can be shown that the sphere is the only compact orientable surface admitting a positive Euler characteristic (namely $2$). Hence by Gauss-Bonnet it is the only compact orientable surface that can possibly admit an everywhere positive scalar curvature function (and in fact it does).
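As a sanity check of the theorem (again my own example): the unit sphere has constant scalar curvature $s = 2$ and area $4\pi$, so $\int_{S^{2}}s(x)\,dx = 2 \cdot 4\pi = 8\pi = 4\pi \cdot \chi(S^{2})$, consistent with $\chi(S^{2}) = 2$.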
#### Directed Homology
One of the undergraduate presentations was on a new field of study: directed homology. Here one has a topological space with a sense of “time flow” on the manifold. Homological calculations can then be performed to gain some understanding of space. The presentation was primarily computational, so naturally I was curious about the homological properties these spaces have. The talk ended with a section on future research and some open questions were if a Mayer-Vietoris sequence and the standard homology axioms hold in this setting. I spoke with the presenter after about pursuing this topic and he agreed to get me in touch with his mentors.
For peace of mind, a directed space is a pair $(X,dX)$ consisting of a topological space $X$ with a subset $dX \subset C(I,X)$ of continuous paths from the interval into $X$ such that every constant path is in $dX$, $dX$ is closed under composition of increasing maps $I \to I$, and $dX$ is closed under path-concatenation. Notice that since $X$ is usually uncountable, $dX$ is quite often a very large space. A morphism $f:X \to Y$ between directed spaces is a continuous map which preserves directed paths in the sense that if $\gamma \in dX$, then $f \circ \gamma \in dY$. With these definitions in mind, simplicial complexes $\Delta^{n}$ should become directed spaces by the induced ordering on their vertices (I'm not exactly sure how this should best be worked out), so the directed homological theory would begin by considering the chain groups $C_{n}(X)$ consisting of all singular directed morphisms $\sigma:\Delta^{n} \to X$. I'm looking into this as time permits, so keep a look out for future posts on the topic.
Overall I really enjoyed the level of mathematics the workshop was at. I understood a majority of the content, and for the the things I didn’t quite understand I either had a good sense of intuition for them or I had someone else give me more detail after the talk. There were quite a few more talks then the two presented here by the way (seven professor/post-doc talks and twelve student presentations). Between talks there was a $30$ minute break intended to be a time to discuss the talk in greater detail, ask questions, and chat with other members of the conference. However, there was really no external indication that a workshop was taking place (no direction signs, banner, etc.) and that gave a feeling of “if I was in the local area and didn’t get invited I could of just shown up and no one would of minded”. I would have also preferred if there was some type of lunch session where invitees could go grab food with some of the speakers and talk about their research and interests in a less formal setting. Perhaps these things occurred in Part I and not in Part II, but I would have preferred them nonetheless. Overall, it was a good experience and I’ll be applying again next summer. | 2020-04-05 09:01:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 75, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8354933857917786, "perplexity": 260.26502979291047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371576284.74/warc/CC-MAIN-20200405084121-20200405114121-00098.warc.gz"} |
https://mathematica.stackexchange.com/questions/95513/what-is-the-definition-of-head-in-mathematica | # What is the definition of head in Mathematica? [closed]
I know (by practice) what a head is, but I am unable to find the appropriate words to actually define it. I would like to see a precise definition of head using the appropriate technical words involved. Hopefully, this should provide me with a better understanding of how Mathematica works.
• Note that as stated in the documentation, "Heads need not be symbols", e.g., FixedPointList[Head, f[x][y][z]] // Most returns {f[x][y][z], f[x][y], f[x], f, Symbol} – Bob Hanlon Sep 25 '15 at 21:39
• An answer may be found in the first tutoria,l Everything Is an Expression, linked in the documentation for Head. – Michael E2 Sep 26 '15 at 1:30 | 2020-03-30 20:37:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26925230026245117, "perplexity": 1069.670268817837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497301.29/warc/CC-MAIN-20200330181842-20200330211842-00054.warc.gz"} |
https://www.lmfdb.org/Genus2Curve/Q/8649/a/233523/1 | # Properties
Label 8649.a.233523.1 Conductor 8649 Discriminant 233523 Mordell-Weil group $$\Z \times \Z$$ Sato-Tate group $G_{3,3}$ $$\End(J_{\overline{\Q}}) \otimes \R$$ $$\R \times \R$$ $$\End(J_{\overline{\Q}}) \otimes \Q$$ $$\mathrm{RM}$$ $$\overline{\Q}$$-simple yes $$\mathrm{GL}_2$$-type yes
# Related objects
Show commands for: Magma / SageMath
## Simplified equation
magma: R<x> := PolynomialRing(Rationals()); C := HyperellipticCurve(R![-70, -47, 7, 3, -3], R![1, 1, 0, 1]);
sage: R.<x> = PolynomialRing(QQ); C = HyperellipticCurve(R([-70, -47, 7, 3, -3]), R([1, 1, 0, 1]))
magma: R<x> := PolynomialRing(Rationals()); C := HyperellipticCurve(R![-70, -47, 7, 3, -3], R![1, 1, 0, 1]);
sage: R.<x> = PolynomialRing(QQ); C = HyperellipticCurve(R([-279, -186, 29, 14, -10, 0, 1]))
$y^2 + (x^3 + x + 1)y = -3x^4 + 3x^3 + 7x^2 - 47x - 70$ (homogenize, simplify) $y^2 + (x^3 + xz^2 + z^3)y = -3x^4z^2 + 3x^3z^3 + 7x^2z^4 - 47xz^5 - 70z^6$ (dehomogenize, simplify) $y^2 = x^6 - 10x^4 + 14x^3 + 29x^2 - 186x - 279$ (minimize, homogenize)
## Invariants
$$N$$ = $$8649$$ = $$3^{2} \cdot 31^{2}$$ magma: Conductor(LSeries(C)); Factorization($1); $$\Delta$$ = $$233523$$ = $$3^{5} \cdot 31^{2}$$ magma: Discriminant(C); Factorization(Integers()!$1);
### G2 invariants
magma: G2Invariants(C);
$$I_2$$ = $$72776$$ = $$2^{3} \cdot 11 \cdot 827$$ $$I_4$$ = $$-4565084$$ = $$- 2^{2} \cdot 1141271$$ $$I_6$$ = $$-110566728408$$ = $$- 2^{3} \cdot 3 \cdot 4606947017$$ $$I_{10}$$ = $$956510208$$ = $$2^{12} \cdot 3^{5} \cdot 31^{2}$$ $$J_2$$ = $$9097$$ = $$11 \cdot 827$$ $$J_4$$ = $$3495695$$ = $$5 \cdot 7 \cdot 99877$$ $$J_6$$ = $$1814445117$$ = $$3^{4} \cdot 29 \cdot 107 \cdot 7219$$ $$J_8$$ = $$1071530924081$$ = $$23 \cdot 157 \cdot 296740771$$ $$J_{10}$$ = $$233523$$ = $$3^{5} \cdot 31^{2}$$ $$g_1$$ = $$62300419867534985257/233523$$ $$g_2$$ = $$2631649929116327735/233523$$ $$g_3$$ = $$1853767256362813/2883$$
## Automorphism group
magma: AutomorphismGroup(C); IdentifyGroup($1);
$$\mathrm{Aut}(X)$$ $$\simeq$$ $C_2$
magma: AutomorphismGroup(ChangeRing(C,AlgebraicClosure(Rationals()))); IdentifyGroup($1);
$$\mathrm{Aut}(X_{\overline{\Q}})$$ $$\simeq$$ $C_2$
## Rational points
magma: [C![-7,189,3],C![-7,190,3],C![-3,10,1],C![-3,19,1],C![-2,4,1],C![-2,5,1],C![1,-1,0],C![1,0,0]];
Known points
$$(1 : 0 : 0)$$ $$(1 : -1 : 0)$$ $$(-2 : 4 : 1)$$ $$(-2 : 5 : 1)$$ $$(-3 : 10 : 1)$$ $$(-3 : 19 : 1)$$
$$(-7 : 189 : 3)$$ $$(-7 : 190 : 3)$$
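A quick consistency check (my own plain-Python snippet, independent of the Magma/SageMath commands above): the affine points listed can be verified against the simplified equation, using the weighted dehomogenization $$x \mapsto x/z$$, $$y \mapsto y/z^{3}$$.

```python
from fractions import Fraction as F

# y^2 + h(x)*y = f(x) with h = x^3 + x + 1 and f = -3x^4 + 3x^3 + 7x^2 - 47x - 70
def on_curve(x, y, z):
    X, Y = F(x, z), F(y, z**3)
    h = X**3 + X + 1
    f = -3*X**4 + 3*X**3 + 7*X**2 - 47*X - 70
    return Y**2 + h*Y == f

pts = [(-2, 4, 1), (-2, 5, 1), (-3, 10, 1), (-3, 19, 1), (-7, 189, 3), (-7, 190, 3)]
print(all(on_curve(x, y, z) for x, y, z in pts))  # expected: True
```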
magma: #Roots(HyperellipticPolynomials(SimplifiedModel(C)));
Number of rational Weierstrass points: $$0$$
magma: f,h:=HyperellipticPolynomials(C); g:=4*f+h^2; HasPointsEverywhereLocally(g,2) and (#Roots(ChangeRing(g,RealField())) gt 0 or LeadingCoefficient(g) gt 0);
This curve is locally solvable everywhere.
## Mordell-Weil group of the Jacobian:
magma: MordellWeilGroupGenus2(Jacobian(C));
Group structure: $$\Z \times \Z$$
Generator Height Order
$$z (3x + 7z)$$ $$=$$ $$0,$$ $$3y$$ $$=$$ $$-3x^3 - 17z^3$$ $$0.132768$$ $$\infty$$
$$z (x + 3z)$$ $$=$$ $$0,$$ $$y$$ $$=$$ $$10z^3$$ $$0.141745$$ $$\infty$$
## BSD invariants
Analytic rank: $$2$$ (upper bound) Mordell-Weil rank: $$2$$ 2-Selmer rank: $$2$$ Regulator: $$0.018738$$ Real period: $$9.071150$$ Tamagawa product: $$2$$ Torsion order: $$1$$ Leading coefficient: $$0.339964$$ Analytic order of Ш: $$1$$ (rounded) Order of Ш: square
## Local invariants
Prime ord($$N$$) ord($$\Delta$$) Tamagawa L-factor
$$3$$ $$5$$ $$2$$ $$2$$ $$( 1 + T )^{2}$$
$$31$$ $$2$$ $$2$$ $$1$$ $$( 1 + T )^{2}$$
## Sato-Tate group
$$\mathrm{ST}$$ $$\simeq$$ $G_{3,3}$ $$\mathrm{ST}^0$$ $$\simeq$$ $$\mathrm{SU}(2)\times\mathrm{SU}(2)$$
## Decomposition of the Jacobian
Simple over $$\overline{\Q}$$
## Endomorphisms of the Jacobian
Of $$\GL_2$$-type over $$\Q$$
Endomorphism ring over $$\Q$$:
$$\End (J_{})$$ $$\simeq$$ $$\Z [\sqrt{5}]$$ $$\End (J_{}) \otimes \Q$$ $$\simeq$$ $$\Q(\sqrt{5})$$ $$\End (J_{}) \otimes \R$$ $$\simeq$$ $$\R \times \R$$
All $$\overline{\Q}$$-endomorphisms of the Jacobian are defined over $$\Q$$. | 2020-01-20 07:25:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9822184443473816, "perplexity": 3411.468719210571}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250597458.22/warc/CC-MAIN-20200120052454-20200120080454-00529.warc.gz"} |
https://www.answers.com/Q/How_Many_Square_Feet_are_in_130_square_yards | Math and Arithmetic
Area
# How Many Square Feet are in 130 square yards?
###### Wiki User
130 square yards = 1,170 square feet.
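Spelled out (same arithmetic, just shown step by step): 1 yard = 3 feet, so 1 square yard = 3 × 3 = 9 square feet, and 130 × 9 = 1,170 square feet.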
## Related Questions
### How many square yards in 168 square feet?
168 square feet = about 18.7 square yards. (18.6666667 square yards)
### How many square yards are in 43200 square feet?
4,800 square yards. (divide square feet by 9 to get square yards)
### How many square yards are 324 square feet?
36 square yards. (divide square feet by 9 to get square yards).
### How many square yards are there in 700 square feet?
700 square feet = about 77.8 square yards. (77.7777778 square yards)
### How many feet are equal to 4840 square yards?
You either convert yards to feet, or square yards to square feet. But you can't convert square yards to feet.
### How many square yards are in 655 square feet?
9 square feet = 1 square yard; 18 square feet = 2 square yards; 27 square feet = 3 square yards; ... 655 square feet = 72 7/9 square yards
### How many square feet is 21 square yards?
189 square feet. (multiply square yards by 9 to get square feet)
### How many yards are in 360 square feet?
There are 40 square yards in 360 square feet. (just divide square feet by nine to get square yards).
### How many square yards are in 2808 square feet?
There are 312 square yards in 2,808 square feet.
### How many square feet in 5 square yards?
There are 45 square feet in 5 square yards
### How many square yards are in 45 square feet?
There are five square yards in 45 square feet.
Previously Viewed | 2021-03-08 04:10:16 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8749765753746033, "perplexity": 3806.4807434548948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381803.98/warc/CC-MAIN-20210308021603-20210308051603-00067.warc.gz"} |
https://www.physicsforums.com/threads/derivation-of-equation-for-sliding-object.288604/ | # Derivation of equation for sliding object
1. Jan 30, 2009
### hitek0007
1. The problem statement, all variables and given/known data
1. The problem:
Derive an expression in its simplest form to show the relationship between the mass, ramp angle, and acceleration of the sliding object. Explain clearly but briefly the effect of each term in the expression on the actual acceleration, as the ramp angle changes.
No known variables. Context of this question: part of a lab, in which we found values of friction and coefficients of friction through measuring acceleration of objects sliding down a ramp.
2. Relevant equations
a=Fnet/m
Fgramp=mgsin(x)
Ffk=ukmgcos(x)
3. The attempt at a solution
a=Fnet/m
a=(Fgramp-Fk)/(m)
a=(mgsin(x)-ukmgcos(x))/(m)
a=gsin(x)-ukgcos(x)
a=g(sin(x)-ukcos(x))
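(Not part of the original post: a quick numeric check of that last expression, with made-up example values, just to see its size.)

```python
import math

g, mu_k = 9.8, 0.2      # example values only, not from the problem
x = math.radians(30)    # ramp angle
a = g * (math.sin(x) - mu_k * math.cos(x))
print(round(a, 2))      # about 3.2 m/s^2
```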
Is this the right equation? If so, mass has no effect on the acceleration. Acceleration increases as angle increases.
However, my teacher told me that the equation is supposed to look like:
a=_________+_________
I suppose it is possible the 2nd term is negative... but I am not sure.
Are there different equations? We also calculated ideal and measured accelerations to find the value of friction. Is this any use?
Thanks!
2. Jan 30, 2009
### americanforest
Looks like you've got it right.
$$a=g(sin\theta-\mu cos\theta)$$ where $$\theta$$ is the angle between the ramp and the horizontal. Note that $$a$$ is the acceleration when the object has been released and slides down the ramp. If the object has been pushed up the ramp and is in the process of sliding up then a slightly different equation governs its acceleration. | 2017-04-28 10:32:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6990988850593567, "perplexity": 527.1436257820825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122933.39/warc/CC-MAIN-20170423031202-00431-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://collegephysicsanswers.com/openstax-solutions/contestant-winter-sporting-event-pushes-450-kg-block-ice-across-frozen-lake | Question
A contestant in a winter sporting event pushes a 45.0-kg block of ice across a frozen lake as shown in Figure 5.21(a). (a) Calculate the minimum force F he must exert to get the block moving. (b) What is the magnitude of its acceleration once it starts to move, if that force is maintained?
Question Image
1. $50\textrm{ N}$
2. $0.7 \textrm{ m/s}^2$
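A rough numeric sketch of one standard way to arrive at these numbers (the push angle below the horizontal and the ice-on-ice friction coefficients are assumptions on my part, roughly 25 degrees and 0.1/0.03, since the figure and table values are not reproduced here):

```python
import math

m, g = 45.0, 9.80
theta = math.radians(25)   # assumed push angle below the horizontal (from the figure)
mu_s, mu_k = 0.1, 0.03     # assumed ice-on-ice coefficients of friction

# (a) on the verge of moving: F*cos(theta) = mu_s * (m*g + F*sin(theta))
F = mu_s * m * g / (math.cos(theta) - mu_s * math.sin(theta))

# (b) once moving: a = (F*cos(theta) - mu_k*N) / m, with N = m*g + F*sin(theta)
N = m * g + F * math.sin(theta)
a = (F * math.cos(theta) - mu_k * N) / m

print(round(F), round(a, 1))   # roughly 51 N (reported as 50 N to one sig. fig.) and 0.7 m/s^2
```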
# OpenStax College Physics Solution, Chapter 5, Problem 18 (Problems & Exercises) (6:56)
Submitted by raynellmcclellan on Tue, 05/19/2020 - 11:47
Why is the solution given as 50 if it comes out as 51? Are we supposed to be rounding to the nearest ten?
Submitted by ShaunDychko on Wed, 05/20/2020 - 15:43
Yes, exactly, we're rounding to one significant figure. This is due to the coefficient of static friction, which has only one significant figure. When multiplying by a number with one sig. fig., the answer gets only one sig. fig. also.
All the best,
Shaun | 2020-07-11 19:51:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4030017852783203, "perplexity": 2544.8595848227965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655937797.57/warc/CC-MAIN-20200711192914-20200711222914-00177.warc.gz"} |
https://www.techwhiff.com/issue/lynn-plotted-point-g-3-units-to-the-left-and-2-units--302782 | # Lynn plotted point G, 3 units to the left and 2 units above point F. Where did lynn plot point G?
###### Question:
Lynn plotted point G, 3 units to the left and 2 units above point F. Where did lynn plot point G?
Do you agree with billy's father that there is sportsmanlike hunting - and unsportsmanlike hunting?... | 2023-04-01 19:59:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34050342440605164, "perplexity": 1639.2921770504806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00259.warc.gz"} |
https://www.springerprofessional.de/gendered-behavior-as-a-disadvantage-in-open-source-software-deve/16906148?fulltextView=true | main-content
01.12.2019 | Regular article | Issue 1/2019 Open Access
# Gendered behavior as a disadvantage in open source software development
Journal:
EPJ Data Science > Issue 1/2019
Authors:
Balazs Vedres, Orsolya Vasarhelyi
Important notes
## Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## 1 Introduction
Women suffer a considerable disadvantage in information technology: their proportion in the workforce is decreasing, and they are especially underrepresented in open source software development. The proportion of women in computing occupations has been steadily declining from 36% in 1991 to 25% today [1–3]. In open source software only about 5% of the developers are women [4], and they exit their computing occupation careers with higher probability. Women suffer from a gender wage gap in STEM—and especially in computer programming—more so than in other fields [5]: that has not decreased over the past two decades [6]. Many women quit their computing occupation careers in the middle [7]. These developments are puzzling, especially in the face of a favorable shift in public consciousness, and considerable private and public policy efforts to counter gender discrimination. With accumulating evidence of the benefits of gender diversity in teams [8–10], it is clear that marginalization of women in software development leads to major societal costs.
In this article we analyze a large dataset of open source software developers to answer the question: are women at a disadvantage because of who they are, or because of what they do? Typically, gender discrimination is conceptualized as categorical discrimination against women [11]; however, as much of the scholarship in gender studies has shown, to understand gender inequalities one needs to shift the focus to the gendered pattern of behavior [12, 13]: The more likely causes of discrimination are actions that are typical of men and women, rather than the gender category of the person [13–15]. Women in leadership roles often feel compelled to (or are expected to) follow male behavioral traits [16], just as men in feminine occupations take on female-like behavioral traits [17], and the choice of collaborators and mentors often follows gender homophily [18].
While categorical gender discrimination is an easy target for policies, discrimination based on behavioral expectations are more difficult to counter. Recently Google was sued by women for categorizing women as ‘front-end’ developers without reason, blocking their access to higher pay and faster promotion that ‘back-end’ developers enjoy, who are more likely to be male [19, 20]. This also underscores that when we analyze the gendered pattern of behavior, we should not assume that such behavior is a result of free choice. In fact, the history of computing occupations is also a history of marginalizing women from an increasing number of specializations [21]. Thus far there have been no analysis based on large data in a contemporary setting, to analyze behavioral traces, and to assess the relative weight of categorical and behavioral gender in gender inequality. Our data source is GitHub: the most popular online open source software project management system, which provides an opportunity to track the behavior of software developers directly, identify gender from user names, and observe success and survival [22, 23]. In open source software development the most important payoff to participants is reputation [24], hence we operationalize success as the number of users declaring interest in one’s work by “starring” a repository. As a second dependent variable we analyze differences in the odds of sustaining open source development activity over a one year period subsequent to our data collection time window.
Using data about behavior in a large sample allows us to construct a measure of femaleness of observed behavioral choices over the entire career, as a measure of gender typicality. This approach has a long history, using survey data [12, 25, 26], and more recently with behavioral trace data in diverse settings [27–29]. In addition to the interval scale gendered behavioral dimension, we also identify multiple kinds of gendered behavioral patterns using a decision tree classification approach, and we assess the relative explanatory power of one behavioral dimension when controlling for multiple patterns of behavior.
We first compare men and women: users who display a recognizable gender on their profile, but we also analyze data of users with unidentifiable gender. The first question is whether gendered behavior makes any difference at all, or is it only the gender category, that relates to female disadvantage. If gendered behavior is related to outcomes, is that relationship the same for both women and men? Are there signs of change in patterns of gendered disadvantage?
It is also important to analyze gendered behavior of those who do not readily reveal their gender. Scholars have discussed the potential of online collaborations to mitigate gender inequalities, as it is easier to manipulate or hide gender identity online, compared to face-to-face settings [30–32]. Our first question here is whether we see evidence for surrounding users recognizing the gender from the behavior of focal users that are hiding their categorical gender. Our second question is whether success and survival for unknown-gender users are related to their gendered behavior as well.
## 2 Empirical setting and data
### 2.1 GitHub
Github (github.com) is a social coding platform that allows software engineers to develop and publish software together, recording their contributions to a collaborative activity. It is the most popular web-based ‘git’ software repository hosting and version tracking service, with 20 million users and over 57 million private and public repositories in May, 2018. Working in repositories collaboratively can lead to success through visibility and reputation, which helps developers to be noticed by potential employers [22, 24, 33]. We used coding and collaboration activity to conceptualize individual careers.
The empirical basis of this study is a data set acquired via githubarchive.org between 2009-02-19 and 2016-10-21 about the following: creation of a repository, push to a repository, opening, closing and merging a pull request. To collect information about users’ names, e-mail addresses, number of followers, number of public repositories and the date they joined GitHub, we sent calls to the official Github users API.
### 2.2 Inferring gender
Since users do not list their gender directly, we infer each person’s gender using their first names. This is a commonly and successfully used method in Western societies [27, 34]. In this work, we rely on the 2016 US baby name dataset published by the US Social Security Administration annually (SSA 2016). Users’ first names for gender recognition come from a number of data points. Users can add their full names and e-mail addresses to their profiles, but only a nickname is required to use GitHub. We first check whether a user’s full name is available and separate its first and last name(s). If not, we check the availability of the e-mail address and separate the part before the “@” by various punctuation marks or capital letters, and save first and then last name(s). Since in some countries such as Japan or Hungary the given name is the second or the third name, if our baby name database does not contain the inferred first name, we run the algorithm on last name(s) as well. The baby names dataset mainly covers American and European names, and lacks Asian names. In Asia, it is a common tradition to choose Western given names and use them in real and online life [35–37]; thus, if no full name or e-mail data is available, or the name is not inferable, we use the user’s nickname as the name for gender recognition. See Fig. 1 for population size.
### 2.3 Accuracy of gender inference
We assess the accuracy of our gender inference by a comparison to a baseline (consensus of two manual coders), and by a comparison to two other methods. We took a sample of 600 users from our data set, and assessed their gender manually. We, the two authors independently hand-coded 600 user profiles (200 females, 200 males, 200 unknowns according to our original method), using information publicly accessible online, in approximately the same way a GitHub user would and could come to a conclusion about the gender of another user of interest.
There were 73 cases (12.2%) where the opinions of the two manual coders differed. We re-checked these cases, and came to a consensus about each. To quantify our inter-rater reliability, we used Krippendorff’s alpha [39], a commonly used statistic of agreement. Considering three gender categories—female, male, and unknown—the alpha was 0.80. Considering female and male users only, the alpha was 0.95. Both of these are conventionally considered to be good reliability.
40 user profiles had been deleted over the past two years, so our final tally is 300 males, 156 females and 104 unknowns. Using this consensus classification as our baseline, we compared our gender inference method and two other well-known algorithms trained for inferring gender in online communities: Gender Computer by Vasilescu [22] and Simple Gender by Ford [40]. Figure 2 shows the Precision, Recall and F Score of each algorithm by gender.
The three algorithms have very similar accuracy; all methods are optimized for high male-precision and female-recall. Vasilescu’s method minimizes the number of unknowns, which gives it an overall worse precision in the case of women. Our method’s weakness is the male-recall. Overall, we believe that our gender inferring method is robust and sufficiently accurate in comparison to other already published methods, while it has the advantage of being simple and easy to implement.
### 2.4 Data cleaning
We decided to filter users by their level of activity, as there are many users who establish a GitHub account with hardly any subsequent developer engagement (but use GitHub, for example, as a web hosting platform). First we excluded organizational and company accounts, then selected those 1,634,373 users in our data set with at least 10 traces of activity over their careers. Then we deleted 1604 users for evidence of being artificial agents (having a substring, like “bot”, “test”, “daemon”, “svn2github”, “gitter-badger” in their usernames). As we were interested in patterns of gendered behavior (for which we encountered resource and time intensive data crawling challenges regarding pages of connected users), we took a biased sample with 10,000 users of each gender group (men, women, unknown gender). We repeated the sampling procedure five times, to test for robustness to sampling error. We crawled the profile pages of all sampled users, and collected who they follow, and whom they are followed by. The gender of followers and followed users was identified with the same approach outlined above.
## 3 Measures
### 3.1 Identifying specializations
To capture the specialization of activity, we used principal component analysis of programming languages, where variables represented the number of times a given programming language was used by the individual. For each repository, GitHub auto-detects the main language. In total, we extracted 103 different programming languages, and kept those which appeared in at least 1000 projects within our samples, resulting in the 22 most commonly used ones. Fig. 3 shows the language frequency. We used Scipy’s PCA decomposition package with Varimax Rotation to identify independent factors [41]. We ran the PCA analysis on each sample, then used the least squares criterion to extract the factors and compare them.
### 3.2 Femaleness
The main variables of interest in our article is the gendered pattern of behavior, which we operationalize as the probability of being female given behavior. Several studies had adopted a similar approach of using an empirical typicality measure as an explanatory variable, in a wide range of empirical problems, from the phonological typicality of words [42] to the typicality of music [43], careers [44], businesses [45], or restaurants [46]. Typicality has been used to investigate gender as well [27, 47]. We selected variables that capture the most relevant aspects of behavior in open source software development. We use variables that represent choices reasonably under the control of the individual.
For measuring gendered behavior, we used a Random Forest model [41] to predict the gender identity (conveyed by name choice of a user), using their collaboration history, activity, and specializations identified above by principal component analysis. We used the following variables: No of repositories, No of touched repositories, No of pushes, No of opened pull requests, No of followed females, No of followed people, No of collaborators, Frontend, Ruby Backend, Backend, Data Science, iOS, PHP Frontend. We used a Random Forest classifier with 10-fold cross-validation, to predict gender (a prediction of someone being female). The size of our dataset allows us to set $$k=10$$, which is a commonly used value in applied machine learning [48, 49].
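A minimal sketch of this step (my own illustration with placeholder data, not the authors' code; it assumes only the scikit-learn API):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Placeholder data standing in for the real feature matrix: one row per user,
# 13 columns for the behavioral variables listed above; y = 1 if inferred female.
rng = np.random.default_rng(0)
X = rng.random((1000, 13))
y = rng.integers(0, 2, 1000)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
# Out-of-fold predicted probability of being female, used as "femaleness" of behavior
femaleness = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]
print(roc_auc_score(y, femaleness))

# Variable importances (cf. Fig. 4) come from a fit on the full data
importances = clf.fit(X, y).feature_importances_
```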
The Random Forest classification was moderately accurate—behavior in open source is not drastically different by gender. The area under the ROC curve was 0.71, which was consistent across five samples, and decreased to no less than 0.67 with 5% and 10% swapped gender. Variable importance scores were also robust to gender classification error. See S5 and S6. This is a moderate classification performance, which is weaker than classic instruments devised to measure gendered behavior [26] (AUC for inkblots test =0.94, for combined test =0.96), but similar to the performance of gender classifiers based on internet messaging [28] ($$\mathrm{AUC} = 0.72$$), graphic design works [27] ($$\mathrm{AUC}= 0.72$$), or biometric gender prediction based on screen swiping [29] ($$\mathrm{AUC} = 0.71$$).
As Fig. 4 shows, the most important behavioral aspect for femaleness prediction is gender homophily: the number of female collaborators (a collaborator is someone who contributed to the same repository with the user). This variable has both the highest variable importance and the highest odds ratio. With one standard deviation increase in the number of female collaborators, the odds of being female increases by 1.84 ($$p=0.000$$). Other gender-coded collaboration tie variables are far less important, corroborating findings of others that female homophily is a marked phenomenon in fields where women are underrepresented [18]. Specializations of programming languages are important components of gendered behavior, although contradicting stereotypical assumptions. Front-end specialization (work on the look of interfaces) is assumed to be feminine, while back-end (work on algorithms and data procedures under the hood) is considered to be more male. We identified two principal components of each specialization, and found that there is one pair of front-end and one back-end specialty that is more male, while there is another pair of front-end and back end specialty that is more female. For the distribution of femaleness see Fig. 5.
Robustness to mis-identification
Gender prediction depends on inferred gender, which will have error. To test the sensitivity of our analyses to gender mis-identification, we re-ran the Random Forest prediction with datasets where 5% and 10% of the users had their gender swapped. This amount of error is in the range of mis-classification that we saw comparing our method to the baseline (7.5% of users with known gender was mis-identified by our method). We created 100 mis-classified datasets for each randomization type. Variable importance in the Random Forest prediction was robust to swaps of gender, Fig. 6 shows original variable importance (dashed grey line) compared with the distribution of new variable importance calculated on gender-swapped datasets.
### 3.3 Classes of gendered behavior
With our gender typicality measure we assume that the gendered nature of behavior varies along one continuous dimension. This assumption has been challenged before [50, 51], so we test whether multiple categories of gendered behavior is a more adequate approach. To accomplish this we identify multiple classes of femaleness with a decision tree prediction approach. We then include a set of binary indicator variables representing decision tree classes, with the most gender-balanced class being the reference category in our models for success and survival. We also identify a range of classes, from 5 to 100, to test the robustness of our findings to the resolution of the classification tree. See section Models.
Our Decision Tree classifier is based on the same variables from which we calculated femaleness. Figure 7 shows the final tree with classes of typical gendered constellations of behavioral variables.
Optimization
We optimized the decision tree classifier for maximum depth, running the algorithm with different fixed depth sizes, resulting in 5, 10, 20, 50 and 100 categories. We use these categories for predicting success and survival for developers belonging to the same classes.
## 4 Models
Our dependent variables are success and survival. Our success measure is the total number of times other users have starred (bookmarked as useful) repositories owned by our focal user, during the entire career. A star is a statement of usefulness: interest from another user to easily locate and to utilize the given repository in the future. Since success and our behavioral variables co-evolve during the career, causal arguments cannot be tested. We measured survival by re-visiting all users’ pages exactly one year after the end of our data collection, and recording the number of actions taken by the user over this one year. If a user did not make any actions on the site for one year, we recorded exit for that user; otherwise we marked the user as a survivor. Users seldom close their accounts (0.3% of users), since keeping an account is free. In the case of survival we can test causal hypotheses, as behavior precedes cessation.
Our measure of success is an over-dispersed count variable, thus we use a negative binomial model specification. Moreover, we also know that many users of GitHub are not interested in accumulating stars for repositories, but use the platform for other purposes (e.g. as a personal archive); in other words users are a mixture of two latent classes: one interested in achieving success, and one without such interest. We therefore estimated a zero-inflated negative binomial model (ZINB), where we separately modeled excess zeros with a logit model, and the accumulation of stars with a negative binomial model. We also tested the robustness of our findings with an OLS model with the log of success as the dependent variable, and a specification identical to the count model of our zero inflated negative binomial models.
We estimate our ZINB mixture model with equation (1), where $$Y_{i}$$ is the number of stars accumulated by user i for own repositories, Γ is the gamma function, k is a dispersion parameter, and n is a natural number >0. We can model $$\pi _{i}$$ and $$\lambda _{i}$$ as functions of independent variables. For $$\pi _{i}$$—the model for the zero component—we specify a logistic regression with a logit link function at (2), and for the count model we use an identical specification (3), where $$x_{g}$$ is the female gender category (for women $$x_{g}=1$$, for men $$x_{g}=0$$), and $$x_{b}$$ is the femaleness of behavior from our random forest prediction.
\begin{aligned} &\textstyle\begin{cases} P(Y_{i} = 0) = \pi _{i} + (1- \pi _{i}) \cdot (1 + k \lambda _{i} )^{- \frac{1}{k}}, \\ P(Y_{i} = n) = \frac{(1- \pi _{i}) \cdot \varGamma ( Y_{i} + \frac{1}{k} ) (k\lambda _{i})^{Y_{i}}}{\varGamma (\frac{1}{k}) \varGamma (Y_{i}+1) (1+ k \lambda _{i})^{Y_{i}+\frac{1}{k}}}, \end{cases}\displaystyle \end{aligned}
(1)
\begin{aligned} &\operatorname{logit}(\pi _{i})= \gamma _{0} + \gamma _{g}x_{gi} + \gamma _{b}x_{bi} + \gamma _{gb}(x_{gi}x_{bi}) + \gamma _{n}x_{ni} + \gamma _{gn}(x_{gi}x _{ni}) +\gamma _{c}x_{ci}, \end{aligned}
(2)
\begin{aligned} &\log (\lambda _{i})= \beta _{0} + \beta _{g}x_{gi} + \beta _{b}x_{bi} + \beta _{gb}(x_{gi}x_{bi}) + \beta _{n}x_{ni} + \beta _{gn}(x_{gi}x_{ni}) + \beta _{c}x_{ci}. \end{aligned}
(3)
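For illustration, a specification of this kind can be estimated with statsmodels; the sketch below is not the authors' code, and the data frame `df` and its column names (including the pre-built interaction terms) are assumptions:

```python
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

# one row per user: the stars count plus the regressors of equations (2)-(3)
X = sm.add_constant(df[["female", "femaleness", "female_x_femaleness",
                        "name_freq", "female_x_name_freq", "tenure"]])
zinb = ZeroInflatedNegativeBinomialP(df["stars"], X, exog_infl=X,
                                     inflation="logit")
result = zinb.fit(maxiter=500)
print(result.summary())
```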
As an auxiliary test for the presence of discrimination by categorical gender, we added a variable that records the relative frequency of the first name of the user (relative to the total number of users of the same gender)—an approach recently taken to measure discrimination in patenting [52]. If discrimination is by categorical gender, we expect women to be significantly disadvantaged in proportion to the frequency (easy recognizability) of their names. We expect that women with names like “Mary” (the most common female name) are more disadvantaged than women with names like “Maddie” (one of the least common female names). We thus include $$x_{n}$$ as the normalized logged relative frequency of first name within gender: $$x_{ngi} = \log \frac{f _{i}}{N_{g}}/\max (x_{n})$$, where $$f_{i}$$ is the overall frequency of the first name of user i, and $$N_{g}$$ is the overall number of users of gender g.
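A small pandas sketch of this normalization (a literal transcription of the formula above; the `users` frame with `first_name` and `gender` columns is an illustrative assumption):

```python
import numpy as np
import pandas as pd

# f_i: overall frequency of the user's first name; N_g: number of users of gender g
f_i = users.groupby("first_name")["first_name"].transform("count")
N_g = users.groupby("gender")["gender"].transform("count")

x_n = np.log(f_i / N_g)
users["name_freq"] = x_n / x_n.max()   # normalized logged relative frequency
```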
Finally, $$x_{ci}$$ stands for control variables. Our control variables represent alternative explanations connecting gender and outcomes: Tenure (number of years since joining) might favor men, as women tend to have shorter tenure (and drop out). The level of activity (number of own repositories and number of repositories where the user contributed) might also favor men, as women usually have less time to devote to professional activities. Social ties (number of followers and collaborators) might also favor men, as gender homophily is expected. Finally, we measure the total number of potential bookmarkers as the number of developers who worked with the same programming languages as our focal subject. A developer with a large potential audience might gather stars more easily for his or her repositories.
We estimate a logit model for survival with an identical specification to the success model (4), where $$\gamma _{i}=1$$ for users with sustained activity over one year after data collection, and $$\gamma _{i} = 0$$ for cessation. The independent variables are defined in the same way as described above.
\begin{aligned} \ln \frac{P(\gamma _{i}=1|x)}{1-P(\gamma _{i}=1|x)} = {}&\beta _{0} + \beta _{g}x_{gi} + \beta _{b}x_{bi} + \beta _{gb}(x_{gi}x_{bi}) \\ &{}+ \beta _{n}x_{ni} +\beta _{gn}(x_{gi}x_{ni}) + \beta _{c}x_{ci}. \end{aligned}
(4)
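The corresponding survival model is an ordinary logistic regression; a minimal statsmodels sketch, with the same illustrative `df` columns as above and a 0/1 `survived` outcome:

```python
import statsmodels.api as sm

X = sm.add_constant(df[["female", "femaleness", "female_x_femaleness",
                        "name_freq", "female_x_name_freq", "tenure"]])
survival_fit = sm.Logit(df["survived"], X).fit()
print(survival_fit.summary())
```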
## 5 Results
### 5.1 Femaleness and outcomes
Considering gender as a category (females and males) for success, women on average received 8.76 stars and men received 13.26; however, this difference is not statistically significant, either by an F-test ($$F=2.208$$) or by a bivariate ZINB model entering only an intercept and gender category (female = 1, male = 0) in both the zero inflation model (gender coefficient $$z= 0.488$$) and the count model (gender coefficient $$z= 0.835$$). Women, however, have a statistically significant disadvantage in the probability of survival: 92.8% of men survived one year after our data collection, while only 88.2% of women did ($$\text{odds ratio}=0.575$$, $$\text{Chi-squared}=126.1$$).
The femaleness of the pattern of behavior is significantly negatively related to success, using both a t-test ($$t=-5.337$$), and a ZINB model (zero inflation model $$z=23.947$$; count model $$z=-12.365$$). Femaleness is also negatively related to survival (bivariate logit model $$z=-9.875$$).
Turning to multivariate models, Fig. 8 shows point estimates of expected success and expected probability of survival for gender-related variables from five model specifications. All variables are measured on the 0–1 scale, making estimates comparable. In our full models—ZINB models for success (SI 1 in Additional file 1) and logit models for survival (SI 3)—the coefficient for being female shows no consistent relationship with outcomes. In our main models of success and survival (model 1, with variables shown on Fig. 8 and additional control variables), females are not significantly disadvantaged compared to males. In fact, our success model shows a weak positive coefficient (0.62, $$p=0.049$$). We tested the robustness of this finding by adding binary indicator variables for decision tree classes representing typical gendered behavioral patterns (model 2), or adding all programming language use frequencies (model 3). We also re-estimated model 1 (both for success and survival) with randomly swapped genders. We estimate model 4 by using the same variables as in model 1, but randomly swapping the gender for 5% of developers in the sample with known gender, and in model 5 swapping 10%. Both model 4 and model 5 report 95% confidence intervals from 100 trials. Of the five models, only models 4 and 5 (with 5% and 10% randomly swapped gender) show a significant disadvantage for females in survival. Our findings for success were robust with an OLS specification predicting $$\operatorname{log}(\mathrm{success}+1)$$ as well (SI 2).
While categorical gender is not a consistently significant predictor of outcomes, the femaleness of behavior is in all models for both success and survival. Femaleness of behavior is a strong negative predictor of both success and survival, and it is the only coefficient related to gender that is consistently and significantly different from zero. Figure 9 shows predictions for success and survival along the range of femaleness, keeping all other variables constant at their means. The difference between females (red line) and males (blue line) is small compared to the difference along the range of femaleness.
First, consider success at the median for both males and females (Fig. 9 panel (a)). The predicted success of males at their median femaleness is 2.53 (stars for their repositories); for females the prediction at their median femaleness is 1.07. Taking the male prediction as 100%, the expected success of females is 42.3% of that. The disadvantage is 57.7% points, of which 8.9% points are due to categorical gender, and 48.8% points are due to the difference in femaleness. In other words, only 15.4% of the expected female disadvantage in success is due to categorical gender, and 84.5% is due to femaleness of behavior. Considering the same decomposition for the probability of survival (Fig. 9 panel (c)), we see a smaller disadvantage for women: 6.1% points, of which 4.0% points are due to categorical gender, and 2.1% points due to differences in femaleness (34.8% of the expected disadvantage in survival).
Males are also disadvantaged by their gendered behavior. Considering the interquartile range of femaleness, the expected success of males at the first quartile of femaleness (0.32) is 4.16 stars, while the same expectation at the third quartile (0.52) is only 1.51 stars, which is 63.7% less. For females the predicted success at the first quartile of femaleness (0.43) is 1.84 stars, while at the third quartile (0.72) it is only 0.51 stars—a difference of 72.2%. For survival the same inter-quartile disadvantage for males is 2.7%, for females it is 8.8%.
The coefficient of the interaction between female gender and femaleness is positive for success, but not significantly different from zero for survival (considering model 1). This indicates that the penalty for femaleness is higher for males overall than for females. (The female disadvantage over the interquartile range is nevertheless higher than males because of the wider spread of femaleness for females.)
Using the frequency of the first name shows some evidence of discrimination in success, but not in survival. The interaction of being female and having a frequent name is negative, while the coefficient for name frequency itself is not significant, indicating that it is only women who suffer a disadvantage if their name is more common, and thus their gender is easier to recognize. The prediction for a woman with the rarest name is 2.74 stars, while the prediction for a woman with the commonest name is only 0.95 stars—a 65.5% lower success.
Figure 9 also shows predicted outcomes for users with unknown gender. To predict outcomes for unknowns, we use a specification identical to model 1, without variables for categorical gender and name frequency (see SI 4.). Again, our findings about success were robust with an OLS specification predicting $$\operatorname{log}(\mathrm{success}+1)$$ (see SI 2.). As apparent on Fig. 9 panel (b) and (d), the femaleness disadvantage is also demonstrable for those who do not reveal their gender. At the first quartile of femaleness (0.54) the expected number of stars is 1.99, while at the third quartile (0.62) it is only 1.03 stars—a 48.0% drop. The disadvantage for survival is even more severe: a reduction of 10.4% across the interquartile range (compared to 2.7% for males, and 8.8% for females). These results are robust if we restrict our analysis to those users who do not reveal any name, and omit those who do reveal a name that was not listed in the US baby name dataset.
Do we see evidence for change in femaleness-based disadvantage? Are there signs of a decreasing salience of femaleness in predicting success? To answer this, we split our sample by tenure, showing separate predictions for those starting in 2013-14 and in 2015-16. Figure 10 is a version of Fig. 9 panel (a), now split into earlier and later recruits. For a decreasing disadvantage we would expect the dashed lines (drawn for the more recent cohort) to be closer to horizontal than the solid lines drawn for the earlier cohort. Unfortunately we see evidence for the contrary: disadvantage by femaleness of behavior is increasing.
### 5.2 Classes of gendered behavior and outcomes
Thus far we have focused on relating one continuous dimension of gendered behavior, femaleness, to outcomes. We now turn to estimating how classes of gendered behavior relate to outcomes. In our models of success and survival presented in the previous section (specifically model 2 on Fig. 8) we entered 14 decision tree classes of gendered behavior alongside the continuous dimension (omitting the most gender-balanced class as the reference category), and found that the coefficient of the continuous dimension remains unchanged. This indicates that classes of gendered behavior do not add qualitatively different insights into how behavioral disadvantage operates. Now we test this idea further, by estimating models of success and survival that substitute the classes of gendered behavior for the continuous dimension of femaleness.
Figure 11 shows the marginal predictions for decision tree classes for success and survival, aligned by the female proportion in the class. In this analysis we use an OLS model with $$\log(\mathrm{success}+1)$$ as the dependent variable, as the zero inflated negative binomial models did not converge for the robustness checks with a range of classes from 5 to 100. For both the success and survival models we use an identical specification to model 1 on Fig. 8, the only difference being the replacement of the continuous femaleness variable by 13 binary indicators for classes (the 14th class being the omitted reference category). The trends on these figures show a negative relationship between female proportion in the class and outcomes: Regardless of the content of the behavior class, the proportion of women in the class is strongly negatively related with outcomes. This is true both for men and women.
To test the significance of this downward trend, we ran multilevel models, where we entered the class-level female proportion instead of the dummies of behavioral class. We specified these models otherwise the same way as model 1 on Fig. 8. We found that the female proportion in the decision tree class is a significant negative predictor for both success and survival, and that the difference between the intercepts and slopes of males and females is not significant. This finding holds for a range of decision tree class resolutions, from 5 to 100 (SI 8, SI 9). This suggests that gender segregation operates along emergent types of activities, regardless of the level of detail. It is chiefly the female quality of these classes of activities that relates to outcomes, and one dimension of femaleness is adequate to capture that.
## 6 Discussion
We found that gendered behavior is a significant source of disadvantage in open source software development: our models show negative coefficients for femaleness, and only weak support for categorical discrimination. Femaleness of behavior is not only a disadvantage for women: men and users with unidentifiable gender are just as disadvantaged along this dimension. Even if we consider classes of gendered behavior with as many as 100 different decision tree classes, outcomes are chiefly related to the female proportion in those classes, both for men and women. This is an important finding, as thus far the relative importance of categorical and behavioral gender has not been studied in the context of software development, and gender segregation was only studied at the level of professions.
Our findings have important consequences for policy and interventions in gender inequalities in software development, and possibly other creative fields. In the short term, attempts to set quotas for women in software companies will not address the component of inequality that is related to gendered behavior. Increased proportion of women eventually might lead to the flattening of the slope of the relationship between behavioral femaleness and outcomes. A higher proportion of women can lead to questioning stereotypes, more visible female success stories in conventionally male types of behavior, and decisions to re-classify types of work that are now packaged in masculine-feminine stereotyped specialties.
In the longer term, as the use of AI systems in human resources management advances, the importance of gendered behavior in disadvantage means an increased risk of algorithmic discrimination. Algorithms can be policed to exclude manifest gender information from their decision making, but they can perpetuate discrimination based on behavioral typicality, as a recent case of Amazon's AI-aided hiring has shown [53]. It will be difficult to hold such algorithms accountable, as the particular behavioral specializations figuring in gendered behavior can shift constantly. Today activists target the front end–back end dichotomy at Google [19, 20], but tomorrow they might need to target D3 and Hadoop.
We should re-think the place of coding schools for women that are becoming widespread. These schools are typically training women in specialties that already have a number of women working in them (such as Ruby), and thus might perpetuate the disadvantage of women by their femaleness of behavior [54]. Another unintended consequence of these schools is that they contribute to gender homophily by creating more women-to-women ties among the participants.
Users, and especially women, should re-think the potential benefits of hiding their gender online. It seems that the inequalities stemming from gendered behavior impact those who hide their gender just as much. A hidden gender identity can prevent discrimination by categorical gender, but it might also lead to a lack of trust and exclusion from projects, which might be behind the higher exit rate of such users. Comparing our calculation of the marginal effects of behavioral gender for users with unknown gender and women with known (manifest) gender shows that there is no advantage to hiding one's gender: the disadvantage cannot be escaped by hiding.
While we have been discussing gendered behavior, it is important to distinguish gendered behavior from gendered free choice. We composed our measure of gendered behavior out of variables that could be controlled by the individual, but we do not want to leave the impression that these traits are fully under the individual's control. It is likely that the reasons behind the high (and increasing) negative slope of femaleness of behavior are constrained choice and deep-rooted stereotypes, rather than free choice. Women are being boxed into specializations even despite their manifest protest against it, as the legal case against the front end–back end distinction has shown. What is hopeful, though, is that there is already a recognition that action needs to be targeted at discrimination by specializations.
### Acknowledgements
We thank Michael Szell for his assistance in collecting the data. We thank Ivan Szelenyi for helpful comments during the preparation of this manuscript. We also thank participants of seminars at the Department of Network and Data Science at CEU, at the Institute for Analytical Sociology at Linkoping University, and at the Institute for Social Research and Policy at Columbia University. We thank workshop participants at the Hungarian Academy of Sciences Centre for Social Sciences for comments and criticisms. We also thank the Hungarian Academy of Sciences for previously censoring the presentation of this manuscript for political reasons as part of the Hungarian government's ban on gender studies. This provided considerable visibility to our work, and led several anonymous citizen commenters, whom we would also like to acknowledge, to engage in a lively debate about our manuscript.
### Availability of data and materials
The datasets used and analyzed during the current study are available from the corresponding author (B.V.) on request.
### Competing interests
The authors declare that they have no competing interests.
## Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
### Supplementary material
Supplementary information (PDF 161 kB)
13688_2019_202_MOESM1_ESM.pdf
https://fermatslastspreadsheet.com/2012/04/05/pricing-options-in-your-head/?replytocom=780 | # Calculating option prices in your head
We all know that option prices are calculated with the Black-Scholes formula, using a volatility, time-to-maturity, strike and forward. Typically you just chuck them all into your computer and let it spit out the number.
Trouble with this is how do you get an intuition for prices, especially when you are looking at options trades like conditional steepeners or calendar spreads?
Recently I set myself the problem of getting some simple way to cross-check the numbers coming out of my PC, and to get some intuition for the way an option’s value decays to its intrinsic value as the forward moves (which is what you definitely need for options trading).
In this post I show you a simple approximation I have found which does a pretty stellar job of accurately telling you the price of an option at any strike, and which you can calculate in your head!
### The 2 flavours of Black Scholes: lognormal or normal
Most people are familiar with the flavour of the Black-Scholes formula which is based on a so-called geometric brownian motion, and which needs a so-called lognormal vol as its input.
In concrete terms, lognormal vols are like the numbers you see in the VIX index: 40% is big, 10% is small.
A lognormal vol of 10% is saying that the annual return on an asset can sit in a range about 10% wide.
Makes sense: stocks can return something like 5%, but maybe they’ll return 10% in a good year or make 0% in a bad year.
In the world of interest rates, however, it is more intuitive to express changes in terms of the absolute moves: if the 10y rate was 4% last year and 5% now, then the normal vol might be something like 1% per year (or 100 basis points or 100bps per year).
The less-known Black-Scholes formula that uses these normal vols looks quite like the lognormal version.
If you are interested, click here for a paper which shows the full derivation of the BS normal formula.
All we need to know for this post is that the Black-Scholes normal flavour formula for the price of a call & put is:

$\text{Call} = e^{-r(T-t)}\left[(F-K)\,N(d_1) + \sigma\sqrt{T-t}\,\phi(d_1)\right], \qquad \text{Put} = e^{-r(T-t)}\left[(K-F)\,N(-d_1) + \sigma\sqrt{T-t}\,\phi(d_1)\right],$

where $N$ is the standard normal distribution function and $\phi$ is its density. In these two equations the d1 term is defined as:

$d_1 = \frac{F-K}{\sigma\sqrt{T-t}}.$

It is just an alternative way of saying how far away you are from the strike of your option; I'll talk more about it below.
Remark: In the world of swaptions trading (options on interest-rate swaps), the formulas don’t need the exp{-r(T-t)} term at the front, so in the following I basically assume r=0 and never write that part — I just use the terms inside the square brackets.
### Pricing an at-the-money option: well-known and easy
For an at-the-money call or put (ie where K=F), the price is the same, let’s call it ATMPrice. The BS formula reduces to a simpler:
ATMPrice = 1/sqrt(2*PI) * vol * sqrt(Maturity)
which is very well approximated by:
ATMPrice = 0.4 * vol * sqrt(Maturity).
Here is an example in action:
if interest rates have a normal vol of 100bps per year, then the ATM 5y5y payer would cost about:
ATM 5y5y payer ~ 0.4 * 0.01 * 2.2 * 4.5 = 3.95%.
Note that the ‘4.5’ accounts for the duration of the underlying swap (a 5-year swap has a duration of about 4.5 at the current level of rates).
Nice so far, but it is going to get better 🙂
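If you want to sanity-check the 0.4 rule numerically, here is a minimal Python sketch for the 5y5y example above (the exact factor is 1/sqrt(2*PI) ≈ 0.3989):

```python
import math

def atm_normal_price(vol, maturity):
    """Exact ATM forward price under the normal (Bachelier) model, r = 0."""
    return vol * math.sqrt(maturity) / math.sqrt(2 * math.pi)

vol, maturity, duration = 0.01, 5.0, 4.5   # 100bp/yr normal vol, 5y expiry, ~4.5 duration
exact  = atm_normal_price(vol, maturity) * duration
approx = 0.4 * vol * math.sqrt(maturity) * duration
print(f"exact {exact:.2%}, approx {approx:.2%}")   # both come out close to 4%
```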
### Options which are not ATM: my new discovery
The standard decomposition for an option is:
Option value = Intrinsic value + Time value.
Let’s compare that with the Black-Scholes normal formula from above:
This is where I got my Eureka! moment.
If you have a look at the term (F-K)N(d1) in a spreadsheet, you’ll see that for small levels of volatility and maturity (try, for example, sigma=0.25%, Maturity=1) it is actually quite close to Max(0,F-K) – which is the intrinsic value of the call.
Consequently, the BS normal formula is almost:
Call Price = Intrinsic + ATMPrice*exp(-0.5*d1*d1).
Eureka!
But not so fast.
If you compare this formula to the correct BS formula in a spreadsheet, you’ll see that around the strike it gives too much value to the call: basically the term ATMPrice*exp(-0.5*d1*d1), is too big when d1 is non-zero and small. This is telling us that the difference between (F-K)N(d1) and Max(0,F-K) gets important near the strike. As I say, have a look in a spreadsheet.
Nonetheless, this simple-but-wrong formula for the Call Price has pointed us in the right direction: it shows that the time value of the option should be written in terms of the price of the ATM option.
Starting with the Black-Scholes normal flavour formula, adding and subtracting Max(0,F-K) then rearranging and using the 0.4 trick, gives (with a blatant effort to own this decomposition):
The Hardy Decomposition:
Option Price = Intrinsic + ATMPrice*HardyFactor
where
HardyFactor = exp(-0.5*d1*d1) + d1/0.4*N(d1) - Max(d1/0.4, 0).
Note that this is just a function of d1, which as I said above, is just a measure of how far you are from the strike in terms of the standard deviation to the option expiry date (being vol*sqrt(Maturity) ) of the underlying asset: eg
“I am 2 standard devs from the strike” means d1=2.
The lovely bit about the Hardy Decomposition is that the HardyFactor is well approximated by a simple expression:
$\text{HardyFactor} \approx e^{-1.4 \, |d_1|} .$
Better still, you can just remember a few values:
| abs(d1) | HardyFactor |
| ------- | ----------- |
| 0       | 100%        |
| 0.25    | 70%         |
| 0.5     | 50%         |
| 1       | 25%         |
| 1.5     | 12%         |
If you experiment a bit you’ll find better approximations. Here’s one I quite like and which is more accurate:
$\text{HardyFactor} \approx (1-0.41 |d_1|) \, e^{- |d_1|}.$
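Here is a small sketch that evaluates the exact HardyFactor from the decomposition above and compares it against the two approximations, so you can reproduce the table for yourself:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def hardy_exact(d1):
    # the HardyFactor formula from the Hardy Decomposition above
    return math.exp(-0.5 * d1 * d1) + d1 / 0.4 * norm_cdf(d1) - max(d1 / 0.4, 0.0)

def hardy_exp(d1):
    return math.exp(-1.4 * abs(d1))

def hardy_poly(d1):
    return (1.0 - 0.41 * abs(d1)) * math.exp(-abs(d1))

for d1 in (0.0, 0.25, 0.5, 1.0, 1.5):
    print(f"d1={d1:4.2f}  exact {hardy_exact(d1):.3f}  "
          f"exp {hardy_exp(d1):.3f}  poly {hardy_poly(d1):.3f}")
```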
Let’s now look at some examples.
### Example 1: Pricing a 5y5y 200-wide strangle
The time to maturity is 5, and we need sqrt(5) to calculate d1. Sqrt(5) = 2.23, which is pretty-much two and a quarter.
Therefore the standard deviation sigma*sqrt(Maturity) is just 2.25*normal vol.
For example, if normal vol is 100 bps per year, then
• d1=1 when the forward is 225 bps from the strike
• d1=0.5 when the forward is 112 bps from the strike
• so d1 is just a bit less than 0.5 when the strike is 100 bps either side of the forward.
This means that the HardyFactor for each of our payer and receiver is going to be about 60%.
The ATMPrice is just 0.4*standard deviation = 90 bps.
The 200-wide strangle has got no intrinsic value when we first put the trade on, so
Strangle Price = 2 * 90 bps * 60% * Duration.
The Duration is going to be about 4.5-ish at the moment. The option-value bit (2*90*0.6) is equal to 108 bps (being 90% of 120, if you think about it). Multiplying everything together gives a final answer of:
Strangle Price = about 490 bps.
By the way, I calculate 108*4.5 as 1.08*450 which is just under 450+(another 10% of 450).
I said it was nice!
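For reference, the same strangle arithmetic as a tiny script (just the approximations above, using the post's ~60% HardyFactor for d1 just under 0.5):

```python
import math

vol, expiry, duration = 0.01, 5.0, 4.5     # 100bp/yr normal vol, 5y expiry, ~4.5 duration
stdev = vol * math.sqrt(expiry)            # about 2.25 * vol
atm_price = 0.4 * stdev                    # ~90 bps

hardy = 0.60                               # eyeballed for strikes 100bp either side
strangle = 2 * atm_price * hardy * duration
print(f"200-wide 5y5y strangle ≈ {strangle * 1e4:.0f} bps")   # in line with the ~490 bps above
```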
### Example 2: Pricing a 1y 2s10s conditional steepener
The standard 2s10s steepener means receiving in 2-year swaps and paying in 10-year swaps (click here to see a post I wrote which tells you that steepening means buying shorter-dated bonds vs selling longer-dated bonds).
In options space, this trade means that we want to be long duration in a 1y2y swaption, and short duration in a 1y10y swaption. Some people would say
“we want a long (delta) position in 2y tails vs a short (delta) position in 10y tails”.
For this example we will get that position by buying a 1y2y receiver and buying a 1y10y payer, with duration-weighted notionals. Of course, there are other ways to do this: you could for example buy the 1y2y receiver and sell a 1y10y receiver, and achieve a zero-cost trade (that would be leveraging).
First we calculate a few values which will help us identify the sorts of strikes we can afford.
Suppose that vol is 100 bps per year, then
d1 is 1 when the strikes are 100 bps out-of-the-money.
The ATMPrice is
ATMPrice = 0.4 * 100 bps = 40 bps.
Durations are roughly:
2y Duration ~ 1.8,
10y Duration ~ 9.
A conditional steepener trade would be done with duration-weighted notionals so that the delta on both positions is equal, and will generally be out-of-the-money at inception (so intrinsic = 0). Consequently we can write out the prices of the two trades:
1y2y Receiver = 40 bps * HardyFactor1 * 1.8,
1y10y payer = 40 bps * HardyFactor2 * 9 * 1.8/9,
and we can see that if we want to spend about the same amount upfront as we would for a 1y2y ATM call (40 bps * 1.8) then we need to choose our strikes so that
HardyFactor2 = HardyFactor1 = 50%.
According to the table above, this would mean having d1 around 0.5, and our conditional steepener trade is therefore:
• buy a 1y2y receiver with strike 50 bps out of the money,
• buy a 1y10y payer in size 1.8/9 which is also 50 bps out of the money.
You can see how other variations of the conditional steepener can work.
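And the strike-finding logic from this example in a few lines (a sketch that simply inverts the exponential HardyFactor approximation; same 100bp/yr vol and 1y expiry):

```python
import math

vol, expiry = 0.01, 1.0
stdev = vol * math.sqrt(expiry)            # 100 bps
atm_price = 0.4 * stdev                    # 40 bps
dur_2y, dur_10y = 1.8, 9.0

budget = atm_price * dur_2y                # spend roughly a 1y2y ATM premium (~72 bps)

# duration-weighted legs => each leg needs a HardyFactor of 0.5;
# invert HardyFactor ~ exp(-1.4*d1) to find how far OTM the strikes should be
d1 = -math.log(0.5) / 1.4                  # ~0.5
offset = d1 * stdev                        # ~50 bps out of the money
print(f"budget ≈ {budget * 1e4:.0f} bps; set both strikes ≈ {offset * 1e4:.0f} bps OTM")
```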
### Conclusion
All in all I reckon this is a pretty funky post.
This new & simple Hardy Decomposition representation of an option price gives a huge improvement to one’s intuition of options trades: the value of an option is just its intrinsic value plus a proportion of the ATM Price.
In a follow-up post I will use the Hardy Decomposition to show the intuition for calendar spreads, and another will show how easy the option greeks become when you think of options in this way.
### Update
Now go and read my post here which uses this simple decomposition to explain why implied vols have a smile.
## 11 thoughts on “Calculating option prices in your head”
1. Paul Tank William says:
What is the intuition behind the output? Let's take your simple ATM with 100 normal vol. So, around 0.4*vol*sqrt(mat), say we use 100 of vol and 1 year, so 0.4*100*1, so around 40bps… why would that be below 50bps…
Is 100 vol meaning double or nothing, or is it less than that… if you flip a coin for $1 or 0, your option price should be 50 cents… make sense…? Got any intuition?
1. Robert says:
My first reaction was that there is not any intuition for that factor of 0.4 (it is just 1/sqrt(2*PI)), but on reflection there might be some juice there after all…
Stick with me on this one:
1) the price of an option is supposed to be a trader’s best guess at how much profit the buyer will make by delta hedging the option (the buyer would be long convexity and will make a few cents each time there is a move),
2) as we go through time, those few cents of profit from a delta-hedging option position will decrease, because the leverage is decreasing (because the maturity is decreasing),
3) so rather than being a full 0.5*(small profit) always, it will be 0.5*(small profit) at first and then 0.4*(small profit) after a while, then 0.3*() …
There is some kind of weighted average there (it is a square-root decay), so the result comes out at 0.4.
Well, that might be a bit far-fetched, but it could be checked by breaking down the usual Black-Scholes analysis into portions of time where you can approximate the Gaussians by a 2-point distribution.
I am not hopeful that it could be done easily, but it sounds reasonable-ish.
Here is another angle on your question.
In the win-lose example you gave which costs 50c upfront, you are effectively scaling the win amount by 50% to get an intuitive explanation for a cost of 50c. Since the price is just a calculation of weighted probabilities, you do get a monetary result which has a simple intuition (reminds me of dimensional analysis).
In the case of options the price is a more sophisticated calculation so we should not expect to be able to apply scaling directly as you would like. However, we _can_ use the ATM price as our base unit and apply scaling to that: if vol=100bps then we know that when the strike is 100bps either side of the current forward, the price of the option is about 50% of the price of the ATM (that’s what the OTM price approximation I found is saying, since d1=0.5 in that case).
Maybe the first answer above is more like the idea you had in mind.
Hope this helps 🙂
2. jayprich says:
the scaling works best in options that trade on a forward settlement, yes estimating the price via a ratio is not so hard but it actually complicates the main intuition you need when trading options which is delta & gamma [as a matter of urgency] and then interest rate (discounting) exposure and vega (term structure and strike mismatches) for which it is okay to rely on a proper system
1. Robert says:
Don’t get me wrong, this is just a simple approximation formula which could be used as a quick check on anything that your computer system spits out. It is just an extension of the ATM approximation that traders often use.
3. Hi,
why for the ATM payer we need to multiply by duration? We have a ready formula for the put, so why an additional term kicks in? And why is it duration? Thanks!
1. Robert says:
The duration term will be present in every swaption pricing — since the asset you actually exercise into is a swap which has a duration (aka PV01 or DV01).
1. BL says:
Wouldn’t the answer to that question be that the vol in the equation is the volatility of the rate whereas the thing that you are exercising is a swap which has a price and the volatility of the latter is (rate vol)*(DV01)? I’m not sure if that is correct since vol also shows up in the d1 term but at least in the ATM case it seems that the value of an option on, say a 10yr swap should be roughly twice as valuable as an option on a 5yr swap, even if you think the rates themselves are equally volatile, simply because the actual value of a 10yr swap is roughly twice as volatile as the value of a 5yr swap.
4. srini says:
Hi, Will this work on equity options as well ? Is it possible to walk through an example for IBM Feb22 190C say based on IBM http://finance.yahoo.com/q/op?s=ibm&ql=1
I think I am confused on the d1 values here.
Thanks,Srini
1. Robert says:
This idea is really just a way to approximate the Black-Scholes formula. So ‘yes’ it will be applicable to any option pricing that uses the Black-Scholes formula (and my guess is that this means pretty much every vanilla option on any asset).
5. jake says:
Thanks for this recipe – I like it. However, I think it's more useful to memorise / look up the hardy factor in delta space rather than d1 space.
e.g.
import math

t, vol, spot = 1.0, 0.20, 100.0  # illustrative inputs: 1y expiry, 20% lognormal vol

# HardyFactor looked up by (call) delta instead of d1
for delta, h in [
    (0.50, 1.00),
    (0.25, 0.37),
    (0.10, 0.12),
    (0.01, 0.01),
]:
    approx = h * 0.4 * math.sqrt(t) * vol * spot   # the option's time value
    print(delta, approx)
Also, the approximations you suggested weren't very accurate.
https://fitdivassociety.com/sux9e02o/4a5eaa-partial-derivative-formula | # partial derivative formula
Partial derivatives are computed just like ordinary derivatives: differentiate with respect to one variable while holding the other variables constant. For example, if $w = x\sin(y+3z)$, then $\partial w/\partial x = \sin(y+3z)$, since both $y$ and $z$ are treated as constants. Note that a function of three variables does not have a graph.

Consider $f(x,y) = 2x^2y^3 - 3x^4y^2$. To find $\partial f/\partial x$, ignore $y$ (treat it as a constant) and use the power rule $\frac{d}{dx}x^n = nx^{n-1}$, which gives $\partial f/\partial x = 4xy^3 - 12x^3y^2$. Similarly, the partial derivative of $3x^2y + 2y^2$ with respect to $x$ is $6xy$.

If $u = f(x,y)$, partial derivatives follow the same rules as ordinary derivatives, such as the product rule, quotient rule and chain rule. For instance, if $u = f(x,y)\,g(x,y)$, then $u_x = f\,g_x + g\,f_x$.

The row vector of all first-order partial derivatives of $f$ is the gradient of $f$, i.e. $\nabla f$. In the special case where $f:\mathbb{R}\to\mathbb{R}$ is a scalar-valued function of a single variable, the Jacobian matrix has a single entry: the ordinary derivative of $f$.

You can use a partial derivative to measure a rate of change in a coordinate direction in three dimensions: visualize a function of two variables $z = f(x,y)$ as a surface floating over the $xy$-plane of a 3-D Cartesian graph; $\partial f/\partial x$ at a point $(x,y)$ is the slope of the surface in the $x$-direction at that point.

From the derivative of the natural logarithm, $(\ln x)' = 1/x$, we obtain a formula for the logarithm with an arbitrary base: since $\log_a x = \ln x/\ln a$, we have $(\log_a x)' = 1/(x\ln a)$.

Partial derivatives also appear in economics. The point price elasticity of demand is $E = \frac{\partial Q}{\partial P}\cdot\frac{P_0}{Q_0}$, where $\partial Q/\partial P$ is the partial derivative of the quantity demanded taken with respect to the good's price, $P_0$ is a specific price for the good, and $Q_0$ is the quantity demanded at that price.
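To double-check the worked examples above, here is a minimal SymPy sketch (the symbol and variable names are just for illustration):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# f(x, y) = 2x^2 y^3 - 3x^4 y^2: differentiate w.r.t. x, treating y as a constant
f = 2*x**2*y**3 - 3*x**4*y**2
print(sp.diff(f, x))        # 4*x*y**3 - 12*x**3*y**2

# partial of 3x^2 y + 2y^2 with respect to x
g = 3*x**2*y + 2*y**2
print(sp.diff(g, x))        # 6*x*y

# w = x*sin(y + 3z): y and z are held constant
w = x*sp.sin(y + 3*z)
print(sp.diff(w, x))        # sin(y + 3*z)
```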