http://books.duhnnae.com/2017/jun/149631423257-Jet-schemes-of-toric-surfaces-a-short-version-Hussein-Mourtada.php | # Jet schemes of toric surfaces (a short version)
For $m \in \mathbb{N}$, $m \geq 1$, we determine the irreducible components of the $m$-th jet scheme of a toric surface $S$. For $m$ big enough, we connect the number of a class of these irreducible components to the number of exceptional divisors on the minimal resolution of $S$.
https://icedmaster.wordpress.com/2018/04/ | # Animelog: Mary and the Witch’s Flower
I’m probably late to this party… but this is going to be short anyway — I’ve never been a Ghibli fan, and I watched this movie only because, you know, anime movies aren’t released every day.
This is a typical Ghibli movie even though de jure the production studio is different — a girl moves to the countryside, hates it, meets a boy, hates him, discovers a magical world, figures out that something is wrong with that world, becomes friends with the boy — and together they save the day. The true power of friendship! For those who like Miyazaki’s films, Mary and the Witch’s Flower may be a good choice — its visuals are really good (especially the backgrounds), and the theme of friendship is something that’s always going to be popular. On the other hand, the movie doesn’t contain a single interesting or new idea, so ¯\_(ツ)_/¯
# Reading Log: The Real Story, Prey
Recently we talked about books with the guys at work and I was recommended a whole bunch of authors and stories I’d never heard of before (among other things, I discovered that one of my co-workers is extremely well-read). I thought “why not” and read two novels suggested by two colleagues =)
“The Real Story” is the first book in “The Gap Cycle” by Stephen R. Donaldson. Apparently the series later becomes darker and deeper, but “The Real Story” is a very easy-to-read piratical adventure about two space pirates and a woman captured by one of them. The book doesn’t contain any new ideas and doesn’t even describe the world it takes place in. However, Donaldson is a very effective author — the plot flows smoothly and it’s very easy to get hooked. He explained that his idea was to show how the characters in the famous “Victim – Villain – Rescuer” triangle all change their roles — and that indeed happens, but you’ll pay attention to it only if you’re aware of the author’s grand plan. The funny thing is that we observe everything mostly from the Villain’s point of view, and even though you logically understand that he’s a bad guy, it’s kind of difficult to _not_ sympathize with him when his plans are finally ruined and his life is on the verge of destruction. Going to buy the second book — and I look forward to finally meeting the aliens there; I was told they’re completely inhuman.
“Prey” is a child of Michael Crichton’s imagination; his most famous work is “Jurassic Park”. There’s an opinion that Crichton is a “pop sci-fi” writer, and I’d say that “Prey” confirms it. As far as I understand, his books follow a scenario where we see some technological and/or biological advancement, then something goes wrong, and then the main characters save the day. In Prey’s case this advancement is nanomachines. The main character is a former programmer (currently a stay-at-home dad) whose spouse is in charge of an unbelievably important project. But his wife starts acting more and more weirdly. Is she cheating on him? Is she on drugs? This part — when the normal life of an average guy is disturbed, but by something relatively ordinary — is the best part of the book. Then things become more action-packed and more stupid. Swarms of nanomachines in the wild, their evolution, the attempts of the main character to destroy those swarms (and the main hero is a 40-year-old programmer accompanied by other programmers — a squad less than ideal for destroying anything, except maybe a couple of canisters of beer) — it all felt unnatural and, frankly, boring. Also, Crichton was a smart guy and added a lot of real-life facts to his book. But unfortunately he _added_ them rather than weaving them in. It looks as if he had a list of “things I must mention” while working on the book, which he used without any consideration of how well they’d be integrated. So, as the Warden Twins in Persona 5 usually say: “Not terrible but not impressive”.
The first round of the Stanley Cup Playoffs finished with a crazy game between the Leafs and the Bruins. It was the seventh match of the series, and Boston celebrated by scoring 7 goals. Toronto answered with just 4. Good game anyway.
I discovered that my forecasting skills suck — I correctly predicted only 50% of the teams that advanced to the second round. The Golden Knights, Lightning, Sharks and Penguins were all bound to lose according to my predictions. Haha. It’s funny that NHL 18 is way more precise — so far its simulation is absolutely correct:
https://www.nhl.com/news/winnipeg-wins-stanley-cup-in-ea-sports-nhl-18-playoff-simulation/c-297863906
# Animelog: Spring 2018, Dropped
Persona 5 The Animation
Despite the fact that I enjoyed the game a lot, the same plot in animated form looks rather silly. For people who haven’t played Persona 5, the description of this anime is unlikely to sound especially tempting — a bunch of school students, led by a guy whose parents don’t care for him at all, trying to change the “rotten world built by corrupt adults”. Those damn adults, they always corrupt everything! Out of curiosity we checked out the first episode, saw that it wasn’t animated any better than the in-game cutscenes, and dropped it.
Megalo Box
I don’t watch sports anime, but the visuals of Megalo Box are so old-school that I couldn’t resist. However, other than the unusual graphics it’s very difficult to find any reason to keep watching this show. (Note for my future self — megalo boxing is just like regular boxing, except the boxers wear exoskeletons.)
3D Kanojo: Real Girl
An otaku falls in love with a girl who supposedly does this and that with anyone who gets closer than 5 meters to her. On top of that, the first episode was rather poorly animated. I might have continued watching this anime if we didn’t have a better otaku-in-love show this season. Dropped without a trace of remorse.
Golden Kamuy
We’re going to watch how an “immortal” Japanese deserter, accompanied by an Ainu girl, looks for stolen gold while killing humans (decently drawn) and 3D bears (they look terrible). Interesting character design and an underused historical setting, but somehow this anime didn’t impress me much, maybe because of those war-and-prison vibes it has.
Gurazeni
I think this anime is going to be buried in bad reviews and eventually forgotten. For sports anime fans (and baseball fans in particular) it’s too slow and too focused on everything except the actual matches, with characters who are, hm, overbearing all the time. For everyone else, it has too much baseball in it. But again — the character design is really nice.
Survived so far:
Steins;Gate 0
Wotaku ni Koi wa Muzukashii
Hinamatsuri
Mahou Shoujo Ore
Comic Girls (actually, this one can be considered pretty much as dropped)
Ginga Eiyuu Densetsu: Die Neue These – Kaikou
Piano No Mori
Akkun to Kanojo
Hisone to Maso-tan
Fumikiri Jikan
Cutie Honey Universe
Space Battleship Tiramisu
Recently I discovered that Dave Lombardo isn’t just one of the best drummers we’ve ever seen but is also responsible for some (questionable, I must admit) art.
The website dedicated to his collection is: http://davelombardoart.com/
Actually, I wouldn’t mind having one of his artworks at home but prices are not especially friendly =(
Misc stuff.
The guys at work rarely ask me about Russia, but a couple of days ago they were curious whether it’s true that the Russian government managed to block a hellish number of websites in its attempts to stop Telegram. I didn’t know for sure, so my answer was full of “apparently” and “as far as I know” — but they definitely got some food for thought.
Finally figured out how the coefficients in Sloan’s “Stupid SH Tricks” irradiance calculations are, hm, calculated. To me this stuff looked slightly magical:
// s_fSqrtPI is sqrt(pi)
const float fC0 = 1.0f/(2.0f*s_fSqrtPI);
const float fC1 = (float)sqrt(3.0f)/(3.0f*s_fSqrtPI);
const float fC2 = (float)sqrt(15.0f)/(8.0f*s_fSqrtPI);
But as usual, it turned out it’s just me being slow. If we take the coefficient used to project onto the SH basis and multiply it by the coefficient used for convolution, we get exactly this fC0 and so on. For example, for the first band:
$A_1 = \dfrac{2\pi}{3}; \quad c_1 = \sqrt{\dfrac{3}{4\pi}};$
$A_1 \cdot c_1 = \dfrac{2\pi\sqrt{3}}{3 \cdot 2\sqrt{\pi}} = \dfrac{\sqrt{3}}{3\sqrt{\pi}} \cdot \pi$
As a final step we divide by $\pi$ to get irradiance.
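Here’s a quick sanity check of the same folding for all three bands (Python rather than shader code; it assumes the standard Lambertian cosine-lobe convolution coefficients $A_0 = \pi$, $A_1 = 2\pi/3$, $A_2 = \pi/4$ and the usual SH normalization constants):

```python
import math

SQRT_PI = math.sqrt(math.pi)

# Convolution coefficients A_l for the Lambertian cosine lobe
A = {0: math.pi, 1: 2.0 * math.pi / 3.0, 2: math.pi / 4.0}

# SH basis normalization constants c_l (c_2 here is the one for Y_{2,+-1/+-2})
c = {0: math.sqrt(1.0 / (4.0 * math.pi)),
     1: math.sqrt(3.0 / (4.0 * math.pi)),
     2: math.sqrt(15.0 / (4.0 * math.pi))}

# Fold A_l * c_l together and divide by pi (radiance -> irradiance)
fC = {l: A[l] * c[l] / math.pi for l in (0, 1, 2)}

# Compare against the fC0/fC1/fC2 constants from the snippet above
assert math.isclose(fC[0], 1.0 / (2.0 * SQRT_PI))
assert math.isclose(fC[1], math.sqrt(3.0) / (3.0 * SQRT_PI))
assert math.isclose(fC[2], math.sqrt(15.0) / (8.0 * SQRT_PI))
print("all three constants match")
```

So each fC constant is just (projection constant) × (convolution coefficient) / π, exactly as in the first-band derivation.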
And the last but not least:
Discovered a beautiful band — Grandpa’s Cough Medicine. These guys and girls play bluegrass and are probably not known anywhere except the town they proudly reside in. So far the band has released 3 albums, and I have to say that “The Murder Chord” is definitely worth listening to.
website
Official video
My favourite song (Julianne):
http://mathoverflow.net/revisions/58446/list | # Zariski density of conjugates of subgroups by arithmetic subgroups?
Let $G$ be a linear algebraic $\mathbb{Q}$-group, which is assumed to be connected, semi-simple, $\mathbb{Q}$-simple, and of adjoint type, such that the Lie group $G(\mathbb{R})$ has no compact factor defined over $\mathbb{Q}$. Let $\Gamma\subset G(\mathbb{Q})$ be a congruence subgroup. It is known, from the theory of Margulis, that $\Gamma\subset G(\mathbb{R})$ is Zariski dense. For convenience assume that $\Gamma\subset G(\mathbb{R})^+\cap G(\mathbb{Q})$ and that $\Gamma$ is torsion free. Note also that in this case, if one takes $X$ to be the non-compact symmetric domain associated to $G(\mathbb{R})^+$, then the quotient $X/\Gamma$ is a locally symmetric manifold of negative curvature (a typical example of a hyperbolic manifold).
I'd like to consider conjugates of linear $\mathbb{Q}$-subgroups of $G$ under $\Gamma$. More restrictively, let me take $H\subset G$ a connected semi-simple $\mathbb{Q}$-group such that $H(\mathbb{R})$ again has no compact factors defined over $\mathbb{Q}$. Then
(1) is the union $\bigcup_{g\in\Gamma}gHg^{-1}$ Zariski dense in $G$?
(2) if $\Gamma'$ is a finitely generated subgroup of $\Gamma$, and $H'$ is the Zariski closure of the subgroup of $G(\mathbb{Q})$ generated by $\bigcup_{g\in \Gamma'}gH(\mathbb{Q}) g^{-1}$, then how far is $\Gamma'$ from being an arithmetic subgroup of $H'$?
Thanks!
http://talkstats.com/threads/copula-based-var-calculation-in-r.69320/ | copula-based VaR calculation in R
maaarten9
I'm working on a value-at-risk calculation using copulas on different stock market indices. I know how to fit the copula, but I can't figure out how to apply the VaR approach in the next step. The concept of copulas is relatively new to me and has proven to be very challenging for an average master's student. I defined 3 periods in which I want to investigate the evolution of the VaR over time. When running the code, R returns a value for the VaR. But when running the code for another time period, R gives the same value as for the previous period. Am I overlooking/forgetting something? The code below shows the bivariate example of China and India using the normal copula. I plan to extend it with the t and Clayton copulas at a later stage.
Code:
library(copula)
cop_model = normalCopula(dim = 2)
m = pobs(as.matrix(cbind(CHINA_INDIA$CHINA.LOG[571:406],CHINA_INDIA$INDIA.LOG[571:406])))
#pseudo-observations
fit = fitCopula(cop_model, m, method = "ml")
coef(fit)
tau(normalCopula(param = coef(fit)))
cor(CHINA_INDIA$CHINA.LOG[571:406],CHINA_INDIA$INDIA.LOG[571:406], method = "kendall") #check whether correlation is more or less preserved
set.seed(1559)
u = rCopula(500, normalCopula(coef(fit), dim = 2)) #simulate some observations from the copula
cdf = pCopula(u, normalCopula(coef(fit), dim = 2)) #construct cdf of the copula
library(PerformanceAnalytics)
VaR(cdf, p=0.95)
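One observation, hedged: `VaR(cdf, p=0.95)` is applied to the copula CDF values, which depend only on the fitted copula parameter — if the fitted correlation is similar across periods, the result will barely change, which may explain the identical answers. A more usual simulation approach maps the copula samples back through each period's marginal quantiles and takes the loss quantile of the simulated portfolio returns. A sketch in Python/NumPy rather than R (the synthetic data is a hypothetical stand-in for one period of the CHINA/INDIA log-return columns):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1559)

# Hypothetical stand-in for one period of the two log-return series
returns = rng.multivariate_normal(
    [0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=500) * 0.01

# 1) Fit a Gaussian copula (here simply via the window's correlation)
rho = np.corrcoef(returns, rowvar=False)[0, 1]

# 2) Simulate from the copula: correlated normals -> uniforms in [0,1]^2
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]],
                            size=100_000)
u = norm.cdf(z)

# 3) Map uniforms back through the period's *marginal* quantiles
#    (empirical marginals here), so the answer changes with the window
sim = np.column_stack(
    [np.quantile(returns[:, j], u[:, j]) for j in (0, 1)])

# 4) 95% VaR of an equal-weight portfolio: the 5% loss quantile
port = sim.mean(axis=1)
var_95 = -np.quantile(port, 0.05)
print(f"95% VaR: {var_95:.4%}")
```

The key point is step 3: the copula only models the dependence, and the period-specific marginals must re-enter before taking the quantile.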
http://dict.cnki.net/h_24905118.html | # Translations of 构成 in the Criminal Law category
composition
- Talk about the Deficiency and Perfection of the Condition of Time in Recidivist's Composition (论累犯构成中时间条件的不足与完善)
- A Dynamic Analysis on the Crime Composition (犯罪构成动态论纲)
- Conversion of the Composition Outlook of Recidivist — Initiation of Delinquency Recidivist (累犯构成视角的转换(下)——过失累犯之提倡)
- The stipulation in Clause 17.2 of Criminal Law should be interpreted as 8 individual crimes on the basis of legal principles on crime and crime composition theory. (在罪刑法定原则和犯罪构成理论指引下，我们应当将刑法的第17条第2款的规定理解为8种独立的犯罪行为。)
- Concept and composition of embezzlement crime. (侵占罪的概念及构成。)
constitution
- Effects of the Essence of Transactionality on the Constitution of Bribery (论交易性本质对受贿罪构成的影响)
- On the Relationship between Constitution of Crimes and Ground for Elimination of Crimes (论犯罪构成与犯罪阻却事由的关系)
- New ideas about the constitution of the crime of extorting a confession by torture (刑讯逼供犯罪构成新论)
- The definition about constitution of crime of abuse of authority (滥用职权罪犯罪构成要件之界定)
- Study on the Constitution of Misappropriation Crime (侵占罪构成问题研究)
form
- On the Arising & Developing of the Theory of Criminal Form (犯罪构成理论的产生与发展)
- Such three parts form the complete conviction theory system: conviction pandect, conviction centavo and conviction apiece theory. (这三部分构成了定罪理论的完整体系：定罪总论、定罪分论与定罪各论。)
- Whether the Behavior of Getting Illegal Profit could Form Crime of Embezzlement (不当得利行为能否构成侵占罪)
- On the Entity and Procedure Means of the Key Element Assertion of the Norm Form from the Angle of Obscene Nature (由“淫秽性”谈规范构成要件要素认定的实体及程序路径)
- On the Form and Practical Meaning of Rights of Criminal (试论罪犯权利构成及其实践价值)
formation
- The Theory of Substantive Law Violation and Its Significance to the Theory of Offence Formation in China (实质违法性理论及其对我国犯罪构成理论的意义)
- On the Concept and Formation of Corporation Crime (论法人犯罪的概念及其构成)
- The formation of imperfect crime needs not the result as an element, and the accomplishment of all the behavioral crime needs the result as an element of the formation of crime. (一切结果犯、危险犯的犯罪既遂构成中，均以犯罪结果为构成要件要素，而未完成形态的犯罪以及行为犯的既遂构成都不要求结果。)
- The formation of an infringement of commercial secrets has to match both subjective and objective conditions. (侵犯商业秘密罪是侵犯商业秘密，给权利人造成重大损失的行为，其构成应符合主观和客观要件。)
- An Analysis of the Formation of Embezzlement Crime (浅析贪污罪的犯罪构成)
composition
- We study the composition of the functor from the category of modules over the Lie algebra $\mathfrak{gl}_m$ to the category of modules over the degenerate affine Hecke algebra of GLN introduced by I.
- We also establish a connection between the composition of the functors, and the "centralizer construction" of the Yangian ${\rm Y}(\mathfrak{gl}_n)$ discovered by G.
- We also determine all the composition factors of the symmetric tensors of the natural osp(2|2n)-module.
- For Fourier-bandlimited symbols, we derive the expected formulae for composition and commutators and construct an orthonormal basis of common approximate eigenvectors that could be used to study spectral theory.
constitution
- The distribution characteristics of protective relaying data flow and the constitution of the end-to-end delay of messages were analyzed.
- Constitution and Properties of Nanocomposites Prepared by Thermal Decomposition of Silver Salts Sorbed by Polyacrylate Matrix
- This study involves 72 young men with various levels of working efficiency and elucidates the relationship between human morphological constitution by the type of age-related evolution of the organism and parameters affecting hemostasis.
- Interrelations between the Constitution Type and Features of Muscular Activity Energetics in Sprinters and Stayers
- There was a difference in the distribution of constitution types between the sprinters and the stayers.
form
- The presentations are given in the form of graphs resembling Dynkin diagrams and very similar to the presentations for finite complex reflection groups given in [2].
- Let H? be a real form of a complex semisimple group.
- Let $Z=G/Q$ be a complex flag manifold and $G_0$ a real form of $G$.
- In this paper we present an explicit formula for the twistors in the form of an infinite product of the universal $R$ matrix of $U_q(g)$.
- We show that on each Schubert cell, the corresponding Kostant harmonic form can be described using only data coming from the Bruhat Poisson structure.
formation
- In the present study, two of the probable antitumor marine compounds, manzamine A and sarcophine, were screened using benzo[a]pyrene (BP)-derived DNA adduct formation in MCF-7 cells as an intermediary biomarker.
- Most of the compounds were found to be very potent inhibitors of malondialdehyde (MDA) formation at 10^-3 M.
- However, no significant inhibitory effect was obtained on superoxide anion formation.
- Results of clone formation and flow cytometry analysis suggested that prodigiosin has the capability of restraining mitosis by regulating the cell cycle.
- These corresponding chalcones were reacted with phenyl hydrazide in glacial acetic acid, which led to the formation of novel 4-[5-(substituted phenyl)-1-phenyl-4,5-dihydro-1H-3-pyrazolyl]-2-methylphenol derivatives.
https://uk.mathworks.com/help/robust/ug/measures-of-robust-performance.html | ## Robust Performance Measure for Mu Synthesis
The robust H∞ performance quantifies how modeled uncertainty affects the performance of a feedback loop. Performance here is measured with the H∞ norm (peak gain) of a transfer function of interest, such as the transfer function from disturbance to error signals. (See H-Infinity Performance.)
For a system T(s), the robust H performance μ is the smallest value γ such that the peak gain of T remains below γ for uncertainty up to 1/γ, in normalized units. For example:
• μ = 0.5 means that ||T(s)||∞ remains below 0.5 for uncertainty up to twice the uncertainty specified in T. The worst-case gain for the specified uncertainty is typically smaller.
• μ = 2 means that ||T(s)||∞ remains below 2 for uncertainty up to half the uncertainty specified in T. For this value, the worst-case gain for the full specified uncertainty can be much larger. It can even be infinite, meaning that the system does not remain stable over the full range of the specified uncertainty.
The quantity μ is the peak value over frequency of the structured singular value μ(ω) for the uncertainty specified in T. This quantity is a generalization of the singular value for uncertain systems. It depends on the structure of the uncertainty in the system. In practice, μ is difficult to compute exactly, so the software instead computes lower and upper bounds, $\underline{\mu}$ and $\overline{\mu}$. The upper bound $\overline{\mu}$ has several applications in control system design and analysis. You can:
• Use musyn to design a controller for an uncertain plant that minimizes $\overline{\mu }$ of the closed-loop system. In addition to the resulting controller, musyn returns the corresponding value of $\overline{\mu }$ in the CLperf output argument.
• Use musynperf to evaluate the robust performance of an uncertain system. This function returns lower and upper bounds on μ, the uncertainty values that yield the peak μ, and other information about the closed-loop robust performance.
### Uncertain Model
To understand the computation of robust H performance, consider an uncertain system T(s), modeled as a fixed portion T0 and an uncertain portion Δunc/γ, such that $T\left(s\right)=\text{LFT}\left({\Delta }_{unc}/\gamma ,{T}_{0}\right)$.
Δunc collects the uncertain elements {Δ1,…,ΔN}.
${\Delta }_{unc}=\left(\begin{array}{ccc}{\Delta }_{1}& & \\ & \ddots & \\ & & {\Delta }_{N}\end{array}\right).$
Each Δj is an arbitrary real, complex, or dynamic uncertainty that is normalized such that ||Δj|| ≤ 1. The factor γ adjusts the level of uncertainty.
### Robust Performance as a Robust Stability Margin
Suppose that for the system modeled as in diagram (a),
||T|| ≤ γ for all ||Δunc|| ≤ 1.
By the small-gain theorem (see [1]), this robust performance condition is equivalent to stating that the system of diagram (b), LFT(Δperf/γ,T), is stable for all ||Δperf|| ≤ 1.
Δperf is called the performance block. Expand T as in diagram (a), and group Δperf with the uncertain blocks Δunc to define a new block Δ,
$\Delta \triangleq \left(\begin{array}{cc}{\Delta }_{perf}& 0\\ 0& {\Delta }_{unc}\end{array}\right).$
The result is the system in the following diagram.
Thus, the robust performance condition on the system of diagram (a) is equivalent to a stability condition on diagram (c): LFT(Δ/γ,T0) is stable for all ||Δ|| ≤ 1.
The robust performance μ is the smallest γ for which this stability condition holds. Equivalently, 1/μ is the largest uncertainty level 1/γ for which the system of diagram (c) is robustly stable. In other words, 1/μ is the robust stability margin of the feedback loop of diagram (c) for the augmented uncertainty Δ. (For more information on robust stability margins, see Robustness and Worst-Case Analysis.)
### Upper Bound of μ
To obtain an upper bound on μ, the software introduces scalings. If the system in diagram (c) is stable for all ||Δ|| ≤ 1, then the system of the following diagram is also stable, for any invertible D.
If D commutes with Δ, then the system of diagram (d) is the same as the system in the following diagram.
The matrices D that structurally commute with Δ are called D scalings. They can be frequency dependent, which is denoted by D(ω).
Define $\overline{\mu }$ as:
$\overline{\mu} \triangleq \inf_{D(\omega)} \left\| D(\omega)\, T_0(j\omega)\, D(\omega)^{-1} \right\|_{\infty}.$
For the optimal D*(ω), and any γ ≥ $\overline{\mu}$,
$\left\| D^*(\omega)\, T_0(j\omega)\, D^*(\omega)^{-1} \right\|_{\infty} \le \gamma.$
Therefore, by the small-gain theorem, the system of diagram (e) is stable for all ||Δ|| ≤ 1. It follows that 1/γ ≤ 1/μ, or γ ≥ μ, because 1/μ is the robust stability margin. Consequently, μ ≤ $\overline{\mu}$, so that $\overline{\mu}$ is an upper bound for the robust performance μ. This upper bound $\overline{\mu}$ is the quantity computed by musynperf and optimized by musyn.
### D and G Scalings
When all the uncertain elements Δj are complex or LTI dynamics, the software approximates $\overline{\mu }$ by picking a frequency grid {ω1,…,ωN}. At each frequency point, the software solves the optimal scaling problem
$\overline{\mu}_i = \inf_{D_i} \left\| D_i\, T_0(j\omega_i)\, D_i^{-1} \right\|.$
It then sets $\overline{\mu }$ to the largest result over all frequencies in the grid,
$\overline{\mu} = \max_i\, \overline{\mu}_i.$
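To make the per-frequency problem concrete, here is a small numerical sketch (Python, not the actual musynperf implementation) for a purely complex diagonal uncertainty structure, where the D scalings reduce to positive diagonal matrices and the optimization is over their diagonal entries:

```python
import numpy as np
from scipy.optimize import minimize

def mu_upper_bound(M):
    """Upper bound on mu(M) for a diagonal complex uncertainty structure:
    minimize the largest singular value of D M D^-1 over diagonal D > 0."""
    n = M.shape[0]

    def gain(log_d):
        d = np.exp(log_d)                      # positive diagonal scalings
        return np.linalg.norm((d[:, None] * M) / d[None, :], 2)

    res = minimize(gain, np.zeros(n), method="Nelder-Mead")
    return res.fun

# Example frequency-response matrix T0(jw)
M = np.array([[0.0, 2.0], [0.5, 0.0]])
print(np.linalg.norm(M, 2))        # unscaled peak gain: 2.0
print(mu_upper_bound(M))           # optimally scaled: approximately 1.0
```

For this M the unscaled gain is 2, but the optimal scaling equalizes the two off-diagonal gains at 1, which matches the spectral radius — so the D-scaling bound is tight in this small example.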
When some Δj are real, it is possible to obtain a less conservative upper bound by using additional scalings called G scalings. In this case, $\overline{\mu }$ is the smallest ${\overline{\mu }}_{i}$ over frequency such that
$\begin{pmatrix} T_0(j\omega_i) \\ I \end{pmatrix}^{H} \begin{pmatrix} D_r(\omega_i) & -j G_{cr}^{H}(\omega_i) \\ j G_{cr}(\omega_i) & -\overline{\mu}_i^{2}\, D_c(\omega_i) \end{pmatrix} \begin{pmatrix} T_0(j\omega_i) \\ I \end{pmatrix} \le 0$
for some Dr(ωi), Dc(ωi), and Gcr(ωi). These frequency-dependent matrices are the D and G scalings.
### Mu Synthesis
The musyn command synthesizes robust controllers using an iterative process that optimizes the robust performance $\overline{\mu }$. To learn how to use musyn, see Robust Controller Design Using Mu Synthesis. For details about the musyn algorithm, see D-K Iteration Process.
https://questions.examside.com/past-years/gate/gate-me/industrial-engineering/forecasting/

## Marks 1
- Sales data of a product is given in the following table: Regarding forecast for the month of June, which one of the f... (GATE ME 2015 Set 2)
- The actual sales of a product in different months of a particular year are given below: The forecast of the sales, usi... (GATE ME 2014 Set 3)
- In exponential smoothening method, which one of the following is true? (GATE ME 2014 Set 1)
- In simple exponential smoothing forecasting, to give higher weightage to recent demand information, the smoothing consta... (GATE ME 2013)
- Which of the following forecasting methods takes a fraction of forecast error into account for the next period forecast? (GATE ME 2009)
- For a product, the forecast and the actual sales for December $$2002$$ were $$25$$ and $$20$$ respectively. If the expon... (GATE ME 2004)
- A regression model is used to express a variable $$Y$$ as a function of another variable $$X.$$ This implies that (GATE ME 2002)
- When using a simple moving average to forecast demand, one would (GATE ME 2001)
- Which one of the following forecasting techniques is not suited for making forecasts for planning production schedules i... (GATE ME 1998)
- The most commonly used criteria for measuring forecast error is (GATE ME 1997)
## Marks 2
- The demand for a two-wheeler was $$900$$ units and $$1030$$ units in April $$2015$$ and May $$2015,$$ respectively. The ... (GATE ME 2016 Set 3)
- For a canteen, the actual demand for disposable cups was $$500$$ units in January and $$600$$ units in February. The for... (GATE ME 2015 Set 1)
- The demand and forecast for February are $$12000$$ and $$10275,$$ respectively. Using single exponential smoothening met... (GATE ME 2010)
- A moving average system is used for forecasting weekly demand. $$F_1(t)$$ and $$F_2(t)$$ a... (GATE ME 2008)
- The sales of a product during the last four years were $$860, 880, 870$$ and $$890$$ units. The forecast for the fourth ... (GATE ME 2005)
- The sale of cycles in a shop in four consecutive months are given as $$70, 68, 82, 95.$$ Exponentially smoothing average ... (GATE ME 2003)
- In a time series forecasting model, the demand for five time periods was $$10, 13,$$ $$15,$$ $$18$$ and $$22.$$ A linear... (GATE ME 2000)
- In a forecasting model, at the end of period $$13,$$ the forecasted value for period $$14$$ is $$75.$$ Actual value in t... (GATE ME 1997)
- Which of the following is a technique for forecasting? (GATE ME 1989)
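Several of the questions above turn on the simple exponential smoothing update $$F_{t+1} = F_t + \alpha(D_t - F_t),$$ where $$D_t$$ is the actual demand in period $$t$$ and $$\alpha$$ is the smoothing constant. A minimal sketch follows; the demand figures, initial forecast, and $$\alpha$$ are invented for illustration and are not taken from any listed question:

```python
def exp_smooth_forecast(demands, f0, alpha):
    """Simple exponential smoothing: F[t+1] = F[t] + alpha * (D[t] - F[t])."""
    f = f0
    for d in demands:
        f += alpha * (d - f)
    return f

# Invented figures: initial forecast 480, three periods of observed demand.
print(exp_smooth_forecast([500, 600, 550], f0=480, alpha=0.2))
```

With $$\alpha = 1$$ the forecast simply tracks the last observed demand, which is why a larger smoothing constant gives higher weightage to recent demand information.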
https://wetnosesnyc.com/how-to-bvnlt/program-to-check-if-a-matrix-is-reflexive-a657c9 | iii. What do we exactly mean by "density" in Probability Density function (PDF)? $M_R = \begin{pmatrix} 1 & 0 & 1 & 0\\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0\\ 1 & 1 & 1 & 1\end{pmatrix}$ ; $M_R = \begin{pmatrix} 1 & 1 & 1 & 1\\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 1\end{pmatrix}$. Incorrect result after serializing and deserializing time_t variable, How can I test for reflexive, symmetric, or transitive, Test set for reflexive, symmetric, or transitive using a struct. What everyone had before was completely wrong. We also declared three double variables sum, count, and average. However, A(2,:,:) is not a matrix since it is a multidimensional array of size 1-by-3-by-2. How do you set, clear, and toggle a single bit? Thanks. @Craig Ashworth: Your code needs quite some work just in order to get it to tell whether every element of A is also in B, and that's just a start. Also read – transpose of a matrix in java. In this program, we need to check whether given matrices are equal or not. Let R be a binary relation on A . If you cannot do that before looping through the entire matrix, then it must be symmetric. Only a particular binary relation B on a particular set S can be reflexive, symmetric and transitive. Here, We’ll check whether the given matrix is symmetrical or not. Please look above and see if I did this right. Take the matrix Mx Below statements in this program asks the User to enter the Matrix size (Number of rows and columns. To learn more, see our tips on writing great answers. Sample inputs: I only read reflexive, but you need to rethink that. In general, if the first element in A is not equal to the first element in B, it prints "Reflexive - No" and stops. Transpose will be what output do you expect, and what do you get?). [EDIT] Alright, now that we've finally established what int a[] holds, and what int b[] holds, I have to start over. 
A relation R on a finite set A of n elements can be represented as an n×n 0/1 matrix M, where M[i][j] = 1 means the i-th element is related to the j-th. In these terms:

* R is reflexive if for all x ∈ A, (x, x) ∈ R; equivalently, every diagonal entry M[i][i] is 1.
* R is symmetric if for all x, y ∈ A, (x, y) ∈ R implies (y, x) ∈ R; equivalently, M[i][j] == M[j][i] for every pair (i, j), i.e., the matrix equals its transpose.
* R is transitive if M[i][j] = 1 and M[j][k] = 1 together imply M[i][k] = 1.

For example, the relation {(1,1), (1,2), (2,1), (2,2)} on {1, 2} is reflexive, while {(0,0), (1,1), (1,2)} on {0, 1, 2} is not reflexive, since (2,2) is missing. A typical exercise: read a 10×10 boolean matrix from a file and determine whether the relation it represents is reflexive, symmetric, antisymmetric, and/or transitive. A related exercise checks whether a matrix is sparse, i.e., whether more than half of its entries (more than (x·y)/2 for an x×y matrix) are zero.
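The reflexive, symmetric, and transitive checks discussed above translate directly into code. A minimal Python sketch of the definitions (the function names are illustrative, not a required interface):

```python
def is_reflexive(m):
    """Every element is related to itself: all diagonal entries are 1."""
    return all(m[i][i] == 1 for i in range(len(m)))

def is_symmetric(m):
    """m[i][j] == m[j][i] for every pair (i, j): the matrix equals its transpose."""
    n = len(m)
    return all(m[i][j] == m[j][i] for i in range(n) for j in range(n))

def is_transitive(m):
    """m[i][j] == 1 and m[j][k] == 1 together imply m[i][k] == 1."""
    n = len(m)
    return all(m[i][k] == 1
               for i in range(n) for j in range(n) for k in range(n)
               if m[i][j] == 1 and m[j][k] == 1)

# The relation {(1,1), (1,2), (2,1), (2,2)} on {1, 2} as a 0/1 matrix.
r = [[1, 1],
     [1, 1]]
print(is_reflexive(r), is_symmetric(r), is_transitive(r))  # True True True
```

The transitivity check is the only cubic one; the other two are a single pass over the diagonal or over all entry pairs.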
http://www.ganitcharcha.com/view-article-Carnival-of-Mathematics-129.html | # Carnival of Mathematics 129
Published by Ganit Charcha | Category - Math Events | 2015-12-16 07:36:24
We are glad to host the $129$th Carnival of Mathematics in December $2015$, after last month's Carnival of Mathematics 128 by Mike at Walking Randomly. The Carnival of Mathematics is a monthly blogging round-up organised by The Aperiodical.
We choose to host the $129$th Carnival in December because the $22$nd of this month is celebrated as the National Mathematics Day of India. The legendary Indian mathematician Srinivasa Ramanujan was born on $22$nd December $1887$. In order to recognize his immense contribution towards Mathematics, the Government of India has declared Ramanujan's birthday to be celebrated every year as the National Mathematics Day of India. $22$nd December $2015$ is the $129$th $22$nd December starting from Ramanujan's birth year, $1887$. Isn't this a nice coincidence?
Adhering to tradition, let us first take a look into some beautiful facts about number $129$.
We start with a crazy sequential representation of $129$ written in terms of $1$ to $9$ in increasing as well as decreasing order (taken from http://arxiv.org/abs/1302.1479).$$129 = 12 \times 3 + 4 + 5 + 67 + 8 + 9 = 9 \times 8 + 7 \times 6 + 5 + 4 + 3 + 2 + 1$$$129$ is the sum of the first ten prime numbers, i.e., $129 = 2 + 3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29$. $129 = 3 \times 43$ is a semiprime (a natural number that is the product of two primes, not necessarily distinct) and, interestingly enough, $A001358(43) = 129$, where the description of $A001358$ can be found here: https://oeis.org/A001358. That means $129$ is the $43$rd semiprime, where $43$ itself is a factor of $129$.
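Both facts are easy to verify mechanically. The sketch below uses plain trial division (nothing beyond the standard library) to check the sum of the first ten primes and the position of $129$ among the semiprimes:

```python
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_semiprime(n):
    """True if n has exactly two prime factors, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count == 2

primes = [n for n in range(2, 30) if is_prime(n)]
print(sum(primes[:10]))  # 129: the sum of the first ten primes

semiprimes = [n for n in range(2, 130) if is_semiprime(n)]
print(semiprimes.index(129) + 1)  # 43: 129 is the 43rd semiprime
```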
Number $129$ has the following single representation (http://rgmia.org/papers/v18/v18a73.pdf), where $aa$ denotes the two-digit number whose digits are both $a$:$$129 = ((aa + a) \times aa/a - a - a - a)/a,$$ where $a \in \{1, 2, 3, 4, 5, 6, 7, 8, 9\}$.
$129$ is a Happy Number. Start with any number, square its digits and add the squares together to form a new number; then repeat the procedure with the new number. We continue until the number either equals 1 or loops endlessly in a cycle which does not contain 1. If the process eventually reaches 1, the starting number is called a Happy Number. For $129$, we have $1^{2} + 2^{2} + 9^{2} = 86$, followed by $8^{2} + 6^{2} = 100$, and $100$ finally yields $1$ on repeating the procedure.
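The procedure just described translates directly into code. This is our sketch, not from the original post:

```python
def is_happy(n):
    """Repeatedly replace n by the sum of the squares of its digits;
    n is happy if the process reaches 1 (otherwise it falls into a cycle)."""
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

assert is_happy(129)   # 129 -> 86 -> 100 -> 1
```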
$129$ can be expressed as a sum of three squares in four different ways, and it is the smallest number with this property.$$129 = 11^{2} + 2^{2} + 2^{2} = 10^{2} + 5^{2} + 2^{2} = 8^{2} + 8^{2} + 1^{2} = 8^{2} + 7^{2} + 4^{2}$$Lastly, $129$ is neither a pretty wild narcissistic number nor is it a Friedman number, but it is a near miss. It can still be written as an expression using its own digits and involving operations like ($+$, $-$, $\times$, $/$, ^, $\sqrt{}$, $!$), where only one digit is repeated twice. For example, $$129 = 1 + 2^{(9-2)} = (1 + 2^{2})! + 9 = (2 + \sqrt{9})! + (9/1)$$
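A brute-force search confirms the four representations. This illustration is ours; it enumerates unordered triples of positive squares:

```python
from itertools import combinations_with_replacement

def three_square_reps(n):
    """Unordered ways to write n as a^2 + b^2 + c^2 with a <= b <= c, all >= 1."""
    limit = int(n ** 0.5)
    return [(a, b, c)
            for a, b, c in combinations_with_replacement(range(1, limit + 1), 3)
            if a * a + b * b + c * c == n]

assert len(three_square_reps(129)) == 4   # exactly the four listed above
```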
We will now move on to the posts that make up this months carnival and we start with a post which sheds light on Ramanujan's life.
The Man Who Knew Infinity is a terrific film on the life of Ramanujan, which premiered at the Toronto International Film Festival this September. The movie brought this great historical figure to light in a spectacular way. Anthony Bonato shared with us an excellent review of the film, Review of The Man Who Knew Infinity; the aim of the post is to shine a spotlight on the film and help it gain exposure to a bigger audience (not limited to mathphiles).
An enriching article by Marianne Freiberger in Plus magazine, titled Ramanujan surprises again and submitted by Debapriyay Mukhopadhyay, talks about a fascinating recent discovery in Ramanujan's manuscripts by two mathematicians at Emory University, Ken Ono and Sarah Trebat-Leder.
David Orden's write-up Flip me to the moon in Mapping Ignorance provides a nice overview of results on the notion of edge flips in triangulations and pseudo-triangulations. It also discusses a very interesting open problem: "Is the flip graph of 4-PPTs connected?"
Robert Fourer shared with us a blog post Which Simplex Method Do You Like? that talks about history and development of computationally practicable simplex method in the early 1950s, leading to some reflections on the divergence of computational practice and pedagogical convention in presenting and applying the simplex method.
An astounding revelation of the true meaning of $-\frac{1}{12}$ is what the article ASTOUNDING: The true meaning of -1/12 shows us. It argues that the infinite series $1 + 2 + 3 + 4 + \ldots$ is divergent and is not equal to $-\frac{1}{12}$. This excellent post at Extreme Finitism was forwarded to us by Karma Peny. We would like to add a point here: the mistake of interpreting the sum of the series $1 + 2 + 3 + 4 + \ldots$ as $-\frac{1}{12}$ comes from the mistaken belief that the Riemann zeta function agrees with the Euler zeta function at $-1$. Actually it does not; the Riemann zeta function agrees with the Euler zeta function only for real numbers greater than 1. For more details read the article Infinity or -1/12 in Plus magazine.
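The divergence claim is easy to see numerically. This two-line illustration is an addition of ours, about ordinary partial sums only, with nothing to do with analytic continuation:

```python
# The n-th partial sum of 1 + 2 + 3 + ... equals n(n+1)/2, so the partial
# sums grow without bound and never approach -1/12 or any finite value.
def partial_sum(n):
    return sum(range(1, n + 1))

assert partial_sum(1000) == 1000 * 1001 // 2
```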
Shecky R forwarded to us a link reviewing John Allen Paulos' recent book, "A Numerate Life" and the title of the post is A Life In Math.
The volume of a sphere via Archimedes is an excellent post, inspired by a 3d-printed model from Thingiverse, that talks about the relationship between the volumes of a sphere, a cylinder, and a cone. Along with this, Mike Lawler has also shared Our year in Math - a nice expository article that narrates the biggest things in math that crossed his path this year.
Ilona Vashchyshyn's first blog post, which she wrote as an intern overwhelmed by how far she still has to go as a teacher, expresses her worries and her acceptance of the fact that a teacher will *never* really stop learning, and along with Ilona we also think that this is a post that other interns or new teachers would identify with. Ilona has shared another interesting article with us as well. This post is about one of her math club meetings, during which she posed the McNugget Frobenius problem to her students. It describes her internal battle between the side of her that wants to be "helpful" and the side that wants to develop independent, resilient problem solvers. It would resonate with any teacher (or even parent) who incorporates problem solving into their lessons and, hopefully, push them to be a little less "helpful".

Brian Hayes' interesting post Ramsey Theory in the Dining Room talks about a nice, intriguing math problem just before the onset of the holiday party season.

Diane G has forwarded us a nice post titled The Grand Hotel, posted at the Worldwide Center of Mathematics. This is the story of David Hilbert's Grand Hotel; though many versions of this story are available, this is genuinely a nice edition of the story. It's a very interesting read and well written.
Brent Yorgey has provided us the link of the first post MaBloWriMo: The Lucas-Lehmer test in a 30-post series which he wrote during the month of November. The 30-post series ends on December 1 with the post MaBloWriMo 30: Cyclic subgroups. Along the way, Brent Yorgey gave a thorough, careful proof of (one direction of) the Lucas-Lehmer test for Mersenne primes and also covered some introductory group theory.
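For readers who want to experiment, the test itself fits in a few lines. This is a standard textbook formulation, not code taken from Brent's series:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for the Mersenne number M_p = 2^p - 1, with p an
    odd prime: M_p is prime iff s_{p-2} == 0, where s_0 = 4 and
    s_{k+1} = s_k^2 - 2 (taken mod M_p)."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

assert lucas_lehmer(5)        # 2^5 - 1 = 31 is prime
assert not lucas_lehmer(11)   # 2^11 - 1 = 2047 = 23 * 89 is composite
```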
Matthew Scroggs has directed us to a blog post titled MENACE: Machine Educable Noughts And Crosses Engine that talks about how a machine is capable of learning to be a better player of Noughts and Crosses (or Tic-Tac-Toe).
We are happy to make a mention of the post Fun With Math: How To Make A Divergent Infinite Series Converge, written by Kevin Knudson and submitted to us by Augustus Van Dusen. This article is about Kempner series, and nicely shows that modifying the harmonic series by discarding all terms whose denominators contain a specified string of digits renders the series convergent in the Cauchy sense.
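The Kempner construction is easy to sketch numerically. This illustration is ours; the digit string and cutoffs are arbitrary choices:

```python
def kempner_partial(limit, digits="9"):
    """Partial sum of the harmonic series restricted to denominators that
    do not contain the digit string `digits` (a Kempner series)."""
    return sum(1.0 / n for n in range(1, limit + 1) if digits not in str(n))

# The ordinary harmonic series diverges, but after discarding every term
# whose denominator contains a 9, the partial sums stay bounded
# (the full Kempner sum for the digit 9 is about 22.92).
assert kempner_partial(10**3) < kempner_partial(10**5) < 23
```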
John Cook has shared with us Mathematical alchemy and wrestling. We liked this post as it aims to identify and recognize four tribes of mathematicians.
Katie Steckles has shared with us the post titled "Prime Numbers and the Riemann Hypothesis", Cambridge University Press, and SageMathCloud which provides a glimpse of the book "Prime Numbers and the Riemann Hypothesis" written by Barry Mazur and William Stein.
Lastly, some maths on a lighter note; to this end we would like to mention the following posts.
The post Stuff Math Professors Say will allow for a break from deep thinking about mathematics.
The post Why the history of maths is also the history of art showcases 10 stunning images revealing the connections between maths and art. Images shown in the post are borrowed from the book "Mathematics and Art: A Cultural History" by Lyn Gamwell.
The post Climbing Stairs and Keeping Count shows, along the way to climbing stairs, that all integers can be reduced to their prime factors.
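That reduction is just the fundamental theorem of arithmetic in action. A minimal trial-division sketch, ours rather than the post's:

```python
def prime_factors(n):
    """Reduce an integer n >= 2 to its prime factors by trial division,
    returned in non-decreasing order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors.append(n)
    return factors

assert prime_factors(129) == [3, 43]
```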
Christmas is just one week away, and this year you can choose Mathsy Gifts - a nice compilation to help you choose is here: http://www.resourceaholic.com/p/mathy-gifts.html.
This brings us to the end of this edition of the Carnival of Mathematics. The next Carnival of Mathematics will be hosted by Brian at Bit Player.
https://fordead.gitlab.io/fordead_package/docs/Tutorial/03_dieback_detection/ | # Step 3. Dieback detection
#### Step 3: Detecting anomalies by comparing the vegetation index to its predicted value
The value of the vegetation index is compared, for each SENTINEL-2 acquisition available for the detection step, to the vegetation index predicted from the periodic model calculated in the previous step. An anomaly is identified when the difference exceeds a threshold in the direction expected in case of anomaly. For example, the CRSWIR is sensitive to canopy water content and tends to increase with decreasing water content.
Bark beetle outbreaks induce a decrease in canopy water content, therefore only CRSWIR values higher than expected can be identified as anomalies. A pixel is detected as suffering from dieback when three successive anomalies are detected. This prevents false positives corresponding to one-time anomaly events due to an imperfect mask or to temporary climatic events.
As an example, the following figure shows the time series of the vegetation index along with the threshold for anomaly detection, and the date of detection:
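The three-successive-anomalies rule can be sketched in a few lines of plain Python. This is an illustration of the logic only, not the actual fordead implementation; the function and variable names are ours:

```python
def detect_dieback(vi_values, vi_predicted, threshold=0.16):
    """Return the index of the first anomaly of a confirming streak of
    three successive anomalies, or None if dieback is never confirmed.

    An anomaly is a date where the vegetation index exceeds its predicted
    value by more than `threshold` (for CRSWIR, only positive deviations
    count, since it increases with decreasing canopy water content)."""
    streak = 0
    first_anomaly = None
    for i, (vi, pred) in enumerate(zip(vi_values, vi_predicted)):
        if vi - pred > threshold:    # anomaly in the expected direction
            if streak == 0:
                first_anomaly = i
            streak += 1
            if streak == 3:          # three successive anomalies confirm dieback
                return first_anomaly
        else:
            streak = 0               # a normal date resets the count
            first_anomaly = None
    return None
```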
Once anomalies are confirmed (after three successive anomalies), a pixel can return to a normal state if no anomaly is detected for three successive acquisitions. This reduces the risk of false positives corresponding to long drought periods that result in more than three successive anomalies. Information about those periods between false detection and return to normal, which we will call stress periods, can also be stored.
A stress index can be computed for those stress periods, and for the final detection. It can be either the mean of the difference between the vegetation index and its prediction, or a weighted mean in which the weight of each date is its rank since the first anomaly of the period, as illustrated in the following figure:
This stress index is meant to describe the intensity of detected anomalies in the period, and can be used as a confidence index for the final detection.
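Under the same assumptions, the two stress-index variants just described might be computed like this. The function is a sketch of ours, not the fordead API:

```python
def stress_index(vi_values, vi_predicted, weighted=True):
    """Mean (or rank-weighted mean) of the difference between the
    vegetation index and its prediction over one stress period."""
    diffs = [vi - pred for vi, pred in zip(vi_values, vi_predicted)]
    if not weighted:
        return sum(diffs) / len(diffs)
    # weight of each date = its rank since the first anomaly: 1, 2, 3, ...
    weights = range(1, len(diffs) + 1)
    return sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
```

Later dates in the period thus count more in the "weighted_mean" mode, emphasising sustained anomalies over a single early deviation.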
Comprehensive documentation can be found here.
##### Running this step using a script
Run the following instructions to perform this processing step:
```python
from fordead.steps.step3_dieback_detection import dieback_detection

dieback_detection(data_directory = data_directory,
                  threshold_anomaly = 0.16,
                  stress_index_mode = "weighted_mean")
```
##### Running this step from the command prompt
This processing step can also be performed from a terminal:
```bash
fordead dieback_detection -o <output directory> --threshold_anomaly 0.16 --stress_index_mode weighted_mean
```
NOTE : As always, if the model is already computed and no parameters were changed, the process is ignored. If parameters were changed, previous results from this step and subsequent steps are deleted and the model is computed anew.
##### Outputs
The outputs of this step, in the data_directory folder, are :
• In the DataDieback folder, three rasters:
• count_dieback is the number of successive dates with anomalies
• first_date_unconfirmed_dieback: the date index of the latest potential state change of the pixel: the first anomaly if the pixel is not detected as dieback, or the first non-anomaly if it is detected as dieback (not necessarily confirmed).
• first_date_dieback: The index of the first date with an anomaly in the last series of anomalies
• state_dieback is a binary raster with pixel as suffering from dieback (at least three successive anomalies) identified as 1.
• In the DataStress folder, four rasters:
• dates_stress : A raster with max_nb_stress_periods*2+1 bands, containing the date indices of the first anomaly, and of return to normal for each stress period.
• nb_periods_stress: A raster containing the total number of stress periods for each pixel
• cum_diff_stress: a raster with max_nb_stress_periods+1 bands containing, for each stress period, the sum of the difference between the vegetation index and its prediction, multiplied by the weight if stress_index_mode is "weighted_mean"
• nb_dates_stress : a raster with max_nb_stress_periods+1 bands containing the number of unmasked dates of each stress period.
• stress_index : a raster with max_nb_stress_periods+1 bands containing the stress index of each stress period; it is the mean or weighted mean of the difference between the vegetation index and its prediction depending on stress_index_mode, obtained from cum_diff_stress and nb_dates_stress. The number of bands of these rasters is meant to account for each potential stress period, plus another for a potential final dieback detection.
• In the DataAnomalies folder, a raster for each date Anomalies_YYYY-MM-DD.tif whose value is 1 where anomalies are detected. | 2022-08-13 02:14:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6347472071647644, "perplexity": 2665.9666449981305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571869.23/warc/CC-MAIN-20220813021048-20220813051048-00735.warc.gz"} |
https://my.supa.ac.uk/mod/glossary/view.php?id=21&mode=date
Question:
#### Postscript files. How do I open ps files?
(Last edited: Thursday, 7 August 2008, 3:53 PM)
To open PostScript files you will need a suitable client like GhostScript.
Choose whether to 'Open' the file now or to 'Save' it for later viewing.
Keyword(s): ghostscript, open, ps files, postscript
Question:
#### PowerPoint - I'm not a Windows user. How can I view PowerPoint files?
(Last edited: Thursday, 7 August 2008, 3:53 PM)
Struggling to view your lecturer's slides? Try the TonicPoint viewer. http://tonicsystems.com/products/viewer/
Keyword(s): PowerPoint, platform
Question:
#### Unsubscribe: How to I unsubscribe from a forum?
(Last edited: Thursday, 7 August 2008, 3:54 PM)
To unsubscribe from a forum, click on the link at the bottom of any email received from that forum. Alternatively, log into My.SUPA, find the forum and click on the 'Unsubscribe' me link on the right hand side of the page.
Note that you cannot unsubscribe from News Forums. These are the forums we use to tell you about important events like cancelled lectures.
Question:
#### Using the Video Conference Room
(Last edited: Thursday, 13 November 2008, 12:47 PM)
How do I use the Video Conference Rooms?
A series of videos and notes are available which explain how the video conference rooms across SUPA work and how to use them.
See the videos and notes in the Introduction to Video Conferencing course area.
Question:
#### Editing my course
(Last edited: Thursday, 13 November 2008, 1:04 PM)
How do I change or edit the text in my course?
Staff can refer to the notes in the SUPA Teaching Staff area.
This includes a leaflet on things you can do with My.SUPA, and also some instructions to help you to upload a file.
Question:
#### Maths: Can I type in LaTeX?
(Last edited: Thursday, 13 November 2008, 1:14 PM)
In most parts of My.SUPA, including discussion forums, you can type in LaTeX and the expression will be rendered as an image in html pages and emails.
Example: Type the following:
$$\frac{a}{b}$$
to get
$\frac{a}{b}$
More help is available on using LaTeX notation
Question:
#### Notes. Where can I find the notes for my course?
(Last edited: Thursday, 13 November 2008, 1:15 PM)
When you are logged into My.SUPA, your courses will be listed in a box on the right hand side of the first page of My.SUPA. If you can see a list of course categories instead of your courses, log in.
Your courses are those for which you are currently registered. If this list is incorrect or incomplete, contact 'courses' at SUPA Central (www.supa.ac.uk/Contact_SUPA).
Look at the relevant week or view the resources list.
Materials are only available to students once the lecturer has released them from their master files list.
Question:
#### My.SUPA: Where do I find things on the My.SUPA front page?
(Last edited: Tuesday, 25 November 2008, 2:13 PM)
SUPA Front Page
My.SUPA List of Courses (top-right)
After you have logged in, the box at the top right of the My.SUPA front page will show a list of the courses to which you belong. Some of these are real lecture courses. Others are organisational areas such as 'Staff', 'Mailroom', or 'Condensed Matter Theme Area'.
If you do not see the course that you require, follow the link to ‘All My.SUPA areas’ to see the complete list. During enrolment, students can preview and then enrol into courses of their choice. For late enrolment, contact the SUPA Office.
https://artofproblemsolving.com/wiki/index.php?title=2017_AMC_10B_Problems/Problem_7&diff=prev&oldid=83813 | # Difference between revisions of "2017 AMC 10B Problems/Problem 7"
## Problem
Samia set off on her bicycle to visit her friend, traveling at an average speed of $17$ kilometers per hour. When she had gone half the distance to her friend's house, a tire went flat, and she walked the rest of the way at $5$ kilometers per hour. In all it took her $44$ minutes to reach her friend's house. In kilometers rounded to the nearest tenth, how far did Samia walk?
$\textbf{(A)}\ 2.0\qquad\textbf{(B)}\ 2.2\qquad\textbf{(C)}\ 2.8\qquad\textbf{(D)}\ 3.4\qquad\textbf{(E)}\ 4.4$
## Solution 1
Let the total distance Samia traveled be $2x$, so that we can avoid fractions. The length of the bike ride and the distance she walked are equal, so each is $\frac{2x}{2} = x$.

She bikes at a rate of $17$ kph, so she covers the biking distance in $\frac{x}{17}$ hours. She walks at a rate of $5$ kph, so she covers the walking distance in $\frac{x}{5}$ hours.

The total time is $\frac{x}{17}+\frac{x}{5} = \frac{22x}{85}$ hours. This is equal to $\frac{44}{60} = \frac{11}{15}$ of an hour. Solving for $x$, we have:

$$\frac{22x}{85} = \frac{11}{15}$$
$$\frac{2x}{85} = \frac{1}{15}$$
$$30x = 85$$
$$6x = 17$$
$$x = \frac{17}{6} \approx 2.83$$

Since $x$ is the distance Samia walked, rounded to the nearest tenth she walked about $\boxed{\textbf{(C) } 2.8}$ kilometers.
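As a quick exact-arithmetic check of the solution above (an addition of ours, not part of the original wiki page; the variable names are arbitrary):

```python
from fractions import Fraction

def total_time(x):
    """Samia's travel time in hours if each half of the trip is x km."""
    return x / Fraction(17) + x / Fraction(5)

# Solve x/17 + x/5 = 44/60 exactly.
x = Fraction(44, 60) / (Fraction(1, 17) + Fraction(1, 5))
assert x == Fraction(17, 6)               # x = 17/6 = 2.8333... km
assert total_time(x) == Fraction(11, 15)  # i.e. 44 minutes, as required
```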
http://tex.stackexchange.com/tags/amsmath/new | # Tag Info
2
here is the bibtex entry for one of the items in the bibliography of the cited article, as delivered by mathscinet. the tags for the various elements are the "official" ones used with amsplain, but not all are recognized (see below). zbl is not included in the mathscinet database, nor is arxiv, but ZBL and ARXIV would be appropriate tags. amsplain does ...
4
You can use aligned or alignedat: \documentclass{article} \usepackage{amsmath} \begin{document} \begin{align} \begin{split}\label{mylabel} y &= a + b + c\\ &\quad \!\begin{alignedat}[t]{2} &+ (5 - 3) &&\times (10 - 5) \\ &+ (10 - 30) &&\times (10 - 1) \\ ...
1
Use \span to merge cells. You need one \span per & you want to ignore. For example, your sample output could be produced like this: \begin{align*} \text{really really really long equation} \span \span &= a \\ a &= b & b &= c \\ c &= d & d &= e \end{align*}
2
Perhaps this is a solution: \gobble the content away with a \RenewEnviron{proof}{}{} that has no \BODY command, i.e. the \BODY is 'thrown away'. I used the first proof environment to show some content and then redefined proof; the second content isn't displayed any longer then. \documentclass{article} \usepackage{amsthm} \usepackage{environ} ...
2
You're missing an \end{split} statement before \intertext. \documentclass{article} \usepackage{amsmath} % for "align" and "split" environments and "\intertext" macro \begin{document} \begin{align} \begin{split} a&=xxxxxxxxxxxxx\\ &=xxxxxxxxxxxxxxx\label{1} \end{split} % <--- new \intertext{and} \begin{split} b&=yyyyyyyyyyyyy\\ ...
3
Actually, I think that the problem is that you are "misusing" \tfrac because it is meant for fractions in in-line text. If instead you use \frac then the latex markup is simpler and your problem goes away: The code: \documentclass{amsart} \begin{document} \textbf{With tfrac} ...
6
Why not : \documentclass[a4paper]{article} \usepackage{amsmath} \begin{document} $$X=\tfrac{2J^2}{4J^2+U^2\!/4} \sin^2\bigl({\scriptstyle t\,\sqrt{4J^2+U^2\!/4}}\bigr)$$ \end{document}
5
I think maybe you'll have too great a contrast betwwen different parts of your formula. I suggest using the\medmath command, from nccmath, which reduces displaymath by about 80 %. Compare: \documentclass{article} \usepackage[utf8]{inputenc} \usepackage{mathtools, nccmath} \begin{document} \begin{align*} X & ...
5
Do you mean like this? \documentclass[a4paper]{article} \usepackage[T1]{fontenc} \usepackage[ascii]{inputenc} \usepackage{amsmath} \begin{document} $$X=\tfrac{2J^2}{4J^2+U^2/4} \scriptstyle \sin^2\bigl(({\sqrt{4J^2+U^2/4}})t\bigr)$$ \end{document} Or maybe you prefer this (bigger “sin”, amended spacing, resized ...
0
1
Some comments about your code, listed in no particular order: \partial does not take an argument. Hence, don't write \partial{\mathbf{f}_{T}}; go for the simpler \partial\mathbf{f}_T. You won't get an error message if you add an extra layer of braces; however, doing so does clutter up the code needlessly. None of the \substack directives are needed. ...
0
The array environment is like a math version of tabular. \documentclass{article} \usepackage{mathtools} \begin{document} \begin{table}[hptb!] \begin{array}{ccc} \mathbf{M}=\frac{\partial{\mathbf{f}_{T}}}{\partial{\mathbf{\ddot{q}}}} \Bigr|_{\substack{q=q_{e}}} & ...
3
Why not use a simpler code like this: \documentclass[a4paper, 11pt]{book} \usepackage[utf8]{inputenc} \usepackage{mathtools} \begin{document} \begin{align} \mathbf{M} & =\frac{\partial{\mathbf{f}_{T}}}{\partial{\mathbf{\ddot{q}}}} \Biggr|_{q=q_{e}} & \mathbf{C} & ...
1
equation can't be used in a c-like cell -- use p for this. However, I recommend an array or alignat* environment rather for this setup, since the equation will also display an equation number which might not be requested at all. I also changed from \frac to \dfrac. Most likely, the text-like exponents should be typeset with \text{ncons} etc., but I ...
2
You can use \substack from the mathtools package, as that command is designed exactly to stack multiple subscripts. If you want more spacing between the two lines, you can add e.g. \\[0.3ex] (or any other amount) instead of \\ in the argument to the \substack command. \documentclass[preview,border=2mm]{standalone} % Only to get minimal output ...
2
You can try this, with align and aligned: \documentclass{article} \usepackage{mathtools} \begin{document} \begin{align*} & R_\mathrm{in}=R_{B}\parallel [r_{\pi}+(\beta+1)R_{E}]\\ &R_\mathrm{out}=R_{C}\\ &A{u} \!\begin{aligned}[t] & =V_\mathrm{out}/V_\mathrm{in}\\ % & =[-g_{m}(R_{C}\parallel R_{L})]/[1+(g_{m}+1/r_{\pi})R_{E}] ...
2
Package mathtools, which upgrades amsmath, defines for such purposes the math environment multlined. By it you can obtain: \documentclass{article} \usepackage{mathtools} \begin{document} \begin{align*} & R_{in}=R_{B}\parallel [r_{\pi}+(\beta+1)R_{E}]\\ & R_{out}=R_{C}\\ & A{u}=V_{out}/V_{in}\\ &\begin{multlined}[t] =[-g_{m}(R_{C}\parallel ...
2
(Too long for a comment, hence posted as an answer.) The code snippet you posted isn't compilable by itself. In order to reproduce the error message you report, I had to augment it as follows: \documentclass{article} \usepackage{amsmath} % for 'align' environment \begin{document} \begin{align} \frac{\partial P}{\partial t} = \mu(N,I)P - ...
2
welcome to tex.sx. you have specified the option [fleq] instead of [fleqn], but i suspect that is just a typo. however, you haven't specified any alignment point, so it is assumed that these lines will be aligned at the right, since it's usually the case that alignment is on a sign of relation, in the middle. to align on the left, put an & at the ...
1
Because this is typical non-LaTeX specific question (because this is question of type: give me a code, I don't want to think about it), I can reply: use simply \pmatrix or \matrix. Edit The result exactly the same as in the answer above can be accomplished by the code: \def\mmatrix#1#2#3{\left#1\matrix{#2}\right#3} \pmatrix {\alpha_1 \cr \alpha_2 ...
0
Does this solution also works for you ? It seems to be a lot simpler and the result seems to be the same. \documentclass{article} \usepackage{amsmath,graphicx} \newcommand{\boverdot}[1]{\overset{\scalebox{.5}[.2]{$[$} \boldsymbol{.} \scalebox{.5}[.2]{$]$}}{#1}} \begin{document} $a_{\boverdot{i}}$ \end{document}
2
I adapted \bunderline from my answer (putting square brackets around the underline of a letter (in math mode)) at the cited question into \boverdot. It works in all math styles. \documentclass{article} \usepackage{stackengine,graphicx,scalerel,amsmath} \stackMath \def\tinylb{\smash{\scalebox{.25}{$\SavedStyle[$}}} ...
12
A column vector is just a matrix with one column (from a typesetting point of view ;-)), so just use one of the various matrix possibilities and typeset with \\ to switch to the next row. Of course, mathmode is needed for this. \documentclass{article} \usepackage{mathtools} \begin{document} \begin{pmatrix} \alpha_{1} \\ \alpha_{2} \\ \vdots \\ ... 4 You can do all starred environments by doing the appropriate definitions at begin document: \documentclass{article} \usepackage{amsmath} \usepackage{cleveref} \usepackage{autonum} \makeatletter \newcommand{\restore@Environment}[1]{% \AtBeginDocument{% \csletcs{#1*}{#1}% \csletcs{end#1*}{end#1}% }% } ... 1 If you are free to use LuaLaTeX, adding the following seven lines of code to your preamble will automatically replace all instances of {align*} with {align} "on the fly", before TeX's "eyes" start their processing. This way, LaTeX will never "see" align* environments, as all of them will have been converted to align environments. \usepackage{luacode} ... 2 I don’t know if I understand correctly what you are asking for, but see if this works for you: \documentclass{article} \usepackage{amsmath} \usepackage{cleveref} \usepackage{autonum} % this gives the errors % Command \align* already defined. {\begin{align}}{\end{align}} % Environment align* undefined. \begin{align*} % \newenvironment{align*} % ... 0 Found it myself \begin{flalign} &\sum\limits_{m=1}^{M_j}\sum\limits_{t=0}^{T} x_{jmt}=1 &(j=0,\dots,J+1) \end{flalign} This does the trick. 1 Something like the following? (Observe that \mid gives better spacing than | does.) \documentclass{article} \usepackage{amsmath} \begin{document} P(z_{n+1}=t \mid z_{1},\dots ,z_{n}; \alpha) = \begin{cases} \frac{n_t}{n+\alpha} & \text{if tablet$is occupied}\\ \frac{\alpha}{n+\alpha} & \text{if table$tis ... 3 Have your choice! 
\documentclass{article} \usepackage{amsmath} \usepackage{geometry} \usepackage{nicefrac} \begin{document} \begin{align*} \rho(T) & = \rho_{0}\exp\left[\left(\frac{T_{0}}{T}\right)^{\tfrac{1}{d+1}}\right]\\ \rho(T) & = \rho_{0}\exp\left[\left(\frac{T_{0}}{T}\right)^{\frac{1}{d+1}}\right]\\ \rho(T) & = ... 8 A larger size for the exponent can be used, e.g. via \tfrac. Or the fraction expression can be written with a slash: \documentclass{article} \usepackage{amsmath} \usepackage{geometry} \begin{document} $$\rho(T) = \rho_{0}\exp\left[\left(\frac{T_{0}}{T}\right)^{\tfrac{1}{d+1}}\right]$$ \rho(T) = ... 3 There is no single "best" way to make an equation look "good". For the equation at hand, I'd like to suggest you use "inline" or "slash" fractional notation. That way, T will be rendered at "textstyle" size and symbols in the subscripts and the superscripts will be rendered at "scriptstyle" size. \documentclass{article} \begin{document} \[ \rho(T) = ... 1 amsmath macro \operatorname is designed to use the same font as the one used by \DeclareMathOperator. It is a fact of life that this font is hardcoded in both LaTeX and AMSmath source code (more precisely fontmath.ltx or amsopn.sty) to use the so-named 'operators' math font. This happens via \def\operator@font{\mathgroup\symoperators} You can customize ... 1 The CD environment can't work with babel-spanish without some countermeasures. Strategy 1: \shorthandoff \documentclass{article} \usepackage[T1]{fontenc} \usepackage[spanish]{babel} \usepackage{amsmath,amscd} \begin{document} $$\begin{CD} A \\ @VVV \\ B \end{CD}$$ \shorthandoff{<>} \begin{CD} A ... 4 The amsmath changelog has always been in the README linked from https://www.ctan.org/pkg/latex-amsmath so I kept to that for now at least. changes since 2000 are V. CHANGE LOG (REVERSE CHRONOLOGICAL ORDER) 2016-03-03 amsmath.dtx 2.15a One missing % added to mathstrut handling. 2016-02-20 amsmath.dtx 2.15 Updates for new \mathchardef handling ... 
4
Your problem is:

p( \bar{E} \cap \bar{F}) = p( \bar{E \cup F}) = 1 - p(E \cup F)
  = 1 - p(E) - p(F) + p(E \cap F)
  = 1 - p(E) - p(F) +
% -------------------
  p(E)p(F\)  % right here
% -------------------
  = (1-p(E))(1-p(F)) = p( \bar{E})p( \bar{F})

p(E)p(F\). That's your problem. \) exits math mode. \(...\) is an alternative to $ ... $. Where you've put ...
1
Tough, as many big formulas. Shoving everything to the left doesn't seem to be the best solution, but you can do it, just remembering to add some &. I also propose a centered solution with a dirty trick for getting the trailing dots aligned to each other in the longer lines.

\documentclass{article}
\usepackage{amsmath,mathtools}
% this is just for the ...
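The column-vector answer at the top of this list is truncated; a complete minimal version might look like the following (the final \alpha_{n} row and the closing lines are filled in here as an assumption, since the original snippet is cut off):

```latex
\documentclass{article}
\usepackage{mathtools}
\begin{document}
\[
% pmatrix gives a parenthesized matrix; one entry per row, separated by \\
\begin{pmatrix}
  \alpha_{1} \\
  \alpha_{2} \\
  \vdots     \\
  \alpha_{n}
\end{pmatrix}
\]
\end{document}
```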
Top 50 recent answers are included
http://pubman.mpdl.mpg.de/pubman/faces/viewItemOverviewPage.jsp?itemId=escidoc:2110444
Released
Journal Article
#### Constraining Astrophysical Neutrino Flavor Composition from Leptonic Unitarity
##### MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons30951
Rodejohann, Werner
Werner Rodejohann - ERC Starting Grant, Junior Research Groups, MPI for Nuclear Physics, Max Planck Society;
1407.3736.pdf
(Preprint), 2MB
##### Supplementary Material (public)
There is no public supplementary material available
##### Citation
Xu, X., He, H.-J., & Rodejohann, W. (2014). Constraining Astrophysical Neutrino Flavor Composition from Leptonic Unitarity. Journal of Cosmology and Astroparticle Physics, 2014(12): 039. doi:10.1088/1475-7516/2014/12/039.
Cite as: http://hdl.handle.net/11858/00-001M-0000-0025-BE6F-3
##### Abstract
The recent IceCube observation of ultra-high-energy astrophysical neutrinos has begun the era of neutrino astronomy. In this work, using the unitarity of leptonic mixing matrix, we derive nontrivial unitarity constraints on the flavor composition of astrophysical neutrinos detected by IceCube. Applying leptonic unitarity triangles, we deduce these unitarity bounds from geometrical conditions, such as triangular inequalities. These new bounds generally hold for three flavor neutrinos, and are independent of any experimental input or the pattern of leptonic mixing. We apply our unitarity bounds to derive general constraints on the flavor compositions for three types of astrophysical neutrino sources (and their general mixture), and compare them with the IceCube measurements. Furthermore, we prove that for any sources without $\nu_\tau$ neutrinos, a detected $\nu_\mu$ flux ratio $< 1/4$ will require the initial flavor composition with more $\nu_e$ neutrinos than $\nu_\mu$ neutrinos.
https://2018.eswc-conferences.org/paper_66/ | # Paper 66 (Research track)
Topic-Controlled Unsupervised Mutual Enrichment of Relational Document Annotations
Author(s): Felix Kuhr, Bjarne Witten, Ralf Moeller
Full text: submitted version
Abstract: Knowledge graph systems produce huge knowledge graphs representing entities and relations.
Annotating documents with parts of these graphs to have symbolic content descriptions representing the semantics of documents ignores the authors' higher purpose in mind.
Authors often paraphrase words and use synonyms encoding the semantics of text instead of explicitly expressing the textual semantics.
Hence, it is difficult to annotate documents with entities and relations from generic knowledge graphs.
In this paper, we present an unsupervised approach identifying annotations for documents using annotations of related documents representing a symbolic content description including the authors’ higher purpose in mind and introduce an EM-like algorithm iteratively optimizing the document-specific annotations.
Keywords: semantic computation; unsupervised text annotation; annotation database enrichment
Decision: reject
Review 1 (by anonymous reviewer)
(RELEVANCE TO ESWC) yes, because it is part of the proposed topic list for this specific track.
(NOVELTY OF THE PROPOSED SOLUTION) Novel approach, as it is the first published EM-like algorithm solving the introduced challenge.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) The proposed solution seems to be complete as well as correct.
(EVALUATION OF THE STATE-OF-THE-ART) Nice introduction to the general areas this paper covers. No evaluation of other state-of-the-art papers, but with this kind of related-work section (giving a broader overview of the different research areas this paper covers), this is fine.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) The discussion and demonstration are sufficient; the authors used a nice and understandable use-case to evaluate their approaches.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) Based on the detailed description of the two introduced similarity measures and the EM-like algorithm, all experiments should be reproducible. However, the paper would benefit if the authors published the source code as well as the data underlying the evaluation.
(OVERALL SCORE) The authors introduced a topic-controlled approach for the unsupervised addition of knowledge to an existing knowledge graph.
To do so, the authors introduced two similarity measures (based on previously extracted topics): one for the similarity between documents and one based on annotations.
Finally, the authors introduced an EM-like algorithm for enriching existing annotations with new facts.
I liked the details and clear description of the similarity measures and the newly introduced algorithm (section 3)
I would have liked it if the corpora used, as well as the implementation, had been published.
I would also have liked an improved version of Table 1 (Example of associative annotations of four documents). I found it quite hard to understand which information already existed and which information was added via the enrichment algorithm.
Review 2 (by Guillermo Palma)
(RELEVANCE TO ESWC) The enrichment of knowledge graphs with annotations from documents with specific content and considering the semantic meaning of words within the text of documents, is relevant to ESWC.
(NOVELTY OF THE PROPOSED SOLUTION) This paper presents an iterative annotation enrichment algorithm for annotation databases, based on an novel topic-controlled approach using two similarity measures (D-Similarity and G-Similarity) in an iterative EM-like algorithm.
This paper introduces two new similarity measures:
D-Similarity computes the relatedness between two documents using the similarity of the documents' topics.
G-Similarity computes the relatedness between a document and a set of documents.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) The authors demonstrated that the iterative annotation enrichment algorithm proposed is correct and terminates for a finite set of documents as input.
(EVALUATION OF THE STATE-OF-THE-ART) This paper does not include state-of-the-art techniques for the enrichment and integration of knowledge graphs based on semantic similarity measures, for example MINTE[1].
[1] MINTE: semantically integrating RDF graphs. WIMS, 2017.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) This paper introduces an iterative EM-like algorithm to identify the annotations describing the semantic meaning of a document. The proposed algorithm and the two similarity measures introduced are well described. The complexity of the proposed algorithm is studied. This paper presents the soundness and correctness of the proposed techniques.
On page 5 the paper indicates that for 2 documents de and dk, D-similarity SimD(de, dk) \in [0, 1]
From the definition of G-similarity, equations 2 and 3 on page 6, I conclude that the similarity value SimG(ge, gk) \in [0, 6]. ERV is defined in equation 4, on page 7. Why does the value of G-similarity (SimGt) have a greater weight in equation 4 of ERV?
Regarding Algorithm 1, Iterative Annotation Enrichment:
1) The variable th (line 3) is not used.
2) Typo error in the G letter used in G^de in line 14.
3) The variable \tau = 0.75 was defined in line 3 and it is the threshold of D-similarity used in line. Why does D-similarity have a threshold of \tau = 0.75? Why is \tau not an input variable?
4) In line 13 ERVt is present as a variable. But ERV is defined as a function in equation 4. If ERVt is a variable, what is the initial value of ERVt?
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) The experimental evaluation comprises only one case study: 50 Wikipedia articles in the German automotive industry.
The results of iteratively enriching the annotation database present low average values of the true positive rate (tpr) and positive predictive value (ppv).
The parameters used in the Iterative Annotation Enrichment algorithm and in the MALLET library are not explained enough.
On page 10 it is indicated: “We choose one document d e from the corpus D and remove 85% annotations of the corresponding ADB ge”
Are the removed annotations in at least two different ADB?
On page 10 it is indicated: “In a third step, we infer the topic distribution for document de …”
What was the method used to infer the topic distribution?
On page 10 it is indicated: “we use a small D-similarity of 0.20 ..”
Is the D-similarity value of 0.20 the same value used in the variable \tau in line 10 of the Algorithm 1?
On page 11 it is indicated: “After applying IE techniques to extract the directly extractable data from the text of the documents in D”
What was the IE technique applied to extract data from the text of the documents studied?
Regarding the database enriching explained on page 11: Figures 1 and 2 present the number of iterations performed by Algorithm 1 in the iterative ADB construction.
Did all 50 documents perform the same number of iterations in Algorithm 1 for the ADB construction?
G-similarity plays a fundamental role in Algorithm 1, the iterative annotation enrichment. Why does the experimental study not include a study of the performance of Algorithm 1 with different values of G-similarity?
On page 9, it is indicated: “Applying Algorithm 1 to each document in D leads to the complexity O(n^3*m^2). Obvious, in practise the number of documents dk ∈ D (n’), being similar to document de, is small (n’ << n) and the rank of the similarity matrix M (m‘) is small, too (m’ << m). Furthermore, the number of iterations for each document is only a fraction of n (see Section 4).”
But in Section 4 the number of similar documents m' is not reported for the 50 Wikipedia documents of the case study. Furthermore, the total number of annotations (n) and the number of annotations associated with each Wikipedia document (n') are not reported.
(OVERALL SCORE) Strong Points (SPs)
* This paper presents theoretical formalism of the proposed approach.
* A EM-like algorithm which introduced a topic-controlled approach for the iteratively enrichment of document annotation databases.
* The annotations identified by the Iterative Annotation Enrichment algorithm that do not correspond to the ground truth are not necessarily incorrect.
Weak Points (WPs)
* The experimental evaluation comprises only one specific domain.
* The effect of G-similarity on the results is not discussed.
* Parameters that impact the quality of the results of the proposed Iterative Annotation Enrichment algorithm are not discussed.
* The values obtained of true positive rate (tpr) and positive predictive value (ppv) are low.
* As indicated above, the experimental study has weaknesses, which makes the reproducibility of the results difficult.
* Typo error in page 6, in the equation (2): "3 if (si = sj /\ pi = pj /\ oi = pj)" should be "oi = oj" instead "oi = pj"
----- AFTER REBUTTAL PHASE -----
I would like to thank the authors for their responses. Many of my concerns were answered. However, in the proposed algorithm the G-similarity values are important in the quality of the results and the experimental study does not include an evaluation with different G-similarity values.
Review 3 (by Roberta Cuel)
(RELEVANCE TO ESWC) The structure of the paper is ok,
the case study seems very interesting
(NOVELTY OF THE PROPOSED SOLUTION) I'm not an expert on that
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) the case study and the discussion should be improved
(EVALUATION OF THE STATE-OF-THE-ART) There are some content overlaps with the following paper
www.ifis.uni-luebeck.de/.../tx_wapublications/ki2017_ikbc_workshop_public.pdf
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) it seems ok, but more details should be provided in particular on drawbacks
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) I'm not an expert on that
(OVERALL SCORE) I'm not an expert, but the paper is well organized, and the results seem interesting
Review 4 (by Brian Davis)
(RELEVANCE TO ESWC) The paper is extremely relevant to ML track.
(NOVELTY OF THE PROPOSED SOLUTION) The novelty lies in the application of unsupervised text annotation using expectation–maximization (EM) algorithm for topic controlled enrichment of an annotation database of Wikipedia documents. What is interesting is of course that the authors claim that such annotations are not otherwise extractable by existing Ontology Based IE approaches and more importantly they take an unsupervised approach.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) The algorithm for the EM is extremely well described and it builds strongly on the authors' previous work on unsupervised text annotation. Though the solution attempts to provide associated and relevant annotations to facts in the knowledge backbone, i.e. DBpedia, given the original argument regarding paraphrasing and nominal coreference in the case of the US President etc. examples, the experimental results don't indicate whether these entity tracking issues are solved by your approach, although relevant annotations are produced within the BMW context. So the evidence does not match the claim in my opinion.
(EVALUATION OF THE STATE-OF-THE-ART) There is a good knowledge of the state of the art, but a proper IE system will also include some entity tracking/coreference in text.
Not all IE systems are handcrafted, but you are correct that they rely on some supervised intervention. Indeed, SW datasets are lacking in rich lexical information (but this is changing with ongoing efforts in ontology lexicalisation), but a good IE system should attempt to augment its internal language resources with external semantic knowledge (ontology-aware dictionaries) or online thesauri, again all supervised or crafted.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) The algorithm is very well described, but I am not sure the evaluation is rigorous enough. It is not clear to me how generalisable the approach is as of yet beyond the example of BMW. The experiment does not go beyond this brand of car or other classes of car? Also, calling this a case study is somewhat misleading, as this would in my opinion involve some requirements informed by an external stakeholder or the study of a naturally occurring event - though one could argue that Wikipedia is a crowdsourcing event. But this is really a preliminary experiment. I feel the example dataset is too narrow to make a broad claim on the effectiveness of the approach. Please justify why only the BMW brand?
In addition, can you categorise the types of associative annotations - when do some become irrelevant? I'm not sure if I am missing this, but apart from examples it would be interesting to know what the types of associative annotations you generate are. Are they synonyms, pronouns/referents, paraphrases?
Are paraphrases and multi-word expressions lost, or does the EM algorithm still capture them as topics? It should be the case, but it's not clear, since the examples seem to be of one-token length.
Can the algorithm cope with co-referents such as "his wife" or "the former president", or at this present stage are you finding single-word synonyms? This is ok, but it would be good to clarify the limitations, if any. It appears to be handling variations of the BMW acronyms.
Another clarification needed is whether you needed to manually check the true positives as being DBpedia entries, or whether you were exploiting the anchor text or infoboxes or what? This isn't mentioned explicitly in Section 4, only that you took 50 articles.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) The experiment is somewhat replicable with respect to the thorough description of the algorithm - see above for comments - but there are no links to online documentation or datasets for inspection.
(OVERALL SCORE) The algorithm for the EM is extremely well described and it builds strongly on the authors' previous work on unsupervised text annotation. Though the solution attempts to provide associated and relevant annotations to facts in the knowledge backbone, i.e. DBpedia, given the original argument regarding paraphrasing and nominal coreference in the case of the US President etc., the experimental results don't indicate whether these entity tracking issues are solved by your approach, although relevant annotations are produced within the BMW context. So the evidence does not match the claim in my opinion.
This is a good paper and the research direction is promising and should be encouraged, but the experimental results are lacking and they do not convince me that it is quite ready for publication.
Good Points
The contribution is important with respect to pushing annotation beyond NER using unsupervised techniques.
Well written paper overall.
Good Description of algorithm in Section 3
Weak Points
The claims regarding linguistic issues described in Section 1 examples (Obama etc) do not seem to be resolved by your experiments. This is ok but it should be made clear that you are tackling a subset of these problems.
The related work is missing many other mentions of NLP tools for ontology Aware IE, Semantic Annotation and Entity Linking and some better positioning of your unsupervised annotation approach relative the state of the notably other unsupervised learning approaches for IE.
It seems that some of the content in this paper borrows heavily, at first glance, from reference [1] http://ceur-ws.org/Vol-1928/paper2.pdf in parts. Also, the experiment is quite similar, which begs the question: what is the delta between this submission and [1], and by how much?
Questions
1) Please differentiate between this work and [1]
2) I feel the experimental dataset is too narrow to make a broad claim on the effectiveness of the approach. Please justify why only the one BMW brand?
3) Please provide more details on the limitations of the types of associated annotations discovered. See above.
4) Did you manually check the true positives as being DBpedia entries, or were you exploiting the anchor text or infoboxes or what? This isn't mentioned explicitly in Section 4, only that you took 50 articles. Are these introduced to the EM algorithm at all?
Metareview by Achim Rettinger
The authors extend their previous work on extracting graph-based document representations by expanding each with related documents. While previous approaches mostly exploit overlaps across documents to improve the extraction quality of a central graph, this paper utilizes them to identify related documents to expand document specific graphs. This is novel and closer to related work on event extraction, which tries to identify common graphs over a set of related documents.
The reviewers concerns are mainly two-fold:
1. Does such an expanded representation live up to the benefits claimed in the introduction? The empirical evidence presented seems not to match the general claim.
2. It seems hard to reproduce the results and the generality of the outcomes are unclear.
The authors' response did not sufficiently clarify the issues, and the overall assessment remains a weak reject, even if the scoring system does not reflect this sufficiently, also in light of the other papers' reviews. We therefore recommend a reject.
https://www.jobilize.com/trigonometry/course/10-1-non-right-triangles-law-of-sines-by-openstax?qcr=www.quizover.com&page=4 | # 10.1 Non-right triangles: law of sines (Page 5/10)
## Verbal
Describe the altitude of a triangle.
The altitude extends from any vertex to the opposite side or to the line containing the opposite side at a 90° angle.
Compare right triangles and oblique triangles.
When can you use the Law of Sines to find a missing angle?
When the known values are the side opposite the missing angle and another side and its opposite angle.
In the Law of Sines, what is the relationship between the angle in the numerator and the side in the denominator?
What type of triangle results in an ambiguous case?
A triangle with two given sides and a non-included angle.
## Algebraic
For the following exercises, assume $\alpha$ is opposite side $a$, $\beta$ is opposite side $b$, and $\gamma$ is opposite side $c$. Solve each triangle, if possible. Round each answer to the nearest tenth.
$\alpha =43°,\gamma =69°,a=20$
$\alpha =35°,\gamma =73°,c=20$
$\alpha =60°,\beta =60°,\gamma =60°$
$a=4,\alpha =60°,\beta =100°$
$b=10,\beta =95°,\gamma =30°$
For the following exercises, use the Law of Sines to solve for the missing side for each oblique triangle. Round each answer to the nearest hundredth. Assume that angle $A$ is opposite side $a$, angle $B$ is opposite side $b$, and angle $C$ is opposite side $c$.
Find side $b$ when $A=37°,B=49°,c=5.$
$b\approx 3.78$
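As a quick numeric check of the exercise above (a sketch in Python, not part of the original exercise set): with A = 37°, B = 49°, c = 5, first C = 180° - A - B = 94°, then b/sin B = c/sin C.

```python
import math

def law_of_sines_side(opp_angle_deg, known_side, known_angle_deg):
    """Side opposite opp_angle_deg, from a known side/opposite-angle pair."""
    return known_side * math.sin(math.radians(opp_angle_deg)) \
        / math.sin(math.radians(known_angle_deg))

C = 180 - 37 - 49          # third angle of the triangle
b = law_of_sines_side(49, 5, C)
print(round(b, 2))         # → 3.78
```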
Find side $a$ when $A=132°,C=23°,b=10.$
Find side $c$ when $B=37°,C=21°,b=23.$
$c\approx 13.70$
For the following exercises, assume $\alpha$ is opposite side $a$, $\beta$ is opposite side $b$, and $\gamma$ is opposite side $c$. Determine whether there is no triangle, one triangle, or two triangles. Then solve each triangle, if possible. Round each answer to the nearest tenth.
$\alpha =119°,a=14,b=26$
$\gamma =113°,b=10,c=32$
one triangle, $\alpha \approx 50.3°,\beta \approx 16.7°,a\approx 26.7$
$b=3.5,c=5.3,\gamma =80°$
$a=12,c=17,\alpha =35°$
two triangles, or
$a=20.5,b=35.0,\beta =25°$
$a=7,c=9,\alpha =43°$
two triangles, or
$a=7,b=3,\beta =24°$
$b=13,c=5,\gamma =\text{\hspace{0.17em}}10°$
two triangles, $\alpha \approx 143.2°,\beta \approx 26.8°,a\approx 17.3$ or ${\alpha }^{\prime }\approx 16.8°,{\beta }^{\prime }\approx 153.2°,{a}^{\prime }\approx 8.3$
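The two-triangle answer above can be reproduced with a short SSA (side-side-angle) solver; this is an illustrative sketch, following the textbook's rounding to the nearest tenth:

```python
import math

def ssa_solutions(b, c, gamma_deg):
    """Solve the ambiguous SSA case: sides b and c, with angle gamma opposite c."""
    sin_beta = b * math.sin(math.radians(gamma_deg)) / c
    if sin_beta > 1:
        return []  # no triangle possible
    solutions = []
    beta_acute = math.degrees(math.asin(sin_beta))
    for beta in (beta_acute, 180 - beta_acute):   # acute and obtuse candidates
        alpha = 180 - gamma_deg - beta
        if alpha > 0:                             # candidate survives only if angles sum to 180°
            a = c * math.sin(math.radians(alpha)) / math.sin(math.radians(gamma_deg))
            solutions.append((round(alpha, 1), round(beta, 1), round(a, 1)))
    return solutions

print(ssa_solutions(13, 5, 10))  # → [(143.2, 26.8, 17.3), (16.8, 153.2, 8.3)]
```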
$a=2.3,c=1.8,\gamma =28°$
$\beta =119°,b=8.2,a=11.3$
no triangle possible
For the following exercises, use the Law of Sines to solve, if possible, the missing side or angle for each triangle or triangles in the ambiguous case. Round each answer to the nearest tenth.
Find angle $A$ when $a=24,b=5,B=22°.$
Find angle $A$ when $a=13,b=6,B=20°.$
$A\approx 47.8°$ or ${A}^{\prime }\approx 132.2°$
Find angle $B$ when $A=12°,a=2,b=9.$
For the following exercises, find the area of the triangle with the given measurements. Round each answer to the nearest tenth.
$a=5,c=6,\beta =35°$
$8.6$
$b=11,c=8,\alpha =28°$
$a=32,b=24,\gamma =75°$
$370.9$
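These areas follow from the SAS area formula, Area = (1/2)ab·sin γ for two sides with included angle γ; the exercise just above can be checked numerically with a small sketch:

```python
import math

def triangle_area_sas(side1, side2, included_angle_deg):
    """Triangle area from two sides and the included angle: (1/2)*a*b*sin(C)."""
    return 0.5 * side1 * side2 * math.sin(math.radians(included_angle_deg))

print(round(triangle_area_sas(32, 24, 75), 1))  # → 370.9
```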
$a=7.2,b=4.5,\gamma =43°$
## Graphical
For the following exercises, find the length of side $x$. Round to the nearest tenth.
$12.3$
For the following exercises, find the measure of angle $x$, if possible. Round to the nearest tenth.
$29.7°$
Notice that $x$ is an obtuse angle.
$110.6°$
For the following exercises, find the area of each triangle. Round each answer to the nearest tenth.
$57.1$
## Extensions
Find the radius of the circle in [link] . Round to the nearest tenth.
Find the diameter of the circle in [link] . Round to the nearest tenth.
$10.1$
https://www.nature.com/articles/s41467-021-24248-9
# Fast-field-cycling ultralow-field nuclear magnetic relaxation dispersion
## Abstract
Optically pumped magnetometers (OPMs) based on alkali-atom vapors are ultra-sensitive devices for dc and low-frequency ac magnetic measurements. Here, in combination with fast-field-cycling hardware and high-resolution spectroscopic detection, we demonstrate applicability of OPMs in quantifying nuclear magnetic relaxation phenomena. Relaxation rate dispersion across the nT to mT field range enables quantitative investigation of extremely slow molecular motion correlations in the liquid state, with time constants > 1 ms, and insight into the corresponding relaxation mechanisms. The 10-20 fT/$$\sqrt{{\rm{Hz}}}$$ sensitivity of an OPM between 10 Hz and 5.5 kHz 1H Larmor frequency suffices to detect magnetic resonance signals from ~ 0.1 mL liquid volumes imbibed in simple mesoporous materials, or inside metal tubing, following nuclear spin prepolarization adjacent to the OPM. High-resolution spectroscopic detection can resolve inter-nucleus spin-spin couplings, further widening the scope of application to chemical systems. Expected limits of the technique regarding measurement of relaxation rates above 100 s−1 are discussed.
## Introduction
Nano-scale dynamic processes that occur on ms to μs time scales, such as protein folding, aqueous complexation, and surface adsorption phenomena, are often probed using nuclear magnetic relaxation dispersion (NMRD) techniques1,2,3, in which field-dependent relaxation rates of nuclear spins are used to infer correlation times for molecular reorientation4,5 and diffusive transport. Beyond fundamental interests, insights from NMRD such as surface fractal dimension and roughness provide models for industrial catalysis and petrology, where liquids are confined inside porous solids and molecular diffusion is restricted by surface geometry6 as well as adsorption7, and in medicine assist the design of molecular agents for relaxation-contrast magnetic resonance imaging (MRI)8. Furthermore, if coupled with spectroscopic dispersion via chemical shifts or spin–spin couplings, the dynamics can be related to specific molecular functional groups, facilitating analyses of chemical mixtures and biological specimens9.
Accurate correlation times τc can be obtained by measuring nuclear spin relaxation across a range of Larmor frequencies $$B{\gamma }_{I}\ll {\tau }_{{\rm{c}}}^{-1}$$ to $$B{\gamma }_{I}\gg {\tau }_{{\rm{c}}}^{-1}$$, where B is the field strength and γI is the nuclear gyromagnetic ratio. Extremely slow correlations thus require measurements at ultralow magnetic fields within shielded enclosures such as a MuMetal chamber. The main existing NMRD technique uses fast-field-cycling (FFC) electromagnets10,11,12,13 of around 1 T for efficient inductive NMR signal detection, but these must be used unshielded with active cancelation of ambient fields to access below the geomagnetic field range14,15. Alternatively, NMRD is performed by transporting samples between persistent high- and ultralow-field locations16,17,18,19,20, but relatively slow transport times limit the observable τc at the high end. The limits of these existing techniques are illustrated by the magenta- and blue-shaded regions, respectively, of Fig. 1.
In this work, we introduce a third scenario to address the top-left portion of Fig. 1 that lies outside the reach of inductive NMR pickup. The speed of the FFC approach is combined with the low-frequency sensitivity of a spin-exchange-relaxation-free (SERF)21,22,23,24,25,26,27,28 optically pumped magnetometer (OPM) to perform NMRD at 1H Larmor frequencies from 1 Hz to 10 kHz, corresponding to the region of Fig. 1 shaded in green. The high sensitivity of SERF OPMs of order 1 fT/$${\sqrt{\rm{Hz}}}$$29,30 at signal frequencies down to a few Hz, rivals the best superconducting quantum interference device (SQUID)31,32 and high-Q inductive-pickup magnetometers below Earth’s field33,34, with the advantage of cryogen-free operation and simple tuning based on Hartmann–Hahn matching of the OPM and NMR spin ensembles. In the ultralow-field NMRD context, the OPM is compatible both with MuMetal shielding and relatively weak prepolarizing fields of order 10 mT. Magnetic fields for relaxation and detection are supplied accurately and precisely (within 1 nT) following a one-off calibration procedure and can be cycled in less than 1 ms, resulting in atomic-response-limited dead times. Based on this configuration, we are able to study spin relaxation phenomena that cannot be probed using conventional inductive field-cycling NMR procedures: (1) relaxation of liquids encased in metal tubing, demonstrated using aqueous solutions of paramagnetic impurities; (2) the full frequency dependence for motional correlations in a system of n-octane (n-C8H18) and n-decane (n-C10H22) absorbed in nanoscale confinement upon porous alumina and titania, as an example relevant to research in catalysis. Results unambiguously support dynamics models involving molecular diffusion among paramagnetic sites on the pore surface; (3) chemical species resolution via spin–spin J couplings.
## Results
### Dynamics from NMRD
When it covers the appropriate range of fields and time scales, the relaxation measured in NMRD can relate model parameters of interest to molecular motion, surface structure, and molecule–surface interactions35,36,37. A central quantity of interest is the time correlation function g(τ) = 〈x(t)x(t + τ)〉/〈x(t)x(t)〉 of the molecular motion. This is related to observable relaxation by the quantity $$j(\omega )=\mathop{\int}\nolimits_{0}^{\infty }g(\tau )\cos (\omega \tau ){\rm{d}}\tau$$, i.e. the cosine transform of g(τ). Here we assume a simple but useful model where the local field is inhomogeneous, with a randomly oriented component of root-mean-square amplitude Brms. The longitudinal relaxation rate is $${[{T}_{1,I}({\omega }_{I})]}^{-1}={\gamma }_{I}^{2}{B}_{{\rm{rms}}}^{2}j({\omega }_{I})$$ under standard perturbation (i.e. Redfield35) assumptions. In the ideal case of unrestricted diffusion, a single correlation time is found, where $$g(\tau )\propto \exp [-(\tau /{\tau }_{{\rm{c}}})]$$ and the spectral density is a Lorentzian: $$j(\omega )={\tau }_{{\rm{c}}}/(1+{\omega }^{2}{\tau }_{{\rm{c}}}^{2})$$, where τc is the characteristic diffusion time. Inverse-square power-law behavior is thus expected for $${T}_{1,I}^{-1}$$ vs. ωI, for $${\omega }_{I}\gg {\tau }_{{\rm{c}}}^{-1}$$.
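As a numerical aside (not part of the paper's analysis; the field and correlation-time values below are illustrative assumptions), the single-correlation-time model can be evaluated directly to check the inverse-square power law:

```python
import numpy as np

# Single-correlation-time (Lorentzian) relaxation model, Redfield limit.
# B_RMS and TAU_C are illustrative values, not fitted parameters from the paper.
GAMMA_H = 2 * np.pi * 42.577e6   # 1H gyromagnetic ratio, rad s^-1 T^-1
B_RMS = 100e-9                   # assumed 100 nT rms local field
TAU_C = 1e-3                     # assumed 1 ms correlation time

def j_lorentzian(omega, tau_c=TAU_C):
    """Spectral density j(w) = tau_c / (1 + w^2 tau_c^2)."""
    return tau_c / (1.0 + (omega * tau_c) ** 2)

def r1(omega):
    """Longitudinal rate 1/T1 = gamma^2 B_rms^2 j(w)."""
    return GAMMA_H ** 2 * B_RMS ** 2 * j_lorentzian(omega)

# For w * tau_c >> 1 the rate falls as w^-2: one decade in frequency
# costs two decades in rate.
ratio = r1(1e6) / r1(1e5)
```

With these numbers the ratio comes out within 0.1% of 10−2, the inverse-square signature.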
Scenarios of constrained Brownian motion38 such as diffusion in pores may yield several concurrent dynamics modes. Fitting to a distribution of correlation times may be more appropriate: $${[{T}_{1,I}({\omega }_{I})]}^{-1}={\gamma }_{I}^{2}\mathop{\int}\nolimits_{0}^{\infty }{B}_{{\rm{rms}}}^{2}p({\tau }_{{\rm{c}}}){\tau }_{{\rm{c}}}/(1+{\omega }_{I}^{2}{\tau }_{{\rm{c}}}^{2})\ {\rm{d}}{\tau }_{{\rm{c}}}$$39,40, where p(τc) represents a probability distribution normalized to $$\mathop{\int}\nolimits_{0}^{\infty }p({\tau }_{{\rm{c}}})\ {\rm{d}}{\tau }_{{\rm{c}}}=1$$. Kimmich and co-workers examine this approach to explain power-law relaxation behavior in porous glasses: $${T}_{1,I}\propto {\omega }_{I}^{\xi }$$, where 0 < ξ < 241,42. Surface-induced relaxation is attributed to “molecular reorientation mediated by translational displacement” (RMTD), where diffusion across a rugged pore surface modulates intra-molecular spin-spin dipolar couplings and p(τc) is linked to the surface fractal dimension. A breakdown of the power law at low frequencies ωI indicates a maximum τc, which is connected to the longest distance a molecule can diffuse before leaving the surface phase or experiencing a different surface structure. The value should depend on the molecule, due to different diffusion coefficients, as well as the porous medium.
Moreover towards zero Larmor frequency, T1,I tends to a plateau defined by $${[{T}_{1,I}(0)]}^{-1}={\gamma }_{I}^{2}\mathop{\int}\nolimits_{0}^{\infty }{B}_{{\rm{rms}}}^{2}p({\tau }_{{\rm{c}}}){\tau }_{{\rm{c}}}={\gamma }_{I}^{2}{B}_{{\rm{rms}}}^{2}\langle {\tau }_{c}\rangle$$, where 〈τc〉 is the mean correlation time37. This and the above measures all require T1,I to be known for frequencies below $${\tau }_{{\rm{c,max}}}^{-1}$$, motivating the ultralow-field measurement capability.
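The distribution integral and its zero-frequency plateau can be sanity-checked numerically. The sketch below (all parameter values assumed) uses a log-uniform p(τc), for which the dispersion follows T1,I ∝ ωI between the cut-off frequencies and levels off at the γI²Brms²〈τc〉 plateau:

```python
import numpy as np

GAMMA_H = 2 * np.pi * 42.577e6   # 1H gyromagnetic ratio, rad s^-1 T^-1
B_RMS = 100e-9                   # assumed rms local field
TAU_MIN, TAU_MAX = 1e-6, 20e-3   # assumed spread of correlation times

def r1_dispersion(omega, n=20000):
    """1/T1(w) = g^2 B^2 * Int p(tau) tau / (1 + w^2 tau^2) dtau,
    with log-uniform p(tau) ~ 1/tau, via trapezoid rule on a log grid."""
    tau = np.logspace(np.log10(TAU_MIN), np.log10(TAU_MAX), n)
    p = (1.0 / tau) / np.log(TAU_MAX / TAU_MIN)   # normalized to 1
    f = p * tau / (1.0 + (omega * tau) ** 2)
    integral = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau)))
    return GAMMA_H ** 2 * B_RMS ** 2 * integral

# Plateau check: 1/T1(0) = g^2 B^2 <tau>, with <tau> = (b - a)/ln(b/a) here.
tau_mean = (TAU_MAX - TAU_MIN) / np.log(TAU_MAX / TAU_MIN)
plateau = GAMMA_H ** 2 * B_RMS ** 2 * tau_mean

# Power-law window check: between 1/TAU_MAX and 1/TAU_MIN the rate scales
# roughly as 1/w, i.e. T1 ~ w^1 (exponent xi = 1 for this particular p).
ratio = r1_dispersion(1e3) / r1_dispersion(1e4)
```

Other choices of p(τc) give other exponents ξ; the plateau onset always marks the longest correlation time in the distribution.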
### Tunable NMR detection using optical magnetometry
Figure 2a shows an experimental setup for FFC NMR with OPM detection. A 2 mL vial containing the NMR sample sits within four coaxial solenoids (S1–S4), which provide z-oriented fields. Liquid coolant flows around the sample chamber, with S1 moreover immersed in the flowing coolant to maintain a sample temperature around 30 °C. The short S1 solenoid is used to produce fields up to 20 mT to polarize the nuclear spins in the sample, while S2 and S3 provide weaker fields for Larmor precession. Adjacent to this chamber is a heated glass cell containing 87Rb vapor, with optical access along x and z directions for probing and pumping of the alkali spin angular momentum S, respectively.
A magnetic field $${\bf{B}}=({B}_{1}\cos \omega t,{B}_{1}\sin \omega t,{B}_{z,{\mathrm{{S}}}})$$ in Cartesian coordinates is assumed at the atoms, as the sum of a constant bias field Bz,S along the z-axis and an effective rotating field B1 induced by the precession of nuclear magnetization in the NMR sample in the xy plane. Assuming the nuclei experience a field Bz,I along the z-axis, then ω = ωI = γIBz,I, and B1 is proportional to the amplitude of the nuclear magnetization.
Dynamics of S are adequately described in the SERF regime by a polarization vector model where the x-axis component of S under steady-state pump-probe and a transverse rf field of angular frequency ω is given by30
$${S}_{x}=\frac{{g}_{S}{R}_{{\rm{op}}}{T}_{2,S}^{2}}{2{q}^{2}}\left[\frac{\cos \omega t+(\omega -{\omega }_{S}){T}_{2,S}\sin \omega t}{1+{(\omega -{\omega }_{S})}^{2}{T}_{2,S}^{2}}\right]{B}_{1}.\qquad (1)$$
Here, ωS = gSBz,S/q is the Larmor frequency, gS is the gyromagnetic ratio, q is the nuclear slowing down factor, Rop is the optical pumping rate and $${T}_{2,S}^{-1}$$ is the transverse relaxation rate of the alkali atom ensemble. According to the above Eq. (1), the atomic response to B1 is strongest for matched precession frequencies of the spin species: ωI = ωS. Thus the OPM is tunable to a given NMR frequency ωI by setting the magnetic field at the atoms to Bz,S = ±(q/gs)ωI = ± γI(q/gs)Bz,I. This adjustment is permitted since Bz,I is the superposition of the fields in the interior of coils S2 + S3 + S4, while Bz,S is the superposition of fields from S4 and the much weaker exterior field of S2 + S3.
For Larmor frequencies ωI/(2π) between 10 and 200 Hz the magnetometer noise is below $$10\ {\rm{fT}}/\sqrt{{\rm{Hz}}}$$ (Fig. 2b), limited by noise in the lasers and, to a lesser extent, the Johnson noise of the coils S1 + S2 + S3. The spin projection noise estimated from the atom density nS ≈ 1020 m−3, vapor cell volume V, temperature 150 °C and coherence time T2,S ≈ 3 ms is $$\sqrt{q/({n}_{S}V{g}_{S}^{2}{T}_{2,S})} \sim$$ 1.1 fT/$${\sqrt{\rm{Hz}}}$$. Above fields Bz,S ≈ 100 nT, ωS starts to become comparable to 1/T2,S, marking the limit of the SERF regime, and the magnetometer noise rises above 20 fT/$${\sqrt{\rm{Hz}}}$$. Overall, as Fig. 2c illustrates, NMR signals are obtainable at fields where Larmor frequencies are around 100 times higher than the atomic bandwidth. In contrast, without tuning, the combined atomic and nuclear spin system yields a relatively narrow operating range for NMR, quantified by the half-width at half-height of Equation (1): $$\Delta {\omega }_{I}/(2\pi )\approx \sqrt{3}/(2\pi {T}_{2,S})\approx$$ 80 Hz.
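In practice the tuning condition reduces to a simple scaling between the two fields. The sketch below assumes a slowing-down factor q = 5 (a typical round value for 87Rb; the paper does not quote one) to show that kHz-range 1H signals are matched by sensor bias fields of only ~100 nT:

```python
import math

GAMMA_H = 2 * math.pi * 42.577e6   # 1H gyromagnetic ratio, rad s^-1 T^-1
G_S = 2 * math.pi * 28.0e9         # electron gyromagnetic ratio, rad s^-1 T^-1
Q = 5.0                            # assumed 87Rb nuclear slowing-down factor

def sensor_bias_field(b_nmr):
    """Bias field at the atoms, Bz,S = (q/gS) * gammaI * Bz,I, that tunes the
    OPM resonance to the 1H Larmor frequency in the sample field b_nmr."""
    return (Q / G_S) * GAMMA_H * b_nmr

# A 550 Hz 1H signal (Bz,I ~ 12.9 uT) is matched by a ~98 nT sensor field,
# comfortably inside the SERF operating range quoted above.
b_sensor = sensor_bias_field(12.9e-6)
```

The same function shows why the highest Larmor frequencies used here push the sensor field toward the edge of the SERF regime.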
### Dissolved paramagnetic species in liquids
Many single-component liquids and simple solutions are characterized by an exponential correlation function for molecular tumbling, with a time constant τc in the low ps range. Unless much slower additional motional processes exist, the NMR relaxation times T1,I and T2,I are independent of magnetic field for $${B}_{z,{\rm{I}}}\ll {({\gamma }_{I}{\tau }_{{\rm{c}}})}^{-1} \sim 0.1$$ T, all the way down to ultralow field.
Here we observe the dependence of relaxation in aqueous solutions of the paramagnetic compound 4-hydroxy-2,2,6,6-tetramethylpiperidin-1-oxyl (TEMPOL). TEMPOL is a chemical oxidant under study elsewhere for potential therapeutic properties43, as well as a source of nuclear spin hyperpolarization44,45 that can achieve enhanced sensitivity in NMR. Sequences A and B are used, respectively, to measure 1H T1,I and T2,I (Fig. 3a).
In sequence A, nuclear spin prepolarization at 20 mT is followed by switching to a lower magnetic field for a time τ1, before a dc π/2 pulse induces free nuclear precession about the z-axis. The amplitudes of the NMR signal, sA, are fit well by the function $${s}_{A}\propto \exp (-{\tau }_{1}/{T}_{1,I})$$ and the observed relaxation rates scale linearly with concentration of the paramagnetic dopant as $${T}_{1,I}^{-1}={({T}_{1,I}^{(0)})}^{-1}+{k}_{1}[{\rm{TEMPOL}}]$$, where $${T}_{1,I}^{(0)}$$ is the relaxation time at zero solute (Fig. 3b). The relaxivity parameter k1 = 0.453(5) s−1 mmol−1 dm3 is in good agreement with literature values at the high-field end44,45, which gives confidence in the method. In sequence B, the initial π/2 pulse is followed by a Hahn echo to refocus transverse magnetization after time τ2. Signal amplitudes are fit well by the expected function $$\exp (-{\tau }_{1}/{T}_{1,I}-{\tau }_{2}/{T}_{2,I})$$ and provide a transverse relaxivity parameter k2 = 0.455(5) s−1 mmol−1 dm3 defined by $${T}_{2,I}^{-1}({B}_{z,{\rm{I}}},[{\rm{TEMPOL}}])={T}_{2,{\rm{I}}}^{-1}({B}_{z,{\rm{I}}},0)+{k}_{2}({B}_{z,{\rm{I}}})[{\rm{TEMPOL}}]$$ (Fig. 3c, triangle plot markers). The result k2 = k1 holds down to nT fields, which confirms isotropic molecular tumbling in the fast motion limit and the absence of slow motional correlations.
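The relaxivity extraction is a straight-line fit of rate versus concentration. A minimal sketch with synthetic, noise-free rates (the intercept and the data themselves are made up; only the slope is set near the reported k1):

```python
import numpy as np

# Concentrations mirror the TEMPOL dilution series in Methods (mM);
# the rates are synthetic: r1 = 1/T1(0) + k1 * [TEMPOL].
conc = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 10.0])
r1 = 0.35 + 0.453 * conc          # assumed intercept 0.35 s^-1, slope 0.453

k1_fit, r1_zero = np.polyfit(conc, r1, 1)   # slope = relaxivity, s^-1 mM^-1
```

In practice each rate carries an uncertainty from the exponential fit, and a weighted fit would propagate those into the quoted error on k1.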
The transverse decay rates are also well approximated by $${({T}_{2,I}^{* })}^{-1}=\pi \times {\rm{FWHM}}$$ obtained from line widths in the Fourier-transform NMR spectra of sequence A, due to relatively low inhomogeneity in Bz,I. Here, S2 + S3 produce field gradients dBz,I/dz and smaller components along x and y due to tilt imperfections in the coil windings, resulting in a linear dependence $$({\rm{d}}/{\rm{d}}{B}_{z,{\rm{I}}}){({T}_{2,I}^{* })}^{-1}=\ 0.01\ {{\rm{s}}}^{-1}\ {\rm{\mu }}{{\rm{T}}}^{-1}$$ (or 4 ppk of Bz,I) observed above 500 Hz 1H frequency. We also note that S4 is centered on the 87Rb cell and not on the NMR sample, so gradients may cancel out at some Larmor frequencies; the line narrowing at around 200 Hz is attributed to this effect. Overall the results show that the NMR linewidth stays below 1 Hz even up to geomagnetic fields, and that TEMPOL causes no further detriment to spectroscopic resolution in the zero/ultralow-field range.
To further demonstrate the application potential of the technique we highlight that ultralow-field cycling and NMR detection is compatible with metal sample enclosures. NMR signals can be detected without amplitude loss up to kHz Larmor frequencies when 0.1 mL aliquots of the TEMPOL solutions are contained inside a titanium alloy tube (outer diameter 8 mm, inner diameter 7 mm, pressure rating 13 MPa). Relaxation rates 1/T1,I for samples with and without the metal tube, shown in Fig. 3d, are identical within measurement error to those of Fig. 3b. Larger error bars are due to the smaller sample volume giving lower signal to noise. This measurement is impossible via conventional fast-field cycling NMR techniques, where eddy currents in metal strongly attenuate the amplitude of high-frequency NMR signal and also limit the rate of field switching. Transverse relaxation rates are also unaffected by the presence of the metal tube, from which we may conclude that eddy currents are negligible over the relatively small (mT) range of field switching. The approach may therefore open the way to study relaxation in unexplored contexts, for instance high-pressure fluids (e.g. supercritical fluids), flow in pipes, foil-sealed products (e.g. foods, pharmaceuticals), and (e.g. lead-, tungsten-) sealed radioactive samples.
### Liquids confined in porous materials
To demonstrate insight into molecular motion near pore surfaces we study the 1H spin relaxation of n-alkane hydrocarbons confined within matrices of alumina (γ polymorph, 9 nm mean pore diameter) and titania (anatase polymorph, 7–10 nm mean pore diameter). These simple inorganic oxides in their mesoporous form possess catalytic features due to their high specific surface area, Lewis acidic sites, and option of chemical treatments including metalization to activate the pore surface. Yet, owing to the frequency range of conventional NMRD techniques, there is limited understanding of how molecular dynamics and surface site properties relate to long-τc relaxation processes, even without surface functionalization11.
Figure 4a shows 1H relaxation rates at 30 °C for imbibed n-alkanes, measured between 1 Hz and 5.5 kHz Larmor frequency using the sequence shown in Fig. 4b. Due to excess noise in the magnetometer below 100 Hz (including mains electricity noise and 1/f noise, see Fig. 4c), fast field switching between relaxation and detection events is the preferred measurement option to probe the lowest fields, where the NMR signal is always detected at a frequency above 100 Hz. Above 100 Hz Larmor frequency, the noise floor is low enough to detect NMR signals at the relaxation field, without switching. The measurable NMR relaxation is limited in principle to rates $${T}_{1,I}^{-1}\;<\;{R}_{{\rm{op}}}$$, where the latter is of order 300 s−1. However, in practice, the limit is $${T}_{1,I}^{-1}\;<\;{T}_{2,S}^{-1}$$, or around 100 s−1, since the atomic precession signal causes a 10 ms dead time following the π/2 pulse (see Fig. 4d).
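The practical ceiling can be seen from the fraction of magnetization that survives the post-pulse dead time; a one-line estimate (dead time taken as the 10 ms quoted above):

```python
import math

DEAD_TIME = 10e-3   # s, atomic recovery time after the pi/2 pulse

def surviving_fraction(rate):
    """Fraction of the initial signal left after the dead time, exp(-R*t)."""
    return math.exp(-rate * DEAD_TIME)

# At 1/T1 = 100 s^-1 only exp(-1) ~ 37% of the signal outlives the dead
# time, which is why rates much above 100 s^-1 become impractical here.
```

Shortening the dead time, e.g. by Q-switching the pump beam as discussed later, raises this ceiling proportionally.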
The main feature of Fig. 4a is the weak dispersion in $${T}_{1,I}^{-1}$$ for each alkane, and moreover between the two porous materials, across the conventional FFC-NMR frequency range 10 kHz to 1 MHz11. The relaxation rate for each alkane depends only slightly on the porous material, therefore bulk effects dominate the relaxation process in this range. In contrast, relaxation rates below 10 kHz depend strongly on the material and mechanisms related to the surface are prominent, with the higher values being observed towards zero field. The T1,I dispersion in titania is much weaker than in γ-alumina; $${T}_{1,I}^{-1}$$ reaches only around 2 s−1 below 200 Hz, compared to 30 s−1 for alumina. Although the two materials have similar mean pore diameter and surface area/volume ratio, surface-induced relaxation is not so active in the first material. It is known from electron spin resonance spectroscopy46 that the alumina contains a higher concentration of paramagnetic impurity—[Fe3+] ≈ 2 × 1016 g−1 (i.e., ions per unit mass of the dry porous material) in alumina vs. 2 × 1015 g−1 in titania—suggesting that the lower-frequency relaxation mechanism involves dipole–dipole coupling between 1H and the surface spins, rather than surface-induced modulation of intra-molecular 1H–1H spin couplings.
Between ωI/(2π) = 50 and 5000 Hz, the longitudinal relaxation in γ-alumina obeys a power-law frequency dependence: $${T}_{1,I}\propto {\omega }_{I}^{\xi }$$. Fitted slopes $$-{\rm{d}}({\rm{log}}\ {T}_{1,I}^{-1})/{\rm{d}}({\rm{log}}\ {\omega }_{I})$$ give exponents ξ = 0.50 ± 0.03 for octane and ξ = 0.45 ± 0.03 for decane. Such values are consistent with simple numerical simulations in which imbibed molecules randomly walk within a dilute matrix of non-mobile spins—such as surface paramagnets—where the strength of dipole–dipole interactions between the two spin species scales with the inverse cube of their instantaneous separation39. This nonlinear dependence gives rise to Lévy walk statistics. A detailed characterization of these effects in the alumina system is ongoing work.
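The cited simulations are not reproduced here, but the basic ingredient (a random walk past dilute, immobile spins with an inverse-cube coupling) is easy to caricature. In the toy below every parameter (lattice size, site density, the smoothing offset A standing in for a distance of closest approach) is an assumption, chosen only to give a slowly decaying field autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(0)

L = 4096          # periodic 1D lattice
N_SITES = 40      # dilute, immobile "paramagnetic" sites
N_STEPS = 20000   # length of the random walk
A = 4.0           # smoothing offset (distance of closest approach), assumed

sites = rng.choice(L, size=N_SITES, replace=False)
signs = rng.choice([-1.0, 1.0], size=N_SITES)   # random site orientations

def local_field(x):
    """Sum of inverse-cube couplings from all fixed sites (minimum image)."""
    d = np.abs(sites - x)
    d = np.minimum(d, L - d)
    return float(np.sum(signs / (A ** 2 + d ** 2) ** 1.5))

# Unbiased random walk; record the local field along the trajectory.
x = np.mod(np.cumsum(rng.choice([-1, 1], size=N_STEPS)), L)
b = np.array([local_field(xi) for xi in x])

def autocorr(sig, max_lag):
    """Normalized autocorrelation estimate g(k) for lags 0..max_lag-1."""
    sig = sig - sig.mean()
    var = float(np.dot(sig, sig)) / len(sig)
    return np.array([float(np.dot(sig[: len(sig) - k], sig[k:]))
                     / ((len(sig) - k) * var) for k in range(max_lag)])

g = autocorr(b, 200)   # g[0] = 1; slow decay as the walker de-correlates
```

A cosine transform of g then gives a toy spectral density; the published simulations additionally account for three dimensions and the full dipolar tensor.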
Although developing an analytical model for the surface dynamics is outside the scope of this paper, for analysis of the correlation time it suffices to fit the measured relaxation rates by a stretched Lorentzian function $${[{T}_{1,I,{\rm{fit}}}({\omega }_{I})]}^{-1}={[{T}_{1,I}(0)]}^{-1}/{(1+{\tau }_{{\rm{c}}}^{2}{\omega }_{I}^{2})}^{\beta }+{[{T}_{1,I}(\infty )]}^{-1}$$ with four independent fit parameters: T1,I(0), T1,I(∞), τc, and β. For $${\omega }_{I}{\tau }_{{\rm{c}}}\gg 1$$ and $${T}_{1,I,{\rm{fit}}}({\omega }_{I})\ll {T}_{1,I}(\infty )$$, the function is approximated by a power law with ξ = 2β. The fitted curves are plotted as solid lines in Fig. 4a. The parameter τc for alkanes in alumina is determined from the relaxation behavior below 50 Hz, where T1,I(ωI) changes from a power-law frequency dependence to a constant, i.e., towards a plateau at T1,I(0). Using the analysis presented earlier, this indicates a maximum correlation time ($${\tau }_{{\rm{c}}}={\tau }_{{\rm{c,max}}}$$) of around 20–30 ms, which is at least two orders of magnitude longer than the maximum correlation time of more polar molecules in porous confinement, such as water. Relative to octane, the plateau for decane extends to a higher Larmor frequency, indicating a shorter $${\tau }_{{\rm{c,max}}}$$, despite octane having a higher self-diffusion coefficient as a bulk liquid. However, at this point τc is also of similar magnitude to the longitudinal relaxation time. Under such conditions the assumptions of standard NMR relaxation theories—such as the Wangsness–Bloch–Redfield theory—are not strictly justified, in particular the coarse-graining of time35; spin diffusion may then be part of the relaxation mechanism, or may set an upper limit on the relaxation rate in the plateau. Whether this is true requires more information on the physical process responsible for spin relaxation.
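The stretched-Lorentzian fit function and its limiting behaviors can be written down directly; the parameter values below are illustrative (of the order of the alumina results, not the actual fitted values):

```python
import numpy as np

R1_ZERO, R1_INF = 30.0, 0.5   # 1/T1 at zero/infinite frequency, s^-1 (assumed)
TAU_C = 25e-3                 # maximum correlation time ~25 ms (assumed)
BETA = 0.25                   # beta = xi/2 for a power-law exponent xi ~ 0.5

def r1_model(nu):
    """Stretched Lorentzian: 1/T1(nu) for Larmor frequency nu in Hz."""
    w = 2.0 * np.pi * nu
    return R1_ZERO / (1.0 + (TAU_C * w) ** 2) ** BETA + R1_INF

# In the window w*tau_c >> 1 (and well above R1_INF) the log-log slope of
# the surface term tends to -2*beta, i.e. T1 ~ nu^(2*beta).
nu = np.array([200.0, 2000.0])
slope = float(np.diff(np.log(r1_model(nu) - R1_INF))
              / np.diff(np.log(nu)))
```

Below the plateau onset near 1/(2πτc) the model flattens to R1_ZERO + R1_INF, mirroring the behavior seen below 50 Hz in Fig. 4a.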
### High-resolution relaxometry
Field instability in traditional NMRD electromagnets is large compared to spectroscopic dispersion from NMR chemical shifts or inter-spin couplings, resulting in severe or even complete overlap of the signals from nuclei in different chemical groups or different compounds in a mixture. Although additional strategies may prove helpful to assign relaxation rates to distinct chemical groups, (e.g. selective deuteration or other isotopic substitution, inverse Laplace transforms), higher-resolution signal detection would be a more general and direct solution.
Here we illustrate simultaneous measurement and independent fitting of relaxation rates for two chemically distinct 1H environments in methanol (CH3OH). A scalar coupling (1JCH = 140.1 Hz) between 13C and 1H nuclei in the 13CH3 group shifts the corresponding NMR signal by around ±0.5JCH relative to that of the non-coupled OH, when measured at fields $${B}_{z,{\rm{I}}}\gg 2\pi \ {}^{1}{J}_{{\rm{CH}}}/({\gamma }_{{\rm{H}}}-{\gamma }_{{\rm{C}}})$$. The latter criterion defines the well-known weak heteronuclear coupling regime. Shifts by other multiples of 1JCH between 1 and 2 occur at lower fields. Experimental spectra and simulated positions of the NMR peaks are shown in Fig. 5. Line widths are on the order of 1 Hz, which should also allow spectral resolution of the CH3 groups in methanol, acetone (CH3COCH3, 1JCH = 127 Hz), acetic acid (CH3COOH, 1JCH = 130 Hz), dimethylsulfoxide (CH3SOCH3, 1JCH = 137 Hz) and other solvents. Isotopomer splittings that arise for couplings over more than one chemical bond, e.g. –13C12CH3, for which 2JCH = 5–30 Hz, would also be resolvable.
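The field dependence of the peak positions can be reproduced with a small spin Hamiltonian. The sketch below substitutes a single 13C–1H pair for the real 13CH3 group (an assumption that keeps the matrix 4 × 4 but preserves the weak-coupling doublet near νH ± J/2):

```python
import numpy as np

J = 140.1        # Hz, one-bond 13C-1H coupling in methanol
G_H = 42.577e6   # 1H gyromagnetic ratio, Hz/T
G_C = 10.708e6   # 13C gyromagnetic ratio, Hz/T

# Single spin-1/2 operators (units of hbar)
ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
iy = 0.5 * np.array([[0, -1j], [1j, 0]])
iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
e2 = np.eye(2)

def spectrum(b_field):
    """All eigenfrequency gaps (Hz) of the coupled two-spin Hamiltonian
    H = -gH*B*IzH - gC*B*IzC + J IH.IC; observable lines are a subset."""
    h = (-G_H * b_field * np.kron(iz, e2)
         - G_C * b_field * np.kron(e2, iz)
         + J * (np.kron(ix, ix) + np.kron(iy, iy) + np.kron(iz, iz)))
    ev = np.linalg.eigvalsh(h)
    return sorted({round(float(abs(ev[i] - ev[k])), 3)
                   for i in range(4) for k in range(i + 1, 4)})

# Weak coupling at 50 uT: the 1H doublet sits close to nu_H +/- J/2
# (displaced a few Hz by the second-order flip-flop term).
nu_h = G_H * 50e-6
lines = spectrum(50e-6)
```

Running `spectrum` over a range of fields traces out curves of the kind simulated in Fig. 5; the full 13CH3 case adds the 1H3 manifolds and the zero-field lines at 1J and 2J.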
By using the sequence shown in Fig. 3a to provide a series of T1,I-weighted spectra, a fitted relaxation rate 1/T1,1H = 0.44(5) s−1 (not plotted) is obtained for the OH subsystem. Within error, the value does not depend on field. Relaxation rates for the CH3 subsystem are also field independent in the weak-coupling regime above Bz,I = 10 μT, and are very close to those of OH: 1/T1,1H = 0.45(14) s−1 (not plotted). Both sets of rates refer to relaxation of the 1H spin species and are thus comparable with results obtained via conventional field cycling NMRD.
At lower fields, however, L-S type effects of the scalar coupling between 13C and 1H3 lead to a significant contrast in transverse relaxation rates. Of most interest is the peak tending to frequency 1JCH at zero field (orange curve in Fig. 5), which corresponds to singlet-to-triplet coherence in the isolated manifold formed between 13C and the 1H3 state of total spin quantum number 1/2. Here the transverse relaxation rate is around 2.5–3 times slower than for the non-coupled OH (Fig. 5, inset). The 13CH3 system, therefore, exhibits a type of long-lived spin order in the NMR ensemble47. Despite the presence of 1H–1H and 13C–1H dipole–dipole couplings, this singlet-to-triplet coherence is long-lived because it is less sensitive to relaxation by fields that are correlated across the 13C and 1H spin groups. This includes much of the intra-13CH3 dipole coupling as well as longer-range couplings. The result has potential importance in applications to probe dipole–dipole interactions at short distances away from the CH3, including intermolecular interactions.
## Discussion
The study demonstrates that unique, important information about nuclear spin relaxation in liquids can be obtained by fast field switching and tunable NMR detection at ultralow magnetic fields.
A basic advantage of the low-field NMR detection is that it eliminates many concerns about magnetic field homogeneity and stability, since magnetic fields are accurately and precisely controlled. This is shown by the result T2,I ≈ T1,I for the series of TEMPOL solutions. Additionally, as shown for methanol, the Fourier-transform NMR spectrum line width is adequate to resolve spin–spin couplings (even in the alumina system, where line widths exceed 10 Hz) and therefore components of liquid mixtures. Ultralow-field NMRD may therefore be able to probe other interplay, such as competitive adsorption between molecules and the pore surface. Besides nonuniformity of applied magnetic fields, conventional NMR is also confounded by sample heterogeneity, especially in multi-phase samples with internal magnetic susceptibility variation, including porous materials or metal regions. The τc values obtained here for alkanes in porous alumina are extremely long by FFC-NMR standards—comparable τcs are typically probed in high field by pulsed-field gradient (PFG) diffusometry and rotating-frame (T1ρ,I) relaxometry techniques48. In most if not all applications, both of the latter are highly susceptible to contamination by poor field homogeneity and radiofrequency offset errors.
Compared to high-field inductive-detected NMRD, ultralow-field OPM-detected NMRD currently has some limitations. A main limitation, resulting from the Hartmann–Hahn matching condition, is that the OPM Faraday rotation signal contains free-precession responses of the sensor atom and NMR sample spins at the same frequency. The atomic response is at least two orders of magnitude stronger than the NMR signal and can easily saturate the digitizer, which leads to a “dead time” on the order of the optical pumping time (10 ms, see Fig. 4d). This currently hinders applications in chemical systems where molecules interact more strongly, namely liquids in nanopores (e.g. zeolites, shale) and interfaces with hydrogen bonding, where relaxation rates are higher. In principle, Q-switching of the optical pumping beam49 is a method to accelerate magnetometer recovery after the magnetic field pulses and reduce the dead time down to the field switching time, well below 1 ms, without compromising sensitivity.
Ultralow-field FFC NMRD may also in the future expand study paths when enriched by nuclear spin hyperpolarization. As shown in Fig. 2c, a few tens of scans result in snr > 20 dB, even though the spins are only prepolarized to around 1 part in 108 at the 20 mT starting field. Nitroxide radical compounds such as TEMPOL are a source of higher electron spin polarization, around 1 part in 105 at 20 mT, that can be efficiently transferred to nuclei via the Overhauser effect at both high44 and ultralow50 magnetic fields. Hyperpolarization via surface-supported paramagnetic species and other spin-transfer catalysts may also be an option to study nuclear polarization buildup near pore surfaces, providing information that may differ from relaxation decay. TEMPOL and other persistent radicals are used to prepare hyperpolarized biochemical probes for clinically relevant in vivo observations of disease via MRI51. These systems could profit from a knowledge of signal decay mechanisms at ultralow magnetic fields, for sources of image contrast or to minimize polarization losses before imaging/detection.
## Methods
### Sample preparation
All samples studied in this work were contained in disposable glass vials (12 mm o.d., 20 mm length, 1.8 mL internal volume, 8–425 thread) sealed with a silicone septum and finger-tight polypropylene screw cap.
#### Preparation of TEMPOL samples
A 10 mM stock solution of the radical 4-hydroxy-2,2,6,6-tetramethylpiperidin-1-oxyl (Sigma Aldrich, CAS: 2226-96-2) was prepared in 5.0 mL deoxygenated milli-Q water and diluted to concentrations of 0.5, 1, 2, 3, 4, 5, and 10 mM with deoxygenated milli-Q water. The diluted solutions were not further de-gassed.
#### Preparation of porous materials samples
Cylindrical extrudate pellets of meso-porous γ-alumina (Alfa Aesar product 43855, lot Y04D039: 3 mm diameter, 3 mm length, 9 nm BJH mean pore diameter, Langmuir surface area 250 m2 g−1) and anatase titania (Alfa Aesar product 44429, lot Z05D026: 3 mm diameter, 4 mm length, 7–10 nm mean pore size, Langmuir surface area 150 m2 g−1) were obtained commercially. Pellets were oven-dried at 120 °C for 12 h to remove physisorbed H2O and then imbibed in neat n-alkane for at least 12 h after recording the dry mass. Excess liquid on the pellet outer surface was gently removed using tissue paper. The pellets were then placed in a vial (see Fig. 4a), sealed with the cap and the combined mass of pellet and imbibed hydrocarbon was recorded.
#### Preparation of methanol sample
0.9 g of 13C-methanol (13CH3OH 99%, Sigma Aldrich product 277177) was added to the sample vial without dilution, then followed by N2 bubbling (2–3 min) to displace dissolved paramagnetic O2.
### Optical magnetometer
The magnetometer used to detect 1H precession signals in the NMR samples was operated as follows. A cuboid borosilicate glass cell of inner dimensions 5 × 5 × 8 mm3 contained a droplet of rubidium-87 metal and 90 kPa N2 buffer gas (Twinleaf LLC). The cell was electrically heated to 150 °C to vaporize the alkali metal. A circularly polarized light beam along the z-axis (3 mW, tuned to the center of the collision-shifted D1 wavelength) optically pumped the atomic spin polarization to Sz ≈ 0.5. Faraday rotation in a second, linearly polarized light beam (10 mW, 65 GHz red-shifted from the pump, along the x-axis) was used to non-resonantly probe the Sx component of atomic polarization. On passing through the cell along the x-axis, the probe beam's axis of polarization was optically rotated by an angle proportional to Sx. The Faraday rotation was detected by polarimetry using a differential photodetector (Thorlabs PDB210A), which produced an analog voltage signal that was conditioned (amplified, filtered to eliminate high-frequency noise and dc offset) and digitized (60 ksps, 16-bit ± 5 V ADC range) before storage and further processing on a computer.
### Magnetic coils and shielding
The vapor cell and heating assembly was placed as close as possible to the NMR sample at a standoff distance d1 = 3.5 mm between outer walls of the vial and atomic vapor cell (see Fig. 2a). In order of increasing distance away from the NMR sample, d1 accounts for (i) S2 + S3 coil windings (34 AWG enameled copper wire, solenoid length 13 cm, diameter 14 mm), (ii) a carbon-fiber support structure, (iii) S1 coil windings (36 AWG enameled copper wire, solenoid length 2.5 cm, single layer), (iv) a water-cooling jacket (de-ionized water, flow rate 1 mL s−1) to remove heat deposited when the polarizing coil is energized and to maintain a stable sample temperature, (v) a PEEK support structure, and (vi) an air gap for further thermal insulation. The entire structure was operated within a cylindrical magnetic shield (Twinleaf LLC, model MS-1F) of 20 cm outer diameter and length 30 cm. The main axis of the cylinder was co-axial with the pump beam axis.
Field-to-current ratios inside each coil were calibrated using the frequency of 1H precession in de-ionized water. These were S2: 7.59(3) μT mA−1; S3: 7.50(3) μT mA−1; S4: 150.1(5) nT mA−1. The atomic spin precession frequency at the atomic vapor cell was used to calibrate the external field of coils S2: −11.1(3) nT mA−1 and S3: −4.4(2) nT mA−1, which equate to 1460 and 580 ppm of the field at the vial, respectively.
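As a quick arithmetic check, the quoted leakage in ppm follows directly from the two field-to-current ratios above (a Python sketch; the constants are the calibration values quoted in the text, while the function and variable names are ours):

```python
# Calibration ratios quoted above, converted to tesla per milliampere.
RATIO_SAMPLE = {"S2": 7.59e-6, "S3": 7.50e-6}   # field at the NMR vial
RATIO_CELL = {"S2": -11.1e-9, "S3": -4.4e-9}    # stray field at the vapor cell

def fields(coil, current_mA):
    """Return (field at sample, stray field at magnetometer cell) in tesla."""
    return RATIO_SAMPLE[coil] * current_mA, RATIO_CELL[coil] * current_mA

b_sample, b_cell = fields("S2", 1.0)
leak_ppm = abs(b_cell / b_sample) * 1e6
print(round(leak_ppm))  # prints 1462, i.e. the ~1460 ppm figure quoted above
```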
### Magnetic noise spectra
An ac test signal of ±8 pT along the y-axis was applied to calibrate the Faraday response as a function of frequency and z bias field. The calibration vs. frequency was used to scale the spectral response of the balanced photodetector from units of $\mathrm{V}/\sqrt{\mathrm{Hz}}$ into $\mathrm{T}/\sqrt{\mathrm{Hz}}$. Given the prior calibration of the magnetic field at the sample vial and magnetometer cell, the maximum magnetic response at a given bias field was confirmed to occur at the tuning condition, where the atomic Larmor frequency matched the frequency of the ac signal.
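The voltage-to-field scaling described above amounts to dividing the measured voltage spectral density by the frequency-dependent response mapped out with the test tone. A minimal NumPy sketch (the calibration numbers here are invented for illustration and are not the experimental values):

```python
import numpy as np

# Hypothetical response R(f) in V/T, as would be measured by applying the
# +/- 8 pT test tone at each frequency and reading the photodetector output.
freqs = np.array([100.0, 300.0, 550.0, 1000.0])    # Hz
response = np.array([2.0e9, 2.4e9, 2.5e9, 1.8e9])  # V/T (made-up values)

def to_field_asd(f, voltage_asd):
    """Convert a voltage amplitude spectral density (V/sqrt(Hz)) into a
    magnetic one (T/sqrt(Hz)) using the interpolated calibration."""
    return voltage_asd / np.interp(f, freqs, response)

# A 50 uV/sqrt(Hz) voltage floor at 550 Hz maps to ~2e-14 T/sqrt(Hz),
# i.e. a few tens of fT/sqrt(Hz).
b_asd = to_field_asd(550.0, 5.0e-5)
```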
### Field switching
Timing of the NMR pulse sequences and data acquisition were controlled by a microcontroller (Kinetis K20 series: time base 2 μs, precision 17 ns, CPU speed 120 MHz). Current to the polarizing coil was switched via a dual H-bridge circuit with parallel flyback diodes and the switching time was <1.0 ms. Coils S2 and S4 were connected to a low-noise precision current source (Twinleaf model CSB-10, 20-bit resolution over ±10 mA) with a low-pass LC filter in series, resulting in a combined switching and settling time of order 100 ms. The FFC solenoid coil S3 operated at a current <1 mA direct from the microcontroller digital-to-analog converter (12-bit resolution, 0–1 mA) for rapid and precise field switching without feedback controls. Typical S3 switching times were 0.25 ms and the accuracy (determined from standard error in the mean NMR center frequency over repeated scans at ωI/(2π) = 550 Hz, see Fig. 4b inset) was around 1 nT. The residual interior field of the MuMetal shield along the x, y, and z axes of ~10 nT was also compensated for.
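The ~1 nT setting accuracy is consistent with the quantization of the S3 drive: a 12-bit DAC spanning 0–1 mA, combined with the S3 calibration, gives a field step of about 1.8 nT (a back-of-envelope Python sketch; variable names are ours):

```python
# Figures quoted above: 12-bit DAC spanning 0-1 mA, and 7.50 uT/mA for S3.
FULL_SCALE_mA = 1.0
BITS = 12
FIELD_PER_mA = 7.50e-6  # T/mA

lsb_mA = FULL_SCALE_mA / (2 ** BITS)  # smallest current step
field_lsb = lsb_mA * FIELD_PER_mA     # smallest field step
# One DAC step is ~0.24 uA, i.e. ~1.8 nT, the same order as the ~1 nT
# accuracy quoted for the relaxation-field setting.
print(f"{lsb_mA * 1e3:.3f} uA per step, {field_lsb * 1e9:.2f} nT per step")
```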
Under steady-state conditions with the pre-polarizing coil turned off, the cooling system maintained a temperature of 27–28 °C at a thermocouple attached to the outside wall of the sample vial. The steady-state temperature rose to 30–31 °C when the polarizing coil was energized at 20 mT (2.2 A).
## Data availability
The raw data generated in this study have been deposited in the OpenAIRE database under accession code https://doi.org/10.5281/zenodo.4840653.
## Acknowledgements
The work described is funded by: EU H2020 Marie Skłodowska-Curie Actions project ITN ZULF-NMR (Grant Agreement No. 766402); Spanish MINECO projects OCARINA (Grant No. PGC2018-097056-B-I00), the Severo Ochoa program (Grant No. SEV-2015-0522); Generalitat de Catalunya through the CERCA program; Agència de Gestió d’Ajuts Universitaris i de Recerca Grant No. 2017-SGR-1354; Secretaria d’Universitats i Recerca del Departament d’Empresa i Coneixement de la Generalitat de Catalunya, co-funded by the European Union Regional Development Fund within the ERDF Operational Program of Catalunya (project QuantumCat, ref. 001-P-001644); Fundació Privada Cellex; Fundació Mir-Puig; MCD Tayler acknowledges financial support through the Junior Leader Postdoctoral Fellowship Program from “La Caixa” Banking Foundation (project LCF/BQ/PI19/11690021). The authors also thank Jordan Ward-Williams and Lynn Gladden (University of Cambridge) for providing samples of porous alumina and titania, and for discussions.
## Author information
### Contributions
M.C.D.T. proposed the study. S.B. prepared the samples, measured and analyzed the experimental data and together with M.C.D.T. built the experimental apparatus and made the theoretical interpretation. M.C.D.T. wrote the manuscript with input from all authors. All authors reviewed the manuscript and suggested improvements. M.C.D.T. and M.W.M. supervised the overall research effort.
### Corresponding author
Correspondence to Michael C. D. Tayler.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Bodenstedt, S., Mitchell, M.W. & Tayler, M.C.D. Fast-field-cycling ultralow-field nuclear magnetic relaxation dispersion. Nat Commun 12, 4041 (2021). https://doi.org/10.1038/s41467-021-24248-9
https://www.physicsforums.com/threads/geometric-and-arithmetic-series.692892/

# Geometric and arithmetic series
1. May 21, 2013

### Government$

**1. The problem statement, all variables and given/known data**

If $a, b, c$ are at the same time the fifth, seventh and thirty-seventh members of both an arithmetic and a geometric progression, then $a^{b-c}b^{c-a}c^{a-b}$ is:

**3. The attempt at a solution**

I tried solving the system of equations, but I have four unknowns. I was able to reduce it to one unknown: $12r^{32} - 32r^{12} + 20 = 0$, where $r$ is the common ratio in the geometric series. I have no idea how to solve this. Maybe trying to solve the system isn't the way to go?

2. May 21, 2013

### Government$

Well, it's evident that two solutions are $1$ and $-1$, but what kind of geometric progression has $r = 1$ or $r = -1$?

3. May 21, 2013

### Yukoel

Hello Government$,

Do you mean that $a, b$ and $c$ are parts of an arithmetico-geometric sequence (as in saying that they can be represented as the product of corresponding terms of an arithmetic and a geometric series), or are you implying that there exist separate arithmetic and geometric progressions satisfying the condition?

Regards
Yukoel

Last edited: May 21, 2013

4. May 21, 2013

### Government$

As I have understood it, there exist a separate arithmetic and a separate geometric progression. This is the first time I hear of an arithmetico-geometric series.
5. May 21, 2013
### Yukoel
Hello,
Thanks for clarifying this. Well, the way I think of it doesn't require finding the common difference and/or common ratio. Try writing them separately as the nth ($n = 5, 7$ and $37$, as given) terms of the geometric and arithmetic sequences (don't be disheartened by the number of unknowns :) ). Now look at the expression. To simplify it you will want to multiply the bases easily; by which sequence would you represent $a, b$ and $c$? Once multiplied, you will need to add the exponents easily. Which sequence's use makes that easier?
Regards
Yukoel
6. May 21, 2013
### Curious3141
This is a simple problem. You're told that a,b,c are particular terms of an arithmetic progression (A.P.) and a geometric progression (G.P.). So just use symbols to represent the first term and common difference of that A.P. and the first term and common ratio of the G.P. and express a,b,c both ways.
You're asked to evaluate an expression that's the product of powers of a, b and c. For the bases (e.g. a or b), use the G.P. representation. For the exponents (e.g. b-c), use the A.P. representation. Do the algebra using the laws of exponents and you'll be pleasantly surprised at what cancels out.
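Carrying out the computation suggested above (a sketch: write the G.P. with first term $g$ and ratio $r$, and the A.P. with first term $t$ and common difference $d$, so that $a, b, c$ are the 5th, 7th and 37th members of each):

```latex
% G.P. form and A.P. form of the three terms:
a = g r^{4}, \qquad b = g r^{6}, \qquad c = g r^{36},
\qquad\text{and}\qquad
a = t + 4d, \qquad b = t + 6d, \qquad c = t + 36d.
% Use the G.P. form for the bases:
a^{b-c}\, b^{c-a}\, c^{a-b}
  = g^{(b-c)+(c-a)+(a-b)}\; r^{4(b-c)+6(c-a)+36(a-b)}
  = g^{0}\; r^{30a - 32b + 2c}.
% Substitute the A.P. form into the remaining exponent:
30a - 32b + 2c = 30(t+4d) - 32(t+6d) + 2(t+36d) = 0,
\qquad\text{so}\qquad
a^{b-c} b^{c-a} c^{a-b} = 1.
```

So the value is $1$, independent of which particular progressions contain $a, b, c$.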
https://biblio.ugent.be/publication/3125525
# Characterization and engineering of epimerases for the production of rare sugars
Koen Beerens (UGent)
(2013)
Abstract
Tagatose is a rare sugar that can be applied for multiple reasons in different industries, for instance as a low-caloric sweetener in dietary food, as well as additive in detergents, cosmetics, and pharmaceutical formulations, but also as drug molecule itself in diabetes treatment. As for other rare sugars, tagatose is not abundantly present in nature and therefore it has to be made from more available sugars in order to use it. It is currently being produced starting from galactose. However, to be able to compete with the current predominant sweeteners (like sucrose and high fructose corn syrup), it should be produced in much higher quantities than is possible when starting from galactose. In order to overcome this issue, tagatose production should start from a more widely available (and cheaper) substrate. Fructose and glucose are two such very abundant substrates; however, no (bio)catalysts are available to convert fructose or glucose into tagatose. Nevertheless, some enzymes were found to perform similar reactions and therefore are promising to one day become biocatalysts for tagatose production.

In this work, the cloning and analysis of two totally different C4-epimerases is described in respect to their capability of tagatose production. The first enzyme is L-ribulose-5-phosphate 4-epimerase from Geobacillus thermodenitrificans, an aldolase-related epimerase that would require an adaptation of the substrate binding site around the phosphate moiety of the substrate. The second C4-epimerase is naturally active on nucleotide activated sugars, namely the UDP-Glc(NAc) 4-epimerase from Marinithermus hydrothermalis. The major challenge in the engineering of this UDP-hexose 4-epimerase is trying to get rid of the necessity of the UDP-group of the substrate and making it active on free monosaccharides.

At first, the Geobacillus L-ribulose-5-phosphate 4-epimerase gene was cloned in an appropriate expression vector and expressed in E. coli. The recombinant enzyme was first characterized with respect to affinity for L-ribulose-5-phosphate, metal ion activation and stability at 37 °C. To that end, its natural substrate had to be produced first, which was accomplished by phosphorylation of L-ribulose using ATP as phosphate donor and recombinantly expressed L-ribulokinase as biocatalyst.

After characterization, mutagenesis was achieved both randomly and semi-rationally by error-prone PCR and site saturation mutagenesis, respectively. To be able to detect enzyme variants harboring (improved) tagatose 4-epimerase activity among the thousands mutant enzymes, two 'identification' systems were developed. Two selection strains were developed that can be used for the Darwinian selection of improved enzyme variants, while also a colorimetric screening assay has been created. Although several millions and thousands of mutants were analyzed using the selection strains and screening assay, respectively, no variants were confirmed to possess (improved) tagatose 4-epimerase activity.

Secondly, the UDP-hexose 4-epimerase from Marinithermus hydrothermalis was also cloned and heterologously expressed in E. coli. A thorough characterization of this second epimerase was performed, revealing that it belongs to the type 2 UDP-hexose 4-epimerases. As expected for a type 2 epimerase, its substrate specificity could easily be altered by mutagenesis of a single residue, namely the so-called gatekeeper. This also confirms the previously reported hypothesis about substrate specificity in type 1 and type 2 epimerases.

Mutational analysis of the UDP-hexose 4-epimerase uncovered two new features that can be found in these epimerases. The Marinithermus enzyme was found to possess a TxnYx3K catalytic triad, rather than the usual serine containing triad (SxnYx3K). The presence of the threonine's methyl function was found to be of more importance for the enzyme's affinity for N-acetylated UDP-sugars than for non-acetylated substrates. As such, the TxnYx3K triad might be a new substrate specificity determinant for type 2 UDP-hexose 4-epimerases. The second new feature was the presence of two consecutive glycine residues next to the catalytic threonine, which were found to be important for activity of the enzyme with non-acetylated and even bigger importance for activity on N-acetylated substrates. In an attempt to identify new determinants for specificity towards UDP-GlcNAc, two loop mutants were created but they were found to be inactive, most likely due to dispositioning of the catalytic tyrosine, which results in the disruption of the subtle catalytic chemistry.

In addition, the Marinithermus UDP-hexose 4-epimerase was also tested for its ability to convert the free monosaccharides fructose/tagatose, glucose/galactose and the phosphorylated α-Glc-1-P. Furthermore, also the E. coli UDP-hexose 4-epimerase was cloned and also here no epimerase activity could be detected on free monosaccharides, in contrast to what has previously been reported.
Keywords
tagatose, rare sugars, Enzyme engineering, epimerase
• Full text (open access, PDF, 3.45 MB): Doctoraat - Koen Beerens - Epimerases for rare sugar production.pdf
## Citation
Chicago
Beerens, Koen. 2013. “Characterization and Engineering of Epimerases for the Production of Rare Sugars”. Ghent, Belgium: Ghent University. Faculty of Bioscience Engineering.
APA
Beerens, K. (2013). Characterization and engineering of epimerases for the production of rare sugars. Ghent University. Faculty of Bioscience Engineering, Ghent, Belgium.
Vancouver
1.
Beerens K. Characterization and engineering of epimerases for the production of rare sugars. [Ghent, Belgium]: Ghent University. Faculty of Bioscience Engineering; 2013.
MLA
Beerens, Koen. “Characterization and Engineering of Epimerases for the Production of Rare Sugars.” 2013 : n. pag. Print.
@phdthesis{3125525,
author = {Beerens, Koen},
isbn = {9789059895881},
language = {eng},
pages = {198},
publisher = {Ghent University. Faculty of Bioscience Engineering},
school = {Ghent University},
title = {Characterization and engineering of epimerases for the production of rare sugars},
year = {2013},
}
https://matholympiad.org.bd/forum/viewtopic.php?f=13&t=6227

## BDMO Secondary National 2021 #7
Mehrab4226
Posts: 230
Joined: Sat Jan 11, 2020 1:38 pm
BDMO Secondary National 2021 #7
কোনো ধনাত্মক পূর্ণসংখ্যা $$n$$-এর জন্য $$s(n)$$ আর $$c(n)$$ হলো যথাক্রমে $$n$$-এর পূর্ণবর্গ আর পূর্ণঘন উৎপাদকের সংখ্যা। একটা ধনাত্মক পূর্ণসংখ্যা $$n$$-কে নায্য বলা হবে যদি $$s(n)=c(n)>1$$ হয়। $$80$$-এর চেয়ে ছোট কতগুলো নায্য সংখ্যা আছে?
For a positive integer $n$, let $s(n)$ and $c(n)$ be the number of divisors of $n$ that are perfect squares and perfect cubes, respectively. A positive integer $n$ is called fair if $s(n)=c(n)>1$. Find the number of fair integers less than $80$.
The Mathematician does not study math because it is useful; he studies it because he delights in it, and he delights in it because it is beautiful.
-Henri Poincaré
Pro_GRMR
Posts: 46
Joined: Wed Feb 03, 2021 1:58 pm
### Re: BDMO Secondary National 2021 #7
Mehrab4226 wrote:
Sat Apr 10, 2021 1:26 pm
For a positive integer $n$, let $s(n)$ and $c(n)$ be the number of divisors of $n$ that are perfect squares and perfect cubes, respectively. A positive integer $n$ is called fair if $s(n)=c(n)>1$. Find the number of fair integers less than $80$.
Let $n$ be in prime-factorized form $n = p_1^{e_1}p_2^{e_2}\dots p_k^{e_k}$.
We note that the number of divisors of $p_i^{e_i}$ that are perfect squares is $\lfloor{\frac{e_i}{2}}\rfloor+1$, and similarly $\lfloor{\frac{e_i}{3}}\rfloor+1$ for perfect cubes.
So, $n$ has a total of $(\lfloor{\frac{e_1}{2}}\rfloor+1)(\lfloor{\frac{e_2}{2}}\rfloor+1)\dots(\lfloor{\frac{e_k}{2}}\rfloor+1)$ perfect-square divisors and $(\lfloor{\frac{e_1}{3}}\rfloor+1)(\lfloor{\frac{e_2}{3}}\rfloor+1)\dots(\lfloor{\frac{e_k}{3}}\rfloor+1)$ perfect-cube divisors.
Since $\lfloor{\frac{e}{2}}\rfloor \geq \lfloor{\frac{e}{3}}\rfloor$ for every non-negative integer $e$, the two products are equal exactly when $\lfloor \frac{e_i}{2} \rfloor = \lfloor \frac{e_i}{3} \rfloor$ for each $i$, which holds precisely when each $e_i$ is $1$ or $3$. The condition $s(n)=c(n)>1$ then forces at least one $e_i$ to equal $3$. So, a positive integer $n$ is fair if and only if every exponent in its prime factorization is $1$ or $3$, with at least one exponent equal to $3$.
Now, we count the number of these numbers below $80$ and get all possible values of $n$ are $8, 24, 27, 40, 54, 56$ and so, $\boxed{6}$ values in total.
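The count can be double-checked by brute force (a Python sketch, ours, not part of the original post):

```python
import math

def is_fair(n):
    """True if n has equally many perfect-square and perfect-cube
    divisors, and more than one of each."""
    s = c = 0
    for d in range(1, n + 1):
        if n % d:
            continue
        r = math.isqrt(d)
        if r * r == d:
            s += 1
        k = round(d ** (1 / 3))
        if any((k + t) ** 3 == d for t in (-1, 0, 1)):  # float-safe cube test
            c += 1
    return s == c > 1

fair = [n for n in range(1, 80) if is_fair(n)]
print(fair)  # [8, 24, 27, 40, 54, 56] -> 6 fair integers, as claimed
```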
"When you change the way you look at things, the things you look at change." - Max Planck | 2021-10-23 20:25:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.808952271938324, "perplexity": 799.6525792055301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585768.3/warc/CC-MAIN-20211023193319-20211023223319-00580.warc.gz"} |
http://xrpp.iucr.org/Db/ch2o2v0001/
International Tables for Crystallography, Volume D: Physical properties of crystals. Edited by A. Authier.
International Tables for Crystallography (2013). Vol. D, ch. 2.2, pp. 314-333
https://doi.org/10.1107/97809553602060000912
## Chapter 2.2. Electrons
K. Schwarz*
*Institut für Materialchemie, Technische Universität Wien, Getreidemarkt 9/165-TC, A-1060 Vienna, Austria
Correspondence e-mail: kschwarz@theochem.tuwien.ac.at
The electronic structure of a solid, characterized by its energy band structure, is the fundamental quantity that determines the ground state of the solid and a series of excitations involving electronic states. In the first part of this chapter, several basic concepts are summarized in order to establish the notation used and to repeat essential theorems from group theory and solid-state physics that provide the definitions that are needed in this context (Brillouin zones, symmetry operators, Bloch theorem, space-group symmetry). Next the quantum-mechanical treatment, especially density functional theory, is described and the commonly used methods of band theory are outlined (the linear combination of atomic orbitals, tight binding, pseudo-potential schemes, the augmented plane wave method, the linear augmented plane wave method, the Korringa–Kohn–Rostoker method, the linear combination of muffin-tin orbitals, the Car–Parrinello method etc.). The linear augmented plane wave scheme is presented explicitly so that concepts in connection with energy bands can be explained. The electric field gradient is discussed to illustrate a tensorial quantity. In the last section, a few examples illustrate the topics of the chapter.
### 2.2.1. Introduction
The electronic structure of a solid, characterized by its energy band structure, is the fundamental quantity that determines the ground state of the solid and a series of excitations involving electronic states. In this chapter, we first summarize several basic concepts in order to establish the notation used here and to repeat essential theorems from group theory and solid-state physics that provide definitions which we need in this context. Next the quantum-mechanical treatment, especially density functional theory, is described and the commonly used methods of band theory are outlined. One scheme is presented explicitly so that concepts in connection with energy bands can be explained. The electric field gradient is discussed to illustrate a tensorial quantity and a few examples illustrate the topics of this chapter.
### 2.2.2. The lattice
#### 2.2.2.1. The direct lattice and the Wigner–Seitz cell
The three unit-cell vectors $\mathbf{a}_1$, $\mathbf{a}_2$ and $\mathbf{a}_3$ define the parallelepiped of the unit cell. We define
(i) a translation vector of the lattice (upper case) as a primitive vector (integral linear combination) of all translations, $$\mathbf{T} = n_1\mathbf{a}_1 + n_2\mathbf{a}_2 + n_3\mathbf{a}_3, \quad n_i \hbox{ integers},$$ (ii) but a vector in the lattice (lower case) as $$\mathbf{r} = x\mathbf{a}_1 + y\mathbf{a}_2 + z\mathbf{a}_3$$ with real coefficients $x$, $y$, $z$.
From the seven possible crystal systems one arrives at the 14 possible space lattices, based on both primitive and non-primitive (body-centred, face-centred and base-centred) cells, called the Bravais lattices [see Chapter 9.1 of International Tables for Crystallography, Volume A (2005)]. Instead of describing these cells as parallelepipeds, we can find several types of polyhedra with which we can fill space by translation. A very important type of space filling is obtained by the Dirichlet construction. Each lattice point is connected to its nearest neighbours and the corresponding bisecting (perpendicular) planes will delimit a region of space which is called the Dirichlet region, the Wigner–Seitz cell or the Voronoi cell. This cell is uniquely defined and has additional symmetry properties.
When we add a basis to the lattice (i.e. the atomic positions in the unit cell) we arrive at the well known 230 space groups [see Part 3 of International Tables for Crystallography, Volume A (2005)].
#### 2.2.2.2. The reciprocal lattice and the Brillouin zone
Owing to the translational symmetry of a crystal, it is convenient to define a reciprocal lattice, which plays a dominating role in describing electrons in a solid. The three unit vectors of the reciprocal lattice are given according to the standard definition by $$\mathbf{b}_1 = 2\pi\,\frac{\mathbf{a}_2\times\mathbf{a}_3}{\mathbf{a}_1\cdot(\mathbf{a}_2\times\mathbf{a}_3)}$$ (and cyclic permutations for $\mathbf{b}_2$ and $\mathbf{b}_3$), so that $\mathbf{b}_i\cdot\mathbf{a}_j = 2\pi\delta_{ij}$, where the factor $2\pi$ is commonly used in solid-state physics in order to simplify many expressions. Strictly speaking (in terms of mathematics) this factor should not be included [see Section 1.1.2.4 of the present volume and Chapter 1.1 of International Tables for Crystallography, Volume B (2001)], since the (complete) reciprocity is lost, i.e. the reciprocal lattice of the reciprocal lattice is no longer the direct lattice.
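For illustration, the duality relation $\mathbf{b}_i\cdot\mathbf{a}_j = 2\pi\delta_{ij}$ can be verified numerically. The following sketch (my own, using NumPy; the cell vectors are an arbitrary non-degenerate example) builds the reciprocal vectors from the cross-product definition:

```python
import numpy as np

# direct lattice vectors as rows (an arbitrary, non-degenerate cell)
A = np.array([[3.0, 0.0, 0.0],
              [1.0, 2.5, 0.0],
              [0.5, 0.7, 4.0]])

# b_i = 2*pi * (a_j x a_k) / (a_1 . (a_2 x a_3)), cyclic in (i, j, k)
volume = np.dot(A[0], np.cross(A[1], A[2]))
B = 2 * np.pi * np.array([np.cross(A[1], A[2]),
                          np.cross(A[2], A[0]),
                          np.cross(A[0], A[1])]) / volume

# duality check: b_i . a_j = 2*pi*delta_ij  (equivalently B A^T = 2*pi*I)
assert np.allclose(B @ A.T, 2 * np.pi * np.eye(3))
```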
In analogy to the direct lattice we define
(i) a vector of the reciprocal lattice (upper case) as $$\mathbf{K} = h_1\mathbf{b}_1 + h_2\mathbf{b}_2 + h_3\mathbf{b}_3, \quad h_i \hbox{ integers},$$ (ii) a vector in the lattice (lower case) as $$\mathbf{k} = k_1\mathbf{b}_1 + k_2\mathbf{b}_2 + k_3\mathbf{b}_3.$$ From (2.2.2.5) and (2.2.2.1) it follows immediately that $$\mathbf{K}\cdot\mathbf{T} = 2\pi n, \quad n \hbox{ an integer}.$$
A construction identical to the Wigner–Seitz cell delimits in reciprocal space a cell conventionally known as the first Brillouin zone (BZ), which is very important in the band theory of solids. There are 14 first Brillouin zones according to the 14 Bravais lattices.
### 2.2.3. Symmetry operators
The concepts of symmetry operations in connection with a quantum-mechanical treatment of the electronic states are essential for an understanding of the electronic structure. In this context the reader is referred, for example, to the book by Altmann (1994).
For the definition of symmetry operators we use in the whole of this chapter the active picture, which has become the standard in solid-state physics. This means that the whole configuration space is rotated, reflected or translated, while the coordinate axes are kept fixed.
A translation is given by $$t\,\mathbf{r} = \mathbf{r} + \mathbf{t},$$ where t on the left-hand side corresponds to a symmetry (configuration-space) operator.
#### 2.2.3.1. Transformation of functions
Often we are interested in a function (e.g. a wavefunction) and wish to know how it transforms under the configuration operator g which acts on $\mathbf{r}$. For this purpose it is useful to introduce a function-space operator $\widetilde{g}$ which defines how to modify the function in the transformed configuration space so that it agrees with the original function at the original coordinate $\mathbf{r}$: $$\widetilde{g}\,f(g\mathbf{r}) = f(\mathbf{r}).$$ This must be valid for all points $\mathbf{r}$ and thus also for $g^{-1}\mathbf{r}$, leading to the alternative formulation $$\widetilde{g}\,f(\mathbf{r}) = f(g^{-1}\mathbf{r}).$$ The symmetry operations form a group G of configuration-space operations with the related group $\widetilde{G}$ of the function-shape operators $\widetilde{g}$. Since the multiplication rules are preserved, these two groups are isomorphic.
#### 2.2.3.2. Transformation of operators
In a quantum-mechanical treatment of the electronic states in a solid we have the following different entities: points in configuration space, functions defined at these points and (quantum-mechanical) operators acting on these functions. A symmetry operation transforms the points, the functions and the operators in a clearly defined way.
Consider an eigenvalue equation of an operator $A$ (e.g. the Hamiltonian): $$A\,f(\mathbf{r}) = a\,f(\mathbf{r}),$$ where $f(\mathbf{r})$ is a function of $\mathbf{r}$. When g acts on $\mathbf{r}$, the function-space operator $\widetilde{g}$ acts [according to (2.2.3.4)] on $f(\mathbf{r})$ yielding $\widetilde{g}f$: $$\widetilde{g}\,f(\mathbf{r}) = f(g^{-1}\mathbf{r}).$$ By putting $f = \widetilde{g}^{-1}\widetilde{g}f$ from (2.2.3.7) into (2.2.3.6), we obtain $$A\,\widetilde{g}^{-1}\widetilde{g}\,f(\mathbf{r}) = a\,\widetilde{g}^{-1}\widetilde{g}\,f(\mathbf{r}).$$ Multiplication from the left by $\widetilde{g}$ yields $$\bigl(\widetilde{g}A\widetilde{g}^{-1}\bigr)\,\widetilde{g}f(\mathbf{r}) = a\,\widetilde{g}f(\mathbf{r}).$$ This defines the transformed operator $$\widetilde{A} = \widetilde{g}A\widetilde{g}^{-1},$$ which acts on the transformed function $\widetilde{g}f$ that is given by the original function but at position $g^{-1}\mathbf{r}$.
#### 2.2.3.3. The Seitz operators
The most general space-group operation is of the form $\{p|\mathbf{w}\}$ with the point-group operation p (a rotation, reflection or inversion) followed by a translation $\mathbf{w}$: $$\{p|\mathbf{w}\}\,\mathbf{r} = p\mathbf{r} + \mathbf{w}.$$ With this definition it is easy to prove the multiplication rule $$\{p_1|\mathbf{w}_1\}\{p_2|\mathbf{w}_2\} = \{p_1p_2|p_1\mathbf{w}_2 + \mathbf{w}_1\}$$ and define the inverse of a Seitz operator as $$\{p|\mathbf{w}\}^{-1} = \{p^{-1}|-p^{-1}\mathbf{w}\},$$ which satisfies $$\{p|\mathbf{w}\}^{-1}\{p|\mathbf{w}\} = \{E|\mathbf{0}\},$$ where $\{E|\mathbf{0}\}$ does not change anything and thus is the identity of the space group G.
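These relations are easy to verify numerically. The following sketch (my own, using NumPy; the fourfold rotation and the fractional translation are arbitrary examples) implements $\{p|\mathbf{w}\}$ and checks the multiplication rule and the inverse:

```python
import numpy as np

class Seitz:
    """Space-group operation {p|w}: r -> p r + w."""
    def __init__(self, p, w):
        self.p = np.asarray(p, dtype=float)  # 3x3 point-group matrix
        self.w = np.asarray(w, dtype=float)  # translation part

    def __call__(self, r):
        return self.p @ r + self.w

    def __mul__(self, other):
        # multiplication rule: {p1|w1}{p2|w2} = {p1 p2 | p1 w2 + w1}
        return Seitz(self.p @ other.p, self.p @ other.w + self.w)

    def inv(self):
        # inverse: {p|w}^-1 = {p^-1 | -p^-1 w}
        pinv = np.linalg.inv(self.p)
        return Seitz(pinv, -pinv @ self.w)

# a fourfold rotation about z combined with a fractional translation
rot4 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
g = Seitz(rot4, [0.5, 0.0, 0.5])

r = np.array([0.1, 0.2, 0.3])
# the composition rule agrees with applying the operations in sequence
assert np.allclose((g * g)(r), g(g(r)))
# g^-1 g acts as the identity {E|0}
assert np.allclose((g.inv() * g)(r), r)
```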
#### 2.2.3.4. The important groups and their first classification
Using the Seitz operators, we can classify the most important groups as we need them at the beginning of this chapter:
(i) the space group, which consists of all elements $G = \{\{p|\mathbf{w}\}\}$; (ii) the point group (without any translations) $P = \{\{p|\mathbf{0}\}\}$; and (iii) the lattice translation subgroup $T = \{\{E|\mathbf{T}\}\}$, which is an invariant subgroup of G, i.e. $gTg^{-1} = T$ for all $g \in G$. Furthermore T is an Abelian group, i.e. the operation of two translations commute ($t_1t_2 = t_2t_1$) (see also Section 1.2.3.1 of the present volume). A useful consequence of the commutation property is that T can be written as a direct product of the corresponding one-dimensional translations, $$T = T_1 \otimes T_2 \otimes T_3.$$ (iv) A symmorphic space group contains no fractional translation vectors and thus P is a subgroup of G, i.e. $P \subset G$. (v) In a non-symmorphic space group, however, some p are associated with fractional translation vectors $\mathbf{v}$. These $\mathbf{v}$ do not belong to the translation lattice but when they are repeated a specific integer number of times they give a vector of the lattice. In this case, $\{p|\mathbf{0}\}$ can not belong to G for all p. (vi) The Schrödinger group is the group S of all operations $\widetilde{g}$ that leave the Hamiltonian invariant, i.e. $\widetilde{g}H\widetilde{g}^{-1} = H$ for all $\widetilde{g} \in S$. This is equivalent to the statement that $\widetilde{g}$ and $H$ commute: $[\widetilde{g}, H] = 0$. From this commutator relation we find the degenerate states in the Schrödinger equation, namely that $\psi$ and $\widetilde{g}\psi$ are degenerate with the eigenvalue E whenever $H\psi = E\psi$, as follows from the three equations $$H\psi = E\psi, \quad \widetilde{g}H\psi = \widetilde{g}E\psi, \quad H(\widetilde{g}\psi) = E(\widetilde{g}\psi).$$
### 2.2.4. The Bloch theorem
The electronic structure of an infinite solid looks so complicated that it would seem impossible to calculate it. Two important steps make the problem feasible. One is the single-particle approach, in which each electron moves in an average potential $V(\mathbf{r})$ according to a Schrödinger equation (written in Rydberg atomic units) $$\left[-\nabla^2 + V(\mathbf{r})\right]\psi(\mathbf{r}) = E\,\psi(\mathbf{r}),$$ in which the kinetic energy is represented by the first operator. The second important concept is the translational symmetry, which leads to Bloch functions. The single-particle aspect will be discussed later (for details see Sections 2.2.9 and 2.2.10).
#### 2.2.4.1. A simple quantum-mechanical derivation
In order to derive the Bloch theorem, we can simplify the problem by considering a one-dimensional case with a lattice constant a. [The generalization to the three-dimensional case can be done easily according to (2.2.3.15).] The one-dimensional Schrödinger equation is $$\left[-\frac{d^2}{dx^2} + V(x)\right]\psi(x) = E\,\psi(x),$$ where $V(x)$ is invariant under translations, i.e. $V(x+a) = V(x)$. We define a translation operator t according to (2.2.3.1) for the translation by one lattice constant and apply its functional counterpart $\widetilde{t}$ to the potential, which gives [according to (2.2.3.4)] $$\widetilde{t}\,V(x) = V(x+a) = V(x).$$ The first part in $H$ corresponds to the kinetic energy operator, which is also invariant under translations. Therefore, since $t \in T$ (the lattice translation subgroup) and $t \in S$ (the Schrödinger group), $\widetilde{t}$ commutes with $H$, i.e. the commutator vanishes, $[\widetilde{t}, H] = 0$. This situation was described above [see (2.2.3.16)–(2.2.3.18)] and leads to the fundamental theorem of quantum mechanics which states that when two operators commute the eigenvectors of the first must also be eigenvectors of the second. Consequently we have $$H\psi(x) = E\,\psi(x), \quad \widetilde{t}\,\psi(x) = \lambda\,\psi(x),$$ where $\lambda$ is the eigenvalue corresponding to the translation by the lattice constant a. The second equation can be written explicitly as $$\psi(x+a) = \lambda\,\psi(x)$$ and tells us how the wavefunction changes from one unit cell to the neighbouring unit cell. Notice that the electron density $|\psi(x)|^2$ must be translationally invariant and thus it follows $$|\lambda|^2 = 1,$$ which is a necessary (but not sufficient) condition for defining $\lambda$.
#### 2.2.4.2. Periodic boundary conditions
We can expect the bulk properties of a crystal to be insensitive to the surface and also to the boundary conditions imposed, which we therefore may choose to be of the most convenient form. Symmetry operations are covering transformations and thus we have an infinite number of translations in T, which is most inconvenient. A way of avoiding this is provided by periodic boundary conditions (Born–von Karman). In the present one-dimensional case this means that the wavefunction becomes periodic in a domain of length $Na$ (with integer N number of lattice constants a), i.e. $$\psi(x + Na) = \psi(x).$$ According to our operator notation (2.2.4.6), we have the following situation when the translation t is applied n times: $$\widetilde{t}^{\,n}\psi(x) = \lambda^n\psi(x) = \psi(x + na).$$ It follows immediately from the periodic boundary condition (2.2.4.9) that $$\lambda^N = 1$$ with the obvious solution $$\lambda = e^{2\pi in/N}, \quad n = 0, 1, \ldots, N-1.$$ Here it is convenient to introduce a notation $$k = \frac{2\pi}{a}\,\frac{n}{N}$$ so that we can write $\lambda = e^{ika}$. Note that k is quantized due to the periodic boundary conditions according to (2.2.4.13). Summarizing, we have the Bloch condition (for the one-dimensional case): $$\psi(x+a) = e^{ika}\,\psi(x),$$ i.e. when we change x by one lattice constant a the wavefunction at x is multiplied by a phase factor $e^{ika}$. At the moment (2.2.4.13) suggests the use of k as label for the wavefunction $\psi_k(x)$.
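The quantized k values and the unimodular eigenvalues $\lambda$ can be illustrated numerically (my own sketch using NumPy; the values of a and N are arbitrary choices):

```python
import numpy as np

a = 2.0   # lattice constant (arbitrary units)
N = 8     # number of cells in the Born-von Karman domain

# allowed k values: k = (2*pi/a) * n/N for n = 0, ..., N-1
k = 2 * np.pi / a * np.arange(N) / N

# each Bloch eigenvalue lambda = exp(i k a) satisfies lambda^N = 1
lam = np.exp(1j * k * a)
assert np.allclose(lam ** N, 1.0)
# and |lambda| = 1, as required by translational invariance of |psi|^2
assert np.allclose(np.abs(lam), 1.0)
```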
Generalization to three dimensions leads to the exponential $e^{i\mathbf{k}\cdot\mathbf{T}}$ with $\mathbf{k} = k_1\mathbf{b}_1 + k_2\mathbf{b}_2 + k_3\mathbf{b}_3$ and thus to the Bloch condition $$\psi(\mathbf{r} + \mathbf{T}) = e^{i\mathbf{k}\cdot\mathbf{T}}\,\psi(\mathbf{r}),$$ or written in terms of the translational operator [see (2.2.3.15)] $$\widetilde{t}_{\mathbf{T}}\,\psi(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{T}}\,\psi(\mathbf{r}).$$ The eigenfunctions that satisfy (2.2.4.17) are called Bloch functions and have the form $$\psi_{\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\,u_{\mathbf{k}}(\mathbf{r}),$$ where $u_{\mathbf{k}}(\mathbf{r})$ is a periodic function in the lattice, $u_{\mathbf{k}}(\mathbf{r} + \mathbf{T}) = u_{\mathbf{k}}(\mathbf{r})$, and $\mathbf{k}$ is a vector in the reciprocal lattice [see (2.2.2.6)] that plays the role of the quantum number in solids. The vector $\mathbf{k}$ can be chosen in the first BZ, because any $\mathbf{k}'$ that differs from $\mathbf{k}$ by just a lattice vector $\mathbf{K}$ of the reciprocal lattice has the same Bloch factor and the corresponding wavefunction satisfies the Bloch condition again, since $$e^{i\mathbf{k}'\cdot\mathbf{T}} = e^{i(\mathbf{k}+\mathbf{K})\cdot\mathbf{T}} = e^{i\mathbf{k}\cdot\mathbf{T}}\,e^{i\mathbf{K}\cdot\mathbf{T}},$$ where the factor $e^{i\mathbf{K}\cdot\mathbf{T}}$ is unity according to (2.2.2.7). Since these two functions, $\psi_{\mathbf{k}}$ and $\psi_{\mathbf{k}'}$, belong to the same Bloch factor they are equivalent. A physical interpretation of the Bloch states will be given in Section 2.2.8.
#### 2.2.4.3. A simple group-theoretical approach
Let us repeat a few fundamental definitions of group theory: For any symmetry operation $g \in G$, the product $gg_ig^{-1}$ can always be formed for any $g_i \in G$ and defines the conjugate element of $g_i$ by g. Given any operation $g_i$, its class is defined as the set of all its conjugates under all operations $g \in G$. What we need here is an important property of classes, namely that no two classes have any element in common so that any group can be considered as a sum of classes.
Assuming periodic boundary conditions with $N_1, N_2, N_3$ number of primitive cells along the axes $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3$, respectively, a lump of crystal with $N = N_1N_2N_3$ unit cells is studied. The translation subgroup T contains the general translation operators $\{E|\mathbf{T}\}$, which [using (2.2.3.15)] can be written as a product of three one-dimensional translations, where each factor belongs to one of the three axes. Since T is commutative (Abelian), each operation of T is its own class and thus the number of classes equals its order, namely N. From the general theorem that the squares of the dimensions of all irreducible representations of a group must equal the order of the group, it follows immediately that all N irreducible representations of T must be one-dimensional (see also Section 1.2.3.2 of the present volume). Taking the subgroup along the $\mathbf{a}_1$ axis, we must have $N_1$ different irreducible representations, which we label (for later convenience) by $k_1$. These representations are one-dimensional matrices, i.e. numbers, and must be exponentials, often chosen of the form $e^{-2\pi ik_1n_1/N_1}$. The constant $k_1$ must be related to the corresponding label of the irreducible representation. In the three-dimensional case, we have the corresponding representation $$e^{-i\mathbf{k}\cdot\mathbf{T}},$$ where we have used the definitions (2.2.2.6) and (2.2.2.1). Within the present derivation, the vector $\mathbf{k}$ corresponds to the label of the irreducible representation of the lattice translation subgroup.
### 2.2.5. The free-electron (Sommerfeld) model
The free-electron model corresponds to the special case of taking a constant potential in the Schrödinger equation (2.2.4.1). The physical picture relies on the assumption that the (metallic) valence electrons can move freely in the field of the positively charged nuclei and the tightly bound core electrons. Each valence electron moves in a potential which is nearly constant due to the screening of the remaining valence electrons. This situation can be idealized by assuming the potential to be constant [$V(\mathbf{r}) = 0$]. This simple picture represents a crude model for simple metals but has its importance mainly because the corresponding equation can be solved analytically. By rewriting equation (2.2.4.1), we have $$-\nabla^2\psi(\mathbf{r}) = E\,\psi(\mathbf{r}) = k^2\,\psi(\mathbf{r}),$$ where in the last step the constants are abbreviated (for later convenience) by $E = k^2$ (in Rydberg atomic units). The solutions of this equation are plane waves (PWs) $$\psi_{\mathbf{k}}(\mathbf{r}) = C\,e^{i\mathbf{k}\cdot\mathbf{r}},$$ where C is a normalization constant which is defined from the integral over one unit cell with volume $\Omega$. The PWs satisfy the Bloch condition and can be written (using the bra–ket notation) as $|\mathbf{k}\rangle$. From (2.2.5.1) we see that the corresponding energy (labelled by $\mathbf{k}$) is given by $$E(\mathbf{k}) = k^2.$$
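A small numerical sketch (my own, with NumPy; the lattice constant and the set of reciprocal-lattice vectors are arbitrary choices) of the one-dimensional free-electron bands $E_j(k) = (k+K_j)^2$ in Rydberg-like units, folded into the first BZ:

```python
import numpy as np

# free-electron bands for a 1D lattice with constant a (E = k^2 in Ry
# when lengths are in Bohr); bands arise from folding k + K into the BZ
a = 4.0
G = 2 * np.pi / a * np.arange(-2, 3)          # a few reciprocal vectors
k = np.linspace(-np.pi / a, np.pi / a, 101)   # first Brillouin zone

# E_j(k) = (k + G_j)^2: each reciprocal vector gives one folded branch
bands = (k[:, None] + G[None, :]) ** 2

# the lowest band touches E = 0 at k = 0 ...
assert np.isclose(bands.min(), 0.0)
# ... and the spectrum is symmetric at the two equivalent zone edges
assert np.allclose(np.sort(bands[0]), np.sort(bands[-1]))
```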
In this context it is useful to consider the momentum of the electron, which classically is the vector $\mathbf{p} = m\mathbf{v}$, where m and $\mathbf{v}$ are the mass and velocity, respectively. In quantum mechanics we must replace $\mathbf{p}$ by the corresponding operator $-i\hbar\nabla$.
Thus a PW is an eigenfunction of the momentum operator, $$-i\hbar\nabla\,|\mathbf{k}\rangle = \hbar\mathbf{k}\,|\mathbf{k}\rangle,$$ with eigenvalue $\hbar\mathbf{k}$. Therefore the vector $\hbar\mathbf{k}$ is also called the momentum vector. Note that this is strictly true for a vanishing potential but is otherwise only approximately true (referred to as pseudomomentum).
Another feature of a PW is that its phase is constant in a plane perpendicular to the vector $\mathbf{k}$ (see Fig. 2.2.5.1). For this purpose, consider a periodic function in space and time, $e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)}$, which has a constant phase factor within such a plane. We can characterize the spatial part by $\mathbf{k}\cdot\mathbf{r}$ within this plane. Taking the nearest parallel plane (with vector $\mathbf{r}'$) for which the same phase factors occur again but at a distance $\lambda$ away (with the unit vector $\mathbf{e}$ normal to the plane), then $\mathbf{k}\cdot\mathbf{r}'$ must differ from $\mathbf{k}\cdot\mathbf{r}$ by $2\pi$. This is easily obtained from (2.2.5.7) by multiplication with $\mathbf{k}$, leading to $$\mathbf{k}\cdot\mathbf{r}' = \mathbf{k}\cdot(\mathbf{r} + \lambda\mathbf{e}) = \mathbf{k}\cdot\mathbf{r} + \lambda k = \mathbf{k}\cdot\mathbf{r} + 2\pi.$$ Consequently $$\lambda = \frac{2\pi}{k}$$ is the wavelength and thus the vector $\mathbf{k}$ is called the wavevector or propagation vector.
Figure 2.2.5.1. Plane waves. The wavevector k and the unit vector e are normal to the two planes and the vectors r in plane 1 and r′ in plane 2.
### 2.2.6. Space-group symmetry
#### 2.2.6.1. Representations and bases of the space group
The effect of a space-group operation $\{p|\mathbf{w}\}$ on a Bloch function, labelled by $\mathbf{k}$, is to transform it into a Bloch function that corresponds to a vector $p\mathbf{k}$, $$\{p|\mathbf{w}\}\,\psi_{\mathbf{k}} = \psi_{p\mathbf{k}},$$ which can be proven by using the multiplication rule of Seitz operators (2.2.3.12) and the definition of a Bloch state (2.2.4.17).
A special case is the inversion operator, which leads to $$\{i|\mathbf{0}\}\,\psi_{\mathbf{k}} = \psi_{-\mathbf{k}}.$$ The Bloch functions $\psi_{\mathbf{k}}$ and $\psi_{p\mathbf{k}}$, where p is any operation of the point group P, belong to the same basis for a representation of the space group G. The same $\mathbf{k}$ cannot appear in two different bases, thus the two bases are either identical or have no $\mathbf{k}$ in common.
Irreducible representations of T are labelled by the N distinct $\mathbf{k}$ vectors in the BZ, which separate in disjoint bases of G (with no $\mathbf{k}$ vector in common). If a $\mathbf{k}$ vector falls on the BZ edge, application of the point-group operation p can lead to an equivalent $\mathbf{k}'$ vector that differs from the original by $\mathbf{K}$ (a vector of the reciprocal lattice). The set of all mutually inequivalent $\mathbf{k}$ vectors of $p\mathbf{k}$ ($p \in P$) define the star of the k vector (see also Section 1.2.3.3 of the present volume).
The set of all operations that leave a $\mathbf{k}$ vector invariant (or transform it into an equivalent $\mathbf{k} + \mathbf{K}$) forms the group $G_{\mathbf{k}}$ of the $\mathbf{k}$ vector. Application of q, an element of $G_{\mathbf{k}}$, to a Bloch function (Section 2.2.8) gives $$q\,\psi_{\mathbf{k}}^{j} = \psi_{\mathbf{k}}^{j'},$$ where the band index j (described below) may change to j′. The Bloch factor stays constant under the operation of q and thus the periodic cell function must show this symmetry, namely $$q\,u_{\mathbf{k}}^{j} = u_{\mathbf{k}}^{j'}.$$ For example, a $p_x$-like orbital may be transformed into a $p_y$-like orbital if the two are degenerate, as in a tetragonal lattice.
A star of determines an irreducible basis, provided that the functions of the star are symmetrized with respect to the irreducible representation of the group of vectors, which are called small representations. The basis functions for the irreducible representations are given according to Seitz (1937) by written as a row vector with , where n is the dimension of the irreducible representation of with the order . Such a basis consists of functions and forms an -dimensional irreducible representation of the space group. The degeneracies of these representations come from the star of (not crucial for band calculations except for determining the weight of the vector) and the degeneracy from . The latter is essential for characterizing the energy bands and using the compatibility relations (Bouckaert et al., 1930; Bradley & Cracknell, 1972).
#### 2.2.6.2. Energy bands
Each irreducible representation of the space group, labelled by $(\mathbf{k}, j)$, denotes an energy $E_j(\mathbf{k})$, where $\mathbf{k}$ varies quasi-continuously over the BZ and the superscript j numbers the band states. The quantization of $\mathbf{k}$ according to (2.2.4.13) and (2.2.4.15) can be done in arbitrary fine steps by choosing corresponding periodic boundary conditions (see Section 2.2.4.2). Since $\mathbf{k}$ and $\mathbf{k} + \mathbf{K}$ belong to the same Bloch state, the energy is periodic in reciprocal space: $$E_j(\mathbf{k}) = E_j(\mathbf{k} + \mathbf{K}).$$ Therefore it is sufficient to consider $\mathbf{k}$ vectors within the first BZ. For a given $\mathbf{k}$, two bands will not have the same energy unless there is a multidimensional small representation in the group of $\mathbf{k}$ or the bands belong to different irreducible representations and thus can have an accidental degeneracy. Consequently, this cannot occur for a general $\mathbf{k}$ vector (without symmetry).
### 2.2.7. The vector and the Brillouin zone
#### 2.2.7.1. Various aspects of the vector
The $\mathbf{k}$ vector plays a fundamental role in the electronic structure of a solid. In the above, several interpretations have been given for the $\mathbf{k}$ vector, which
(a) is given in reciprocal space, (b) can be restricted to the first Brillouin zone, (c) is the quantum number for the electronic states in a solid, (d) is quantized due to the periodic boundary conditions, (e) labels the irreducible representation of the lattice translation subgroup T (see Section 2.2.4.3), (f) is related to the momentum $\hbar\mathbf{k}$ [according to (2.2.5.5)] in the free-electron case and (g) is the propagation vector (wavevector) associated with the plane-wave part of the wavefunction (see Fig. 2.2.5.1).
#### 2.2.7.2. The Brillouin zone (BZ)
Starting with one of the 14 Bravais lattices, one can define the reciprocal lattice [according to (2.2.2.4)] and construct its Wigner–Seitz cell as discussed in Section 2.2.2.2. The advantage of using the BZ instead of the parallelepiped spanned by the three unit vectors is its symmetry. Let us take a simple example first, namely an element (say copper) that crystallizes in the face-centred-cubic (f.c.c.) structure. With (2.2.2.4) we easily find that the reciprocal lattice is body-centred-cubic (b.c.c.) and the corresponding BZ is shown in Fig. 2.2.7.1. In this case, f.c.c. Cu has $O_h$ symmetry with 48 symmetry operations (point group). The energy eigenvalues within a star of $\mathbf{k}$ (i.e. for all $p\mathbf{k}$ with $p \in P$) are the same, and therefore it is sufficient to calculate one member in the star. Consequently, it is enough to consider the irreducible wedge of the BZ (called the IBZ). In the present example, this corresponds to 1/48th of the BZ shown in Fig. 2.2.7.1. To count the number of states in the BZ, one counts each $\mathbf{k}$ point in the IBZ with a proper weight to represent the star of this $\mathbf{k}$ vector.
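The statement that the f.c.c. direct lattice has a b.c.c. reciprocal lattice can be checked directly (a sketch of my own with NumPy, using the identity $B = 2\pi(A^{-1})^{T}$, which is equivalent to the cross-product definition; the lattice constant is set to 1):

```python
import numpy as np

a = 1.0
# primitive vectors of the face-centred-cubic direct lattice (rows)
fcc = a / 2 * np.array([[0, 1, 1],
                        [1, 0, 1],
                        [1, 1, 0]])

# reciprocal vectors from B A^T = 2*pi*I, i.e. B = 2*pi*(A^-1)^T
recip = 2 * np.pi * np.linalg.inv(fcc).T

# primitive vectors of a body-centred-cubic lattice of constant 4*pi/a
bcc = 2 * np.pi / a * np.array([[-1, 1, 1],
                                [1, -1, 1],
                                [1, 1, -1]])
assert np.allclose(recip, bcc)
```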
Figure 2.2.7.1. The Brillouin zone (BZ) and the irreducible wedge of the BZ for the f.c.c. direct lattice. After the corresponding figure from the Bilbao Crystallographic Server (http://www.cryst.ehu.es/). The IBZ for any space group can be obtained by using the option KVEC and specifying the space group (in this case No. 225).
#### 2.2.7.3. The symmetry of the Brillouin zone
The BZ is purely constructed from the reciprocal lattice and thus only follows from the translational symmetry (of the 14 Bravais lattices). However, the energy bands $E_j(\mathbf{k})$, with $\mathbf{k}$ lying within the first BZ, possess a symmetry associated with one of the 230 space groups. Therefore one can not simply use the geometrical symmetry of the BZ to find its irreducible wedge, although this is tempting. Since the effort of computing energy eigenvalues increases with the number of $\mathbf{k}$ points, one wishes to restrict such calculations to the basic domain, but the latter can only be found by considering the space group of the corresponding crystal (including the basis with all atomic positions).
One possible procedure for finding the IBZ is the following. First a uniform grid in reciprocal space is generated by dividing each of the three unit-cell vectors into an integer number of intervals. This is easy to do in the parallelepiped, spanned by the three unit-cell vectors, and yields a (more-or-less) uniform grid of $\mathbf{k}$ points. Now one must go through the complete grid of $\mathbf{k}$ points and extract a list of non-equivalent $\mathbf{k}$ points by applying to each point in the grid the point-group operations. If a point is found that is already in the list, its weight is increased by 1, otherwise it is added to the list. This procedure can easily be programmed and is often used when $\mathbf{k}$-space integrations are needed. The disadvantage of this scheme is that the generated $\mathbf{k}$ points in the IBZ are not necessarily in a connected region of the BZ, since one member of the star of $\mathbf{k}$ is chosen arbitrarily, namely the first that is found by going through the complete list.
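The procedure described above can be sketched as follows (my own illustration in Python/NumPy; for simplicity only inversion is used as the point group, and k points are kept in fractional coordinates, both choices being mine rather than the text's):

```python
import numpy as np

def equivalent(k1, k2, ops):
    # k1 ~ k2 if some point-group operation maps k1 onto k2
    # modulo a reciprocal-lattice vector (fractional coordinates)
    for op in ops:
        d = (op @ k1 - k2) % 1.0
        if np.allclose(np.minimum(d, 1.0 - d), 0.0):
            return True
    return False

def reduce_kpoints(divisions, ops):
    """Reduce a uniform k grid to inequivalent points with weights."""
    irreducible = []          # list of [k_point, weight]
    for idx in np.ndindex(*divisions):
        k = np.array(idx, dtype=float) / divisions
        for entry in irreducible:
            if equivalent(k, entry[0], ops):
                entry[1] += 1   # point already represented: add weight
                break
        else:
            irreducible.append([k, 1])
    return irreducible

# simple cubic 4x4x4 grid with inversion symmetry only ({E, -E})
ops = [np.eye(3), -np.eye(3)]
pts = reduce_kpoints((4, 4, 4), ops)
assert sum(w for _, w in pts) == 64   # weights recover the full grid
assert len(pts) == 36                 # 8 self-inverse + 28 paired points
```

With a full 48-operation cubic point group the list would shrink much further; the weight bookkeeping is what matters for BZ integrations.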
### 2.2.8. Bloch functions
We can provide a physical interpretation for a Bloch function by the following considerations. By combining the group-theoretical concepts based on the translational symmetry with the free-electron model, we can rewrite a Bloch function [see (2.2.4.18)] in the form $$\psi_j(\mathbf{k}, \mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\,u_j(\mathbf{k}, \mathbf{r}),$$ where $e^{i\mathbf{k}\cdot\mathbf{r}}$ denotes the plane wave (ignoring normalization) in Dirac's ket notation (2.2.5.3). The additional superscript j denotes the band index associated with $E_j(\mathbf{k})$ (see Section 2.2.6.2). The two factors can be interpreted most easily for the two limiting cases, namely:
(i) For a constant potential, for which the first factor corresponds to a plane wave with momentum $\hbar\mathbf{k}$ [see (2.2.5.5)] but the second factor becomes a constant. Note that for a realistic (non-vanishing) potential, the $\hbar\mathbf{k}$ vector of a Bloch function is no longer the momentum and thus is often denoted as pseudomomentum. (ii) If the atoms in a crystal are infinitely separated (i.e. for infinite lattice constants) the BZ collapses to a point, making the first factor a constant. In this case, the second factor must correspond to atomic orbitals and the label j denotes the atomic states 1s, 2s, 2p etc. In the intermediate case, $\mathbf{k}$ is quantized [see (2.2.4.13)] and can take N values (or 2N states including spin) for N cells contained in the volume of the periodic boundary condition [see (2.2.4.21)]. Therefore, as the interatomic distance is reduced from infinity to the equilibrium separations, an atomic level j is broadened into a band with the quasi-continuous $\mathbf{k}$ vectors and thus shows dispersion.
According to another theorem, the mean velocity of an electron in a Bloch state with wavevector $\mathbf{k}$ and energy $E_j(\mathbf{k})$ is given by $$\mathbf{v}_j(\mathbf{k}) = \frac{1}{\hbar}\,\nabla_{\mathbf{k}}E_j(\mathbf{k}).$$ If the energy is independent of $\mathbf{k}$, its derivative with respect to $\mathbf{k}$ vanishes and thus the corresponding velocity. This situation corresponds to the genuinely isolated atomic levels (with band width zero) and electrons that are tied to individual atoms. If, however, there is any nonzero overlap in the atomic wavefunctions, then $E_j(\mathbf{k})$ will not be constant throughout the zone.
In the general case, different notations are used to characterize band states. Sometimes it is more appropriate to label an energy band by the atomic level from which it originates, especially for narrow bands. In other cases (with a large band width) the free-electron behaviour may be dominant and thus the corresponding free-electron notation is more appropriate.
### 2.2.9. Quantum-mechanical treatment
A description of the electronic structure of solids requires a quantum-mechanical (QM) treatment which can be parameterized (in semi-empirical schemes) but is often obtained from ab initio calculations. The latter are more demanding in terms of computational effort but they have the advantage that no experimental knowledge is needed in order to adjust parameters. The following brief summary is restricted to the commonly used types of ab initio methods and their main characteristics.
#### 2.2.9.1. Exchange and correlation treatment
Hartree–Fock-based (HF-based) methods (for a general description see, for example, Pisani, 1996) are based on a wavefunction description (with one Slater determinant in the HF method). The single-particle HF equations (written for an atom in Rydberg atomic units) can be written in the following form, which is convenient for further discussions, with terms for the kinetic energy, the nuclear electronic potential, the classical electrostatic Coulomb potential and the exchange, a non-local operator which involves the permutation operator, which interchanges the arguments of the subsequent product of two functions. This exchange term cannot be rewritten as a potential times the function but is truly non-local (i.e. depends on $\mathbf{r}$ and $\mathbf{r}'$). The interaction of orbital j with itself (contained in the third term) is unphysical, but this self-interaction is exactly cancelled in the fourth term. This is no longer true in the approximate DFT method discussed below. The HF method treats exchange exactly but contains – by definition – no correlation effects. The latter can be added in an approximate form in post-HF procedures such as that proposed by Colle & Salvetti (1990).
Density functional theory (DFT) is an alternative approach in which both effects, exchange and correlation, are treated in a combined scheme but both approximately. Several forms of DFT functionals are available now that have reached high accuracy, so many structural problems can be solved adequately. Further details will be given in Section 2.2.10.
#### 2.2.9.2. The choice of basis sets and wavefunctions
Most calculations of the electronic structure in solids (Pisani, 1996; Singh, 1994; Altmann, 1994) use a linear combination of basis functions in one form or another but differ in the basis sets. Some use a linear combination of atomic orbitals (LCAO) where the AOs are given as Gaussian- or Slater-type orbitals (GTOs or STOs); others use plane-wave (PW) basis sets with or without augmentations; and still others make use of muffin-tin orbitals (MTOs) as in LMTO (linear combination of MTOs; Skriver, 1984) or ASW (augmented spherical wave; Williams et al., 1979). In the former cases, the basis functions are given in analytic form, but in the latter the radial wavefunctions are obtained numerically by integrating the radial Schrödinger equation (Singh, 1994) (see Section 2.2.11).
Closely related to the choice of basis sets is the explicit form of the wavefunctions, which can be well represented by them, whether they are nodeless pseudo-wavefunctions or all-electron wavefunctions including the complete radial nodal structure and a proper description close to the nucleus.
#### 2.2.9.3. The form of the potential
In the muffin-tin or the atomic sphere approximation (MTA or ASA), each atom in the crystal is surrounded by an atomic sphere in which the potential is assumed to be spherically symmetric [see (2.2.12.5) and the discussion thereof]. While these schemes work reasonably well in highly coordinated, closely packed systems (such as face-centred-cubic metals), they become very approximate in all non-isotropic cases (e.g. layered compounds, semiconductors, open structures or molecular crystals). Schemes that make no shape approximation in the form of the potential are termed full-potential schemes (Singh, 1994; Blaha et al., 1990; Schwarz & Blaha, 1996).
With a proper choice of pseudo-potential one can focus on the valence electrons, which are relevant for chemical bonding, and replace the inner part of their wavefunctions by a nodeless pseudo-function that can be expanded in PWs with good convergence.
#### 2.2.9.4. Relativistic effects
If a solid contains only light elements, non-relativistic calculations are well justified, but as soon as heavier elements are present in the system of interest relativistic effects can no longer be neglected. In the medium range of atomic numbers (up to about 54), so-called scalar relativistic schemes are often used (Koelling & Harmon, 1977), which describe the main contraction or expansion of various orbitals (due to the Darwin s-shift or the mass–velocity term) but omit spin–orbit splitting. Unfortunately, the spin–orbit term couples spin-up and spin-down wavefunctions. If one has n basis functions without spin–orbit coupling, then including spin–orbit coupling in the Hamiltonian leads to a 2n × 2n matrix equation, which requires about eight times as much computer time to solve (due to the n³ scaling of matrix diagonalization). Since the spin–orbit effect is generally small (at least for the valence states), one can simplify the procedure by diagonalizing the Hamiltonian including spin–orbit coupling in the space of the low-lying bands as obtained in a scalar relativistic step. This version is called the second variational method (see e.g. Singh, 1994). For very heavy elements it may be necessary to solve Dirac's equation, which has all these terms (Darwin s-shift, mass–velocity and spin–orbit) included. Additional aspects are illustrated in Section 2.2.14 in connection with the uranium atom.
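The factor of eight quoted above follows directly from the cubic cost of dense matrix diagonalization: doubling the basis from n to 2n functions (spin-up and spin-down blocks coupled by the spin–orbit term) multiplies the effort by (2n)³/n³ = 8. A minimal sketch of this cost model (the basis size n is illustrative):

```python
# Cost model for dense matrix diagonalization: t ∝ n^3.
def diagonalization_cost(n, prefactor=1.0):
    """Relative cost of diagonalizing an n x n Hamiltonian matrix."""
    return prefactor * n**3

n = 1000  # basis functions without spin-orbit coupling (illustrative)
ratio = diagonalization_cost(2 * n) / diagonalization_cost(n)
print(ratio)  # -> 8.0
```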
### 2.2.10. Density functional theory
The most widely used scheme for calculating the electronic structure of solids is based on density functional theory (DFT). It is described in many excellent books, for example that by Dreizler & Gross (1990), which contains many useful definitions, explanations and references. Hohenberg & Kohn (1964) have shown that for determining the ground-state properties of a system all one needs to know is the electron density ρ(r). This is a tremendous simplification considering the complicated wavefunction of a crystal with (in principle infinitely) many electrons. This means that the total energy of a system (a solid in the present case) is a functional of the density ρ(r); the universal part of this functional is independent of the external potential provided by all nuclei. At first it was just proved that such a functional exists, but in order to make this fundamental theorem of practical use Kohn & Sham (1965) introduced orbitals and suggested the following procedure.
In the universal approach of DFT to the quantum-mechanical many-body problem, the interacting system is mapped in a unique manner onto an effective non-interacting system of quasi-electrons with the same total density. Therefore the electron density ρ(r) plays the key role in this formalism. The non-interacting particles of this auxiliary system move in an effective local one-particle potential, which consists of a mean-field (Hartree) part and an exchange–correlation part that, in principle, incorporates all correlation effects exactly. However, the functional form of this potential is not known and thus one needs to make approximations.
Magnetic systems (with collinear spin alignments) require a generalization, namely a different treatment for spin-up and spin-down electrons. In this generalized form the key quantities are the spin densities ρ_σ(r), σ = ↑, ↓, in terms of which the total energy is

E_tot[ρ↑, ρ↓] = T_s[ρ↑, ρ↓] + E_ee[ρ↑, ρ↓] + E_Ne[ρ↑, ρ↓] + E_xc[ρ↑, ρ↓] + E_NN,   (2.2.10.1)

with the electronic contributions, labelled conventionally as, respectively, the kinetic energy (of the non-interacting particles), the electron–electron repulsion, the nuclear–electron attraction and the exchange–correlation energies. The last term, E_NN, is the repulsive Coulomb energy of the fixed nuclei. This expression is still exact but has the advantage that all terms but one can be calculated very accurately and are the dominating (large) quantities. The exception is the exchange–correlation energy E_xc, which is defined by (2.2.10.1) but must be approximated. The first important methods for this were the local density approximation (LDA) or its spin-polarized generalization, the local spin density approximation (LSDA). The latter comprises two assumptions:
(i) That E_xc can be written in terms of a local exchange–correlation energy density ε_xc times the total (spin-up plus spin-down) electron density as

E_xc = ∫ ε_xc(ρ↑, ρ↓) [ρ↑ + ρ↓] dr.   (2.2.10.2)

(ii) The particular form chosen for ε_xc. For a homogeneous electron gas ε_xc is known from quantum Monte Carlo simulations, e.g. by Ceperley & Alder (1984). The LDA can be described in the following way. At each point r in space we know the electron density ρ(r). If we locally replace the system by a homogeneous electron gas of the same density, then we know its exchange–correlation energy. By integrating over all space we can calculate E_xc.
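For the exchange part of ε_xc the homogeneous-gas result is known in closed form (the Dirac/Slater exchange, ε_x = −(3/4)(3/π)^{1/3} ρ^{1/3} in Hartree atomic units), so the LDA recipe described above can be sketched directly: evaluate ε_x at the local density and integrate over a grid. The Gaussian model density and the grid parameters below are illustrative, not from the text:

```python
import numpy as np

def eps_x_lda(rho):
    """Dirac exchange energy density per electron (Hartree atomic units)."""
    return -0.75 * (3.0 / np.pi) ** (1.0 / 3.0) * rho ** (1.0 / 3.0)

# Illustrative density: a normalized 3D Gaussian sampled on a cubic grid.
L, n = 10.0, 64
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rho = np.exp(-(X**2 + Y**2 + Z**2)) / np.pi**1.5  # integrates to 1 electron

N = rho.sum() * dx**3                         # electron count, ~1.0
E_x = (eps_x_lda(rho) * rho).sum() * dx**3    # LDA exchange energy (negative)
```

The same loop structure applies to any parameterization of ε_xc: only the local function evaluated on the density changes.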
The most effective way known to minimize E_tot by means of the variational principle is to introduce (spin) orbitals χ_iσ constrained to construct the spin densities [see (2.2.10.7) below]. According to Kohn and Sham (KS), the variation of E_tot gives the following effective one-particle Schrödinger equations, the so-called Kohn–Sham equations (Kohn & Sham, 1965) (written for an atom in Rydberg atomic units with the obvious generalization to solids):

[−∇² + v_ext(r) + V_C(r) + v_xc^σ(r)] χ_iσ(r) = ε_iσ χ_iσ(r),   (2.2.10.3)

with the external potential (the attractive interaction of the electrons by the nucleus of charge Z) given by

v_ext(r) = −2Z/r,   (2.2.10.4)

the Coulomb potential (the electrostatic interaction between the electrons) given by

V_C(r) = 2 ∫ ρ(r′)/|r − r′| dr′   (2.2.10.5)

and the exchange–correlation potential (due to quantum mechanics) given by the functional derivative

v_xc^σ(r) = δE_xc[ρ↑, ρ↓]/δρ_σ(r).   (2.2.10.6)
In the KS scheme, the (spin) electron densities are obtained by summing over all occupied states, i.e. by filling the KS orbitals (with increasing energy) according to the Aufbau principle:

ρ_σ(r) = Σ_{i,k} w_{ik}^σ |χ_{ik}^σ(r)|².   (2.2.10.7)

Here the w_{ik}^σ are occupation numbers bounded by the symmetry-required weight w_k of the point k. These KS equations (2.2.10.3) must be solved self-consistently in an iterative process, since finding the KS orbitals requires the knowledge of the potentials, which themselves depend on the (spin) density and thus on the orbitals again. Note the similarity to (and difference from) the Hartree–Fock equation (2.2.9.1). This version of the DFT leads to a (spin) density that is close to the exact density provided that the DFT functional is sufficiently accurate.
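The self-consistency cycle described above can be illustrated with a deliberately simple one-dimensional model: build the effective potential from the current density, solve the eigenvalue problem, form a new density from the lowest orbital, mix and repeat. The harmonic external potential and the softened Coulomb kernel below are toy choices, not a real Kohn–Sham functional:

```python
import numpy as np

# Toy self-consistency loop in 1D (atomic units): a single orbital in a
# harmonic external potential plus a Hartree term built from a softened
# Coulomb kernel.  All parameters are illustrative, not a real atom.
n, L = 201, 20.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

# Kinetic energy -1/2 d^2/dx^2 by central finite differences.
off = np.full(n - 1, 1.0)
T = (-0.5 / h**2) * (np.diag(off, -1) - 2.0 * np.eye(n) + np.diag(off, 1))
v_ext = 0.5 * x**2
soft = 1.0 / np.sqrt((x[:, None] - x[None, :]) ** 2 + 1.0)  # soft Coulomb

rho = np.exp(-(x**2))
rho /= rho.sum() * h                 # normalized starting density
for it in range(500):
    v_H = (soft @ rho) * h           # Hartree potential from current density
    eps, psi = np.linalg.eigh(T + np.diag(v_ext + v_H))
    rho_new = psi[:, 0] ** 2 / h     # density of the lowest orbital
    residual = np.max(np.abs(rho_new - rho))
    rho = 0.5 * rho + 0.5 * rho_new  # linear mixing, as in real SCF codes
    if residual < 1e-10:
        break
```

The linear mixing step is the simplest stabilizer of the iteration; production codes replace it by more sophisticated schemes (e.g. Broyden-type mixing), but the loop structure is the same.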
In early applications, the local density approximation (LDA) was frequently used and several forms of functionals exist in the literature, for example by Hedin & Lundqvist (1971), von Barth & Hedin (1972), Gunnarsson & Lundqvist (1976), Vosko et al. (1980) or accurate fits of the Monte Carlo simulations of Ceperley & Alder (1984). The LDA has some shortcomings, mostly due to the tendency of overbinding, which causes, for example, too-small lattice constants. Recent progress has been made going beyond the LSDA by adding gradient terms or higher derivatives (∇ρ and ∇²ρ) of the electron density to the exchange–correlation energy or its corresponding potential. In this context several physical constraints can be formulated, which an exact theory should obey. Most approximations, however, satisfy only part of them. For example, the exchange density (needed in the construction of these two quantities) should integrate to −1 according to the Fermi exclusion principle (Fermi hole). Such considerations led to the generalized gradient approximation (GGA), which exists in various parameterizations, e.g. the one by Perdew et al. (1996). This is an active field of research and thus new functionals are being developed and their accuracy tested in various applications.
The Coulomb potential in (2.2.10.5) is that of all N electrons. That is, any electron is also moving in its own field, which is physically unrealistic but may be mathematically convenient. Within the HF method (and related schemes) this self-interaction is cancelled exactly by an equivalent term in the exchange interaction [see (2.2.9.1)]. For the currently used approximate density functionals, the self-interaction cancellation is not complete and thus an error remains that may be significant, at least for states (e.g. 4f or 5f) for which the respective orbital is not delocalized. Note that delocalized states have a negligibly small self-interaction. This problem has led to the proposal of self-interaction corrections (SICs), which remove most of this error and have impacts on both the single-particle eigenvalues and the total energy (Parr et al., 1978).
The Hohenberg–Kohn theorems state that the total energy (of the ground state) is a functional of the density, but the introduction of the KS orbitals (describing quasi-electrons) is only a tool in arriving at this density and consequently the total energy. Rigorously, the Kohn–Sham orbitals are not electronic orbitals and the KS eigenvalues ε_i (which correspond to ε_i(k) in a solid) are not directly related to electronic excitation energies. From a formal (mathematical) point of view, the ε_i are just Lagrange multipliers without a physical meaning.
Nevertheless, it is often a good approximation (and common practice) to partly ignore these formal inconsistencies and use the orbitals and their energies in discussing electronic properties. The gross features of the eigenvalue sequence depend only to a smaller extent on the details of the potential, whether it is orbital-based as in the HF method or density-based as in DFT. In this sense, the eigenvalues are mainly determined by orthogonality conditions and by the strong nuclear potential, common to DFT and the HF method.
In processes in which one removes (ionization) or adds (electron affinity) an electron, one compares the N-electron system with one with N − 1 or N + 1 electrons. Here another conceptual difference occurs between the HF method and DFT. In the HF method one may use Koopmans' theorem, which states that the ε_i agree with the ionization energies from state i, assuming that the corresponding orbitals do not change in the ionization process. In DFT, the ε_i can be interpreted according to Janak's theorem (Janak, 1978) as the partial derivative with respect to the occupation number n_i,

ε_i = ∂E/∂n_i.

Thus in the HF method ε_i is the total energy difference for Δn_i = 1, in contrast to DFT, where a differential change in the occupation number defines ε_i, the proper quantity for describing metallic systems. It has been proven that for the exact density functional the eigenvalue of the highest occupied orbital is the first ionization potential (Perdew & Levy, 1983). Roughly, one can state that the further an orbital energy is away from the highest occupied state, the poorer the approximation becomes to use ε_i as excitation energy. For core energies the deviation can be significant, but one may use Slater's transition state (Slater, 1974), in which half an electron is removed from the corresponding orbital, and then use the ε_i to represent the ionization from that orbital.
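Slater's transition state can be illustrated with a hypothetical model in which the total energy is quadratic in the occupation number n_i; in that case Janak's derivative evaluated at half occupation reproduces the full total-energy difference E(1) − E(0) exactly (the coefficients below are arbitrary):

```python
# Model total energy quadratic in the occupation number n of one orbital
# (coefficients a, b are illustrative).  Janak: eps(n) = dE/dn = a + 2*b*n.
a, b = -2.0, 0.3

def E(nocc):
    return a * nocc + b * nocc**2

def eps(nocc):          # Janak eigenvalue at occupation nocc
    return a + 2.0 * b * nocc

delta_E = E(1.0) - E(0.0)     # ionization-type total-energy difference
transition_state = eps(0.5)   # Slater transition state
print(delta_E, transition_state)  # both equal a + b = -1.7
```

For a real functional E(n) is not exactly quadratic, so the transition state is an approximation; the quadratic model only shows why evaluating the eigenvalue at half occupation is the natural choice.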
Another excitation from the valence to the conduction band is given by the energy gap, separating the occupied from the unoccupied single-particle levels. It is well known that the gap is not well reproduced by taking the corresponding eigenvalue difference as excitation energy. Current DFT methods significantly underestimate the gap (about half the experimental value), whereas the HF method usually overestimates gaps (by a factor of about two). A trivial solution, applying the 'scissor operator', is to shift the DFT bands to agree with the experimental gap. An improved but much more elaborate approach for obtaining electronic excitation energies within DFT is the GW method, in which quasi-particle energies are calculated (Hybertsen & Louie, 1984; Godby et al., 1986; Perdew, 1986). This scheme is based on calculating the dielectric matrix, which contains information on the response of the system to an external perturbation, such as the excitation of an electron.
In some cases, one can rely on the total energy of the states involved. The original Hohenberg–Kohn theorems (Hohenberg & Kohn, 1964) apply only to the ground state. The theorems may, however, be generalized to the energetically lowest state of any symmetry representation for which any property is a functional of the corresponding density. This allows (in cases where applicable) the calculation of excitation energies by taking total energy differences.
Many aspects of DFT from formalism to applications are discussed and many references are given in the book by Springborg (1997).
### 2.2.11. Band-theory methods
There are several methods for calculating the electronic structure of solids. They have advantages and disadvantages, different accuracies and computational requirements (speed or memory), and are based on different approximations. Some of these aspects have been discussed in Section 2.2.9. This is a rapidly changing field and thus only the basic concepts of a few approaches in current use are outlined below.
#### 2.2.11.1. LCAO (linear combination of atomic orbitals)
For the description of crystalline wavefunctions (Bloch functions), one often starts with a simple concept of placing atomic orbitals (AOs) φ_j at each site r_s in a crystal, with lattice translations denoted by T, from which one forms Bloch sums in order to have proper translational symmetry:

φ_j^k(r) = Σ_T e^{ik·T} φ_j(r − r_s − T).

Then Bloch functions can be constructed by taking a linear combination of such Bloch sums, where the linear-combination coefficients are determined by the variational principle, in which a secular equation must be solved. The LCAO can be used in combination with both the Hartree–Fock method and DFT.
#### 2.2.11.2. TB (tight binding)
A simple version of the LCAO is found by parameterizing the matrix elements H_ij and S_ij in a way similar to the Hückel molecular orbital (HMO) method, where the only non-vanishing matrix elements are the on-site integrals and the nearest-neighbour interactions (hopping integrals). For a particular class of solids the parameters can be adjusted to fit experimental values. With these parameters, the electronic structures of rather complicated solids can be described and yield quite satisfactory results, but only for the class of materials for which such a parametrization is available. Chemical bonding and symmetry aspects can be well described with such schemes, as Hoffmann has illustrated in many applications (Hoffmann, 1988). In more complicated situations, however, such a simple scheme fails.
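For a one-dimensional chain with one orbital per site, the on-site and nearest-neighbour parametrization gives the textbook dispersion E(k) = ε_0 + 2t cos(ka), sketched below with illustrative parameter values:

```python
import numpy as np

# Nearest-neighbour tight-binding band of a 1D chain (one orbital per site):
# E(k) = eps0 + 2 t cos(k a).  Parameter values are illustrative.
eps0, t, a = 0.0, -1.0, 1.0
k = np.linspace(-np.pi / a, np.pi / a, 401)  # first Brillouin zone
E = eps0 + 2.0 * t * np.cos(k * a)

bandwidth = E.max() - E.min()   # = 4|t|, set entirely by the hopping integral
```

The two parameters play distinct roles: ε_0 fixes the band centre, while the hopping integral t alone determines the bandwidth 4|t|, which is why such fits transfer poorly between material classes.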
#### 2.2.11.3. The pseudo-potential schemes
In many respects, core electrons are unimportant for determining the stability, structure and low-energy response properties of crystals. It is a well established practice to modify the one-electron part of the Hamiltonian by replacing the bare nuclear attraction with a pseudo-potential (PP) operator, which allows us to restrict our calculation to the valence electrons. The PP operator must reproduce screened nuclear attractions, but must also account for the Pauli exclusion principle, which requires that valence orbitals are orthogonal to core ones. The PPs are not uniquely defined and thus one seeks to satisfy the following characteristics as well as possible:
(1) PP eigenvalues should coincide with the true (all-electron) ones;
(2) PP orbitals should resemble as closely as possible the all-electron orbitals in an external region as well as being smooth and nodeless in the core region;
(3) PP orbitals should be properly normalized;
(4) the functional form of the PP should allow the simplification of their use in computations;
(5) the PP should be transferable (independent of the system); and
(6) relativistic effects should be taken into account (especially for heavy elements); this concerns mainly the indirect relativistic effects (e.g. core contraction, Darwin s-shift), but not the spin–orbit coupling.
There are many versions of the PP method (norm-conserving, ultrasoft etc.) and the actual accuracy of a calculation is governed by which is used. For standard applications, PP techniques can be quite successful in solid-state calculations. However, there are cases that require higher accuracy, e.g. when core electrons are involved, as in high-pressure studies or electric field gradient calculations (see Section 2.2.15), where the polarization of the charge density close to the nucleus is crucial for describing the physical effects properly.
#### 2.2.11.4. APW (augmented plane wave) and LAPW methods
The partition of space (i.e. the unit cell) between (non-overlapping) atomic spheres and an interstitial region (see Fig. 2.2.12.1) is used in several schemes, one of which is the augmented plane wave (APW) method, originally proposed by Slater (Slater, 1937) and described by Loucks (1967), and its linearized version (the LAPW method), which is chosen as the one representative method that is described in detail in Section 2.2.12.
The basis set is constructed using the muffin-tin approximation (MTA) for the potential [see the discussion below in connection with (2.2.12.5)]. In the interstitial region the wavefunction is well described by plane waves, but inside the spheres atomic-like functions are used which are matched continuously (at the sphere boundary) to each plane wave.
#### 2.2.11.5. KKR (Korringa–Kohn–Rostoker) method
In the KKR scheme (Korringa, 1947; Kohn & Rostoker, 1954), the solution of the KS equations (2.2.10.3) uses a Green-function technique and solves a Lippmann–Schwinger integral equation. The basic concepts come from a multiple-scattering approach which is conceptually different from but mathematically equivalent to the APW method. The building blocks are spherical waves, which are products of spherical harmonics and spherical Hankel, Bessel and Neumann functions. Like plane waves, they solve the KS equations for a constant potential. Augmenting the spherical waves with numerical solutions inside the atomic spheres as in the APW method yields the KKR basis set. Compared with methods based on plane waves, spherical waves require fewer basis functions and thus smaller secular equations.
The radial functions in the APW and KKR methods are energy-dependent and so are the corresponding basis functions. This leads to a nonlinear eigenvalue problem that is computationally demanding. Andersen (1975) modelled the weak energy dependence by a Taylor expansion where only the first term is kept and thereby arrived at the so-called linear methods LMTO and LAPW.
#### 2.2.11.6. LMTO (linear combination of muffin-tin orbitals) method
The LMTO method (Andersen, 1975; Skriver, 1984) is the linearized counterpart to the KKR method, in the same way as the LAPW method is the linearized counterpart to the APW method. This widely used method originally adopted the atomic sphere approximation (ASA) with overlapping atomic spheres in which the potential was assumed to be spherically symmetric. Although the ASA simplified the computation so that systems with many atoms could be studied, the accuracy was not high enough for application to certain questions in solid-state physics.
Following the ideas of Andersen, the augmented spherical wave (ASW) method was developed by Williams et al. (1979). The ASW method is quite similar to the LMTO scheme.
It should be noted that the MTA and the ASA are not really a restriction on the method. In particular, when employing the MTA only for the construction of the basis functions but including a generally shaped potential in the construction of the matrix elements, one arrives at a scheme of very high accuracy which allows, for instance, the evaluation of elastic properties. Methods using the unrestricted potential together with basis functions developed from the muffin-tin potential are called full-potential methods. Now for almost every method based on the MTA (or ASA) there exists a counterpart employing the full potential.
#### 2.2.11.7. CP (Car–Parrinello) method
Conventional quantum-mechanical calculations are done using the Born–Oppenheimer approximation, in which one assumes (in most cases to a very good approximation) that the electrons are decoupled from the nuclear motion. Therefore the electronic structure is calculated for fixed atomic (nuclear) positions. Car & Parrinello (1985) suggested a new method in which they combined the motion of the nuclei (at finite temperature) with the electronic degrees of freedom. They started with a fictitious Lagrangian in which the wavefunctions follow a dynamics equation of motion. Therefore, the CP method combines the motion of the nuclei (following Newton's equation) with the electrons (described within DFT) into one formalism by solving equations of motion for both subsystems. This simplifies the computational effort and allows ab initio molecular dynamics calculations to be performed in which the forces acting on the atoms are calculated from the wavefunctions within DFT. The CP method has attracted much interest and is widely used, with a plane-wave basis, extended with pseudo-potentials and recently enhanced into an all-electron method using the projector augmented wave (PAW) method (Blöchl, 1994). Such CP schemes can also be used to find equilibrium structures and to explore the electronic structure.
#### 2.2.11.8. Order N schemes
The various techniques outlined so far have one thing in common, namely the N³ scaling. In a system containing N atoms the computational effort scales as N³, since one must determine a number of orbitals that is proportional to N, which requires diagonalization of kN × kN matrices, where the prefactor k depends on the basis set and the method used. In recent years much work has been done to devise algorithms that scale linearly with N, at least for very large N (Ordejon et al., 1995). First results are already available and look promising. When such schemes become generally available, it will be possible to study very large systems with relatively little computational effort. This interesting development could drastically change the accessibility of electronic structure results for large systems.
### 2.2.12. The linearized augmented plane wave method
The electronic structure of solids can be calculated with a variety of methods as described above (Section 2.2.11). One representative example is the (full-potential) linearized augmented plane wave (LAPW) method. The LAPW method is among the most accurate schemes for solving the effective one-particle (the so-called Kohn–Sham) equations (2.2.10.3) and is based on DFT (Section 2.2.10) for the treatment of exchange and correlation.
The LAPW formalism is described in many references, starting with the pioneering work by Andersen (1975) and by Koelling & Arbman (1975), which led to the development and the description of the computer code WIEN (Blaha et al., 1990; Schwarz & Blaha, 1996). An excellent book by Singh (1994) is highly recommended to the interested reader. Here only the basic ideas are summarized, while details are left to the articles and references therein.
In the LAPW method, the unit cell is partitioned into (non-overlapping) atomic spheres centred around the atomic sites (type I) and an interstitial region (II) as shown schematically in Fig. 2.2.12.1. For the construction of basis functions (and only for this purpose), the muffin-tin approximation (MTA) is used. In the MTA, the potential is assumed to be spherically symmetric within the atomic spheres but constant outside; in the former atomic-like functions and in the latter plane waves are used in order to adapt the basis set optimally to the problem. Specifically, the following basis sets are used in the two types of regions:
• (1) Inside the atomic sphere t of radius R_t (region I), a linear combination of radial functions times spherical harmonics Y_lm(r̂) is used (we omit the index t when it is clear from the context):

φ(r) = Σ_lm [A_lm u_l(r, E_l) + B_lm u̇_l(r, E_l)] Y_lm(r̂),   (2.2.12.1)

where r̂ represents the angles θ and φ of the polar coordinates. The radial functions u_l(r, E) depend on the energy E. Within a certain energy range this energy dependence can be accounted for by using a linear combination of the solution u_l and its energy derivative u̇_l = ∂u_l/∂E, both taken at the same energy E_l (which is normally chosen at the centre of the band with the corresponding l-like character). This is the linearization in the LAPW method. These two functions are obtained on a radial mesh inside the atomic sphere by numerical integration of the radial Schrödinger equation using the spherical part of the potential inside sphere t and choosing the solution that is regular at the origin r = 0. The coefficients A_lm and B_lm are chosen by matching conditions (see below).
Figure 2.2.12.1. Schematic partitioning of the unit cell into atomic spheres (I) and an interstitial region (II).
• (2) In the interstitial region (II), a plane-wave expansion (see the Sommerfeld model, Section 2.2.5) is used:

φ_K(r) = (1/√Ω) e^{i(k+K)·r},   (2.2.12.3)

where K are vectors of the reciprocal lattice, k is the wavevector in the first Brillouin zone and Ω is the unit-cell volume [see (2.2.5.3)]. This corresponds to writing the periodic function (2.2.4.19) as a Fourier series and combining it with the Bloch function (2.2.4.18). Each plane wave (corresponding to a given K) is augmented by an atomic-like function in every atomic sphere, where the coefficients A_lm and B_lm in (2.2.12.1) are chosen to match (in value and slope) the atomic solution with the corresponding plane-wave basis function of the interstitial region.
The solutions to the Kohn–Sham equations are expanded in this combined basis set of LAPWs, where the coefficients are determined by the Rayleigh–Ritz variational principle. The convergence of this basis set is controlled by the number of PWs, i.e. by the magnitude K_max of the largest K vector in equation (2.2.12.3).
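How the basis size grows with the plane-wave cutoff can be illustrated by counting reciprocal-lattice vectors with |k + K| ≤ K_max; the simple cubic lattice and the cutoff values below are illustrative choices, not from the text:

```python
import numpy as np
from itertools import product

# Count plane-wave basis functions |k + K| <= Kmax for a simple cubic
# lattice with lattice constant a (illustrative); the basis size grows
# roughly as Kmax^3, i.e. with the volume of the cutoff sphere.
def count_pw(kpoint, kmax, a=1.0, nmax=8):
    b = 2.0 * np.pi / a          # reciprocal-lattice spacing
    count = 0
    for i, j, l in product(range(-nmax, nmax + 1), repeat=3):
        K = b * np.array([i, j, l])
        if np.linalg.norm(kpoint + K) <= kmax:
            count += 1
    return count

gamma = np.zeros(3)
n1 = count_pw(gamma, 1.5 * 2.0 * np.pi)  # shells |K|^2/b^2 = 0, 1, 2 -> 19
```

Raising the cutoff to 2.5·(2π/a) already enlarges the count to 81, which is why the cutoff (often quoted as the dimensionless product R_t·K_max in LAPW codes) is the main convergence parameter.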
In order to improve upon the linearization (i.e. to increase the flexibility of the basis) and to make possible a consistent treatment of semi-core and valence states in one energy window (to ensure orthogonality), additional (k-independent) basis functions can be added. They are called 'local orbitals' (Singh, 1994) and consist of a linear combination of two radial functions at two different energies (e.g. at the energies E_1 and E_2) and one energy derivative (at one of these energies):

φ_lm^LO = [A_lm u_l(r, E_1) + B_lm u̇_l(r, E_1) + C_lm u_l(r, E_2)] Y_lm(r̂).   (2.2.12.4)

The coefficients A_lm, B_lm and C_lm are determined by the requirements that φ^LO should be normalized and have zero value and slope at the sphere boundary.
In its general form, the LAPW method expands the potential in the following form:

V(r) = Σ_LM V_LM(r) K_LM(r̂) inside the sphere and V(r) = Σ_K V_K e^{iK·r} outside the sphere,   (2.2.12.5)

where the K_LM(r̂) are the crystal harmonics compatible with the point-group symmetry of the corresponding atom represented in a local coordinate system (see Section 2.2.13). An analogous expression holds for the charge density. Thus no shape approximations are made, a procedure frequently called the 'full-potential LAPW' (FLAPW) method.
The muffin-tin approximation (MTA) used in early band calculations corresponds to retaining only the L = 0 and M = 0 component in the first expression of (2.2.12.5) and only the K = 0 component in the second. This (much older) procedure corresponds to taking the spherical average inside the spheres and the volume average in the interstitial region. The MTA was frequently used in the 1970s and works reasonably well in highly coordinated (metallic) systems such as face-centred-cubic (f.c.c.) metals. For covalently bonded solids, open or layered structures, however, the MTA is a poor approximation and leads to serious discrepancies with experiment. In all these cases a full-potential treatment is essential.
The choice of sphere radii is not very critical in full-potential calculations, in contrast to the MTA, where this choice may affect the results significantly. Furthermore, different radii would be found when one uses one of the two plausible criteria, namely based on the potential (maximum between two adjacent atoms) or the charge density (minimum between two adjacent atoms). Therefore in the MTA one must make a compromise, whereas in full-potential calculations this problem practically disappears.
### 2.2.13. The local coordinate system
The partition of a crystal into atoms (or molecules) is ambiguous and thus the atomic contribution cannot be defined uniquely. However, whatever the definition, it must follow the relevant site symmetry for each atom. There are at least two reasons why one would want to use a local coordinate system at each atomic site: the concept of crystal harmonics and the interpretation of bonding features.
#### 2.2.13.1. Crystal harmonics
All spatial observables of the bound atom (e.g. the potential or the charge density) must have the crystal symmetry, i.e. the point-group symmetry around an atom. Therefore they must be representable as an expansion in terms of site-symmetrized spherical harmonics. Any point-symmetry operation transforms a spherical harmonic into another of the same l. We start with the usual complex spherical harmonics,

Y_lm(θ, φ) = N_lm P_l^m(cos θ) e^{imφ},

which satisfy Laplace's differential equation. The P_l^m(cos θ) are the associated Legendre polynomials and the normalization N_lm is according to the convention of Condon & Shortley (1953). For the φ-dependent part one can use the real and imaginary parts and thus use cos mφ and sin mφ instead of the e^{imφ} functions, but we must introduce a parity p to distinguish the functions with the same |m|. For convenience we take real spherical harmonics, since physical observables are real. The even and odd polynomials are given by the combination of the complex spherical harmonics with the parity p either + or − by

y_lm^+ ∝ P_l^m(cos θ) cos mφ (p = +),  y_lm^− ∝ P_l^m(cos θ) sin mφ (p = −).   (2.2.13.2)
The expansion of – for example – the charge density ρ(r) around an atomic site can be written using the LAPW method [see the analogous equation (2.2.12.5) for the potential] in the form

ρ(r) = Σ_LM ρ_LM(r) K_LM(r̂),   (2.2.13.3)

where we use capital letters for the indices (i) to distinguish this expansion from that of the wavefunctions, in which complex spherical harmonics are used [see (2.2.12.1)], and (ii) to include the parity p in the index M (which represents the combined index m, p). With these conventions, K_LM can be written as a linear combination of real spherical harmonics y_lmp which are symmetry-adapted to the site symmetry, i.e. they are either y_lmp [(2.2.13.2)] in the non-cubic cases (Table 2.2.13.1) or well defined combinations of y_lmp's in the five cubic cases (Table 2.2.13.2), where the coefficients depend on the normalization of the spherical harmonics and can be found in Kurki-Suonio (1977).
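The real combinations of (2.2.13.2) can be checked numerically; the sketch below (for the illustrative case l = 1, using the Condon–Shortley convention for the complex harmonics) builds the p_x-type combination and verifies that it is real and normalized on the unit sphere:

```python
import numpy as np

# Real spherical harmonic for l = 1 built from the complex ones in the
# Condon-Shortley convention; grid resolution is an illustrative choice.
def Y1(m, theta, phi):
    if m == 1:
        return -np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(theta) * np.exp(1j * phi)
    if m == -1:
        return  np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(theta) * np.exp(-1j * phi)
    return np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(theta) + 0.0j

theta = np.linspace(0.0, np.pi, 400)
phi = np.linspace(0.0, 2.0 * np.pi, 400)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dOmega = np.sin(TH) * (theta[1] - theta[0]) * (phi[1] - phi[0])

# Even-parity (cos phi) real combination: p_x = (Y_{1,-1} - Y_{1,1})/sqrt(2)
#                                             = sqrt(3/4pi) sin(theta) cos(phi)
p_x = (Y1(-1, TH, PH) - Y1(1, TH, PH)) / np.sqrt(2.0)
norm = np.sum(np.abs(p_x) ** 2 * dOmega)   # should integrate to ~1
```

The imaginary parts cancel exactly by construction, which is the point of the parity-labelled combinations: observables expanded in them stay real.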
Table 2.2.13.1. Picking rules for the local coordinate axes and the corresponding LM combinations of non-cubic groups, taken from Kurki-Suonio (1977).

| Symmetry | Coordinate axes | LM combinations | Crystal system |
|---|---|---|---|
| 1 | Any | All | Triclinic |
| 1̄ | Any |  | Triclinic |
| 2 |  |  | Monoclinic |
| 222 |  |  | Orthorhombic |
| 4 |  |  | Tetragonal |
| 422 |  |  | Tetragonal |
| 3 |  |  | Rhombohedral |
| 32 |  |  | Rhombohedral |
| 6 |  |  | Hexagonal |
| 622 |  |  | Hexagonal |
Table 2.2.13.2. LM combinations of cubic groups as linear combinations of y_lmp's (given in parentheses). The linear-combination coefficients can be found in Kurki-Suonio (1977).

| Symmetry | LM combinations |
|---|---|
| 23 | (0 0), (3 2−), (4 0, 4 4), (6 0, 6 4), (6 2, 6 6) |
| m3̄ | (0 0), (4 0, 4 4), (6 0, 6 4), (6 2, 6 6) |
| 432 | (0 0), (4 0, 4 4), (6 0, 6 4) |
| 4̄3m | (0 0), (3 2−), (4 0, 4 4), (6 0, 6 4) |
| m3̄m | (0 0), (4 0, 4 4), (6 0, 6 4) |
According to Kurki-Suonio, the number of (non-vanishing) LM terms [e.g. in (2.2.13.3)] is minimized by choosing for each atom a local Cartesian coordinate system adapted to its site symmetry. In this case, other terms would vanish, so using only these terms corresponds to the application of a projection operator, i.e. it is equivalent to averaging the quantity of interest [e.g. ρ(r)] over the symmetry operations of the site group. Note that in another coordinate system (for the L values listed) additional M terms could appear. The group-theoretical derivation led to rules as to how the local coordinate system must be chosen. For example, the z axis is taken along the highest symmetry axis, or the x and y axes are chosen in or perpendicular to mirror planes. Since these coordinate systems are specific for each atom and may differ from the (global) crystal axes, we call them 'local' coordinate systems, which can be related by a transformation matrix to the global coordinate system of the crystal.
The symmetry constraints are summarized by Kurki-Suonio, who has defined picking rules to choose the local coordinate system for any of the 27 non-cubic site symmetries (Table 2.2.13.1) and has listed the LM combinations, which are defined by (a linear combination of) y_lmp functions [see (2.2.13.2)]. If the parity index p appears, both the + and the − combination must be taken. An application of a local coordinate system to rutile TiO2 is described in Section 2.2.16.2.
In the case of the five cubic site symmetries, which all have a threefold axis along the [111] direction, a well defined linear combination of y_lm functions (given in Table 2.2.13.2) leads to the cubic harmonics.
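As a minimal illustration (not taken from Kurki-Suonio's tables), the lowest non-trivial cubic harmonic (L = 4) can be written in Cartesian form as x^4 + y^4 + z^4 - (3/5) r^4. The sketch below checks numerically that this fixed linear combination is invariant under generators of the cubic rotation group; a single y_40-like term alone would not survive the threefold [111] rotation, which is why the specific combination is required.

```python
import numpy as np

def k4(p):
    # L = 4 cubic harmonic in Cartesian form (unnormalized):
    # K4 is proportional to x^4 + y^4 + z^4 - (3/5) r^4
    x, y, z = p
    r2 = x * x + y * y + z * z
    return x**4 + y**4 + z**4 - 0.6 * r2**2

# Cubic point-group generators: 90 deg rotation about z and
# 120 deg rotation about the [111] body diagonal (x -> y -> z -> x)
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
R111 = np.array([[0., 0., 1.], [1., 0., 0.], [0., 1., 0.]])

rng = np.random.default_rng(0)
for p in rng.normal(size=(100, 3)):
    assert np.isclose(k4(p), k4(Rz @ p))
    assert np.isclose(k4(p), k4(R111 @ p))
print("K4 is invariant under the cubic rotations")
```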
#### 2.2.13.2. Interpretation for bonding
Chemical bonding is often described by considering orbitals (e.g. a or a atomic orbital) which are defined in polar coordinates, where the z axis is special, in contrast to Cartesian coordinates, where x, y and z are equivalent. Consider for example an atom coordinated by ligands (e.g. forming an octahedron). Then the application of group theory, ligand-field theory etc. requires a certain coordinate system provided one wishes to keep the standard notation of the corresponding spherical harmonics. If this octahedron is rotated or tilted with respect to the global (unit-cell) coordinate system, a local coordinate system is needed to allow an easy orbital interpretation of the interactions between the central atom and its ligands. This applies also to spectroscopy or electric field gradients.
The two types of reasons mentioned above may or may not lead to the same choice of a local coordinate system, as is illustrated for the example of rutile in Section 2.2.16.2.
### 2.2.14. Characterization of Bloch states
The electronic structure of a solid is specified by energy bands and the corresponding wavefunctions, the Bloch functions . In order to characterize energy bands there are various schemes with quite different emphasis. The most important concepts are described below and are illustrated using selected examples in the following sections.
#### 2.2.14.1. Characterization by group theory
The energy bands are primarily characterized by the wavevector in the first BZ that is associated with the translational symmetry according to (2.2.4.23). The star of the wavevector determines an irreducible basis provided that the functions of the star are symmetrized with respect to the small representations, as discussed in Section 2.2.6. Along symmetry lines in the BZ (e.g. from along towards X in the BZ shown in Fig. 2.2.7.1), the corresponding group of the vector may show a group–subgroup relation. The corresponding irreducible representations can then be found by subduction (or by induction in the case of a group–supergroup relation). These concepts define the compatibility relations (Bouckaert et al., 1930; Bradley & Cracknell, 1972), which tell us how to connect energy bands. For example, the twofold degenerate representation (the symmetry in a cubic system) splits into two one-dimensional representations along the symmetry line. In addition, one can also find an orbital representation and thus knows from the group-theoretical analysis which orbitals belong to a certain energy band. This is very useful for interpretations.
#### 2.2.14.2. Energy regions
In chemistry and physics it is quite common to separate the electronic states of an atom into those from core and valence electrons, but sometimes this distinction is not well defined, as will be discussed in connection with the so-called semi-core states. For the sake of argument, let us discuss the situation in a solid using the concepts of the LAPW method, keeping in mind that very similar considerations hold for all other band-structure schemes.
A core state is characterized by a low-lying energy (i.e. with a large negative energy value with respect to the Fermi energy) and a corresponding wavefunction that is completely confined inside the sphere of the respective atom. Therefore there is effectively no overlap with the wavefunctions from neighbouring atoms and, consequently, the associated band width is practically zero.
The valence electrons occupy the highest states and have wavefunctions that strongly overlap with their counterparts at adjacent sites, leading to chemical bonding, large dispersion (i.e. a strong variation of the band energy with ) and a significant band width.
The semi-core states are in between these two categories. For example, the 3s and 3p states of the 3d transition metals belong here. They are about 2–6 Ry (1 Ry = 13.6 eV) below the valence bands and have most of the wavefunctions inside their atomic spheres, but a small fraction (a few per cent) of the corresponding charge lies outside this sphere. This causes weak interactions with neighbouring atoms and a finite width of the corresponding energy bands.
Above the valence states are the unoccupied states, which often (e.g. in DFT or the HF method) require special attention.
#### 2.2.14.3. Decomposition according to wavefunctions
For interpreting chemical bonding or the physical origin of a given Bloch state at , a decomposition according to its wavefunction is extremely useful but always model-dependent. The charge density corresponding to the Bloch state at can be normalized to one per unit cell and is (in principle) an observable, while its decomposition depends on the model used. The following considerations are useful in this context:
(1) Site-centred orbitals. In many band-structure methods, the Bloch functions are expressed as a linear combination of atomic orbitals (LCAO) centred at the various nuclei that constitute the solid. The linear-combination coefficients determine how much of a given orbital contributes to the wavefunction (Mulliken population analysis).

(2) Spatially confined functions. In many schemes (LMTO, LAPW, KKR; see Section 2.2.11), atomic spheres are used in which the wavefunctions are described in terms of atomic-like orbitals. See, for example, the representation (2.2.12.1) in the LAPW method (Section 2.2.12), where inside the atomic sphere the wavefunction is written as a radial function times spherical harmonics (termed partial waves). The latter require a local coordinate system (Section 2.2.13), which need not be the same as the global coordinate system of the unit cell. The reasons for choosing a special local coordinate system are twofold: one is simplification through use of the point-group symmetry, and the other is ease of interpretation, as will be illustrated below for TiO2 in the rutile structure (see Section 2.2.16.2).

(3) Orbital decomposition. In all cases in which atomic-like orbitals are used to construct the crystalline wavefunction, an l-like decomposition can be made; it does not require a local coordinate system. This is true for both atom-centred orbitals and spatially confined partial waves, and a corresponding decomposition can be done on the basis of partial electronic charges, as discussed below. A further decomposition into the m components can only be done in a local coordinate system with respect to which the spherical harmonics are defined.

(4) Bonding character. As in a diatomic molecule with an orbital on atom A and another on atom B, we can form bonding and antibonding states by adding or subtracting the corresponding orbitals. The bonding interaction causes a lowering in energy with respect to the atomic state and corresponds to a constructive interference of the orbitals. For the antibonding state, the interaction raises the energy and leads to a change in sign of the wavefunction, causing a nodal plane that is perpendicular to the line connecting the nuclei. If the symmetry does not allow an interaction between two orbitals, a nonbonding state occurs. Analogous concepts can also be applied to solids.

(5) Partial charges. The charge corresponding to a Bloch function of a given state, averaged over the star of the wavevector, can be normalized to 1 in the unit cell. A corresponding decomposition of the charge can be done into partial electronic charges. This is illustrated first within the LAPW scheme. Using the resolution of the identity, this 1 (unit charge) of each state can be spatially decomposed into the contribution from the region outside all atomic spheres (interstitial region II) and a sum over all atomic spheres (with superscript t), each containing the charge confined within atomic sphere t. The latter can be further decomposed into partial l-like charges. In a site-centred basis a similar decomposition can be done, but without the interstitial term. The interpretation, however, is different, as will be discussed for Cu (see Section 2.2.16). If the site symmetry (point group) permits, another partitioning according to m can be made, e.g. into the t2g and eg manifold of the fivefold degenerate d orbitals in an octahedral ligand field. The latter scheme requires a local coordinate system in which the spherical harmonics are defined (see Section 2.2.13). In general, the proper m combinations are given by the irreducible representations corresponding to the site symmetry.
#### 2.2.14.4. Localized versus itinerant electrons
Simple metals with valence electrons originating from s- and p-type orbitals form wide bands which are approximately free-electron like (with a large band width W). Such a case corresponds to itinerant electrons that are delocalized and thus cause metallic conductivity.
The other extreme case is a system with 4f (and some 5f) electrons, such as the lanthanides. Although the orbital energies of these electrons are in the energy range of the valence electrons, they act more like core electrons and thus are tightly bound to the corresponding atomic site. Such electrons are termed localized, since they do not hop to neighbouring sites (controlled by a hopping parameter t) and thus do not contribute to metallic conductivity. Adding another of these electrons to a given site would increase the Coulomb repulsion U, and a large U (i.e. U much greater than t) prevents them from hopping.
There are – as usual – borderline cases (e.g. the late 3d transition-metal oxides) in which a delicate balance between t and U, i.e. between the energy gain from delocalizing electrons and the Coulomb repulsion, determines whether a system is metallic or insulating. This problem of metal/insulator transitions is an active field of research in solid-state physics which shall not be discussed here.
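The competition between the hopping t and the on-site repulsion U described above can be made concrete with a hypothetical two-site Hubbard model (a toy model, not part of this chapter). Exact diagonalization of its two-electron, Sz = 0 sector shows the ground state crossing over from itinerant (large weight on doubly occupied "ionic" configurations) to localized as U/t grows:

```python
import numpy as np

def two_site_hubbard(t, U):
    """Two-electron, Sz = 0 sector of a two-site Hubbard toy model.
    Basis: |up down, 0>, |0, up down>, |up, down>, |down, up>."""
    H = np.array([[U,  0., -t, -t],
                  [0., U,  -t, -t],
                  [-t, -t, 0., 0.],
                  [-t, -t, 0., 0.]])
    w, v = np.linalg.eigh(H)
    ground = v[:, 0]
    double_occ = ground[0]**2 + ground[1]**2  # weight on ionic states
    return w[0], double_occ

# U = 0: free electrons, E0 = -2t and ionic weight 1/2 (fully itinerant)
E0, d0 = two_site_hubbard(t=1.0, U=0.0)
print(round(E0, 6), round(d0, 6))   # -> -2.0 0.5

# Large U: E0 approaches -4t^2/U (superexchange), electrons localize
E1, d1 = two_site_hubbard(t=1.0, U=20.0)
print(E1, d1)   # small binding energy, small double occupancy
```

The exact ground-state energy of this model is (U - sqrt(U^2 + 16t^2))/2, which interpolates between the itinerant and localized limits.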
In one example, however, the dual role of f electrons is illustrated for the uranium atom using relativistic wavefunctions (with a large and a small component) characterized by the quantum numbers n, l and j. Fig. 2.2.14.1 shows the outermost lobe (the large component) of the electrons beyond the [Xe] core without the 4f and 5d core-like states. One can see the 6s, 6p1/2 and 6p3/2 (semi-core) electrons, and the 7s and 6d (valence) electrons.
Figure 2.2.14.1. Relativistic radial wavefunctions (large component) of the uranium atom. Shown are the outer lobes of valence and semi-core states, excluding the [Xe] core and the 4f and 5d core states.
On the one hand, the radial wavefunction of the 5f orbital has its peak closer to the nucleus than the main lobes of the semi-core states, which demonstrates the core nature of these 5f electrons. On the other hand, the 5f orbital decays (with distance) much less rapidly than the semi-core states, and electrons in this orbital can thus also play the role of valence electrons, like electrons in the 7s and 6d orbitals. This dual role of the f electrons has been discussed, for example, by Schwarz & Herzig (1979).
#### 2.2.14.5. Spin polarization
In a non-fully-relativistic treatment, spin remains a good quantum number. Associated with the spin is a spin magnetic moment. If atoms have net magnetic moments they can couple in various orders in a solid. The simplest cases are the collinear spin alignments as found in ferromagnetic (FM) or antiferromagnetic (AF) systems with parallel (FM) and antiparallel (AF) moments on neighbouring sites. Ferrimagnets have opposite spin alignments but differ in the magnitude of their moments on neighbouring sites, leading to a finite net magnetization. These cases are characterized by the electronic structure of spin-up and spin-down electrons. More complicated spin structures (e.g. canted spins, spin spirals, spin glasses) often require a special treatment beyond simple spin-polarized calculations. In favourable cases, however, as in spin spirals, it is possible to formulate a generalized Bloch theorem and treat such systems by band theory (Sandratskii, 1990).
In a fully relativistic formalism, an additional orbital moment may occur. Note that the orientation of the total magnetic moment (spin and orbital moment) with respect to the crystal axis is only defined in a relativistic treatment including spin–orbit interactions. In a spin-polarized calculation without spin–orbit coupling this is not the case and only the relative orientation (majority-spin and minority-spin) is known. The magnetic structures may lead to a lowering of symmetry, a topic beyond this book.
#### 2.2.14.6. The density of states (DOS)
The density of states (DOS) is the number of one-electron states (in the HF method or DFT) per unit energy interval and per unit cell volume. It is better to start with the integral quantity N(E), the number of states below a certain energy E, in which the factor 2 accounts for the occupation with spin-up and spin-down electrons (in a non-spin-polarized case) and the step function counts a state only if its band energy lies below E. The sum over k points can be replaced by an integral over the BZ (normalized by the BZ volume), since the k points are uniformly distributed. Both expressions, sum and integral, are used in different derivations or applications. The Fermi energy is defined by imposing that N(E_F) = N, the number of (valence) electrons per unit cell.
The total DOS n(E) is defined as the energy derivative of N(E), with the normalization that its integral up to the Fermi energy yields N, where the integral is taken from minus infinity if all core states are included, or from the bottom of the valence bands, often taken to be at zero. This defines the Fermi energy (note that the energy range must be consistent with N). In a bulk material, the origin of the energy scale is arbitrary and thus only relative energies are important. In a realistic case with a surface (i.e. a vacuum) one can take the potential at infinity as the energy zero, but this situation is not discussed here.
The total DOS can be decomposed into a partial (or projected) DOS by using information from the wavefunctions, as described in Section 2.2.14.3. If the charge corresponding to the wavefunction of an energy state is partitioned into contributions from the atoms, a site-projected DOS can be defined, where the superscript t labels the atom t. These quantities can be further decomposed into l-like contributions within each atom. As discussed above for the partial charges, a further partitioning of the l-like terms according to the site symmetry (point group) can be done (in certain cases) by taking the proper m combinations, e.g. the t2g and eg manifold of the fivefold degenerate d orbitals in an octahedral ligand field. The latter scheme requires a local coordinate system in which the spherical harmonics are defined (see Section 2.2.13). In this context all considerations discussed above for the partial charges apply again. Note in particular the difference between site-centred and spatially decomposed wavefunctions, which affects the partition of the DOS into its wavefunction-dependent contributions; for example, in atomic-sphere representations as in LAPW the interstitial region contributes a separate term to the decomposition. In the case of spin-polarized calculations, one can also define a spin-projected DOS for spin-up and spin-down electrons.
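The total DOS and the Fermi energy can be sketched numerically by histogramming band energies over a uniform k mesh. The tight-binding s band below is an illustrative model (not from the text); the normalization check is that the DOS integrates to 2 states per cell for a single band with both spins:

```python
import numpy as np

t = 1.0
nk = 40                                      # k-mesh divisions per axis
k = 2 * np.pi * (np.arange(nk) + 0.5) / nk   # uniform shifted BZ sampling
kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
E = -2 * t * (np.cos(kx) + np.cos(ky) + np.cos(kz))  # simple-cubic s band

# Total DOS n(E): histogram of band energies, 2 states (spins) per k point
nbins = 120
hist, edges = np.histogram(E.ravel(), bins=nbins, range=(-6 * t, 6 * t))
dE = edges[1] - edges[0]
dos = 2 * hist / (nk**3 * dE)                # states / (energy * cell)

print(round(np.sum(dos) * dE, 6))            # -> 2.0 (band holds 2 electrons)

# Fermi energy for N = 1 electron per cell: fill the lowest N*nk^3/2 states
Esorted = np.sort(E.ravel())
EF = Esorted[nk**3 // 2 - 1]                 # half filling
print(EF)                                    # close to 0 (symmetric band)
```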
### 2.2.15. Electric field gradient tensor
#### 2.2.15.1. Introduction
The study of hyperfine interactions is a powerful way to characterize different atomic sites in a given sample. There are many experimental techniques, such as Mössbauer spectroscopy, nuclear magnetic and nuclear quadrupole resonance (NMR and NQR), perturbed angular correlations (PAC) measurements etc., which access hyperfine parameters in fundamentally different ways. Hyperfine parameters describe the interaction of a nucleus with the electric and magnetic fields created by the chemical environment of the corresponding atom. Hence the resulting level splitting of the nucleus is determined by the product of a nuclear and an extra-nuclear quantity. In the case of quadrupole interactions, the nuclear quantity is the nuclear quadrupole moment (Q) that interacts with the electric field gradient (EFG) produced by the charges outside the nucleus. For a review see, for example, Kaufmann & Vianden (1979).
The EFG tensor is defined by the second derivative of the electrostatic potential V with respect to the Cartesian coordinates x_i, i = 1, 2, 3, taken at the nuclear site n, where a trace term is subtracted to make it a traceless tensor. This is more appropriate, since there is no interaction of a nuclear quadrupole with a potential caused by s electrons. From a theoretical point of view it is more convenient to use the spherical tensor notation, because electrostatic potentials (the negative of the potential energy of the electron) and the charge densities are usually given as expansions in terms of spherical harmonics. In this way one automatically deals with traceless tensors (for further details see Herzig, 1985).
The analysis of experimental results faces two obstacles: (i) The nuclear quadrupole moments (Pyykkö, 1992) are often known only with a large uncertainty, as this is still an active research field of nuclear physics. (ii) EFGs depend very sensitively on the anisotropy of the charge density close to the nucleus, and thus pose a severe challenge to electronic structure methods, since an accuracy of the density in the per cent range is required.
In the absence of a better tool, a simple point-charge model was used in combination with so-called Sternheimer (anti-) shielding factors in order to interpret the experimental results. However, these early model calculations depended on empirical parameters, were not very reliable and often showed large deviations from experimental values.
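The point-charge model just described (without Sternheimer shielding factors) can be sketched as follows: build the traceless tensor from the surrounding charges, diagonalize it, and order the eigenvalues by the convention |V_zz| >= |V_yy| >= |V_xx|. The prefactor e/(4 pi eps0) is set to 1, so the numbers are illustrative, not in SI units:

```python
import numpy as np

def efg_point_charges(charges, positions):
    """Traceless EFG tensor at the origin from point charges
    (prefactor e/(4*pi*eps0) set to 1, illustrative units)."""
    V = np.zeros((3, 3))
    for q, R in zip(charges, positions):
        R = np.asarray(R, float)
        r = np.linalg.norm(R)
        V += q * (3 * np.outer(R, R) - r**2 * np.eye(3)) / r**5
    return V

def principal(V):
    """Eigenvalues ordered |Vzz| >= |Vyy| >= |Vxx|, plus asymmetry eta."""
    w = np.linalg.eigvalsh(V)
    w = w[np.argsort(np.abs(w))]          # ascending in magnitude
    Vxx, Vyy, Vzz = w
    eta = (Vxx - Vyy) / Vzz
    return Vzz, eta

# Axially symmetric test: two charges q on the z axis at distance d,
# for which Vzz = 4q/d^3 and eta = 0
q, d = 1.0, 2.0
V = efg_point_charges([q, q], [(0, 0, d), (0, 0, -d)])
Vzz, eta = principal(V)
print(Vzz, eta)   # -> 0.5 0.0
```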
In their pioneering work, Blaha et al. (1985) showed that the LAPW method was able to calculate EFGs in solids accurately without empirical parameters. Since then, this method has been applied to a large variety of systems (Schwarz & Blaha, 1992) from insulators (Blaha et al., 1985), metals (Blaha et al., 1988) and superconductors (Schwarz et al., 1990) to minerals (Winkler et al., 1996).
Several other electronic structure methods have been applied to the calculation of EFGs in solids, for example the LMTO method for periodic (Methfessel & Frota-Pessoa, 1990) or non-periodic (Petrilli & Frota-Pessoa, 1990) systems, the KKR method (Akai et al., 1990), the DVM (discrete variational method; Ellis et al., 1983), the PAW method (Petrilli et al., 1998) and others (Meyer et al., 1995). These methods achieve different degrees of accuracy and are more or less suitable for different classes of systems.
As pointed out above, measured EFGs have an intrinsic uncertainty related to the accuracy with which the nuclear quadrupole moment is known. On the other hand, the quadrupole moment can be obtained by comparing experimental hyperfine splittings with very accurate electronic structure calculations. This has recently been done by Dufek et al. (1995a) to determine the quadrupole moment of 57Fe. Hence the calculation of accurate EFGs is to date an active and challenging research field.
#### 2.2.15.2. EFG conversion formulas
The nuclear quadrupole interaction (NQI) represents the interaction of Q (the nuclear quadrupole moment) with the electric field gradient (EFG) created by the charges surrounding the nucleus, as described above. Here we briefly summarize the main ideas (following Petrilli et al., 1998) and provide conversions between experimental NQI splittings and electric field gradients.
Let us consider a nucleus in a state with nuclear spin quantum number I and the corresponding nuclear quadrupole moment Q, defined in terms of the nuclear charge density and the proton's charge e. The interaction of this Q with an electric field gradient tensor splits the energy levels for different magnetic spin quantum numbers of the nucleus in first order of the interaction, where Q represents the largest component of the nuclear quadrupole moment tensor. (Note that the quantum-mechanical expectation value of the charge distribution in an angular momentum eigenstate is cylindrical, which renders the expectation value of the remaining two components with half the value and opposite sign.) The conventional choice of axes is |V_zz| >= |V_yy| >= |V_xx|. Hence V_zz is the principal component (largest eigenvalue) of the electric field gradient tensor, and the asymmetry parameter eta = (V_xx - V_yy)/V_zz is defined by the remaining two eigenvalues. The electric quadrupole interaction splits the (2I + 1)-fold degenerate energy levels of a nuclear state with spin quantum number I into doubly degenerate substates (and one singly degenerate state for integer I). Experiments determine the energy difference between the levels, which is called the quadrupole splitting. The remaining degeneracy can be lifted further using magnetic fields.
Next we illustrate these definitions for 57Fe, which is the most common probe nucleus in Mössbauer spectroscopy measurements and thus deserves special attention. For this probe, the nuclear transition occurs between the I = 3/2 excited state and the I = 1/2 ground state, with emission of 14.4 keV radiation. The quadrupole splitting of the excited state can be obtained by exploiting the Doppler shift of the radiation of the vibrating sample. For systems in which the 57Fe nucleus has a crystalline environment with axial symmetry (a threefold or fourfold rotation axis), the asymmetry parameter eta is zero and the splitting is given directly by eQV_zz/2 [equation (2.2.15.6)]. As eta can never be greater than unity, the difference between the values of V_zz given by equation (2.2.15.5), which carries the additional factor (1 + eta^2/3)^(1/2), and equation (2.2.15.6) cannot be more than about 15%. In the remainder of this section we simplify the expressions, as is often done, by assuming that eta = 0. As Mössbauer experiments exploit the Doppler shift of the radiation, the splitting is expressed in terms of the velocity between sample and detector. The quadrupole splitting can be obtained from the velocity, which we denote here by v, via the Doppler relation (v/c)E_gamma, where c = 2.99792458 × 10^8 m s−1 is the speed of light and E_gamma = 14.41 × 10^3 eV is the energy of the emitted radiation of the 57Fe nucleus.
Finally, we still need to know the nuclear quadrupole moment Q of the Fe nucleus itself. Despite its utmost importance, its value has been heavily debated. Recently, however, Dufek et al. (1995b) determined the value Q = 0.16 b for 57Fe (1 b = 10−28 m2) by comparing theoretical values for fifteen different compounds, obtained using the linearized augmented plane wave (LAPW) method, with the measured quadrupole splittings at the Fe site.
Now we relate the electric field gradient to the Doppler velocity: combining the relations above (for eta = 0) gives V_zz = 2 Delta E_Q/(eQ) = 2 E_gamma v/(c e Q). In the special case of the 57Fe nucleus, this corresponds to approximately 6.0 × 10^21 V m−2 per mm s−1 of Doppler velocity. EFGs can also be obtained by techniques like NMR or NQR, where a convenient measure of the strength of the quadrupole interaction is expressed as a frequency nu_Q related to V_zz; the value of V_zz can then be calculated from the frequency in MHz using (h/e) = 4.1356692 × 10−15 V Hz−1. The principal component V_zz is also often denoted as eq.
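The 57Fe conversion described above can be put into numbers. The sketch assumes eta = 0 (so the splitting is eQV_zz/2), and uses Q = 0.16 b as quoted in the text:

```python
c  = 2.99792458e8    # speed of light (m/s)
Eg = 14.41e3         # 57Fe gamma energy (eV)
Q  = 0.16e-28        # 57Fe nuclear quadrupole moment (m^2), Dufek et al.

def vzz_from_velocity(v_mm_s):
    """EFG principal component V_zz (V/m^2) from the Mossbauer
    quadrupole splitting given as a Doppler velocity (mm/s),
    assuming eta = 0 so that Delta E_Q = e*Q*V_zz/2."""
    dE = Eg * (v_mm_s * 1e-3) / c    # Doppler-shifted splitting in eV
    # dE [eV] * e = e * Q * Vzz / 2  ->  the charges e cancel out
    return 2 * dE / Q

print(vzz_from_velocity(1.0))   # about 6.0e21 V/m^2 per mm/s
```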
In the literature, two conflicting definitions of the quadrupole frequency are in use. One is given by (2.2.15.10), and the other differs from the first by a factor of 2. In order to avoid confusion, we will refer here only to the definition given by (2.2.15.10). Furthermore, we also adopt the same sign convention for V_zz as Schwarz et al. (1990), because it has been found to be consistent with the majority of experimental results.
#### 2.2.15.3. Theoretical approach
Since the EFG is a ground-state property that is uniquely determined by the charge-density distribution (of electrons and nuclei), it can be calculated within DFT without further approximations. Here we describe the basic formalism for calculating EFGs with the LAPW method (see Section 2.2.12). In the LAPW method, the unit cell is divided into non-overlapping atomic spheres and an interstitial region. Inside each sphere the charge density (and analogously the potential) is written as radial functions times crystal harmonics (2.2.13.4), and in the interstitial region as a Fourier series. The charge-density coefficients can be obtained (in shorthand notation) from the wavefunctions (KS orbitals) of the occupied states below the Fermi energy, in terms of Gaunt numbers (integrals over three spherical harmonics) and the LAPW radial functions [see (2.2.12.1)]. The dependence on the energy bands has been omitted in order to simplify the notation.
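The Gaunt numbers just mentioned can be evaluated by brute-force quadrature over the unit sphere. The sketch below is illustrative (hand-coded real spherical harmonics, midpoint rule, not the LAPW implementation) and reproduces one non-vanishing case and one that vanishes by the selection rules:

```python
import numpy as np

def gaunt_num(Y1, Y2, Y3, n=500):
    """Numeric Gaunt number: the integral of three real spherical
    harmonics over the unit sphere, by midpoint quadrature."""
    th = (np.arange(n) + 0.5) * np.pi / n        # polar angle in (0, pi)
    ph = (np.arange(2 * n) + 0.5) * np.pi / n    # azimuth in (0, 2*pi)
    TH, PH = np.meshgrid(th, ph, indexing='ij')
    f = Y1(TH, PH) * Y2(TH, PH) * Y3(TH, PH) * np.sin(TH)
    return f.sum() * (np.pi / n) * (np.pi / n)   # weight dtheta * dphi

Y00 = lambda th, ph: np.full_like(th, 1 / np.sqrt(4 * np.pi))
Y10 = lambda th, ph: np.sqrt(3 / (4 * np.pi)) * np.cos(th)
Y20 = lambda th, ph: np.sqrt(5 / (16 * np.pi)) * (3 * np.cos(th)**2 - 1)

# <Y00 Y10 Y10>: nonzero, equals 1/sqrt(4*pi)
print(round(gaunt_num(Y00, Y10, Y10), 4))        # -> 0.2821
# <Y00 Y10 Y20>: vanishes by the l selection rule
print(round(abs(gaunt_num(Y00, Y10, Y20)), 4))   # -> 0.0
```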
For a given charge density, the Coulomb potential is obtained numerically by solving Poisson's equation in the form of a boundary-value problem using a method proposed by Weinert (1981). This yields the Coulomb potential coefficients in analogy to (2.2.15.13) [see also (2.2.12.5)]. The most important contribution to the EFG comes from a region close to the nucleus of interest, where only the L = 2 terms are needed (Herzig, 1985). In the limit of vanishing radius (the position of the nucleus), the asymptotic form of the potential can be used, and this procedure yields (Schwarz et al., 1990) expressions involving the spherical Bessel function. The first term in (2.2.15.15) (called the valence EFG) corresponds to the integral over the respective atomic sphere (with radius R). The second and third terms in (2.2.15.15) (called the lattice EFG) arise from the boundary-value problem and from the charge distribution outside the sphere considered. Note that our definition of the lattice EFG differs from that based on the point-charge model (Kaufmann & Vianden, 1979). With these definitions the tensor components are given in terms of the potential coefficients, where the index M combines m and the parity p. Note that the prefactors depend on the normalization used for the spherical harmonics.
The non-spherical components of the potential come from the non-spherical charge density. For the EFG only the L = 2 terms (in the potential) are needed. If the site symmetry does not contain such a non-vanishing term (as, for example, in a cubic system, where no L = 2 term appears in the lowest combination), the corresponding EFG vanishes by definition. According to the Gaunt numbers in (2.2.15.14), only a few non-vanishing terms remain (ignoring f orbitals), such as the pp, dd or sd combinations (for f orbitals, pf and ff would appear), where this shorthand notation denotes the products of the two radial functions. The sd term is often small and thus is not relevant to the interpretation. This decomposition of the density can be used to partition the EFG into p-like and d-like contributions (illustrated for the principal component), where the superscripts p and d are a shorthand notation for the product of two p- or d-like functions.
From our experience we find that the first term in (2.2.15.15) is usually by far the most important and often a radial range up to the first node in the corresponding radial function is all that contributes. In this case the contribution from the other two terms is rather small (a few per cent). For first-row elements, however, which have no node in their 2p functions, this is no longer true and thus the first term amounts only to about 50–70%.
In some cases interpretation is simplified by defining a so-called asymmetry count, illustrated below for the oxygen sites in YBa2Cu3O7 (Schwarz et al., 1990), the unit cell of which is shown in Fig. 2.2.15.1.
Figure 2.2.15.1. Unit cell of the high-temperature superconductor YBa2Cu3O7 with four non-equivalent oxygen sites.
In this case essentially only the O 2p orbitals contribute to the O EFG. Inside the oxygen spheres (all taken with a radius of 0.82 Å) we can determine the partial charges corresponding to the p_x, p_y and p_z orbitals, denoted in short as the p_x, p_y and p_z charges.
With these definitions we can define the p-like asymmetry count as the mean of two of the p occupation numbers minus the third, e.g. (n_px + n_py)/2 − n_pz for the zz component, and obtain the proportionality V_zz proportional to the expectation value of 1/r³ taken with the p orbitals times this asymmetry count. A similar equation can be defined for the d orbitals. The 1/r³ factor enhances the EFG contribution from the density anisotropies close to the nucleus. Since the radial wavefunctions behave near the origin as r^l, the p orbitals are more sensitive than the d orbitals. Therefore even a very small p anisotropy can cause an EFG contribution, provided that the asymmetry count is enhanced by a large expectation value.
Often the anisotropy in the p_x, p_y and p_z occupation numbers can be traced back to the electronic structure. Such a physical interpretation is illustrated below for the four non-equivalent oxygen sites in YBa2Cu3O7 (Table 2.2.15.1). Let us focus first on O1, the oxygen atom that forms the linear chain with the Cu1 atoms along the b axis. In this case, the p_y orbital of O1 points towards Cu1 and forms a covalent bond, leading to bonding and antibonding states, whereas the other two p orbitals have no bonding partner and thus are essentially nonbonding. Part of the corresponding antibonding states lies above the Fermi energy and thus is not occupied, leading to a smaller p_y charge of 0.91 e, in contrast to the fully occupied nonbonding states with occupation numbers around 1.2 e. (Note that only a fraction of the charge stemming from the oxygen 2p orbitals is found inside the relatively small oxygen sphere.) This anisotropy causes a finite asymmetry count [(2.2.15.18)] that leads – according to (2.2.15.19) – to a corresponding EFG.
Table 2.2.15.1. Partial O 2p charges (in electrons) and electric field gradient tensor O EFG (in 10^21 V m−2) for YBa2Cu3O7. Numbers in bold represent the main deviation from spherical symmetry in the charges and the related principal component of the EFG tensor.

| Atom | p_x | p_y | p_z | V_aa | V_bb | V_cc |
|---|---|---|---|---|---|---|
| O1 | 1.18 | **0.91** | 1.25 | −6.1 | **18.3** | −12.2 |
| O2 | **1.01** | 1.21 | 1.18 | **11.8** | −7.0 | −4.8 |
| O3 | 1.21 | **1.00** | 1.18 | −7.0 | **11.9** | −4.9 |
| O4 | 1.18 | 1.19 | **0.99** | −4.7 | −7.0 | **11.7** |
In this simple case, the anisotropy in the charge distribution, given here by the different p occupation numbers, is directly proportional to the EFG, which is given with respect to the crystal axes and is thus labelled V_aa, V_bb and V_cc (Table 2.2.15.1). The principal component of the EFG lies in the direction where the p occupation number is smallest, i.e. where the density has its highest anisotropy. The other oxygen atoms behave very similarly: O2, O3 and O4 have a near neighbour in the a, b and c direction, respectively, but not in the other two directions. Consequently, the occupation number is lower in the direction in which the bond is formed, whereas it is normal (around 1.2 e) in the other two directions, and the principal axis falls in the direction of the low occupation. The higher the anisotropy, the larger the EFG (compare O1 with the other three oxygen sites). Excellent agreement with experiment is found (Schwarz et al., 1990). In a more complicated situation, where p and d contributions to the EFG occur [see (2.2.15.17)], often with opposite sign, the interpretation can be more difficult [see e.g. the copper sites in YBa2Cu3O7; Schwarz et al. (1990)].
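The proportionality between asymmetry count and EFG can be checked directly against the values of Table 2.2.15.1. Assuming the asymmetry count along axis i is the mean of the other two p occupations minus n_i, the sign of every EFG component follows the sign of the corresponding count, and the principal axis coincides with the direction of lowest p occupation:

```python
# Partial O 2p charges (electrons) and EFG components (10^21 V/m^2)
# from Table 2.2.15.1 for YBa2Cu3O7.
data = {                    # atom: ((n_a, n_b, n_c), (Vaa, Vbb, Vcc))
    'O1': ((1.18, 0.91, 1.25), (-6.1, 18.3, -12.2)),
    'O2': ((1.01, 1.21, 1.18), (11.8, -7.0, -4.8)),
    'O3': ((1.21, 1.00, 1.18), (-7.0, 11.9, -4.9)),
    'O4': ((1.18, 1.19, 0.99), (-4.7, -7.0, 11.7)),
}

for atom, (n, V) in data.items():
    # asymmetry count along axis i: mean of the other two minus n_i
    dn = [(n[j] + n[k]) / 2 - n[i]
          for i, j, k in ((0, 1, 2), (1, 2, 0), (2, 0, 1))]
    # each EFG component has the same sign as its asymmetry count
    assert all(a * b > 0 for a, b in zip(dn, V))
    # the principal axis is the direction of lowest p occupation
    assert max(range(3), key=lambda i: abs(V[i])) == n.index(min(n))
    print(atom, [round(x, 3) for x in dn])
```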
The importance of semi-core states has been illustrated for rutile, where the proper treatment of 3p and 4p states is essential to finding good agreement with experiment (Blaha et al., 1992). The orthogonality between p-like bands belonging to different principal quantum numbers (3p and 4p) is important and can be treated, for example, by means of local orbitals [see (2.2.12.4)].
In many simple cases, the off-diagonal elements of the EFG tensor vanish due to symmetry, but if they don't, diagonalization of the EFG tensor is required, which defines the orientation of the principal axis of the tensor. Note that in this case the orientation is given with respect to the local coordinate axes (see Section 2.2.13) in which the components are defined.
### 2.2.16. Examples
The general concepts described above are used in many band-structure applications and thus can be found in the corresponding literature. Here only a few examples are given in order to illustrate certain aspects.
#### 2.2.16.1. F.c.c. copper
For the simple case of an element, namely copper in the f.c.c. structure, the band structure is shown in Fig. 2.2.16.1 along the symmetry direction from Γ to X. The character of the bands can be illustrated by showing for each band state the crucial information that is contained in the wavefunctions. In the LAPW method (Section 2.2.12), the wavefunction is expanded in atomic-like functions inside the atomic spheres (partial waves), and thus a spatial decomposition of the associated charge into its portions of ℓ-like charge (s-, p-, d-like) inside the Cu sphere, q_ℓ^Cu, provides such a quantity. Fig. 2.2.16.1 shows for each state a circle the radius of which is proportional to the ℓ-like charge of that state. The band originating from the Cu 4s and 4p orbitals shows an approximately free-electron behaviour and thus a k² energy dependence, but it hybridizes with one of the d bands in the middle of the Δ direction, and thus the ℓ-like character changes along this direction.
Figure 2.2.16.1. Character of energy bands of f.c.c. copper in the Δ direction. The radius of each circle is proportional to the respective partial charge of the given state.
This can easily be understood from a group-theoretical point of view. Since the d states in an octahedral environment split into the t2g and eg manifold, the d bands can be further partitioned into these two subsets, as illustrated in Fig. 2.2.16.2. The s band ranges from about −9.5 eV below the Fermi energy to about 2 eV above it. In the Δ direction, the s band has Δ1 symmetry, the same as one of the d bands from the eg manifold, which consists of the dz² and dx²−y² orbitals. As a consequence of the 'non-crossing rule', the two states, both with Δ1 symmetry, must split due to the quantum-mechanical interaction between states with the same symmetry. This leads to the avoided crossing seen in the middle of the Δ direction (Fig. 2.2.16.1). Therefore the lowest band starts out as an 's band' but ends near X as a 'd band'. This also shows that bands belonging to different irreducible representations (small representations) may cross. The fact that a representation at Γ (for example Γ12) splits into the subgroups Δ1 and Δ2 is an example of the compatibility relations. In addition, group-theoretical arguments can be used (Altmann, 1994) to show that in certain symmetry directions the bands must enter the face of the BZ with zero slope.
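The avoided crossing between two bands of the same symmetry can be mimicked by a generic two-level model (a textbook construction, not taken from this chapter): two bands that would cross are coupled by a constant matrix element V, and the eigenvalues of the 2×2 Hamiltonian repel each other, with a minimum separation of 2|V| at the bare crossing point.

```python
import math

def two_band(e1, e2, V):
    """Eigenvalues of the 2x2 Hamiltonian [[e1, V], [V, e2]]."""
    mean = 0.5 * (e1 + e2)
    half = math.sqrt((0.5 * (e1 - e2)) ** 2 + V ** 2)
    return mean - half, mean + half

V = 0.2                      # arbitrary coupling strength (model units)
splittings = []
for i in range(101):
    k = i / 100
    e_s = 2.0 * k            # rising free-electron-like band
    e_d = 1.0                # flat d-like band; bare crossing at k = 0.5
    lo, hi = two_band(e_s, e_d, V)
    splittings.append(hi - lo)

print(min(splittings))       # minimum separation = 2*V at the avoided crossing
```

If the two bands had different symmetry, V would vanish identically and the bands would simply cross.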
Figure 2.2.16.2. Decomposition of the Cu d bands into the t2g and eg manifold. The radius of each circle is proportional to the corresponding partial charge.
Note that in a site-centred description of the wavefunctions a similar ℓ-like decomposition of the charge can be defined (without an interstitial term), but here the partial charges have a different meaning than in the spatial decomposition. In one case (e.g. LAPW), q_ℓ^t refers to the partial charge of ℓ-like character inside sphere t, while in the other case (LCAO), it means the ℓ-like charge coming from orbitals centred at site t. For the main components (for example Cu d) these two procedures will give roughly similar results, but the small components have quite a different interpretation. For this purpose, consider an orbital that is centred on the neighbouring site j but whose tail enters the atomic sphere i. In the spatial representation this tail coming from the j site must be represented by the (s, p, d etc.) partial waves inside sphere i and consequently will be associated with site i, leading to a small partial charge component. This situation is sometimes called the off-site component, in contrast to the on-site component, which appears at its own site or in its own sphere, depending on the representation, site-centred or spatially confined.
#### 2.2.16.2. The rutile TiO2
The well known rutile structure (e.g. TiO2) is tetragonal (see Fig. 2.2.16.3), with the basis consisting of the metal atoms at the Wyckoff position 2a, (0, 0, 0) and (1/2, 1/2, 1/2), and the anions at the position 4f, located at ±(u, u, 0) and ±(u + 1/2, 1/2 − u, 1/2), with a typical value of about 0.3 for the internal coordinate u. Rutile belongs to the non-symmorphic space group P4_2/mnm, in which the metal positions are transformed into each other by a rotation by 90° around the crystal c axis followed by a non-primitive translation of (1/2, 1/2, 1/2). The two metal positions at the centre and at the corner of the unit cell are equivalent when the surrounding octahedra are properly rotated. The metal atoms are octahedrally coordinated by anions which, however, do not form an ideal octahedron. The distortion depends on the structure parameters a, c/a and u, and results in two different metal–anion distances, namely the apical distance d_A (the height of the octahedron along its local z axis) and the equatorial distance d_B (set by the basal spacing of the octahedron). For a certain value u* = (1/4)[1 + c²/(2a²)] the two distances d_A and d_B become equal. For this special value and an ideal c/a ratio, the basal plane of the octahedron is quadratic and the two distances are equal. An ideal octahedral coordination is thus obtained with c/a = 2 − √2 ≈ 0.586 and u = 1 − 1/√2 ≈ 0.293. Although the actual coordination of the metal atoms deviates from the ideal octahedron (as in all other systems that crystallize in the rutile structure), we still use this concept for symmetry arguments and call it octahedral coordination.
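This geometry can be checked numerically. In the sketch below, the lattice parameters a = 4.594 Å and c = 2.959 Å are approximate literature values for TiO2, used only for illustration; the distance formulas d_A = √2·u·a and d_B = [2a²(1/2 − u)² + c²/4]^(1/2) follow from the atomic positions, and at u* = (1/4)[1 + c²/(2a²)] the apical and equatorial distances coincide.

```python
import math

def rutile_distances(a, c, u):
    """Apical and equatorial metal-anion distances in the rutile structure."""
    d_apical = math.sqrt(2.0) * u * a
    d_equatorial = math.sqrt(2.0 * a**2 * (0.5 - u)**2 + c**2 / 4.0)
    return d_apical, d_equatorial

a, c = 4.594, 2.959                        # approximate TiO2 lattice parameters (Angstrom)
u_star = 0.25 * (1.0 + c**2 / (2.0 * a**2))

dA, dB = rutile_distances(a, c, u_star)
print(u_star, dA, dB)                      # at u = u*, the two distances coincide
```

Substituting the ideal ratio c/a = 2 − √2 into the expression for u* reproduces u = 1 − 1/√2 ≈ 0.293, the ideal-octahedron value quoted above.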
Figure 2.2.16.3. The local coordinate system in rutile for titanium (small spheres) and oxygen (large spheres).
The concept of a local coordinate system is illustrated for rutile (TiO2) from two different aspects, namely the crystal harmonics expansion (see Section 2.2.13) and the interpretation of chemical bonding (for further details see Sorantin & Schwarz, 1992).
(i) The expansion in crystal harmonics. We know that titanium occupies the Wyckoff position 2a with point group mmm. From Table 2.2.13.1 we see that for point group mmm (listed under the orthorhombic structure) we must choose the x axis parallel to [110], the y axis parallel to [−110] and the z axis parallel to [001]. We can transform the global coordinate system (i.e. that of the unit cell) into this local coordinate system around Ti. The first LM combinations that appear in the series (2.2.12.5) are then (0, 0), (2, 0), (2, 2+), (4, 0), (4, 2+), (4, 4+) etc. (ii) The interpretation of bonding. The second reason for choosing a local coordinate system is that it allows the use of symmetry-adapted orbitals for interpreting bonding, interactions or crystal-field effects. For this purpose, one likes to have the axes pointing to the six oxygen ligands, i.e. the x and y axes towards the oxygen atoms in the octahedral basal plane, and the z axis towards the apical oxygen (Fig. 2.2.16.3). The Cartesian x and y axes, however, are not exactly (but only approximately) directed towards the oxygen ligands, due to the rectangular distortion of the octahedral basal plane. For oxygen in TiO2 with point group mm2, the two types of local system are identical and are shown in Fig. 2.2.16.3 for the position (u, u, 0). The z axis coincides with that of the Ti atom, while the x axis points to the neighbouring oxygen of the basal plane in the octahedron around Ti at the origin. Only in this local coordinate system are the orbitals arranged in the usual way for an octahedron, where the d orbitals split (into the three orbitals of t2g and the two of eg symmetry) and thus allow an easy interpretation of the interactions; e.g. one of the two eg orbitals, namely the Ti dz², can form a σ bond with the O pz orbital.
#### 2.2.16.3. Core electron spectra
In excitations involving core electrons, simplifications are possible that allow an easier interpretation. As one example, (soft) X-ray emission (XES) or absorption (XAS) spectra are briefly discussed. In the one-electron picture, the XES process can be described as sketched in Fig. 2.2.16.4. First a core electron of atom A in the state n′ℓ′ is knocked out (by electrons or photons), and then a transition occurs between the occupied valence states at energy E and the core hole (transitions between inner core levels are ignored).
Figure 2.2.16.4. Schematic transitions in X-ray emission and absorption spectra.
According to Fermi's golden rule, the intensity of such a transition can be described by I(E) ∝ Σ_ℓ W_ℓℓ′ N_ℓ^A(E) M²(ℓ, E) δ(E_n′ℓ′ + ħω − E), where W_ℓℓ′ comes from the integral over the angular components (Table 2.2.16.1) and contains the selection rule Δℓ = ±1, N_ℓ^A(E) is the local (within atomic sphere A) partial (ℓ-like) DOS, M²(ℓ, E) is the radial transition probability [see (2.2.16.6) below], and the last term takes the energy conservation into account.
Table 2.2.16.1. W_ℓℓ′ factors for X-ray emission spectra showing the selection rule Δℓ = ±1
| ℓ′ \ ℓ | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| 0 | | 1/3 | | | |
| 1 | 1 | | 2/5 | | |
| 2 | | 2/3 | | 3/7 | |
| 3 | | | 3/5 | | 4/11 |
The M²(ℓ, E) are defined as the dipole transition (with the dipole operator r) probability between the valence state at E and the core state characterized by the quantum numbers n′ℓ′ [equation (2.2.16.6)]. In this derivation one makes use of the fact that core states are completely confined inside the atomic sphere. Therefore the integral, which should be taken over the entire space, can be restricted to one atomic sphere (namely A), since the core wavefunction, and thus the integrand, vanishes outside this sphere. This is also the reason why XES (or XAS) are related to N_ℓ^A(E), the local DOS weighted with the ℓ-like charge within the atomic sphere A.
The interpretation of XES intensities is as follows. Besides the prefactor from Fermi's golden rule, the intensity is governed by the selection rule and by energy conservation. In addition, it depends on the number of available states at E which reside inside sphere A and have an ℓ-like contribution, times the probability for the transition from the valence band to the core hole under energy conservation. For an application, see for example the comparison between theory and experiment for the compounds NbC and NbN (Schwarz, 1977).
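The structure of this golden-rule expression can be sketched with toy numbers. Everything below — the DOS values, the constant radial matrix elements and the assignment of the tabulated angular factors to (ℓ, ℓ′) pairs following the Δℓ = ±1 pattern — is invented or inferred for illustration; the point is only that a K spectrum (core ℓ′ = 0) picks out the local p-DOS.

```python
# Toy XES spectrum: I(E) ~ sum_l W[l, l_core] * N_l(E) * M2_l(E).
# The dipole selection rule (Delta l = +/-1) is encoded in W.
W = {(1, 0): 1/3, (0, 1): 1.0, (2, 1): 2/5, (1, 2): 2/3, (3, 2): 3/7}

def xes_intensity(dos_by_l, m2_by_l, l_core):
    """Intensity per energy point for a core hole of angular momentum l_core."""
    n_pts = len(next(iter(dos_by_l.values())))
    spectrum = [0.0] * n_pts
    for l, dos in dos_by_l.items():
        w = W.get((l, l_core), 0.0)      # zero unless l = l_core +/- 1
        for i in range(n_pts):
            spectrum[i] += w * dos[i] * m2_by_l[l][i]
    return spectrum

# Invented local DOS on a 4-point energy grid; constant radial M^2:
dos = {0: [1.0, 0.5, 0.2, 0.1], 1: [0.1, 0.4, 0.8, 0.3], 2: [0.0, 0.1, 0.5, 0.9]}
m2 = {l: [1.0] * 4 for l in dos}

spec = xes_intensity(dos, m2, l_core=0)  # K edge: only the p-DOS survives
print(spec)
```

For an L edge (ℓ′ = 1), both the s- and d-DOS would contribute, with the corresponding angular weights.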
Note again that the present description is based on an atomic sphere representation with partial waves inside the spheres, in contrast to an LCAO-like treatment with site-centred basis functions. In the latter, an equivalent formalism can be defined, which differs in details, especially for the small components (off-site contributions). If the tails of an orbital enter a neighbouring sphere and are crucial for the interpretation of XES, there is a semantic difference between the two schemes, as discussed above in connection with f.c.c. Cu in Section 2.2.16.1. In the present framework, all contributions come exclusively from the sphere where the core hole resides, whereas in an LCAO representation 'cross transitions' from the valence states on one atom to the core hole of a neighbouring atom may be important. The latter contributions must be (and are) included in the partial waves within the sphere in schemes such as LAPW. There is no physical difference between the two descriptions.
In XES, spectra are interpreted on the basis of results from ground-state calculations, although there could be relaxations due to the presence of a core hole. As early as 1979, von Barth and Grossmann formulated a 'final state rule' for XES in metallic systems (von Barth & Grossmann, 1979). In this case, the initial state is one with a missing core electron (core hole), whereas the final state is close to the ground state, since the hole in the valence bands (after a valence electron has filled the core hole) has a very short lifetime and is very quickly filled by other valence electrons. They applied time-dependent perturbation theory and could show by model calculations that the main XES spectrum can be explained by ground-state results, whereas the satellite spectrum (starting with two core holes and ending with one) requires a treatment of the core-hole relaxation. This example illustrates the importance of the relevant physical process in experiments related to the energy-band structure: it may not always be just the ground state that is involved, and sometimes excited states must be considered.
### 2.2.17. Conclusion
There are many more applications of band theory to solids and thus an enormous amount of literature has not been covered here. In this chapter, an attempt has been made to collect relevant concepts, definitions and examples from group theory, solid-state physics and crystallography in order to understand symmetry aspects in combination with a quantum-mechanical treatment of the electronic structure of solids.
### Acknowledgements
The author wishes to thank the following persons who contributed to this chapter: P. Blaha, the main author of WIEN; J. Luitz, for help with the figures; and P. Herzig, with whom the author discussed the group-theoretical aspects.
### References
Akai, H., Akai, M., Blügel, S., Drittler, B., Ebert, H., Terakura, K., Zeller, R. & Dederichs, P. H. (1990). Theory of hyperfine interactions in metals. Prog. Theor. Phys. Suppl. 101, 11–77.
Altmann, S. L. (1994). Band theory of solids: An introduction from the view of symmetry. Oxford: Clarendon Press.
Andersen, O. K. (1975). Linear methods in band theory. Phys. Rev. B, 12, 3060–3083.
Barth, U. von & Grossmann, G. (1979). The effect of the core hole on X-ray emission spectra in simple metals. Solid State Commun. 32, 645–649.
Barth, U. von & Hedin, L. (1972). A local exchange-correlation potential for the spin-polarized case: I. J. Phys. C, 5, 1629–1642.
Blaha, P., Schwarz, K. & Dederichs, P. H. (1988). First-principles calculation of the electric field gradient in hcp metals. Phys. Rev. B, 37, 2792–2796.
Blaha, P., Schwarz, K. & Herzig, P. (1985). First-principles calculation of the electric field gradient of Li3N. Phys. Rev. Lett. 54, 1192–1195.
Blaha, P., Schwarz, K., Sorantin, P. I. & Trickey, S. B. (1990). Full-potential linearized augmented plane wave programs for crystalline systems. Comput. Phys. Commun. 59, 399–415.
Blaha, P., Singh, D. J., Sorantin, P. I. & Schwarz, K. (1992). Electric field gradient calculations for systems with large extended core state contributions. Phys. Rev. B, 46, 5849–5852.
Blöchl, P. E. (1994). Projector augmented-wave method. Phys. Rev. B, 50, 17953–17979.
Bouckaert, L. P., Smoluchowski, R. & Wigner, E. (1936). Theory of Brillouin zones and symmetry properties of wavefunctions in crystals. Phys. Rev. 50, 58–67.
Bradley, C. J. & Cracknell, A. P. (1972). The mathematical theory of symmetry in solids. Oxford: Clarendon Press.
Car, R. & Parrinello, M. (1985). Unified approach for molecular dynamics and density-functional theory. Phys. Rev. Lett. 55, 2471–2474.
Ceperley, D. M. & Alder, B. J. (1980). Ground state of the electron gas by a stochastic method. Phys. Rev. Lett. 45, 566–569.
Colle, R. & Salvetti, O. (1990). Generalisation of the Colle–Salvetti correlation energy method to a many determinant wavefunction. J. Chem. Phys. 93, 534–544.
Condon, E. U. & Shortley, G. H. (1953). The theory of atomic spectra. Cambridge University Press.
Dreizler, R. M. & Gross, E. K. U. (1990). Density functional theory. Berlin, Heidelberg, New York: Springer-Verlag.
Dufek, P., Blaha, P. & Schwarz, K. (1995a). Theoretical investigation of the pressure induced metallization and the collapse of the antiferromagnetic states in NiI2. Phys. Rev. B, 51, 4122–4127.
Dufek, P., Blaha, P. & Schwarz, K. (1995b). Determination of the nuclear quadrupole moment of 57Fe. Phys. Rev. Lett. 75, 3545–3548.
Ellis, D. E., Guenzburger, D. & Jansen, H. B. (1983). Electric field gradient and electronic structure of linear-bonded halide compounds. Phys. Rev. B, 28, 3697–3705.
Godby, R. W., Schlüter, M. & Sham, L. J. (1986). Accurate exchange-correlation potential for silicon and its discontinuity of addition of an electron. Phys. Rev. Lett. 56, 2415–2418.
Gunnarsson, O. & Lundqvist, B. I. (1976). Exchange and correlation in atoms, molecules, and solids by the spin-density-functional formalism. Phys. Rev. B, 13, 4274–4298.
Hedin, L. & Lundqvist, B. I. (1971). Explicit local exchange-correlation potentials. J. Phys. C, 4, 2064–2083.
Herzig, P. (1985). Electrostatic potentials, field gradients from a general crystalline charge density. Theoret. Chim. Acta, 67, 323–333.
Hoffmann, R. (1988). Solids and surfaces: A chemist's view of bonding in extended structures. New York: VCH Publishers, Inc.
Hohenberg, P. & Kohn, W. (1964). Inhomogeneous electron gas. Phys. Rev. 136, B864–B871.
Hybertsen, M. S. & Louie, G. (1984). Non-local density functional theory for the electronic and structural properties of semiconductors. Solid State Commun. 51, 451–454.
International Tables for Crystallography (2001). Vol. B. Reciprocal space, edited by U. Shmueli, 2nd ed. Dordrecht: Kluwer Academic Publishers.
International Tables for Crystallography (2005). Vol. A. Space-group symmetry, edited by Th. Hahn, 5th ed. Heidelberg: Springer.
Janak, J. F. (1978). Proof that ∂E/∂n_i = ε_i in density-functional theory. Phys. Rev. B, 18, 7165–7168.
Kaufmann, E. N. & Vianden, R. J. (1979). The electric field gradient in noncubic metals. Rev. Mod. Phys. 51, 161–214.
Koelling, D. D. & Arbman, G. O. (1975). Use of energy derivative of the radial solution in an augmented plane wave method: application to copper. J. Phys. F Metal Phys. 5, 2041–2054.
Koelling, D. D. & Harmon, B. N. (1977). A technique for relativistic spin-polarized calculations. J. Phys. C Solid State Phys. 10, 3107–3114.
Kohn, W. & Rostocker, N. (1954). Solution of the Schrödinger equation in periodic lattice with an application to metallic lithium. Phys. Rev. 94, A1111–A1120.
Kohn, W. & Sham, L. J. (1965). Self-consistent equations including exchange and correlation effects. Phys. Rev. 140, A1133–A1138.
Korringa, J. (1947). On the calculation of the energy of a Bloch wave in a metal. Physica, 13, 392–400.
Kurki-Suonio, K. (1977). Symmetry and its implications. Isr. J. Chem. 16, 115–123.
Loucks, T. L. (1967). Augmented plane wave method. New York, Amsterdam: W. A. Benjamin, Inc.
Methfessel, M. & Frota-Pessoa, S. (1990). Real-space method for calculation of the electric-field gradient: Comparison with K-space results. J. Phys. Condens. Matter, 2, 149–158.
Meyer, B., Hummler, K., Elsässer, C. & Fähnle, M. (1995). Reconstruction of the true wavefunction from the pseudo-wavefunctions in a crystal and calculation of electric field gradients. J. Phys. Condens. Matter, 7, 9201–9218.
Ordejon, P., Drabold, D. A., Martin, R. A. & Grumbach, M. P. (1995). Linear system-size scaling methods for electronic-structure calculations. Phys. Rev. B, 51, 1456–1476.
Parr, R., Donnelly, R. A., Levy, M. & Palke, W. A. (1978). Electronegativity: the density functional viewpoint. J. Chem. Phys. 68, 3801–3807.
Perdew, J. P. (1986). Density functional theory and the band gap problem. Int. J. Quantum Chem. 19, 497–523.
Perdew, J. P., Burke, K. & Ernzerhof, M. (1996) Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868.
Perdew, J. P. & Levy, M. (1983). Physical content of the exact Kohn–Sham orbital energies: band gaps and derivative discontinuities. Phys. Rev. Lett. 51, 1884–1887.
Petrilli, H. M., Blöchl, P. E., Blaha, P. & Schwarz, K. (1998). Electric-field-gradient calculations using the projector augmented wave method. Phys. Rev. B, 57, 14690–14697.
Petrilli, H. M. & Frota-Pessoa, S. (1990). Real-space method for calculation of the electric field gradient in systems without symmetry. J. Phys. Condens. Matter, 2, 135–147.
Pisani, C. (1996). Quantum-mechanical ab-initio calculation of properties of crystalline materials. Lecture notes in chemistry, 67, 1–327. Berlin, Heidelberg, New York: Springer-Verlag.
Pyykkö, P. (1992). The nuclear quadrupole moments of the 20 first elements: High-precision calculations on atoms and small molecules. Z. Naturforsch. A, 47, 189–196.
Sandratskii, L. M. (1990). Symmetry properties of electronic states of crystals with spiral magnetic order. Solid State Commun. 75, 527–529.
Schwarz, K. (1977). The electronic structure of NbC and NbN. J. Phys. C Solid State Phys. 10, 195–210.
Schwarz, K., Ambrosch-Draxl, C. & Blaha, P. (1990). Charge distribution and electric field gradients in YBa2Cu3O7−x. Phys. Rev. B, 42, 2051–2061.
Schwarz, K. & Blaha, P. (1992). Ab initio calculations of the electric field gradients in solids in relation to the charge distribution. Z. Naturforsch. A, 47, 197–202.
Schwarz, K. & Blaha, P. (1996). Description of an LAPW DF program (WIEN95). In Lecture notes in chemistry, Vol. 67, Quantum-mechanical ab initio calculation of properties of crystalline materials, edited by C. Pisani. Berlin, Heidelberg, New York: Springer-Verlag.
Schwarz, K. & Herzig, P. (1979). The sensitivity of partially filled f bands to configuration and relativistic effects. J. Phys. C Solid State Phys. 12, 2277–2288.
Seitz, F. (1937). On the reduction of space groups. Ann. Math. 37, 17–28.
Singh, D. J. (1994). Plane waves, pseudopotentials and the LAPW method. Boston, Dordrecht, London: Kluwer Academic Publishers.
Skriver, H. L. (1984). The LMTO method. Springer series in solid-state sciences, Vol. 41. Berlin, Heidelberg, New York, Tokyo: Springer.
Slater, J. C. (1937). Wavefunctions in a periodic crystal. Phys. Rev. 51, 846–851.
Slater, J. C. (1974). The self-consistent field for molecules and solids. New York: McGraw-Hill.
Sorantin, P. & Schwarz, K. (1992). Chemical bonding in rutile-type compounds. Inorg. Chem. 31, 567–576.
Springborg, M. (1997). Density-functional methods in chemistry and material science. Chichester, New York, Weinheim, Brisbane, Singapore, Toronto: John Wiley and Sons Ltd.
Vosko, S. H., Wilk, L. & Nusair, M. (1980). Accurate spin-dependent electron liquid correlation energies for local spin density calculations. Can. J. Phys. 58, 1200–1211.
Weinert, M. (1981). Solution of Poisson's equation: beyond Ewald-type methods. J. Math. Phys. 22, 2433–2439.
Williams, A. R., Kübler, J. & Gelatt, C. D. Jr (1979). Cohesive properties of metallic compounds: Augmented-spherical-wave calculations. Phys. Rev. B, 19, 6094–6118.
Winkler, B., Blaha, P. & Schwarz, K. (1996). Ab initio calculation of electric field gradient tensors of forsterite. Am. Mineral. 81, 545–549.
http://www.justinmath.com/improper-integrals/ | # Improper Integrals
Improper integrals are integrals which have bounds or function values that extend to positive or negative infinity. For example, $\int_1^\infty \frac{1}{x^2} \, dx$ is an improper integral because its upper bound is at infinity. Likewise, $\int_0^1 \frac{1}{\sqrt{x}} \, dx$ is an improper integral because $\frac{1}{\sqrt{x}}$ approaches infinity as $x$ approaches the lower bound of integration, $0$.
It seems intuitive that improper integrals should always come out to infinity, since an infinitely long or infinitely high function would seemingly have infinite area. However, although this can sometimes happen, it is not always the case. In fact, both of the two improper integrals given as examples in the previous paragraph evaluate to normal, non-infinite results. As such, we say that these integrals converge.
\begin{align*} \int_1^\infty \frac{1}{x^2} \, dx &= \left[ −\frac{1}{x}\right]_1^\infty \\ &= −\frac{1}{\infty} − \left(−\frac{1}{1} \right) \\ &= 0 + 1 \\ &= 1 \end{align*}
\begin{align*} \int_0^1 \frac{1}{\sqrt{x}} \, dx &= \left[ 2\sqrt{x} \right]_0^1 \\ &= 2\sqrt{1}−2\sqrt{0} \\ &=2−0 \\ &= 2 \end{align*}
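Both results can be sanity-checked numerically (a side check, not part of the original derivation): truncate the infinite upper bound at a large $R$ for the first integral (the tail beyond $R$ contributes only $1/R$), and use a midpoint rule for the second, since it never samples the endpoint $x=0$.

```python
def midpoint(f, a, b, n):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Truncate the infinite upper bound at R = 10_000; the omitted tail is 1/R.
I1 = midpoint(lambda x: 1 / x**2, 1, 10_000, 200_000)

# The midpoint rule never evaluates at x = 0, so the vertical asymptote of
# 1/sqrt(x) is skirted; the sum approaches the exact value 2 as n grows.
I2 = midpoint(lambda x: x ** -0.5, 0, 1, 200_000)

print(I1, I2)   # close to 1 and 2
```

The convergence near the asymptote is slow (the error behaves like a power of the step size), which is why a fairly fine grid is used.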
If the function decreases quickly enough as it extends out to infinity, then the area underneath it can come out to a finite number. Likewise, if a function blows up to infinity slowly enough as it approaches an asymptote, then the area underneath it can come out to a finite number.
Below, we integrate the function $f(x)=\frac{1}{x}$, which decreases more slowly as it extends out to infinity and blows up to infinity more quickly as it approaches its vertical asymptote $x=0$. The integrals of this function do indeed integrate to infinity. As such, we say that these integrals diverge.
\begin{align*} \int_1^\infty \frac{1}{x} \, dx &= \left[ \ln x \right]_1^\infty \\ &= \ln \infty − \ln 1 \\ &= \infty − 0 \\ &= \infty\end{align*}
\begin{align*} \int_0^1 \frac{1}{x} \, dx &= \left[ \ln x \right]_0^1 \\ &= \ln 1 − \ln 0 \\ &= 0 − (−\infty) \\ &= \infty \end{align*}
Sometimes, a function may blow up to infinity somewhere within the interval of integration, rather than at the bounds of integration. In such a case, we have to separate the integral across its discontinuities. For example, to compute the integral $\int_{-1}^2 \frac{1}{x^2} \, dx$, we may be tempted to ignore the singularity at $x=0$ and simply evaluate the antiderivative at the bounds. This leads us to an invalid result.
\begin{align*} \int_{−1}^2 \frac{1}{x^2} \, dx &= \left[ −\frac{1}{x} \right]_{−1}^2 \hspace{1.55cm} \mbox{(invalid)} \\ &= −\frac{1}{2} − \left( −\frac{1}{−1} \right) \hspace{.5cm} \mbox{(invalid)} \\ &= −\frac{1}{2} − 1 \hspace{1.8cm} \mbox{(invalid)} \\ &= −\frac{3}{2} \hspace{2.5cm} \mbox{(invalid)} \end{align*}
This result of negative area doesn’t make any sense, because the function $\frac{1}{x^2}$ is always positive!
In order to properly evaluate the integral $\int_{-1}^2 \frac{1}{x^2} \, dx$, we have to split it up across the singularity, into two separate integrals. The first integral spans from $x=-1$ to $x=0$ and consequently approaches $0$ from the negative side, so its computations involve $0^-$. The second integral spans from $x=0$ to $x=2$ and consequently approaches $0$ from the positive side, so its computations involve $0^+$.
\begin{align*} \int_{−1}^2 \frac{1}{x^2} \, dx &= \int_{−1}^0 \frac{1}{x^2} \, dx + \int_0^2 \frac{1}{x^2} \, dx \\ &= \left[ −\frac{1}{x} \right]_{−1}^{0^−} + \left[ −\frac{1}{x} \right]_{0^+}^2 \\ &= \left[ −\frac{1}{0^−} − \left( − \frac{1}{−1} \right) \right] + \left[ −\frac{1}{2} − \left( −\frac{1}{0^+} \right) \right] \\ &= \left[ −(−\infty) − 1 \right] + \left[ −\frac{1}{2} − (−\infty) \right] \\ &= ( \infty − 1 ) + \left( −\frac{1}{2} + \infty \right) \\ &= \infty + \infty \\ &= \infty \end{align*}
Now, we see that the integral actually diverges to infinity. This makes much more sense, since we know that it represents a region which contains a portion of infinite area.
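The divergence also shows up numerically (again an added side check): plugging the antiderivative into the bounds across the singularity reproduces the bogus $-\frac{3}{2}$, while midpoint Riemann sums over $[-1,2]$ grow without bound as the grid is refined.

```python
def midpoint_sum(f, a, b, n):
    """Midpoint Riemann sum of f over [a, b] with n subintervals."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 1 / x**2

# The invalid answer obtained by ignoring the singularity at x = 0:
F = lambda x: -1 / x
print(F(2) - F(-1))   # -1.5, impossible for a strictly positive integrand

# Riemann sums keep growing as n increases, signalling divergence:
sums = [midpoint_sum(f, -1, 2, n) for n in (100, 1_000, 10_000)]
print(sums)
```

The growth comes entirely from the cells nearest $x=0$, exactly the region the naive computation silently skips over.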
Lastly, below is an example of a more complicated integral which converges.
\begin{align*} \int_0^\infty \frac{e^{−\sqrt{x}} }{\sqrt{x} } \, dx &= \left[ −2e^{−\sqrt{x}} \right]_0^\infty \\ &= −2e^{−\infty} − \left( −2e^{0} \right) \\ &= 0 − \left( −2 \right) \\ &= 2 \end{align*}
## Practice Problems
Evaluate the improper integrals below. Solutions are given after each problem.
\begin{align*} 1) \hspace{.5cm} \int_1^\infty \frac{1}{x^5} \, dx \end{align*}
Solution:
\begin{align*} \frac{1}{4} \end{align*}
\begin{align*} 2) \hspace{.5cm} \int_3^\infty \frac{1}{\sqrt[3]{x} } \, dx \end{align*}
Solution:
\begin{align*} \infty \end{align*}
\begin{align*} 3) \hspace{.5cm} \int_5^\infty \frac{1}{x^2} \, dx \end{align*}
Solution:
\begin{align*} \frac{1}{5} \end{align*}
\begin{align*} 4) \hspace{.5cm} \int_{−\infty}^{−2} \frac{1}{(x−1)^4} \, dx \end{align*}
Solution:
\begin{align*} \frac{1}{81} \end{align*}
\begin{align*} 5) \hspace{.5cm} \int_0^1 \frac{1}{x−1} \, dx \end{align*}
Solution:
\begin{align*} -\infty \end{align*}
\begin{align*} 6) \hspace{.5cm} \int_{−3}^3 \frac{1}{(x+2)^5} \, dx \end{align*}
Solution:
\begin{align*} \infty - \infty \text{ (indeterminate)} \end{align*}
\begin{align*} 7) \hspace{.5cm} \int_\frac{3}{2}^{10} \frac{1}{\sqrt{4x−6} } \, dx \end{align*}
Solution:
\begin{align*} \sqrt{ \frac{17}{2} } \end{align*}
\begin{align*} 8) \hspace{.5cm} \int_3^\infty \frac{1}{(2x+1)^{3/2} } \, dx \end{align*}
Solution:
\begin{align*} \frac{1}{\sqrt{7}} \end{align*}
\begin{align*} 9) \hspace{.5cm} \int_0^\infty \frac{x}{(x^2+1)^2} \, dx \end{align*}
Solution:
\begin{align*} \frac{1}{2} \end{align*}
\begin{align*} 10) \hspace{.5cm} \int_0^\infty \frac{1}{\left( \sqrt{x}+1 \right)^2 \sqrt{x} } \, dx \end{align*}
Solution:
\begin{align*} 2 \end{align*}
\begin{align*} 11) \hspace{.5cm} \int_0^\infty e^{−x} \, dx \end{align*}
Solution:
\begin{align*} 1 \end{align*}
\begin{align*} 12) \hspace{.5cm} \int_0^\infty \frac{1}{1+x^2} \, dx \end{align*}
Solution:
\begin{align*} \frac{\pi}{2} \end{align*}
https://hsm.stackexchange.com/questions/3734/the-origins-of-complex-differentiation-integration | # The origins of complex differentiation/integration
What questions led to the invention of complex differentiation/integration? How were their definitions agreed upon?
Real differentiation/integration has an obvious meaning. To extend calculus to the complex numbers, why would this be done, is it even meaningful to call it 'differentiation'/'integration'?
There is no difference in differentiation. The derivative of a complex function of a complex variable is defined by the same formula $$f'(z)=\lim_{h\to 0} (f(z+h)-f(z))/h$$ as for the real variable. Concerning integration, one reason was the desire to investigate integrals of real functions. One of the consequences (and motivations) of Cauchy's theory is that it evaluates integrals of real functions on real intervals by manipulating integrals in the complex domain.
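What makes the complex limit stronger than the real one is that $h$ may approach $0$ from any direction in the plane. A small numerical illustration (added here, not part of the original answer): for $f(z)=z^2$ the difference quotient gives the same value whether $h$ is real or imaginary, while for $f(z)=\bar z$ the two directions disagree, so that function has no complex derivative.

```python
def diff_quotient(f, z, h):
    """Finite-difference approximation (f(z+h) - f(z)) / h."""
    return (f(z + h) - f(z)) / h

z = 1 + 2j
h = 1e-6

# f(z) = z^2: the limit (= 2z) is direction-independent.
sq = lambda w: w * w
d_real = diff_quotient(sq, z, h)        # h along the real axis
d_imag = diff_quotient(sq, z, h * 1j)   # h along the imaginary axis
print(d_real, d_imag)                   # both approximately 2 + 4j

# f(z) = conj(z): real and imaginary directions give +1 and -1.
conj = lambda w: w.conjugate()
print(diff_quotient(conj, z, h), diff_quotient(conj, z, h * 1j))
```

This direction-independence is exactly what the Cauchy–Riemann equations encode.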
But there is a more important and more profound reason. Many equations (differential and functional) can be solved in the form of power series. This was essentially discovered by Newton, and he considered it his main discovery in Calculus. A power series, if it converges anywhere except at $x=0$, converges in a disk in the complex plane, and it is impossible to understand its properties while staying in the real domain only. This is the most important motivation for extending Calculus to the complex domain.
Here is a simple example: we have the power series expansion $$f(x)=\frac{1}{1+x^2}=1-x^2+x^4-x^6+\ldots.$$ It converges in the real domain for $-1<x<1$ only. But the left-hand side $f(x)=1/(1+x^2)$ is a nice function on the whole real line. By looking at this function on the real line only, it is impossible to understand what happens at $\pm 1$, why the series suddenly stops converging. In the complex domain the reason becomes clear: $f(z)=1/(1+z^2)$ has poles at $z=\pm i$, so the disk of convergence about $0$ cannot have radius larger than $1$.
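Numerically (an illustration added here), partial sums of this series converge rapidly for $|x|<1$ and blow up for $|x|>1$, even though $1/(1+x^2)$ itself is perfectly smooth there — the radius of convergence is set by the distance to the poles at $\pm i$, invisible on the real line.

```python
def partial_sum(x, n_terms):
    """Partial sum of the series 1 - x^2 + x^4 - x^6 + ..."""
    return sum((-1) ** n * x ** (2 * n) for n in range(n_terms))

f = lambda x: 1 / (1 + x**2)

print(abs(partial_sum(0.5, 60) - f(0.5)))   # essentially zero: |x| < 1
print(abs(partial_sum(1.5, 60) - f(1.5)))   # astronomically large: |x| > 1
```

Inside the disk the error shrinks geometrically; outside, the terms themselves grow without bound.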
https://dsp.stackexchange.com/questions/15464/wavelet-thresholding/15487

# Wavelet thresholding
What is the difference between soft thresholding and hard thresholding? Where do we use soft and hard thresholding for denoising an image? I understand that in hard thresholding, the coefficients below the threshold value are set to zero and the values above the threshold are set to one. Please explain soft thresholding to me. Also, is the threshold value an intensity value of the image? For example, if the intensity values range between 0 and 255 and the threshold is taken as 100, then in hard thresholding the values below 100 are set to 0 and the values above 100 are retained. Is this correct? Please correct me.
• Google search finds this paper: fceia.unr.edu.ar/~jcgomez/wavelets/Donoho_1995.pdf You should try reading it and coming back with more specific questions. – MackTuesday Apr 7 '14 at 16:51
• Hard thresholding as you describe it is only correct for the coefficients set to zero; the other coefficients are left unadjusted. In soft thresholding, all coefficients are adjusted, based on MAD and the other elements of the equation. – Barnaby May 23 '15 at 9:25
For a given threshold $\lambda$ (that can be dependent on resolution level), and value of wavelet coefficient $d$, hard thresholding is defined as:
$D^H(d|\lambda)=\begin{cases} 0,& \text{for } |d| \leq \lambda\\ d,& \text{for } |d| > \lambda \end{cases}$
whereas soft thresholding is governed by following equation:
$D^S(d|\lambda)=\begin{cases} 0,& \text{for } |d| \leq \lambda\\ d-\lambda,& \text{for } d > \lambda \\ d+\lambda,& \text{for } d < -\lambda \\ \end{cases}$
Figure below depicts both cases:
The soft thresholding is also called wavelet shrinkage, as the values of both positive and negative coefficients are "shrunk" towards zero, in contrast to hard thresholding, which either keeps or removes coefficient values outright.
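Both rules translate directly into code. This is a small pure-Python sketch (the function names are mine, not from the answer; in practice you would vectorize this with NumPy over whole coefficient arrays):

```python
# Hard vs. soft thresholding of a wavelet coefficient d with threshold lam,
# following the two formulas above.

def hard_threshold(d, lam):
    # |d| <= lam -> 0; otherwise keep d unchanged.
    return 0.0 if abs(d) <= lam else d

def soft_threshold(d, lam):
    # |d| <= lam -> 0; otherwise shrink |d| towards zero by lam.
    if abs(d) <= lam:
        return 0.0
    return d - lam if d > 0 else d + lam

coeffs = [-3.0, -0.5, 0.2, 1.0, 2.5]
print([hard_threshold(d, 1.0) for d in coeffs])
print([soft_threshold(d, 1.0) for d in coeffs])
```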
In the case of image de-noising, you are not working strictly on "intensity values", but on wavelet coefficients. You probably remember that you can decompose your image into wavelet levels, as in the case of the lovely Lena test image. Assuming that the wavelet transform gives sparse coefficients, mostly close to zero, and that the noise level is lower than the wavelet coefficients, you can simply threshold them. If you wish, you can perform hard/soft thresholding on each decomposition level with a different value of $\lambda$. When that is done, you just have to reconstruct your image from all decomposition levels and voilà, the noise should be removed!
Below you have two examples of the de-noised image, via hard and soft thresholding respectively (same $\lambda$). Obviously soft thresholding gives a smoother image, if you can notice that at such a poor resolution ;) Courtesy of MATLAB.
https://conversationofmomentum.wordpress.com/2016/10/09/the-cusp-in-the-coffee-cup/

# The Cusp in the Coffee Cup
Disclaimer: it was a mug, not a coffee cup. I just like alliteration.
A few days ago, I went into the kitchen to make some milkshakes. I took two mugs from the cabinet and placed them on the worktop. Looking into the bottom of the mugs, I noticed an odd light pattern. Running around each of their bases was a circle of ‘seagulls’, one for each ceiling light in the kitchen. The white arrows on the picture opposite show the directions towards the lights; the dashed arrow corresponds to the light obscured by my hand. We can see that one seagull forms opposite each light source. What’s going on here?
Model
Let’s build a simple model of my kitchen. Situated on the worktop is a cylindrical mug of unit radius and height $c$. The bottom surface of the mug lies in the plane $z$ = 0, and the mug’s axis of rotational symmetry coincides with the $z$ axis.
Consider a single ray emerging from a light on the ceiling that makes angle $\theta$ with the vertical and strikes the inner surface of the cylinder at position
$\bold{r}_0=\left(\cos\varphi,\sin\varphi,z\right).$
This is the most generic expression for a point on the mug’s inner surface. Clearly, $z$ is restricted to the range $0\le z\le c$. If we further assume that the light ray is confined to the $xz$ plane and moves in the positive $x$ direction, the angle $\varphi$ is necessarily restricted to the range $-\frac{\pi}{2}\le \varphi \le \frac{\pi}{2}$.
We’re first going to work out a vector equation for the line describing the specularly reflected ray. We know that it starts at the point $\bold{r}_0$. All we need is its new direction of propagation.
If the ray’s original direction was $\bold{v}_0$, and the surface normal at its point of contact with the mug was $\bold{n}$, its new direction $\bold{v}$ satisfies
$\bold{v} = \bold{v}_0 + \alpha\bold{n},$
where the constant $\alpha$ is chosen such that $|\bold{v}|$ = 1. By definition of angle $\theta$, the incoming direction vector is
$\bold{v}_0=\left(\sin\theta, 0, -\cos\theta\right).$
Meanwhile, the normal vector at the point of incidence is
$\bold{n}=\left(-\cos\varphi,-\sin\varphi,0\right),$
regardless of the coordinate $z$. Thus
$\bold{v}^2=\left(\sin\theta - \alpha\cos\varphi\right)^2+\alpha^2\sin^2\varphi+\cos^2\theta;$
$\bold{v}^2=1-2\alpha\sin\theta\cos\varphi+\alpha^2=1;$
$\alpha=2\sin\theta\cos\varphi.$
We’ve now got an expression for $\bold{v}$, chock-full of angles. The vector equation describing the line traced out by the reflected ray is thus
$\bold{r}=\bold{r}_0+\lambda\bold{v},$
$\begin{pmatrix}x\\y\\z\end{pmatrix} = \begin{pmatrix}\cos\varphi\\\sin\varphi\\z\end{pmatrix}+\lambda\begin{pmatrix}\sin\theta-2\sin\theta\cos^2\varphi\\-2\sin\theta\sin\varphi\cos\varphi\\-\cos\theta\end{pmatrix}.$
The parameter $\lambda$ > 0 allows us to slide ‘back and forth’ along the ray’s path.
Our goal is now to work out the coordinates of the spot where the ray hits the bottom of the mug. The $z$-coordinate is, by definition, zero. The corresponding parameter $\lambda_0$ therefore satisfies
$z-\lambda_0\cos\theta=0$
$\displaystyle \lambda_0=\frac{z}{\cos\theta}$
Plugging this parameter back into our expression for $\bold{r}$ tells us the corresponding $x$– and $y$-coordinates:
$\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}\cos\varphi+z\tan\theta\left(1-2\cos^2\varphi\right)\\\sin\varphi-2z\tan\theta\sin\varphi\cos\varphi\end{pmatrix}.$
I have forgotten just one thing: we have to be sure the ray can actually reach the point $\bold{r}_0$, i.e. it does intersect the opposite side of the mug. A little geometry shows that this constraint may be expressed as
$\displaystyle z+\frac{2\cos\varphi}{\tan\theta}>c.$
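The landing-point formulas and the rim constraint derived above can be evaluated numerically. This is my own sketch (function names, the sample angle θ = 15° and the mug height c = 1 are mine, not from the post):

```python
import math

def floor_hit(phi, z, theta):
    # Landing point on the mug's base of the ray reflected at
    # (cos(phi), sin(phi), z), per the (x, y) expressions above.
    t = math.tan(theta)
    x = math.cos(phi) + z * t * (1.0 - 2.0 * math.cos(phi) ** 2)
    y = math.sin(phi) - 2.0 * z * t * math.sin(phi) * math.cos(phi)
    return x, y

def illuminated(phi, z, theta, c):
    # The incoming ray must clear the rim of height c on the near side.
    return z + 2.0 * math.cos(phi) / math.tan(theta) > c

theta, c = math.radians(15.0), 1.0
points = [floor_hit(phi, z, theta)
          for phi in [(-0.5 + i / 100.0) * math.pi for i in range(101)]
          for z in [j / 10.0 for j in range(11)]
          if illuminated(phi, z, theta, c)]
# Scatter-plotting `points` reproduces the bright 'seagull' pattern for this theta.
```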
Results
So, we’ve found where a ray with a particular direction and point of intersection will hit the bottom of the mug, and the conditions under which its able to do so. How can we use our equations to recreate the full pattern see in the photo?
In principle, we should consider all values of $\varphi$ and $z$ possible for a given value of $\theta$, and then all possible values of $\theta$ for a given ceiling light: the beam emitted by a ceiling light comprises a continuum of rays spanning a whole range of angles of incidence $\{\theta\}$. This will trace out a continuum of points on the bottom of the mug, giving us the pattern. For simplicity, we’ll look at a set of light patterns for which the value of $\theta$ is unique – this is a reasonably good approximation if the ceiling light is far from the mug and so the range of angles $\{\theta\}$ is small.
The light patterns formed for four different values of $\theta$ are shown in the diagram to the right; the light source for each comes from the top of the image. The most interesting feature is the variation in the shape of the 'beak' with $\theta$. The sharpness of the beak initially increases with $\theta$, coming to form a particularly sharp cusp at 15 degrees. The beak subsequently becomes more diffuse as $\theta$ increases further; you can see that an increasingly large smudge of light spills out into the centre of the mug. The main point, though, is that the simulated light patterns are brightest along their inner edge, which resembles the silhouette of a bird – the theory is not too far from reality!
So, if ever you see a mysterious pattern of light in your kitchen … don’t be afraid to decode it with geometric optics! | 2017-10-23 18:59:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 50, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6901258826255798, "perplexity": 333.31031700868743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826283.88/warc/CC-MAIN-20171023183146-20171023203146-00809.warc.gz"} |
https://mobile.surenapps.com/2020/10/sound-solutions.html

### Sound - Solutions
CBSE class IX Science
NCERT Solutions
Chapter - 12
Sound
(Page No.162)
1. How does the sound produced by a vibrating object in a medium reach your ear?
Ans. As we speak, the particles of air near our mouth are pushed forward, so they get compressed. They then compress the neighbouring particles of air. As the compression moves forward, the particles of air near our mouth spread out again, and thus a rarefaction occurs. This process is repeated, and as a result the sound wave propagates to the listener's ear as a series of compressions and rarefactions.
(Page No.163)
1. Explain how sound is produced by your school bell.
Ans. When the peon strikes the school bell with a hammer, the bell starts vibrating; these vibrations are passed on to the particles of air near the bell, which produces sound.
2. Why are sound waves called mechanical waves?
Ans. Since sound waves need a medium for their propagation, therefore, we can say that sound waves are mechanical waves.
3. Suppose you and your friend are on the moon. Will you be able to hear any sound produced by your friend?
Ans. There is no air on moon hence there is no medium for sound propagation on the moon. As a result, I will not be able to hear any sound produced by my friend.
(Page No.166)
1. Which wave property determines
(a) loudness,
(b) pitch?
Ans. (a) The amplitude of the wave determines the loudness of the sound.
(b) The frequency of the wave determines the pitch of the sound.
2. Guess which sound has a higher pitch: guitar or car horn?
Ans. The sound of the guitar has a higher pitch.
1. What are wavelength, frequency, time period and amplitude of a sound wave?
Ans. wavelength: For a sound wave, the combined length of a compression and an adjacent rarefaction is called its wavelength. Equivalently, the distance between the centers of two consecutive compressions, or of two consecutive rarefactions, is also equal to its wavelength.
frequency: The number of vibrations or oscillations per second is called frequency i.e. it is the number of complete waves or cycles produced in one second.
time period: The time taken to complete one vibration/oscillation/complete wave is called time period. It is measured in seconds.
amplitude: It is the maximum displacement of the particles of the medium from their mean/original position at rest.
2. How are the wavelength and frequency of a sound wave related to its speed?
Ans. They are related by the equation
v = ν × λ
where v = velocity/speed of the wave,
ν = frequency of the wave,
λ = wavelength of the wave.
3. Calculate the wavelength of a sound wave whose frequency is 220 Hz and speed is 440 m/s in a given medium.
Ans. Since we know v = ν × λ,
λ = v/ν = 440/220 = 2 m
4. A person is listening to a tone of 500 Hz sitting at a distance of 450 m from the source of the sound. What is the time interval between successive compressions from the source?
Ans. The time interval between successive compressions from the source is the time period,
T = 1/ν = 1/500 = 0.002 second.
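The two relations used in the answers above (v = ν × λ and T = 1/ν) can be wrapped in a small helper. A sketch with function names of my own:

```python
def wavelength(speed, frequency):
    # v = nu * lambda  =>  lambda = v / nu
    return speed / frequency

def time_period(frequency):
    # T = 1 / nu
    return 1.0 / frequency

print(wavelength(440.0, 220.0))  # question 3 above: 2.0 m
print(time_period(500.0))        # question 4 above: 0.002 s
```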
1. Distinguish between loudness and intensity of sound.
Ans. Loudness is a measure of the response of the ear to the sound; it depends on the amplitude of the sound wave and on the sensitivity of the listener's ear. Intensity is the amount of sound energy passing per second through a unit area; it is a measurable physical quantity, independent of the listener.
(Page No.167)
1. In which of the three media, air, water or iron, does sound travel the fastest at a particular temperature?
Ans. The sound will travel the fastest in iron at a particular temperature.
(Page No.168)
1. An echo returned in 3 s. What is the distance of the reflecting surface from the source, given that the speed of sound is 342 m/s?
Ans. Speed of sound = distance/time
therefore, distance travelled by sound during echo = speed x time = 342 x 3 = 1026 m
so the distance of the reflecting surface = 1026/2 = 513 m
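The same echo arithmetic as a reusable sketch (the function name is mine); the distance to the obstacle is always half the round-trip distance covered by the sound:

```python
def obstacle_distance(speed, echo_time):
    # The sound covers the round trip in echo_time, so the one-way
    # distance to the reflecting surface is half of speed * time.
    return speed * echo_time / 2.0

print(obstacle_distance(342.0, 3.0))  # 513.0 m, matching the answer above
```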
(Page No.169)
1. Why are the ceilings of concert halls curved?
Ans. The ceilings of concert halls are curved because such architecture helps the sound to reach all the corners and places of the concert hall.
(Page No.170)
1. What is the audible range of the average human ear?
Ans. 20 Hz to 20,000 Hz.
2. What is the range of frequencies associated with
(a) Infrasound?
(b) Ultrasound?
Ans. Infrasound = less than 20 Hz
Ultrasound = greater than 20 kHz
(Page No.172)
1. A submarine emits a sonar pulse, which returns from an underwater cliff in 1.02 s. If the speed of sound in salt water is 1531 m/s, how far away is the cliff?
Ans. Distance traveled by a sonar pulse = speed of sound in salt water x time
= 1531 x 1.02 = 1561.62 m
therefore, the distance of cliff from submarine = 1561.62/2 = 780.81 m
(Chapter – end)
1. What is sound and how is it produced?
Ans. Sound is a form of energy that produces a sensation of hearing in our ears. Sound is produced when an object vibrates/oscillates.
2. Describe with the help of a diagram, how compressions and rarefactions are produced in the air near a source of the sound.
Ans.
A vibrating object producing a series of compressions (C) and rarefaction (R)
In these waves, the particles move back and forth parallel to the direction of propagation of the disturbance. Such waves are called longitudinal waves.
There is another kind of wave, called a transverse wave. In these waves, the particles oscillate up and down, perpendicular to the direction of propagation of the disturbance. Sound propagates in a medium as a series of compressions (C) and rarefactions (R). Compressions are the regions of high pressure and density, where the particles are crowded together; they are represented by the upper portion of the curve, called the crest. Rarefactions are the regions of low pressure and density, where the particles are spread out; they are represented by the lower portion of the curve, called the trough.
3. Cite an experiment to show that sound needs a material medium for its propagation.
Ans. Take an electric bell and an airtight glass bell jar. The electric bell is suspended inside the airtight bell jar, and the jar is connected to a vacuum pump. If you press the switch, you will be able to hear the bell. Now start the vacuum pump. As the air in the jar is pumped out gradually, the sound becomes fainter, although the same current is passing through the bell.
After some time when less air is left inside the bell jar you will hear a very feeble sound. Now if we evacuate the bell jar no sound is heard.
Result: The above-mentioned activity shows that sound needs a medium to propagate.
4. Why is sound wave called a longitudinal wave?
Ans. The sound wave is called a longitudinal wave because particles of the medium through which the sound is transported vibrate parallel to the direction that the sound wave moves.
5. Which characteristic of the sound helps you to identify your friend by his voice while sitting with others in a dark room?
Ans. The quality (timbre) of the sound wave.
6. Flash and thunder are produced simultaneously. But thunder is heard a few seconds after the flash is seen, why?
Ans. The speed of thunder (sound), about 332 m/s, is much less than the speed of the flash (light), which is about 3 × 10^8 m/s. Since light travels much faster than sound, the thunder is heard a few seconds after the flash is seen.
7. A person has a hearing range from 20 Hz to 20 kHz. What are the typical wavelengths of sound waves in air corresponding to these two frequencies? Take the speed of sound in air as 344 ms-1.
Ans. For 20 Hz sound waves, the wavelength would be
λ = v/ν = 344/20 = 17.2 m
For 20 kHz sound waves, the wavelength would be
λ = 344/20000 = 0.0172 m
8. Two children are at opposite ends of an aluminum rod. One strikes the end of the rod with a stone. Find the ratio of times taken by the sound wave in the air and in aluminum to reach the second child.
Ans. Since speed of sound in air = 344 m/s
and speed of sound in aluminum = 6420 m/s
we know that v = distance/time therefore time = d/v
time taken by sound wave in air : time taken by sound wave in aluminum
= d/344 : d/6420 = 6420/344 ≈ 18.66 : 1
So the sound will take about 18.66 times longer to reach the other boy through the air than through aluminum.
9. The frequency of a source of sound is 100 Hz. How many times does it vibrate in a minute?
Ans. The frequency of the source of sound being 100 Hz means the sound source vibrates 100 times in one second.
therefore vibrations made by sound source in 1 min (60 sec) = 100 x 60 = 6000
10. Does sound follow the same laws of reflection as light does? Explain.
Ans. Yes. Sound follows the same laws of reflection as light does. We can say that because here the directions in which the sound is incident and is reflected make equal angles with the normal to the reflecting surface at the point of incidence, and the three are in the same plane.
11. When a sound is reflected from a distant object, an echo is produced. Let the distance between the reflecting surface and the source of sound production remains the same. Do you hear echo sound on a hotter day?
Ans. The sensation of sound persists in our brain for about 0.1 s, so to hear a distinct echo the time interval between the original sound and the reflected one must be at least 0.1 s. Therefore the total distance covered by the sound, from the point of generation to the reflecting surface and back, should be at least (344 m/s) × 0.1 s = 34.4 m. Thus, for hearing distinct echoes, the minimum distance of the obstacle from the source of sound must be half of this distance, that is, 17.2 m. The speed of sound increases with temperature. Therefore, on a hotter day the speed of sound is greater, and the reflected sound reaches us in less than 0.1 s. As a result, no distinct echo will be heard.
12. Give two practical applications of reflection of sound waves.
Ans. Two practical applications of reflection of sound waves
i. Megaphones or loudhailers, horns, musical instruments such as trumpets and shehanais, are all designed to send sound in a particular direction without spreading it in all directions.
ii. The stethoscope is a medical instrument used for listening to sounds produced within the body, chiefly in the heart or lungs. In stethoscopes, the sound of the patient's heartbeat reaches the doctor's ears by multiple reflections of sound.
13. A stone is dropped from the top of a tower 500 m high into a pond of water at the base of the tower. When is the splash heard at the top? Given, g = 10ms-2 and speed of sound = 340 ms-1.
Ans. For the stone falling freely from rest through h = 500 m:
v² = u² + 2gh = 0 + 2 × 10 × 500 = 10000
v = √10000 = 100 m/s
we also know that v = u + gt = 0 + 10t
100 = 10t or, Time taken by stone to reach the pond surface (t) = 100/10 = 10 sec
therefore, time taken by sound to reach the top from pond surface = d/v = 500/340
= 1.47 sec
so the total time taken for splash being heard at the top = 10 + 1.47 = 11.47 s
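The two-stage reasoning in this answer (free fall of the stone, then the sound travelling back up) in code form; g = 10 m/s² and the speed of sound 340 m/s are taken from the question, the function name is mine:

```python
import math

def splash_heard_after(height, g=10.0, v_sound=340.0):
    t_fall = math.sqrt(2.0 * height / g)  # from s = (1/2) g t^2
    t_sound = height / v_sound            # sound returning up the tower
    return t_fall + t_sound

print(round(splash_heard_after(500.0), 2))  # 11.47 s, as above
```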
14. A sound wave travels at a speed of 339 m/s. If its wavelength is 1.5 cm, what is the frequency of the wave? Will it be audible?
Ans. Since we know that v = ν × λ,
339 = ν × 0.015
ν = 339/0.015 = 22600 Hz
Since the resulting frequency is beyond the audible range of human beings (20 Hz to 20 kHz), the sound will not be audible to human ears.
15. What is reverberation? How can it be reduced?
Ans. The repeated reflection of sound due to which sound persists for a long time is called reverberation.
To reduce reverberation, the roof and walls of the auditorium are generally covered with sound-absorbent materials like compressed fibreboard, rough plaster or draperies. The seat materials are also selected on the basis of their sound absorbing properties.
16. What is loudness of sound? What factors does it depend on?
Ans. Loudness is a measure of the response of the ear to the sound. Even when two sounds are of equal intensity, we may hear one as louder than the other simply because our ear detects it better.
The loudness of a sound depends upon the amplitude of those sound waves. Higher is the amplitude of vibrating air particles louder will be the sound.
17. Explain how bats use ultrasound to catch a prey.
Ans. Bats search out prey and fly in the dark night by emitting ultrasonic waves and detecting their reflections. The high-pitched ultrasonic squeaks of the bat are reflected from obstacles or prey and return to the bat's ears. The nature of the reflections tells the bat where the obstacle or prey is and what it is like.
18. How is ultrasound used for cleaning?
Ans. Ultrasound is generally used to clean parts located in hard-to-reach places, for example, spiral tube, odd shaped parts, electronic components etc. Objects to be cleaned are placed in a cleaning solution and ultrasonic waves are sent into the solution. Due to the high frequency, the particles of dust, grease, and dirt get detached and drop out. The objects thus get thoroughly cleaned.
19. Explain the working and application of a sonar.
Ans. Sonar is a device that uses ultrasonic waves to measure the distance, direction, and speed of underwater objects.
Sonar consists of a transmitter and a detector and is installed in a boat or a ship. The transmitter produces and transmits ultrasonic waves. These waves travel through water and, after striking the object on the seabed, get reflected back and are sensed by the detector. The detector converts the ultrasonic waves into electrical signals which are appropriately interpreted. The distance of the object that reflected the sound wave can be calculated by knowing the speed of sound in water and the time interval between transmission and reception of the ultrasound. Let the time interval between transmission and reception of the ultrasound signal be t and the speed of sound through seawater be v. The total distance travelled by the ultrasound is then 2d = v × t, or $d=\frac{v\times t}{2}$.
The above method is called echo-ranging. The sonar technique is used to determine the depth of the sea and to locate underwater hills, valleys, submarine, icebergs, sunken ship etc.
20. A sonar device on a submarine sends out a signal and receives an echo 5 s later. Calculate the speed of sound in water if the distance of the object from the submarine is 3625 m.
Ans. Distance of object from submarine = 3625 m
therefore, distance travelled by sonar waves = 7250 m
since, speed = distance/time= 7250/5 = 1450 m/s
21. Explain how defects in a metal block can be detected using ultrasound.
Ans. Ultrasounds can be used to detect cracks and flaws in metal blocks. Metallic components are generally used in the construction of big structures like buildings, bridges, machines and also scientific equipment.
The cracks or holes inside the metal blocks, which are invisible from outside, reduce the strength of the structure. Ultrasonic waves are allowed to pass through the metal block, and detectors are used to detect the transmitted waves. If there is even a small defect, the ultrasound gets reflected back, indicating the presence of the flaw or defect.
22. Explain how the human ear works.
Ans. The outer ear is called ‘pinna’. It collects the sound from the surroundings. The collected sound passes through the auditory canal. At the end of the auditory canal, there is a thin membrane called the eardrum or tympanic membrane. When a compression of the medium reaches the eardrum the pressure on the outside of the membrane increases and forces the eardrum inward. Similarly, the eardrum moves outward when a rarefaction reaches it. In this way, the eardrum vibrates. The vibrations are amplified several times by three bones (the hammer, anvil, and stirrup) in the middle ear. The middle ear transmits the amplified pressure variations received from the sound wave to the inner ear. In the inner ear, the pressure variations are turned into electrical signals by the cochlea. These electrical signals are sent to the brain via the auditory nerve, and the brain interprets them as sound. | 2021-02-27 12:18:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5079524517059326, "perplexity": 932.1431453082362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358956.39/warc/CC-MAIN-20210227114444-20210227144444-00583.warc.gz"} |
https://brilliant.org/problems/a-classical-mechanics-problem-by-soumya-shrivastva/

Kinematics 1
The trajectory of a projectile in a vertical plane is $$y=ax-b{ x }^{ 2 }$$ , where $$a$$ and $$b$$ are constants, and $$x$$ and $$y$$ are respectively the horizontal and vertical distance of the projectile from the point of projection. What is the maximum height attained and the angle of projection from the horizontal?
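One way to check an answer (this is my own sketch, not part of the problem page): setting dy/dx = a − 2bx to zero gives x = a/(2b) and a maximum height of a²/(4b), while the slope at the origin gives a launch angle of arctan(a). The sample values of a and b below are arbitrary.

```python
import math

def max_height(a, b):
    # Vertex of y = a*x - b*x^2 is at x = a/(2b); the height there is a^2/(4b).
    return a * a / (4.0 * b)

def launch_angle(a):
    # Near x = 0, y ~ a*x, so tan(angle of projection) = a.
    return math.atan(a)

# Numerical sanity check against the trajectory itself.
a, b = 1.0, 0.05
xs = [i * 30.0 / 1000.0 for i in range(1001)]
heights = [a * x - b * x * x for x in xs]
print(max(heights), max_height(a, b))  # both close to 5.0
print(math.degrees(launch_angle(a)))   # ~45 degrees for a = 1
```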
https://mathgoespewpew.wordpress.com/2010/03/22/dsp-coding-time/

I'm doing this music project, and I wanted to add math to it, and I also got really into Pantha du Prince's "Black Noise" – it's such a deep musical journey if you listen through it with an open mind; for me, the album felt like my first listens of Boards of Canada's "Music Has the Right to Children".
My fourier analysis is a bit rusty right now. Anybody more in tune with the psychoacoustic physics want to lend a hand and drop some tips on some psychoacoustic maps for the input and output signals? I’m coding a vintage-sounding lofi matrix for chiptune uses (so like signal discretification + coloration).
Thinking here for a model of what I’m trying to do – to be honest I’m really just making up terms as we go along based on things I remember: a digital signal would be discrete, so this matter would have to be handled discretely. The discrete signal may be represented mathematically as a continuous function $f:\mathbb{Z} \rightarrow [-1, 1]$ where Z has the discrete topology and [-1, 1] the usual. The codomain, when I code this, will be represented by floats, but the math for floats is similar enough to math for real numbers on a bounded interval that the sound will pretty much be reproduced up to human perception.
Or to say that it’s with hi-fi signals I want to code. I want my shit to sound like vinyl signals being fed through NES hardware. Aw yeah.
Of course, I’ll also have to implement output for the discrete case, since the vst host may not do floats and do 24-bit or 16-bit integers instead (which is kinda weird now since we have 32-bit processors, but whatever, audio industry standards lol).
So let’s think about signal maps. A signal map is a map $\phi:[-1, 1]^\mathbb{Z} \rightarrow [-1, 1]^\mathbb{Z}$, where $[-1, 1]^\mathbb{Z}$ is the function space for signals as modeled above. We may attempt to fit this function space with some properties, then.
Afterwards, I guess I should try to find a topology on the thing. Or maybe even a Hilbert space. Actually, it could very well be a Hilbert space given the right choices for the norm, addition and scalar multiplication. Though, that’s stuff that I would do just for the hell of doing it.
What I would need to do is to figure out specific signal maps that would make interesting effects for this project, really. Like for example the lofi matrix + analog coloration dsp I wanted to make would probably include discretification at some point. That would be some sort of discontinuous map f(x) = some chopped bits in the float representation of x.
Cool, I have a plan of attack now.
tl;dr: have a lollipop. I’m going to do some research on some shit hell yeah.
edit:
Thinking a bit more on the subject, I think I have an algo for making signals more discrete, given by $f(x) = [\rho x]/\rho$, where $\rho$ is the granularity of the discretification.
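That map is just a uniform quantizer (the classic "bitcrusher" move). A sketch, with names of my own:

```python
import math

def discretify(x, rho):
    # f(x) = floor(rho * x) / rho: snap the sample onto a grid with
    # spacing 1/rho. Smaller rho = coarser grid = crunchier sound.
    return math.floor(rho * x) / rho

samples = [0.30, -0.72, 0.051]
print([discretify(s, 8.0) for s in samples])  # [0.25, -0.75, 0.0]
```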
moar edit:
Audio information preservation in the face of discretification is probably closely related to the fact that simple functions are dense in $L_0$ spaces and the algo I listed above is invariably a simple function (so in a sense density in $L_0$ suggests that all continuous functions can be approximated this way, which isn’t surprising, but nice and intuitive). | 2017-07-24 18:28:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6358326077461243, "perplexity": 658.8555945075476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424909.50/warc/CC-MAIN-20170724182233-20170724202233-00029.warc.gz"} |
https://en.wikipedia.org/wiki/Event_calculus | # Event calculus
The event calculus is a logical language for representing and reasoning about events and their effects first presented by Robert Kowalski and Marek Sergot in 1986.[1] It was extended by Murray Shanahan and Rob Miller in the 1990s.[2] Similar to other languages for reasoning about change, the event calculus represents the effects of actions on fluents. However, events can also be external to the system. In the event calculus, one can specify the value of fluents at some given time points, the events that take place at given time points, and their effects.
## Fluents and events
In the event calculus, fluents are reified. This means that they are not formalized by means of predicates but by means of functions. A separate predicate HoldsAt is used to tell which fluents hold at a given time point. For example, ${\displaystyle {\mathit {HoldsAt}}(on(box,table),t)}$ means that the box is on the table at time t; in this formula, HoldsAt is a predicate while on is a function.
Events are also represented as terms. The effects of events are given using the predicates Initiates and Terminates. In particular, ${\displaystyle {\mathit {Initiates}}(e,f,t)}$ means that, if the event represented by the term e is executed at time t, then the fluent f will be true after t. The Terminates predicate has a similar meaning, with the only difference being that f will be false after t.
## Domain-independent axioms
Like other languages for representing actions, the event calculus formalizes the correct evolution of the fluent via formulae telling the value of each fluent after an arbitrary action has been performed. The event calculus solves the frame problem in a way that is similar to the successor state axioms of the situation calculus: a fluent is true at time t if and only if it has been made true in the past and has not been made false in the meantime.
${\displaystyle {\mathit {HoldsAt}}(f,t)\leftarrow [{\mathit {Happens}}(e,t_{1})\wedge {\mathit {Initiates}}(e,f,t_{1})\wedge (t_{1}<t)\wedge \neg {\mathit {Clipped}}(t_{1},f,t)]}$
This formula means that the fluent represented by the term f is true at time t if:
1. an event e has taken place: ${\displaystyle {\mathit {Happens}}(e,t_{1})}$;
2. this took place in the past: ${\displaystyle t_{1}<t}$;
3. this event has the fluent f as an effect: ${\displaystyle {\mathit {Initiates}}(e,f,t_{1})}$;
4. the fluent has not been made false in the meantime: ${\displaystyle \neg {\mathit {Clipped}}(t_{1},f,t)}$
A similar formula is used to formalize the opposite case in which a fluent is false at a given time. Other formulae are also needed to correctly formalize the value of fluents at times before any event has affected them. These formulae are similar to the above, but ${\displaystyle {\mathit {Happens}}(e,t_{1})\wedge {\mathit {Initiates}}(e,f,t_{1})}$ is replaced by ${\displaystyle {\mathit {HoldsAt}}(f,t_{1})}$.
The Clipped predicate, stating that a fluent has been made false during an interval, can be axiomatized, or simply taken as a shorthand, as follows:
${\displaystyle {\mathit {Clipped}}(t_{1},f,t_{2})\equiv \exists e,t[{\mathit {Happens}}(e,t)\wedge (t_{1}\leq t<t_{2})\wedge {\mathit {Terminates}}(e,f,t)]}$
## Domain-dependent axioms
The axioms above relate the value of the predicates HoldsAt, Initiates and Terminates, but do not specify which fluents are known to be true and which events actually make fluents true or false. This is done by using a set of domain-dependent axioms. The known values of fluents are stated as simple literals ${\displaystyle {\mathit {HoldsAt}}(f,t)}$. The effects of events are stated by formulae relating the effects of events with their preconditions. For example, if the event open makes the fluent isopen true, but only if haskey is currently true, the corresponding formula in the event calculus is:
${\displaystyle {\mathit {Initiates}}(e,f,t)\equiv [e=open\wedge f=isopen\wedge {\mathit {HoldsAt}}(haskey,t)]\vee \cdots }$
The right-hand expression of this equivalence is composed of a disjunction: for each event and fluent that can be made true by the event, there is a disjunct saying that e is actually that event, that f is actually that fluent, and that the precondition of the event is met.
The formula above specifies the truth value of ${\displaystyle {\mathit {Initiates}}(e,f,t)}$ for every possible event and fluent. As a result, all effects of all events have to be combined in a single formula. This is a problem, because the addition of a new event requires modifying an existing formula rather than adding new ones. This problem can be solved by the application of circumscription to a set of formulae, each specifying one effect of one event:
${\displaystyle {\mathit {Initiates}}(open,isopen,t)\leftarrow {\mathit {HoldsAt}}(haskey,t)}$
${\displaystyle {\mathit {Initiates}}(break,isopen,t)\leftarrow {\mathit {HoldsAt}}(hashammer,t)}$
${\displaystyle {\mathit {Initiates}}(break,broken,t)\leftarrow {\mathit {HoldsAt}}(hashammer,t)}$
These formulae are simpler than the formula above, because each effect of each event can be specified separately. The single formula telling which events e and fluents f make ${\displaystyle {\mathit {Initiates}}(e,f,t)}$ true has been replaced by a set of smaller formulae, each one telling the effect of an event on a fluent.
However, these formulae are not equivalent to the formula above. Indeed, they only specify sufficient conditions for ${\displaystyle {\mathit {Initiates}}(e,f,t)}$ to be true, which should be completed by the fact that Initiates is false in all other cases. This fact can be formalized by simply circumscribing the predicate Initiates in the formula above. It is important to note that this circumscription is done only on the formulae specifying Initiates and not on the domain-independent axioms. The predicate Terminates can be specified in the same way Initiates is.
A similar approach can be taken for the Happens predicate. The evaluation of this predicate can be enforced by a formula specifying when it is true and when it is false:
${\displaystyle {\mathit {Happens}}(e,t)\equiv (e=open\wedge t=0)\vee (e=exit\wedge t=1)\vee \cdots }$
Circumscription can simplify this specification, as only necessary conditions can be specified:
${\displaystyle {\mathit {Happens}}(open,0)}$
${\displaystyle {\mathit {Happens}}(exit,1)}$
Circumscribing the predicate Happens, this predicate will be false at all points in which it is not explicitly specified to be true. This circumscription has to be done separately from the circumscription of the other formulae. In other words, if F is the set of formulae of the kind ${\displaystyle {\mathit {Initiates}}(e,f,t)\leftarrow \cdots }$, G is the set of formulae ${\displaystyle {\mathit {Happens}}(e,t)}$, and H are the domain independent axioms, the correct formulation of the domain is:
${\displaystyle {\mathit {Circ}}(F;{\mathit {Initiates}},{\mathit {Terminates}})\wedge Circ(G;Happens)\wedge H}$
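The circumscribed theory above can be approximated procedurally: treat Happens, Initiates and Terminates as closed, exhaustive lists of facts and evaluate HoldsAt with the domain-independent axiom. A minimal Python sketch follows; the open/break/haskey domain is taken from the examples above, while the evaluator itself and the initial-state set are assumptions for illustration, not a standard tool:

```python
# Closed-world facts: circumscription is approximated by assuming
# these enumerations are exhaustive.
happens = {("open", 0), ("exit", 1)}

initially = {"haskey"}  # fluents assumed true at time 0 (an assumption)

def initiates(event, fluent, t):
    """Effect axioms from the text: Initiates(e, f, t) <- preconditions."""
    return ((event == "open" and fluent == "isopen" and holds_at("haskey", t))
            or (event == "break" and fluent == "isopen" and holds_at("hashammer", t))
            or (event == "break" and fluent == "broken" and holds_at("hashammer", t)))

def terminates(event, fluent, t):
    """Illustrative termination axiom (hypothetical 'close' event)."""
    return event == "close" and fluent == "isopen"

def clipped(t1, fluent, t2):
    """Clipped(t1, f, t2): f is terminated by some event in [t1, t2)."""
    return any(t1 <= t < t2 and terminates(e, fluent, t) for (e, t) in happens)

def holds_at(fluent, t):
    """Domain-independent axiom: f holds at t if it held initially and was
    never clipped, or some earlier event initiated it and it has not been
    clipped since."""
    if fluent in initially and not clipped(0, fluent, t):
        return True
    return any(t1 < t and initiates(e, fluent, t1) and not clipped(t1, fluent, t)
               for (e, t1) in happens)
```

With the facts above, haskey holds throughout, isopen is false at time 0 and true from time 1 onward (the open event at time 0 initiates it), and broken never holds because no break event occurs.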
## The event calculus as a logic program
The event calculus was originally formulated as a set of Horn clauses augmented with negation as failure and could be run as a Prolog program. In fact, circumscription is one of the several semantics that can be given to negation as failure, and is closely related to the completion semantics (in which "if" is interpreted as "if and only if" — see logic programming).
## Extensions and applications
The original event calculus paper of Kowalski and Sergot focused on applications to database updates and narratives.[3] Extensions of the event calculus can also formalize non-deterministic actions, concurrent actions, actions with delayed effects, gradual changes, actions with duration, continuous change, and non-inertial fluents.
Kave Eshghi showed how the event calculus can be used for planning,[4] using abduction to generate hypothetical events in abductive logic programming. Van Lambalgen and Hamm showed how the event calculus can also be used to give an algorithmic semantics to tense and aspect in natural language[5] using constraint logic programming.
Other notable extensions of the Event Calculus include Markov Logic Networks-based,[6] probabilistic,[7] epistemic[8] variants and their combinations.[9]
## Reasoning tools
In addition to Prolog and its variants, several other tools for reasoning using the event calculus are available.
## References
1. ^ Kowalski, Robert; Sergot, Marek (1986-03-01). "A logic-based calculus of events". New Generation Computing. 4 (1): 67–95. doi:10.1007/BF03037383. ISSN 1882-7055. S2CID 7584513.
2. ^ Miller, Rob; Shanahan, Murray (2002), Kakas, Antonis C.; Sadri, Fariba (eds.), "Some Alternative Formulations of the Event Calculus", Computational Logic: Logic Programming and Beyond: Essays in Honour of Robert A. Kowalski Part II, Lecture Notes in Computer Science, Berlin, Heidelberg: Springer, pp. 452–490, doi:10.1007/3-540-45632-5_17, ISBN 978-3-540-45632-2, retrieved 2020-10-05
3. ^ Kowalski, Robert (1992-01-01). "Database updates in the event calculus". The Journal of Logic Programming. 12 (1): 121–146. doi:10.1016/0743-1066(92)90041-Z. ISSN 0743-1066.
4. ^ Eshghi, Kave (1988). "Abductive planning with event calculus". Iclp/SLP: 562–579.
5. ^ Lambalgen, Hamm (2005). The proper treatment of events. Malden, MA: Blackwell Pub. ISBN 978-0-470-75925-7. OCLC 212129657.
6. ^ Skarlatidis, Anastasios; Paliouras, Georgios; Artikis, Alexander; Vouros, George A. (2015-02-17). "Probabilistic Event Calculus for Event Recognition". ACM Transactions on Computational Logic. 16 (2): 11:1–11:37. arXiv:1207.3270. doi:10.1145/2699916. ISSN 1529-3785. S2CID 6389629.
7. ^ Skarlatidis, Anastasios; Artikis, Alexander; Filippou, Jason; Paliouras, Georgios (March 2015). "A probabilistic logic programming event calculus". Theory and Practice of Logic Programming. 15 (2): 213–245. doi:10.1017/S1471068413000690. ISSN 1471-0684. S2CID 5701272.
8. ^ Ma, Jiefei; Miller, Rob; Morgenstern, Leora; Patkos, Theodore (2014-07-28). "An Epistemic Event Calculus for ASP-based Reasoning About Knowledge of the Past, Present and Future". EPiC Series in Computing. EasyChair. 26: 75–87. doi:10.29007/zswj.
9. ^ D'Asaro, Fabio Aurelio; Bikakis, Antonis; Dickens, Luke; Miller, Rob (2020-10-01). "Probabilistic reasoning about epistemic action narratives". Artificial Intelligence. 287: 103352. doi:10.1016/j.artint.2020.103352. ISSN 0004-3702. S2CID 221521535. | 2022-07-07 15:05:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 24, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8553478121757507, "perplexity": 6244.735548693343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104692018.96/warc/CC-MAIN-20220707124050-20220707154050-00279.warc.gz"} |
https://www.nature.com/articles/srep04244?error=cookies_not_supported&code=18120618-5da9-4717-8da9-8fb0e0f89aec | ## Abstract
The Parrondo's paradox is a counterintuitive phenomenon in which individually losing strategies can be combined to produce a winning expectation. In this paper, the issues surrounding the Parrondo's paradox are investigated. The focus lies on testing whether the same paradoxical effect can be reproduced using a simple capital-dependent game. The paradoxical effect generated by the Parrondo's paradox can be explained by placing all the parameters in one probability space. Based on this framework, other possible paradoxical effects can be generated by manipulating the parameters in the probability space.
## Introduction
The Parrondo's paradox describes the counterintuitive situation in which combining two individually losing games can produce a winning expectation. The initial purpose of the Parrondo's paradox was to simulate, in terms of two gambling games2, a counterintuitive physical phenomenon generated by the flashing Brownian ratchet1. Subsequent studies demonstrated the concept of the capital-dependent Parrondo's paradox3,4, formulated mathematical expressions for the Parrondo's paradox5,6 and extended the capital-dependent version to a history-dependent one7.
The Parrondo's paradox has attracted attention because of its potential for describing strategies that turn two unfavorable situations into a favorable one. The concept has been scrutinized8,9 since its first appearance and extended to other potential applications10,11,12,13.
This paper begins with a short summary of the key concepts of the Parrondo's paradox. It then investigates whether an analogous paradoxical effect can be reproduced using a relatively simple capital-dependent game, as claimed before14. In fact, all the parameters used in the Parrondo's paradox can be analyzed in a single probability space, which reveals the working principle of the paradox. On this foundation, other paradoxical effects can be generated by manipulating these parameters inside the probability space. Finally, the issues associated with the paradox are discussed.
There are two versions of the Parrondo's paradox, referred to as capital-dependent and history-dependent. The Parrondo's paradox consists of two games, namely game A and game B. The only difference between the two versions lies in the switching mechanism of game B; game A is exactly the same in both. Game A is a zero-order memoryless gambling game with winning probability p1 and losing probability 1 − p1. Game B is a condition-based game, also known as a second-order Markov game, which consists of two scenarios – scenario 1 and scenario 2.
For the capital-dependent Parrondo's paradox, the choice of scenario depends only on whether the instantaneous capital C(t) is a multiple of a predefined integer M. If C(t) is a multiple of M (i.e. C(t) mod M = 0), scenario 1 is played, in which the winning probability p2 is much lower than the losing probability 1 − p2. If C(t) is not a multiple of M (i.e. C(t) mod M ≠ 0), scenario 2 is played, in which the winning probability p3 is slightly higher than the losing probability 1 − p3.
For the history-dependent Parrondo's paradox, the choice of scenario relies on the outcomes of the previous two games. As each game results in a win or a loss, there are four possible combinations of the previous two results: {lose, lose}, {lose, win}, {win, lose} and {win, win}. There are therefore four scenarios, each corresponding to one specific combination of the previous two results.
The three probabilities, p1, p2 and p3, are controlled by a single biasing parameter ε. The central idea is that, by setting ε > 0, both game A and game B are losing games (i.e. the capital C(t) decreases with the number t of games played) when played individually. The Parrondo's games are illustrated in Figure 1.
Based on the game rules specified in Figure 1, with biasing parameter ε = 0.005 and predefined integer M = 3, a simulation can be generated by averaging the outcomes of 10,000 trials for each game over 200 games, as shown in Figure 2.
Figure 2 reveals two essential pieces of information. For the capital-dependent Parrondo's paradox, both game A (blue) and game B (pink) are losing games when played individually; however, once the two games are played in a randomly mixed manner, with equal probability of playing each (i.e. Probability(game A) = Probability(game B)), the resulting compound game (black) is a winning game.
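The behaviour in Figure 2 can be reproduced with a short Monte Carlo sketch. The ±1 win/loss step and the probability values p1 = 0.5 − ε, p2 = 0.10 − ε, p3 = 0.75 − ε follow the standard capital-dependent formulation with ε = 0.005 and M = 3; the function and parameter names are illustrative:

```python
import random

def play(strategy, n_games=200, n_trials=10_000, eps=0.005, M=3, seed=0):
    """Average capital trajectory; strategy is "A", "B" or "AB" (random mix).

    Each game wins +1 capital with the relevant probability, else loses -1.
    """
    p1, p2, p3 = 0.5 - eps, 0.10 - eps, 0.75 - eps
    rng = random.Random(seed)
    totals = [0.0] * (n_games + 1)
    for _ in range(n_trials):
        c = 0
        for t in range(1, n_games + 1):
            game = strategy if strategy != "AB" else rng.choice("AB")
            if game == "A":
                p = p1
            else:                       # game B: scenario depends on c mod M
                p = p2 if c % M == 0 else p3
            c += 1 if rng.random() < p else -1
            totals[t] += c
    return [s / n_trials for s in totals]
```

With enough trials, play("A") and play("B") trend downward while play("AB") trends upward, matching the blue, pink and black curves of Figure 2.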
## Results
The counterintuitive phenomenon, which is generated by the compound game, or randomly mixed game, of the capital-dependent Parrondo's paradox, can be analyzed by simply placing all the probabilities in one single probability space15. Such a probability space, as shown in Figure 3, consists of two elements: a straight line (red) and a curve (black).
The curve is specified by the game rules of game B. In order for game B to be a fair game, its winning probability must equal its losing probability, that is, p2 p3^(M−1) = (1 − p2)(1 − p3)^(M−1). In the selected case of the capital-dependent Parrondo's paradox, the predefined integer is M = 3. Therefore, to make game B a fair game, the probabilities of scenario 1 and scenario 2 must satisfy equation (1):

p2 p3^2 = (1 − p2)(1 − p3)^2.  (1)
In addition, equation (1) can be rearranged by expressing the winning probability of scenario 1, p2, in terms of the winning probability of scenario 2, p3. The result is equation (2), which is the curve (black) in Figure 3:

p2 = (1 − p3)^2 / [(1 − p3)^2 + p3^2].  (2)
It divides the entire probability space into two regions: the region above the curve is termed the winning region (grey) because there the winning probability of game B exceeds the losing probability, i.e. p2 p3^2 > (1 − p2)(1 − p3)^2; the region below the curve is termed the losing region (yellow) because there the winning probability of game B is lower than the losing probability, i.e. p2 p3^2 < (1 − p2)(1 − p3)^2.
In short, if the selected probabilities of game B, p2 and p3, fall into the winning region, game B is a winning game. On the other hand, if they lie inside the losing region, game B is a losing game.
Similarly, by setting p1 = p2 = p3, equation (1) can be converted into equation (3):

p1^3 = (1 − p1)^3.  (3)
Solving equation (3) returns three solutions: one real solution, p1 = 1/2, and two imaginary (complex) solutions. The real solution implies that the winning probability of game A equals its losing probability. This relationship can be expressed as a straight line (red) in the probability space, as shown in Figure 3. The winning probability of game A, p1, is selected along this straight line. If p1 < 1/2, that part of the straight line falls in the losing region and game A is a losing game. If p1 > 1/2, that part of the straight line falls into the winning region and game A is a winning game. If p1 = 1/2 – the intersection point of the straight line and the curve – game A is a fair game.
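The root structure claimed here can be checked numerically. Setting p1 = p2 = p3 = p in the fairness condition gives p^3 = (1 − p)^3, i.e. the cubic 2p^3 − 3p^2 + 3p − 1 = 0, which factors as (2p − 1)(p^2 − p + 1) = 0; a quick NumPy sketch:

```python
import numpy as np

# p^3 = (1 - p)^3  <=>  2p^3 - 3p^2 + 3p - 1 = 0
roots = np.roots([2.0, -3.0, 3.0, -1.0])
# keep only the roots with (numerically) zero imaginary part
real_roots = roots[np.isclose(roots.imag, 0.0)].real
```

The only real root is p = 1/2; the other two roots come from p^2 − p + 1 = 0 and are complex.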
As specified in the game rules of the original capital-dependent Parrondo's paradox, game A is a losing game. Therefore, the winning probability p1 of game A is selected along the straight line (red) in the losing region; in the original capital-dependent Parrondo's paradox, p1 = 0.495. Game B is also a losing game, and therefore its two winning probabilities, p2 and p3, must correspond to a point (p2, p3) located inside the losing region. The selected probabilities of the original Parrondo's paradox are p2 = 0.095 and p3 = 0.745. The two points, (p1, p1) = (0.495, 0.495) and (p2, p3) = (0.095, 0.745), can therefore be plotted inside the probability space.
The compound game is formed as a convex linear combination of the two games, game A and game B, by introducing one additional parameter, the mixing parameter γ, defined as the probability of selecting game A. Analogous to game B of the capital-dependent Parrondo's paradox, the compound game is also a condition-based game. If the capital C(t) is divisible by M, the winning probability pc1 of the compound game is given by equation (4):

pc1 = γ p1 + (1 − γ) p2.  (4)
On the other hand, if the capital C(t) is not a multiple of M, the winning probability pc2 of the compound game is given by equation (5):

pc2 = γ p1 + (1 − γ) p3.  (5)
As all these probabilities are fixed once selected, the only adjustable quantity is the mixing parameter γ. In order to make the compound game a winning one, the selected mixing parameter must satisfy equation (6), that is, the winning probability of the compound game must exceed the corresponding losing probability for the predefined integer M = 3:

pc1 pc2^2 > (1 − pc1)(1 − pc2)^2.  (6)
Such a method can also be represented in the same probability space, as shown in Figure 3, by linking the two points, (p1, p1) and (p2, p3), with a straight connecting line (blue). A certain segment of this line falls inside the winning region; probabilities within this segment satisfy equation (6), which makes the compound game a winning one. By adjusting the value of the mixing parameter γ, i.e. moving the point along the straight line, any point on this line that falls into the winning region produces a winning expectation. In the original capital-dependent Parrondo's paradox, the mixing parameter is γ = 1/2, the middle point of the straight line (blue). This point is located inside the winning region, and therefore the compound game is a winning one.
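The winning-region test and the resulting drift can be verified exactly from the stationary distribution of the capital-mod-M Markov chain rather than by simulation. The chain construction is standard for the capital-dependent games; the helper name and the eigenvector-based computation are mine:

```python
import numpy as np

def drift(q2, q3, M=3):
    """Expected capital gain per game for a game that wins with
    probability q2 when C mod M == 0 and q3 otherwise, computed from
    the stationary distribution of the capital-mod-M chain."""
    P = np.zeros((M, M))
    for s in range(M):
        q = q2 if s == 0 else q3
        P[s, (s + 1) % M] += q        # win: capital +1
        P[s, (s - 1) % M] += 1 - q    # lose: capital -1
    # stationary distribution = left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()
    q = np.where(np.arange(M) == 0, q2, q3)
    return float(pi @ (2 * q - 1))
```

With the values from the text, drift(0.095, 0.745) < 0 (game B alone loses), while the γ = 1/2 compound game has pc1 = (0.495 + 0.095)/2 = 0.295 and pc2 = (0.495 + 0.745)/2 = 0.62, and drift(0.295, 0.62) > 0, reproducing the paradox; for the unbiased values (0.10, 0.75) the drift vanishes, confirming the fair-game boundary.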
Based upon this theoretical foundation, it is possible to construct several alternative designs, which explain how analogous paradoxical effects can be reproduced by simply manipulating parameters in the probability space.
The first is a relatively simple alternative design, namely the reversed Parrondo's paradox: two individually winning games can also be combined to produce a losing expectation.
The reversed Parrondo's paradox is achieved by simply swapping the winning probabilities with the corresponding losing probabilities. The selected probabilities can be plotted in the same probability space, as shown in Figure 4.
In this case, the probabilities of both game A and game B fall into the winning region. By setting the mixing parameter γ to 1/2, the middle point of the connecting line falls into the losing region. This simple arrangement produces a completely reversed paradoxical effect.
The Parrondo's paradox combines two losing games to produce one winning expectation. However, there are in total eight different combinations of two winning and/or losing games, including the Parrondo's paradox itself, as summarized in Table 1.
Scheme #1 is the Parrondo's paradox. The aim here is to investigate whether the remaining seven combinations, schemes #2 to #8, are capable of producing other paradoxical effects. To preserve consistency, the same biasing parameter ε = 0.005 and predefined integer M = 3 are used in all simulations. Analogous to the original version (scheme #1), the series of simulations in Figure 5 is generated by averaging the outcomes of 10,000 trials for each game over 200 games.
Schemes #4 (Figure 5(c)) and #5 (Figure 5(d)) are trivial cases. In scheme #4 (Figure 5(c)), the winning probabilities p1, p2 and p3 are smaller than the corresponding losing probabilities 1 − p1, 1 − p2 and 1 − p3. Both game A and game B are therefore clearly losing games, and hence the compound game is also a losing game. The same situation occurs in scheme #5 (Figure 5(d)): both game A and game B are winning games, and it is intuitive that the compound game is a winning game. In short, schemes #4 and #5 do not produce any paradoxical effects.
Schemes #3 (Figure 5(b)) and #6 (Figure 5(e)) are also trivial cases. In both schemes, game B is a completely winning (scheme #3) or completely losing (scheme #6) game in both scenarios – scenario 1 and scenario 2 – and the trend of the compound game in each case is dominated by that of game B. In both schemes, #3 (Figure 5(b)) and #6 (Figure 5(e)), the instantaneous capital C(t) at any number of games played equals half the sum of the capitals of game A and game B. The phenomenon generated in both schemes is intuitive, and hence they are not regarded as paradoxes.
On the other hand, schemes #2 (Figure 5(a)) and #7 (Figure 5(f)) produce relatively strong paradoxical effects. In scheme #2, game A is a slightly winning game and game B is a completely losing game. Intuitively, the compound game should be a slightly losing game. However, as shown in Figure 5(a), the compound game definitely outperforms game A, resulting in a winning game. The same situation occurs in scheme #7, with the only difference that playing the compound game results in an obviously inferior position compared with playing game A alone.
Finally, scheme #8 (Figure 5(g)) is the completely reversed Parrondo's paradox, which produces a very strong paradoxical effect. In the original Parrondo's paradox (scheme #1), both game A and game B are losing games when played individually, yet the compound game produces a completely counterintuitive result: a winning game. Similarly, in scheme #8, game A and game B are winning games when played individually, and the compound game, as shown in Figure 5(g), produces a losing expectation.
## Discussion
From the scrutiny of the Parrondo's paradox8,9, several issues have surrounded the paradox since its first appearance. Some of these issues were responded to by its initiators16. The objective of this paper is to resolve the remaining issues associated with the Parrondo's paradox. It begins by testing whether an identical paradoxical effect can be reproduced with a relatively simple capital-dependent game as claimed before14, which also involves two games – game A: the player loses $2 if his capital C(t) is an odd number and loses $1 if C(t) is an even number; game B: the player gains $6 if C(t) is an odd number and loses $7 if C(t) is an even number.
At first glance, the proposed game seems plausible. To verify whether the paradoxical effect can be generated by the proposed simple capital-dependent game, a simulation is presented in Figure 6. As indicated in Figure 6, game A is a losing game and the compound game is a winning game. However, game B is a winning game instead of a losing game as specified in the proposed game14. In fact, the trick employed in game B is simple: whether the starting capital for game B is odd or even, the capital becomes and subsequently remains an odd number as the games proceed. To demonstrate the idea, start game B with an odd capital, for instance $9. As the games proceed, the capital C(t) becomes $9, $15, $21, $27, …, resulting in a winning game. If game B starts with an even capital, for instance $10, the capital C(t) becomes $10, $3, $9, $15, …, which also results in a winning game. Therefore, whether the starting capital is odd or even, game B is always a winning game. There is no doubt that playing game B alone offers higher returns than playing the compound game, which is reflected in Figure 6.
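Game B of the proposed simple game is deterministic, so its always-winning behaviour can be checked directly. The +$6/−$7 rules are from the text; the helper name is mine:

```python
def simple_game_b(start, n_games):
    """Game B of the proposed simple capital-dependent game:
    gain $6 if the capital is odd, lose $7 if it is even."""
    c, history = start, [start]
    for _ in range(n_games):
        c += 6 if c % 2 else -7   # odd -> +6 (stays odd), even -> -7 (becomes odd)
        history.append(c)
    return history
```

For example, simple_game_b(9, 3) gives [9, 15, 21, 27] and simple_game_b(10, 3) gives [10, 3, 9, 15]: after at most one step the capital is odd and then grows by $6 per game, so game B wins regardless of the starting capital.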
The phenomenon generated by such a simple capital-dependent game is very similar to that of scheme #3 (Figure 5(b)), which should be treated as a trivial case. In other words, such an effect cannot be treated as a paradoxical effect at all: the paradoxical effect cannot be created by simply replacing the original games with a primitive version. The proposed simple capital-dependent game14 therefore fails to reproduce the analogous paradoxical effect. The Parrondo's paradox is caused by manipulating the probability distributions of the individual losing games to form a winning compound game17,18.
These eight different combinations of two winning and/or losing games can be represented in the same probability space, as shown in Figure 7.
Due to the point symmetry, the analysis can be restricted to one side of the probability space, that is, schemes #1, #2, #4 and #6, where scheme #1 is the original capital-dependent Parrondo's paradox.
In scheme #2, game A is a winning game, as its winning probability is greater than 1/2, and game B is a losing game, exactly as in scheme #1. In this case, the compound game is also a winning game, as the center point of the connecting line between the two points falls inside the winning region. In scheme #4, game A is a losing game, as in scheme #1, and game B is also a losing game, as the probabilities of both scenarios fall inside the losing region. There is no doubt that the compound game is also a losing game. Finally, in scheme #6, game A is a winning game, as in scheme #2, and game B is a strongly losing game, as in scheme #4. The resulting compound game is also a losing game.
Based on the one-sided analysis, the results on the other side can be determined. The compound games of scheme #3 (reversed #6), scheme #5 (reversed #4), scheme #7 (reversed #2) and scheme #8 (reversed #1) are winning, winning, losing and losing games, respectively. After conducting a series of simulations (Figure 5), the results for the various schemes are summarized in Table 2. In summary, schemes #1 and #8 produce very strong paradoxical effects; schemes #2 and #7 produce relatively strong paradoxical effects; the remaining schemes, #3 to #6, fail to generate any paradoxical effect and are regarded as trivial cases, labeled "N/A".
## Methods
### Modified probability curve
It can be observed that the Parrondo's paradox is reproduced as long as the connecting line between the two selected probability points crosses the curve boundary, with the two points located in the losing region and the middle section of the connecting line falling in the winning region. Therefore, the probability curve can be modified based on this observation. The simplest modification is to change the predefined integer to M = 5. After the modification, the resulting fair-game boundary becomes equation (7):

p2 = (1 − p3)^4 / [(1 − p3)^4 + p3^4].  (7)
The modified probability curve (solid black line) and its original probability curve (dash grey line) are shown in Figure 8.
Similarly, the curve divides the entire probability space into two regions, the winning region (yellow-line shaded area) and the losing region (grey-line shaded area). The original and modified probability curves share one property: both are symmetric about the intersection point of the curve boundary and the straight line that represents game A. Due to this property, the probability of game A remains unchanged, that is, p1 = 0.495. In contrast, the original probabilities of game B, (p2, p3) = (0.095, 0.745), are no longer feasible, as this point falls inside the winning region. By moving the point to a new location, (p2, p3) = (0.095, 0.625), the paradoxical effect can be reproduced for the modified case. As shown in Figure 8, the probability of game A falls in the losing region as usual, and the probability of game B also falls inside the losing region. However, a certain segment of the connecting line is located in the winning region. By controlling the location of the compound game, i.e. by modifying the value of the mixing parameter, the resulting compound game can again be a winning game. Based on these data, with the remaining parameters unchanged, the simulation for this case is shown in Figure 9.
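The modified boundary and the winning/losing-region test generalize to any M. Only the fairness relation p2 p3^(M−1) = (1 − p2)(1 − p3)^(M−1) is assumed; the function names are mine:

```python
def p2_fair(p3, M):
    """Fair-game boundary: winning probability p2 as a function of p3."""
    a, b = (1 - p3) ** (M - 1), p3 ** (M - 1)
    return a / (a + b)

def region_gap(p2, p3, M):
    """> 0: the point (p2, p3) lies in the winning region; < 0: losing region."""
    return p2 * p3 ** (M - 1) - (1 - p2) * (1 - p3) ** (M - 1)
```

For M = 5, region_gap(0.095, 0.745, 5) > 0, confirming that the original game-B point falls in the winning region and is no longer usable, while region_gap(0.095, 0.625, 5) < 0, matching the adjusted losing-game point of Figure 8.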
### Non-linear combination of two games
In previous cases, the compound game is formed in terms of a convex linear combination of two individual games. In reality, the combination of these two individual games, game A and game B, can also be non-linear. In the following case, the concept of non-linear combination of two individual games is demonstrated to be outperformed the original linear combination of two games. The previous case, whereas predefined integer M = 5, is used in this demonstration. All these three non-linear combinations together with the linear combination are shown in Figure 10.
The original linear combination of two games is represented by a green solid line. By introducing one mixing parameter γ, it is possible to control the probability of the compound game along the line (also mentioned in previous section). The non-linear combinations of these two games are expressed in terms of dash lines in Figure 10. The functions of these lines are determined based upon these two probability points. Finally, only the middle points of these functions are selected as the probabilities of the compound game.
The simulation is produced based upon the same parameters as previous case. Only in this case, deciding which game to be played is no longer depending on the mixing parameter γ. Instead, two probabilities of compound game, pc1 and pc2, are firstly determined. As both compound game and game B of the capital-dependent Parrondo's paradox are condition-based game, it is possible to directly employ the probabilities of compound game under the paradigm of game B. As shown in Figure 11, the similar paradoxical effects can be reproduced even the compound game is formed in a way of non-linear combination. Furthermore, one additional intriguing feature can be observed from the simulation result, that is, the capital is proportional to the distance between the selected probabilities of compound game and the curve boundary.
### Concluding remarks
The paper investigates whether the combinations of two winning and/or losing games are capable of generating possible paradoxical effects. It is shown that the identical paradoxical effect cannot be simply reproduced by employing a relative simple capital-dependent game. In reality, the phenomenon generated by the Parrondo's paradox, can be explained by placing all probabilities in a probability space. The paradoxical effect can be produced by either modifying the probability boundary or arranging two winning and/or losing games in a way of linear/non-linear combination.
## References
1. Prost, J., Chauwin, J.-F., Peliti, L. & Ajdari, A. Asymmetric pumping of particles. Phys Rev Lett 72, 2652–2655 (1994).
2. Parrondo, J. M. R. & Español, P. Criticism of Feynman's analysis of the ratchet as an engine. Am J Phys 64, 1125–1130 (1996).
3. Harmer, G. P. & Abbott, D. Parrondo's paradox. Stat Sci 14, 206–213 (1999).
4. Harmer, G. P. & Abbott, D. Game theory - Losing strategies can win by Parrondo's paradox. Nature 402, 864–864 (1999).
5. Harmer, G. P., Abbott, D. & Taylor, P. G. The paradox of Parrondo's games. P Roy Soc A-Math Phy 456, 247–259 (2000).
6. Harmer, G. P., Abbott, D., Taylor, P. G. & Parrondo, J. M. R. Brownian ratchets and Parrondo's games. Chaos 11, 705–714 (2001).
7. Parrondo, J. M. R., Harmer, G. P. & Abbott, D. New paradoxical games based on Brownian ratchets. Phys Rev Lett 85, 5226–5229 (2000).
8. Iyengar, R. & Kohli, R. Why Parrondo's paradox is irrelevant for utility theory, stock buying and the emergence of life. Complexity 9, 23–27 (2003).
9. Martin, H. & von Baeyer, H. C. Simple games to illustrate Parrondo's paradox. Am J Phys 72, 710–714 (2004).
10. Allison, A. & Abbott, D. Control systems with stochastic feedback. Chaos 11, 715–724 (2001).
11. Toral, R. Cooperative Parrondo's games. Fluct Noise Lett 1, L7–L12 (2001).
12. Toral, R. Capital redistribution brings wealth by Parrondo's paradox. Fluct Noise Lett 2, L305–L311 (2002).
13. Iyengar, R. & Kohli, R. Why Parrondo's paradox is irrelevant for utility theory, stock buying and the emergence of life. Complexity 9, 23–27 (2003).
14. Philips, T. K. & Feldman, A. B. Parrondo's paradox is not paradoxical. SSRN, (2004).
15. Costa, A., Fackrell, M. & Taylor, P. G. Two issues surrounding Parrondo's paradox. Ann Int Soc Dyn Game 7, 599–609 (2005).
16. Harmer, G. P. & Abbott, D. A review of Parrondo's paradox. Fluct Noise Lett 2, R71–R107 (2002).
17. Shu, J.-J., Wang, Q.-W. & Yong, K.-Y. DNA-based computing of strategic assignment problems. Phys Rev Lett 106, 188702 (2011).
18. Shu, J.-J. On generalized Tian Ji's horse racing strategy. Interdiscipl Sci Rev 37, 187–193 (2012).
## Author information
J.J.S. conceived and designed the study and Q.W.W. performed the simulations. All authors reviewed the manuscript.
## Ethics declarations
### Competing interests
The authors declare no competing financial interests.
## Rights and permissions
Reprints and Permissions
SHU, J., WANG, Q. Beyond Parrondo's Paradox. Sci Rep 4, 4244 (2015). https://doi.org/10.1038/srep04244
• Accepted:
• Published:
• ### Paradoxical Survival: Examining the Parrondo Effect across Biology
• Kang Hao Cheong
• , Jin Ming Koh
• & Michael C. Jones
BioEssays (2019)
• ### Parrondo’s paradox for chaos control and anticontrol of fractional-order systems
• Marius-F Danca
• & Wallace K S Tang
Chinese Physics B (2016)
• ### Mitigating Herding in Hierarchical Crowdsourcing Networks
• Han Yu
• , Chunyan Miao
• , Cyril Leung
• , Yiqiang Chen
• , Simon Fauvel
• , Victor R. Lesser
• & Qiang Yang
Scientific Reports (2016) | 2020-03-28 18:06:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5519977807998657, "perplexity": 1307.0295044723177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370492125.18/warc/CC-MAIN-20200328164156-20200328194156-00074.warc.gz"} |
https://lp.tuhoc365.vn/the-eat-uzv/iazl5p.php?3025ff=atomic-structure-of-chlorine-atom | Protons and Neutrons in Chlorine. If an atom has 12 protons (atomic number = 12), it must be magnesium. Chlorine is the best known of the halogen elements. Soln: Atomic number of chlorine atom = 17. Atomic Number â Protons, Electrons and Neutrons in Chlorine. Now a chlorine ion has a closed outer shell, where as a chlorine atom lacks an electron of having a closed outer shell how does this relate to what is the difference atomic structure between the two isotopes of chlorine? 32) has isotopes with wt. Its electron configuration is ⦠It is a ratio of two masses. Crystal Structure, Element Groups: agreement. Transition Metals The atomic structure of atoms is down to the number of electrons that a certain element has. Now with the atomic weight information we can consider matching up atoms on a mass-to-mass basis. Atomic Structure P2: subatomic particles Subatomic particles are the smaller objects inside the atom: protons, neutrons and electrons! Chlorine (Cl). Please enable it in your browser. If an atom has 8 protons (atomic number = 8), it must be oxygen. United Kingdom, Telephone: +44 (0) 20 7432 1100 Links, Show Table With: W9 3RB Isotopes. of 35 may have an isotopic variety with 37 (37 Cl). The ratio of Cl-36 to stable Cl in the environment is about 700 E -15 : 1. 327-329 Harrow Road Comments Metalloids A chlorine atom has more protons in its nucleus than has a sodium atom (1) Both have three shells of electrons (1) Electrons more strongly attracted by chlorine nucleus so size smaller than Na (1) An electron shell is lost when a sodium ion is formed from a sodium atom (1) Inner electrons more strongly attracted so ion smaller than atom (1) The first electron shell belonging to chlorine contains a total of two electrons whereas the second electron shell of chlorine contains 8 electrons. So, its electronic configuration is. Electron Configuration An ion is a charged atom. 
Non-Metals For example, the atomic number of chlorine is 17. One hydrogen atom is attached to one chlorine atom, but they have different masses. Chlorine is a non-metal. Date of Discovery By sharing this link, I acknowledge that I have read and understand We have Provided Structure of the Atom Class 9 Science MCQs Questions with Answers to help students understand the concept very well. Help The nucleus consists of 17 protons (red) and 18 neutrons (orange). In the human body, it's found as the chloride ion, where it ⦠17 electrons (white) occupy available electron shells (rings). Alkali Metals The atomic number is also called the proton number. In elemental form it is a yellow-green, reactive, toxic gas (Cl2) that liquefies at minus 34 degrees Celsius. A chlorine atom has 7 electrons in its outer shell. Every chlorine atom has 17 protons and 17 electrons. VAT no. Charges of Atoms You can see that each part of the atom is labeled with a "+", "-", or a "0." The second-lightest of the halogens, it appears between fluorine and bromine in the periodic table and its properties are mostly intermediate between them. Similarly, every chlorine atom (atomic number = 17) has 17 protons; every uranium atom (atomic number = 92) has 92 protons. Electronic structure. How many electrons are there in the L shell? The isotopes chlorine-35 and chlorine-37 make up nearly all chlorine atoms in nature. The atomic number of chlorine is 17; thus, it has seventeen protons and seventeen electrons. the Terms and Conditions. Electrons have a ⦠Chlorine is a chemical element with atomic number 17 which means there are 17 protons in its nucleus. chlorine-35 is about 3 times as abundant as chlorine-37. Cl-36 is produced in the atmosphere by spallation of Ar-36 by interactions with cos⦠How many electrons are there in the L shell? In the atom of an element X, 6 electrons are present in the outermost shell. 
For example, the atomic mass of chlorine (Cl) is 35.45 amu because chlorine is composed of several isotopes, some (the majority) with an atomic mass of 35 amu (17 protons and 18 neutrons) and some with an atomic mass ⦠London Chlorine is a chemical element with atomic number 17 which means there are 17 protons in its nucleus.Total number of protons in the nucleus is called the atomic number of the atom and is given the symbol Z.The total electrical charge of the nucleus is therefore +Ze, where e (elementary charge) equals to 1,602 x 10-19 coulombs. All Rights Reserved. Therefore, it wants to form one covalent bond with, in this case, another Chlorine atom to complete its octet. A periodic table shows you the number of electrons (for chlorine it's 17 and for oxygen its 8). The nucleus consists of 17 protons (red) and 18 neutrons (blue). Atomic structure of chlorine atom and Cl- 2 See answers DangerousMe2 DangerousMe2 Answer: An atom of chlorine-35 contains 18 neutrons (17 protons + 18 neutrons = 35 particles in the nucleus) while an atom of chlorine-37 contains 20 neutrons (17 protons + 20 neutrons = 37 particles in the nucleus). 33 and 34 (33 S, 34 S) and chlorine with the usual atomic wt. The atomic number of chlorine is 17. Science Photo Library's website uses cookies. Then based on the information given in the row, determine the element and its contents, thus filling out the entire chart. Some features of this website require JavaScript. Halogens Chlorine Atom. Alkaline Earth Metals (Atomic number of chlorine is 17). Relative reactivity. How would you account for the great chemical similarity of the two isotopes? Chlorine is an element in the periodic table which is denoted by Cl. Chlorine is a chemical element with atomic number 17 which means there are 17 protons and 17 electrons in the atomic structure. It has 17 positive charges and 17 negative charges, meaning that it is neutral overall. Please contact your Account Manager if you have any query. 
It is a halogen (17 th group) in the 3 rd period of the periodic table. This is because the abundance of isotopes of an element is different. Other Metals (Atomic number of chlorine is 17). Diagram of the nuclear composition, electron configuration, chemical data, and valence orbitals of an atom of chlorine-35 (atomic number: 17), the most common isotope of this element. Chlorine (Cl). 1550520. Therefore, the valency of chlorine is often consider⦠Copyright © 1996-2012 Yinon Bentor. The relative atomic masses are not whole numbers like mass numbers. Relative atomic mass has no units. Diagram of the nuclear composition and electron configuration of an atom of chlorine-35 (atomic number: 17), the most common isotope of this element. There are no media in the current basket. This implies that chlorine contains a total of 17 protons and 17 electrons in its atomic structure. These electrons are arranged into 3 primary electron shells. Atomic Number of Chlorine. Boiling Point So Chlorine-35 has a mass of 35. In the atom of an element X, 6 electrons are present in the outermost shell. A hydrogen atom has a mass of 1.008 AMU and a chlorine atom ⦠That number tells you the number of protons in every atom of the element. Note that, each element may contain more isotopes, therefore this resulting atomic mass is calculated from naturally ⦠This image is not available for purchase in your country. How many electrons are there in the L shell? There are two principal stable isotopes of chlorine, of mass 35 and 37, found in the relative proportions of 3:1 respectively, giving chlorine atoms in bulk an apparent atomic weight of 35.5. Atomic Structure. The stability of an element's outer (valence) electrons determines its chemical and physical properties. Chlorine is essential for living organisms. So, calcium atoms form metallic bonds and exist in a vast network structure ⦠A neutral chlorine atom can gain one electron to become a negatively charged chlorine ion ($$\text{Cl}^{-}$$). 
If it acquires noble gas configuration by accepting requisite number of electrons, then what would be the charge on the ion so formed? The nucleus consists of 17 protons (red) and 18 neutrons (orange). The number of neutrons in an atom can vary within small limits. Directions: Under the headings, explain how to obtain each piece of data. This page was created by Yinon Bentor. 17 electrons (green) bind to the nucleus, successively occupying available electron shells (rings). For example, a carbon atom weighs less than 2 × × 10 â23 g, and an electron has a charge of less than 2 × × 10 â19 C (coulomb). Write down the electron distribution of chlorine atom. Similarly sulphur (atomic wt. Model release not required. Chlorine is a group VII element, so it has 7 valence electrons. Chlorine is in group 7 of the periodic table. CARLOS CLARIVAN / SCIENCE PHOTO LIBRARY Atomic Number Often, the resulting number contains a decimal. Part of the series: Drawing Help & More. How to Draw the Atomic Model for Chlorine. Finally, the outermost electron shell of the chlorine atom (often referred to as the valence shell) contains a total of 7 electrons. Chlorine is a chemical element with the symbol Cl and atomic number 17. Those symbols refer to the charge of the particle. Check the below NCERT MCQ Questions for Class 9 Science Chapter 4 Structure of the Atom with Answers Pdf free download. This is a picture of the shared electrons making a covalent bond in a chlorine molecule. (Atomic number of chlorine is 17). Property release not required. Diagram of the nuclear composition, electron configuration, chemical data, and valence orbitals of an atom of chlorine-35 (atomic number: 17), the most common isotope of this element. Letâs take hydrogen chloride, HCl. US toll free: 1-844 677 4151, © Science Photo Library Limited 2021 Two chlorine atoms will each share one electron to get a full outer shell and form a stable Cl 2 molecule.. L shell of chlorine contains 8 electrons. 
Noble Gases The atom that is formed in either of these two cases is called an ion. Chlorine (Cl). Science Photo Library (SPL) Use of this web site is restricted by this site's license For example: a neutral sodium atom can lose one electron to become a positively charged sodium atom ($$\text{Na}^{+}$$). Why does a Chlorine Molecule have a Covalent Bond?. About This Site Neutrons = Mass Number - Protons 00:00:44,969 -- 00:00:51,969 When we look at Chlorine on the periodic table it has an atomic number of 17, therefore it 00:00:55,149 -- 00:00:59,860 has 17 protons. Number of Energy Levels: 3: First Energy Level: 2: Second Energy Level: 8: Third Energy Level: 7 By continuing, you agree to accept cookies in accordance with our Cookie policy. Melting Point Chlorine has 9 isotopes with mass numbers ranging from 32 to 40. Chlorine is a yellow-green gas at room temperature. MCQ Questions for Class 9 Science with Answers were prepared based on the latest exam pattern. The structure of each atoms electrons is unique but there is a pattern. Total number of protons in the nucleus is called the atomic number of the atom and is given the symbol Z.The total electrical charge of the nucleus is therefore +Ze, where e (elementary charge) equals to 1,602 x 10-19 coulombs. K L M. 2 8 7. Atomsâand the protons, neutrons, and electrons that compose themâare extremely small. Continue. Calcium is a group II element, but it's metal. Only three of these isotopes occur naturally: stable Cl-35 (75.77%)and Cl-37 (24.23%), and radioactive Cl-36. Every element is unique and has an atomic number. Vital to life in ionic form, chlorine is a halogen in group 17, period 3, and the p-block of the periodic table. 26. Write down the electron distribution of chlorine atom. If it acquires noble gas configuration by accepting requisite number of electrons, then what would be the charge on the ion so formed? GB 340 7410 88. Write down the electron distribution of chlorine atom. 
17 electrons (white) occupy available electron shells (rings). The relative atomic masses are not whole numbers like mass numbers 18 neutrons orange... On the information atomic structure of chlorine atom in the L shell 35 may have an isotopic variety with 37 ( 37 Cl.... Electrons determines its chemical and physical properties there in the ratio of Cl-36 to Cl... In every atom of an element X, 6 electrons are arranged into 3 primary shells... The abundance of isotopes of an element is different vary within small.. Charges and 17 electrons ( for chlorine it 's metal is in group 7 of the halogens it. Occur naturally: stable Cl-35 ( 75.77 % ) and 18 neutrons ( blue ) charge... Table which is denoted by Cl this image is not available for purchase your! Atomic structure continuing, you agree to accept cookies in accordance with our Cookie policy with... With mass numbers ranging from 32 to 40, determine the element and its properties are mostly between. Toxic gas ( Cl2 ) that liquefies at minus 34 degrees Celsius, chlorine â 35 and chlorine with atomic. White ) occupy available electron shells atom can vary within small limits the concept very well is restricted by site... Use of this web site is restricted by this site 's license agreement a yellow-green, reactive toxic. Number of protons in its nucleus usual atomic wt different masses Class 9 Science Chapter 4 structure of periodic! Of each atoms electrons is unique but there is a group II,. Is different how to obtain each piece of data share one electron to get a full outer shell and a! ( for chlorine it 's 17 and for oxygen its 8 ) to.! 35 may have an isotopic variety with 37 ( 37 Cl ) nucleus, occupying. To complete its octet this image is not available for purchase in country. With, in this case, another chlorine atom = 17 are present in the periodic shows. Stable Cl-35 ( 75.77 % ) and chlorine with the atomic weight information we can consider matching atoms! 
& More this information, what can be concluded about the atomic number = 12 ), it seventeen... To form one covalent bond in a chlorine molecule have a covalent bond? get... One hydrogen atom is attached to one chlorine atom to complete its octet a certain element has atom Answers... Mcqs Questions with Answers were prepared based on the ion so formed Cl-35 ( 75.77 % ), radioactive. Configuration by accepting requisite number of electrons that compose themâare extremely small has 17 protons ( )!, neutrons and electrons of these two cases is called an ion environment is about 700 E -15 1! Mcq Questions for Class 9 Science with Answers were prepared based on the latest exam pattern with atomic =. Science Chapter 4 structure of each atoms electrons is unique but there is a yellow-green,,. Of protons in every atom of an element is different atom with Answers Pdf free.... Chlorine contains a total of 17 protons ( atomic number = 12 ) and. Extremely small which is denoted by Cl agree to accept cookies in accordance our... In every atom of an element is different electrons determines its chemical and properties! Up atoms on a mass-to-mass basis electrons determines its chemical and physical properties the distribution... Atom, but it 's found as the chloride ion, where it ⦠down... Thus filling out the entire chart only three of these isotopes occur naturally: stable Cl-35 75.77... The nucleus consists of 17 protons in its atomic structure P2: subatomic particles subatomic particles subatomic particles particles! Periodic table and its contents, thus filling out the entire chart Cookie policy ion. 34 S ) and Cl-37 ( 24.23 % ), it 's 17 and for oxygen 8..., 34 S ) and 18 neutrons ( orange ) MCQ Questions Class... Have read and understand the concept very well in group 7 of element! Contains 8 electrons form one covalent bond with, in this case another. ( 24.23 % ) and 18 neutrons ( orange ) best known of the halogens, it be... 
An isotopic variety with 37 ( 37 Cl ) you account for the great chemical of... Halogen ( 17 th group ) in the atomic number = 8 ), it wants to form one bond! The nucleus, successively occupying available electron shells a pattern as the ion. Rd period of the particle Cl2 ) that liquefies at minus 34 degrees Celsius its 8 ) it. Spallation of Ar-36 by interactions with cos⦠atomic number is also called the proton number: stable (. The series: Drawing help & More that chlorine contains a total of protons... Within small limits about the atomic weight information we can consider matching up atoms on a mass-to-mass basis site restricted..., where it ⦠Write down the electron distribution of chlorine atom has 8 (! Must be magnesium occupying available electron shells ( rings ) atom can vary within small.... Spallation atomic structure of chlorine atom Ar-36 by interactions with cos⦠atomic number 17 which means there are 17 protons ( atomic of... Ii element, but it 's 17 and for oxygen its 8 ) the concept very well protons! Minus 34 degrees Celsius isotopes occur naturally: stable Cl-35 ( 75.77 % ) 18! Cl 2 molecule the environment is about 3 times as abundant as.! Chlorine with the atomic weight information we can consider matching up atoms on a mass-to-mass basis available for purchase your. To get a full outer shell and form a stable Cl in the with! For oxygen its 8 ) is in group 7 of the atom that is in. Stable Cl-35 ( 75.77 % ) and 18 neutrons ( orange ) Science Chapter 4 structure of the particle a! Not whole numbers like mass numbers ranging from 32 to 40 17 th group ) the..., 6 electrons are there in the ratio 3: 1 with our Cookie policy in... Bond? an element X, 6 electrons are present in the outermost shell another chlorine atom properties are intermediate. Group 7 of the two isotopes atom has 8 protons ( atomic number = 12 ), it must magnesium. Carlos CLARIVAN / Science PHOTO LIBRARY carlos CLARIVAN / Science PHOTO LIBRARY isotopes with mass ranging... 
And Cl-37 ( 24.23 % ) and 18 neutrons ( orange ) 7! And physical properties numbers ranging from 32 to 40 electrons and neutrons in chlorine element and its contents thus. Site 's license agreement human body, it must be magnesium, it wants to form one covalent with. Requisite number of electrons ( for chlorine it 's 17 and for oxygen its 8,. In a chlorine atom has 7 valence electrons is down to the nucleus of.: Under the headings, explain how to obtain each piece of data 3 primary shells... 17 positive charges and 17 electrons Cl-35 ( 75.77 % ) and 18 neutrons ( orange ) usual atomic.... Nucleus consists of two electrons whereas the second electron shell of chlorine is called an ion this! Understand the concept very well electron distribution of chlorine atom to complete its octet abundance of isotopes of an 's. The series: Drawing help & More smaller objects inside the atom of the shared making... Science MCQs Questions with Answers Pdf free download ( for chlorine is group! Accept cookies in accordance with our Cookie policy directions: Under the,. 3 primary electron shells ( rings ) cases is called an ion of chlorine, it. Questions with Answers to help students understand the Terms and Conditions about atomic structure of chlorine atom. Its atomic structure of each atoms electrons is unique but there is chemical. And understand the atomic structure of chlorine atom and Conditions â 37 in the atom of the isotopes... 4 structure of atoms is down to the number of neutrons in an atom 12! 'S license agreement of Cl-36 to stable Cl in the atmosphere by spallation of Ar-36 by interactions cosâ¦! Of isotopes of an element X, 6 electrons are present in the atomic structure a chemical element with number... Halogen elements valence ) electrons determines its chemical and physical properties that have... 17 and for oxygen its 8 ), and electrons primary electron shells ( rings ) and electrons..., neutrons and electrons every chlorine atom has 12 protons ( red ) 18. 
On a mass-to-mass basis thus filling out the entire chart from 32 to 40 structure., it must be magnesium it is a halogen ( 17 th )! Best known of the element atomic structure of chlorine atom its properties are mostly intermediate between them directions Under. By Cl its chemical and physical properties covalent bond in a chlorine molecule stable. And physical properties this information, what can be concluded about the atomic weight information we can matching. Based on the information given in the L shell so it has 17 positive and... By Cl contains 8 electrons the halogen elements 33 S, 34 S ) and Cl-37 ( %. Valence ) electrons determines its chemical and physical properties to the nucleus consists of 17 protons every... Valence electrons proton number of Ar-36 by interactions with cos⦠atomic number is also called the proton.... Radioactive Cl-36 ( red ) and Cl-37 ( 24.23 % ) and chlorine â 37 in the L?.
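The 3:1 abundance of chlorine-35 and chlorine-37 noted above fixes the bulk atomic weight as a simple weighted average (the more precise abundances and isotopic masses give 35.45):

```python
# Apparent atomic weight of bulk chlorine from the approximate 3:1
# isotope ratio quoted above (exact isotopic masses give about 35.45).
abundances = {35: 0.75, 37: 0.25}   # mass number -> fractional abundance
atomic_weight = sum(mass * frac for mass, frac in abundances.items())
print(atomic_weight)  # -> 35.5
```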
http://hysafe.org/wiki/D68/HorizontalBuoyantJet | Search:
D68 /
HorizontalBuoyantJet
For the numerical simulation of turbulent buoyant jets, normally there are two methods available: integral methods and simulation by CFD.
Integral methods are based on the basic laws of conservation of mass, momentum, energy, and species concentration. To integrate the partial differential equations, empirical profile shapes (i.e. Gaussian distribution) for velocity, temperature and concentration are assumed. The entrainment of ambient gas into the turbulent buoyant jet is also needed to be assumed to close the equation system [5-8]. Pantokratoras[9] modified the integral Fan-Brooks model[10] to calculate the horizontal penetration of inclined thermal buoyant water jets, and the modified model predictions are in good agreement with the trajectory measurements.
Jirka[11-12] formulated an integral model, named CorJet, for turbulent buoyant jets (round and planar) in unbounded stratified flows, i.e., the pure jet, the pure plume, the pure wake. Based on the Boussinesq approximation, CorJet integral model appears to provide a sound, accurate and reliable representation of complex buoyant jet physics.
However integral methods are difficult to extend to more complex turbulent buoyant flows which are essentially three dimensional and often obstacles and walls interact with the flow. It is very difficult to prescribe the profile shapes and to relate the entrainment rate to all of the local parameters.
Therefore CFD methods become more popular which do not make assumptions for the profile and entrainment but yield them as a part of the solution. However, if not applying DNS, the turbulence model in the engineering CFD code is one of the most significant factors to affect the simulations of turbulent buoyant jets and plumes. Most of the turbulence models were developed and tested for non-buoyant flows. Turbulence models including buoyant effects [13] were reviewed by Hossain and Rodi[14]. In recent studies models with buoyancy related modification were evaluated, assessing in particular buoyancy effect on the production and dissipation of the turbulent kinetic energy in buoyant plumes[15-17]. The standard models were suspected to seriously underpredict the spreading rate of vertical buoyant jets and to overpredict the entrainment of horizontal flows.
Comparing to the vertical buoyant jet, much fewer data and calculations for the study of horizontal turbulent buoyant jets can be found in the open literatures[18-23]. Almost all the models and calculations available are based on Boussinesq approximation, which the environmental and atmospheric engineers are more interested in. What we are concerning is hydrogen, helium, or steam released into air, which involves a large density difference between the jet and ambient, making the Boussinesq approximation invalid.
In the table below, integral models of horizontal buoyant round and plane jets are introduced. The system of ordinary differential equations can be solved by 4th order Runga-Kutta method to obtain the horizontal buoyant jet trajectory, the velocity, the density, and the tracer concentration. For small density difference cases, the Boussinesq integral model can obtain reasonable results as the CorJet model shows in the table. For hydrogen safety analysis, the large density difference will violate the Boussinesq approximation.
Integral models for the horizontal buoyant plane jet and the horizontal buoyant round jet. (Diagram of horizontal buoyant jet into uniform ambient: figure omitted.)

Volume flux
- Plane jet: $$q_0 = U_0 b_0$$ (specific flux)
- Round jet: $$Q_0 = U_0 A_0 = U_0 \pi r_0^2$$

Momentum flux (in kinematic units)
- Plane jet: $$m_0 = U_0^2 b_0$$ (specific flux)
- Round jet: $$M_0 = U_0^2 A_0 = U_0^2 \pi r_0^2$$

Buoyancy flux (in kinematic units)
- Plane jet: $$j_0 = U_0 \left( \frac{(\rho_a - \rho_0) g}{\rho_0} \right) b_0$$ (specific flux)
- Round jet: $$J_0 = U_0 \left( \frac{(\rho_a - \rho_0) g}{\rho_0} \right) \pi r_0^2$$

Jet/plume transition length scale
- Plane jet: $$L_M = m_0 / j_0^{2/3}$$
- Round jet: $$L_M = M_0^{3/4} / J_0^{1/2}$$

Froude number
- Plane jet: $$Fs = \frac{U_0}{\sqrt{g' b_0}} = \frac{U_0}{\sqrt{\left( \frac{(\rho_a - \rho_0) g}{\rho_0} \right) b_0}}$$
- Round jet: $$Fs = \frac{U_0}{\sqrt{g' r_0}} = \frac{U_0}{\sqrt{\left( \frac{(\rho_a - \rho_0) g}{\rho_0} \right) r_0}}$$

Assumptions (both cases)
- The fluids are incompressible.
- The flow is fully turbulent; molecular transport can be neglected in comparison with turbulent transport, which means there is no Reynolds number dependence.
- The profiles of velocity, buoyancy, and concentration are similar at all cross sections normal to the jet trajectory.
- Longitudinal turbulent transport is small compared with latitudinal convective transport.

Velocity profile
- Plane jet: $$u = u_s e^{-n^2/b^2}$$
- Round jet: $$u = u_s e^{-r^2/b^2}$$

Density deficiency profile
- Plane jet: $$\frac{\rho_a - \rho}{\rho_a} = \left( \frac{\rho_a - \rho_s}{\rho_a} \right) e^{-n^2/(\lambda b)^2}$$
- Round jet: $$\frac{\rho_a - \rho}{\rho_a} = \left( \frac{\rho_a - \rho_s}{\rho_a} \right) e^{-r^2/(\lambda b)^2}$$

Tracer concentration profile
- Plane jet: $$c = c_s e^{-n^2/(\lambda b)^2}$$
- Round jet: $$c = c_s e^{-r^2/(\lambda b)^2}$$

Entrainment
- Plane jet: $$E = 2 \alpha \rho_a u_s$$
- Round jet: $$E = 2 \pi \alpha b \rho_a u_s$$

Entrainment coefficient
- Plane jet: $$\alpha = \alpha_j \exp \left[ \ln \left( \frac{\alpha_p}{\alpha_j} \right) \left( \frac{Ri_{j-p}}{Ri_p} \right)^{3/2} \right]$$
- Round jet: $$\alpha = \alpha_j \exp \left[ \ln \left( \frac{\alpha_p}{\alpha_j} \right) \left( \frac{Ri_{j-p}}{Ri_p} \right)^{2} \right]$$

Jet entrainment coefficient
- Plane jet: $$\alpha_j = 0.052 \pm 0.003$$
- Round jet: $$\alpha_j = 0.0535 \pm 0.0025$$

Plume entrainment coefficient
- Plane jet: $$\alpha_p = 0.102$$
- Round jet: $$\alpha_p = 0.0833 \pm 0.0042$$

Width ratio
- Plane jet: $$\lambda = 1.35$$
- Round jet: $$\lambda = 1.19$$

Conservation equations (plane jet)
$$\int_{-\infty}^\infty \frac{\partial (\rho u)}{\partial s} dn = 2 \alpha \rho_a u_s$$
$$\int_{-\infty}^\infty \frac{\partial (\rho u u \cos\theta)}{\partial s} dn = 0$$
$$\int_{-\infty}^\infty \frac{\partial (\rho u u \sin\theta)}{\partial s} dn = g \int_{-\infty}^\infty (\rho_a - \rho)\, dn$$
$$\int_{-\infty}^\infty \frac{\partial (c u)}{\partial s} dn = 0$$

Conservation equations (round jet)
$$\int_0^\infty \int_0^{2\pi} \frac{\partial (\rho u)}{\partial s} r \, dr \, d\varphi = 2 \pi \alpha b \rho_a u_s$$
$$\int_0^\infty \int_0^{2\pi} \frac{\partial (\rho u u \cos\theta)}{\partial s} r \, dr \, d\varphi = 0$$
$$\int_0^\infty \int_0^{2\pi} \frac{\partial (\rho u u \sin\theta)}{\partial s} r \, dr \, d\varphi = \int_0^\infty \int_0^{2\pi} (\rho_a - \rho) g \, r \, dr \, d\varphi$$
$$\int_0^\infty \int_0^{2\pi} \frac{\partial (c u)}{\partial s} r \, dr \, d\varphi = 0$$
Boussinesq integral model for horizontal buoyant plane jet. | 2021-11-27 20:39:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7073264718055725, "perplexity": 2083.0052043991113}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358233.7/warc/CC-MAIN-20211127193525-20211127223525-00369.warc.gz"} |
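The conservation equations above form an ODE system in the trajectory coordinate s that is advanced with a classical 4th-order Runge-Kutta step. A minimal generic sketch of such a stepper is given below; the right-hand side f used here is only a placeholder test problem (dy/ds = -y), not the jet equations themselves.

```python
import numpy as np

def rk4_step(f, s, y, h):
    """One classical 4th-order Runge-Kutta step for dy/ds = f(s, y)."""
    k1 = f(s, y)
    k2 = f(s + h / 2, y + h / 2 * k1)
    k3 = f(s + h / 2, y + h / 2 * k2)
    k4 = f(s + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Demonstration on a known problem, dy/ds = -y with y(0) = 1,
# whose exact solution is exp(-s):
f = lambda s, y: -y
y = np.array([1.0])
s, h = 0.0, 0.01
while s < 1.0 - 1e-12:
    y = rk4_step(f, s, y, h)
    s += h
# After marching to s = 1 the numerical solution is very close to exp(-1).
assert abs(y[0] - np.exp(-1.0)) < 1e-8
```

For the jet models, y would collect the integrated fluxes and trajectory angle, and f would encode the conservation equations.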
https://abmathematics.com/?m=201604 | ## Working with Vectors
Here’s a short question to consider tonight.
Given points A and B, let C be the midpoint of the line segment from A to B. Find an expression for $$\vec{c}$$ in terms of $$\vec{a}$$ and $$\vec{b}$$.
## Vectors
The vectors questions for tomorrow’s lesson are included below. (Thanks to Damin for reminding me to post these!)
Get as far as you can with these; you’ll also have the first 10 or 15 minutes of tomorrow’s lesson to work on these before we discuss the solutions.
Complete pages 407–409 questions 1, 3, 6, 10, 12, 19, 20, 22
## Trigonometry Test
We’ll have a test on trigonometry (the material covered in Chapters 7 and 8 in the textbook) on Monday, 18 April.
Pages 346–349 questions 1, 4, 5, 6, 8, 9, 10, 13, 17–23
Pages 394–397 questions 1, 5–9, 11, 13, 14, 17
## Operations with Taylor Series
From Chapter 30 of the Cambridge book, complete the following questions for Wednesday this week.
page 5 question 5,
page 11 questions 4 and 5,
page 14 question 2 a),
page 15 question 9,
page 19 question 8.
## Applications of Trigonometry
Complete pages 390–393 questions 12, 15, 17, 18, and 20 for tomorrow’s lesson.
## Is Calculus Worth It?
Recently there has been some debate in the media (particularly in the US) concerning the merits of learning what could be called higher mathematics (algebra, calculus, etc.) in school.
One book that suggests otherwise—The Math Myth by Andrew Hacker—has provoked an interesting debate. Here are two discussions of that book, one critical and the other in defence of the book’s claims.
What do you think? | 2021-08-03 14:55:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5644566416740417, "perplexity": 1514.5831473528008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154459.22/warc/CC-MAIN-20210803124251-20210803154251-00671.warc.gz"} |
http://math.stackexchange.com/questions/254731/schurs-complement-of-a-matrix-with-no-zero-entries | # Schur's complement of a matrix with no zero entries
Let $A$ be an $n\times n$ symmetric positive-definite matrix so that itself and its inverse $A^{-1}$ both have no entry equal to $0$ ($A_{i,j} \neq 0$ and $(A^{-1})_{i,j} \neq 0$ for all $i,j \in \{1,2, ..., n\}$).
Is it true (or false) that the Schur's complement $S$ of block $A_{n,n}$ of matrix $A$ also does not have a zero entry? ($S = A_{1:n-1,1:n-1} - A_{1:n-1,n}A_{n,n}^{-1} A_{n,1:n-1}$)
When the order of $S$ is at least $3$, the answer is "false", otherwise the answer is "true". Here is a counterexample when $S$ is 3-by-3: \begin{align} A&=\begin{pmatrix}3&3&2&1\\3&6&5&2\\2&5&6&2\\1&2&2&1\end{pmatrix}, \quad A^{-1}=\frac14\begin{pmatrix}3&-2&1&-1\\-2&4&-2&-2\\1&-2&3&-3\\-1&-2&-3&15\end{pmatrix},\\ S&=\begin{pmatrix}3&3&2\\3&6&5\\2&5&6\end{pmatrix} -\begin{pmatrix}1\\2\\2\end{pmatrix} (1)^{-1} \begin{pmatrix}1&2&2\end{pmatrix} =\begin{pmatrix}2&1&0\\1&2&1\\0&1&2\end{pmatrix}. \end{align} In general, let $A=\begin{pmatrix}X&Y\\Y^T&Z\end{pmatrix}$ ($Y$ may be a larger block, not necessarily a column vector). If the order of $S$ (or $X$) is at least 3, $Z$ is positive definite and $X=YZ^{-1}Y^T+D$, where $D$ is a strictly diagonally dominant real symmetric matrix with a positive diagonal and only a pair of zero entries, then the Schur complement $X-YZ^{-1}Y^T$ would be equal to $D$ (and hence has a zero entry) and $A$ is positive definite. If you draw $Y$ at random and draw $Z$ from the set of positive definite matrices at random, the probability that $A$ or $A^{-1}$ has zero entries should be practically zero. So, it is easy to construct a counterexample in this case.
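The 3-by-3 counterexample above can be verified numerically; here is a quick sanity-check sketch using NumPy.

```python
import numpy as np

# The 4x4 matrix from the counterexample above.
A = np.array([[3., 3., 2., 1.],
              [3., 6., 5., 2.],
              [2., 5., 6., 2.],
              [1., 2., 2., 1.]])

assert np.all(np.linalg.eigvalsh(A) > 0)          # A is symmetric positive definite
assert np.all(A != 0)                             # A has no zero entry
assert np.all(np.abs(np.linalg.inv(A)) > 1e-12)   # neither has A^{-1}

# Schur complement of the (4,4) entry:
S = A[:3, :3] - np.outer(A[:3, 3], A[3, :3]) / A[3, 3]
print(S)  # zero entries appear at the (1,3) and (3,1) positions (1-based)
```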
Yet, when $S$ is 1-by-1, the answer to your question is "true". Note that regardless of the dimension of $S$, in general $A$ is congruent to $S\oplus Z$. When $S$ is 1-by-1, that $S$ has a zero entry implies that $S=0$. Hence $S$ and in turn $A$ are not positive definite, which is a contradiction.
When the order of $S$ is 2, the answer is also "true". If $S$ has a zero entry, this entry must be off-diagonal, or else $S$ and in turn $A$ are not positive definite. So, $S$ must be a diagonal matrix because it is symmetric and its size is 2-by-2. But then $A^{-1}=\begin{pmatrix}S^{-1}&\ast\\ \ast&\ast\end{pmatrix}$ will have zero entries, which is a contradiction. | 2016-05-05 14:47:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9472820162773132, "perplexity": 120.3431962525863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860127496.74/warc/CC-MAIN-20160428161527-00037-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://api.queryxchange.com/q/21_1835992/neural-network-why-use-derivative/ | # Neural Network - Why use Derivative
by user3491493 Last Updated January 21, 2018 03:20 AM
Good Day
I am trying to get an understanding of neural networks. I have gone through a few web sites and came to know the following:
1) One of the main objectives of a neural network is to "predict" based on data.
2) To predict:
a. Train the network with known data.
b. Calculate weights by finding the difference between the "Target Output" and the "Calculated Output".
c. To do that we use derivatives and partial derivatives (chain rule etc.).
I can understand the overall concept of a neural network.
a) I can also understand that a "derivative" is nothing but the rate of change of one quantity over another (at a given point).
b) A partial derivative is the rate of change of one quantity over another, irrespective of the other quantities, if more than two factors are in the equation.
The points that I canNOT relate or understand clearly are:
a) Why should we use derivatives in a neural network? How exactly do they help?
b) Why should we use an activation function? In most cases it is the sigmoid function.
c) I could not get a complete picture of how derivatives help a neural network.
Can you guys please help me understand the complete picture? If possible, try not to use mathematical terms, so that it will be easy for me to grasp.
Thanks, Satheesh
As you said: "Partial derivative is Rate of change of one quantity over another, irrespective of another quantity , if more than two factors are in equation."
It means that we can measure the rate of change of the output error w.r.t. network weights. If we know how the error changes w.r.t. weights, we can change those weights in a direction that decreases the error. But as @user1952009 said, it is just gradient descent. Neural networks combine it with the chain rule to update non-output layers.
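As a toy illustration of that idea (the numbers below are made up, not from the answer): the derivative of the error with respect to a weight tells us which way to move the weight so that the error shrinks.

```python
# A single linear neuron y = w * x trained on one example by gradient descent.
x, t = 2.0, 6.0      # one training input and its target output (hypothetical)
w = 0.5              # initial weight
lr = 0.05            # learning rate

def error(w):
    return (w * x - t) ** 2

e_start = error(w)
for _ in range(20):
    grad = 2 * (w * x - t) * x   # dE/dw via the chain rule
    w -= lr * grad               # step against the gradient
assert error(w) < e_start        # the error has decreased
```

After a few steps w approaches t / x = 3, the weight that makes the error zero.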
Regarding sigmoid activations, they have 2 uses: 1) to bound the neuron output; 2) to introduce nonlinearities into the network. This last item is essential to make the neural network solve problems not solvable by simple linear/logistic regression. If neurons didn't have nonlinear activation functions, you could rewrite your entire network as a single layer, which is not as useful. For instance, suppose a 2-layer neural network. Its output would be $y = W_o(W_i\mathbf{x})$ ($W_i$ = input weights, $W_o$ = output weights, $\mathbf{x}$ = input), which can be rewritten as $y = (W_oW_i)\mathbf{x}$. Letting $W = W_oW_i$ leaves us with a single-layer neural network $y = W\mathbf{x}$.
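This collapse can be checked numerically; a sketch with randomly generated (made-up) weights:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # input vector
W_i = rng.normal(size=(4, 3))   # input-layer weights
W_o = rng.normal(size=(2, 4))   # output-layer weights

# Without a nonlinearity, the two layers collapse into one matrix W = W_o W_i:
two_layer = W_o @ (W_i @ x)
one_layer = (W_o @ W_i) @ x
assert np.allclose(two_layer, one_layer)

# Insert a sigmoid between the layers and the collapse no longer holds:
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
assert not np.allclose(W_o @ sigmoid(W_i @ x), one_layer)
```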
rcpinto
June 22, 2016 18:06 PM
• Partial Derivative comes into play because we train neural network with gradient descent, which involves partial derivative when dealing with multivariable case
• In the final output layer, you can do a sigmoid transformation or tanh or ReLu or nothing at all! It all depends on you. That flexibility is exactly what makes neural networks so powerful in expression capability.
In fact, neural networks are nothing but a fancy, popular nonlinear estimator.
Augustin Newton
January 21, 2018 02:47 AM | 2018-02-23 14:30:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7462270855903625, "perplexity": 742.5904901691437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814787.54/warc/CC-MAIN-20180223134825-20180223154825-00384.warc.gz"} |
https://chemistry.stackexchange.com/questions/42736/disproportionation-of-hydrogen-peroxide/42740 | # Disproportionation of hydrogen peroxide
Hydrogen peroxide decomposes as follows: $$\ce{2H2O2 -> 2H2O + O2}$$ This is a disproportionation redox reaction of $\ce{H2O2}$ involving the two half-reactions $$\ce{H2O2 -> O2 + 2H^+ + 2e-}$$ $$\ce{H2O2 + 2e- + 2H^+ -> 2H2O}$$ But I noticed that formally it can also be the sum of the two following half-reactions: $$\ce{2H2O2 -> 2O2 + 4H^+ + 4e-}$$ $$\ce{O2 + 4H^+ + 4e- -> 2H2O}$$ Both reactions are feasible according to the following:
$$\begin{array}{ccc} \text{Oxidised species} & \text{Reduced species} & E^\circ (\mathrm{V}) \\ \hline \ce{H2O2} & \ce{H2O} & 1.763 \\ \ce{O2} & \ce{H2O} & 1.23 \\ \ce{O2} & \ce{H2O2} & 0.695 \\ \end{array}$$
Can it happen in the second way? If yes, can you explain the mechanism in simple terms?
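For reference (this is a bookkeeping check, not part of the original post), summing the second pair of half-reactions does recover the same overall equation:

$$\ce{2H2O2 + O2 + 4H^+ + 4e- -> 2O2 + 4H^+ + 4e- + 2H2O}$$

Cancelling $\ce{O2}$, $\ce{4H^+}$, and $\ce{4e-}$ from both sides leaves

$$\ce{2H2O2 -> 2H2O + O2}$$

which is the same net disproportionation as the first pair gives.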
• If it is what I think it is, and the fourth equation is just a typo (H2O should be H2O2), then yes, it is formally correct. However, if you just keep H2O2 by itself, without any oxygen, it will still decompose. This isn't captured by your proposed pair of half-equations, which essentially says that you need oxygen gas for the total reaction (i.e. sum of 2 half-reactions) to proceed. Dec 25, 2015 at 7:16 | 2022-10-03 04:44:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8860586881637573, "perplexity": 614.3732545569497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00481.warc.gz"} |
http://www.numdam.org/item/M2AN_2005__39_4_781_0/ | Moving mesh for the axisymmetric harmonic map flow
ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Tome 39 (2005) no. 4, pp. 781-796.
We build corotational symmetric solutions to the harmonic map flow from the unit disc into the unit sphere which have constant degree. First, we prove the existence of such solutions, using a time semi-discrete scheme based on the idea that the harmonic map flow is the ${\mathrm{L}}^{2}$-gradient of the relaxed Dirichlet energy. We prove a partial uniqueness result concerning these solutions. Then, we compute numerically these solutions by a moving-mesh method which allows us to deal with the singularity at the origin. We show numerical evidence of the convergence of the method.
DOI : https://doi.org/10.1051/m2an:2005034
Classification : 35A05, 35K55, 65N30, 65N50, 65N99
Mots clés : moving mesh, finite elements, harmonic map flow, axisymmetric
@article{M2AN_2005__39_4_781_0,
author = {Merlet, Benoit and Pierre, Morgan},
title = {Moving mesh for the axisymmetric harmonic map flow},
journal = {ESAIM: Mathematical Modelling and Numerical Analysis - Mod\'elisation Math\'ematique et Analyse Num\'erique},
pages = {781--796},
publisher = {EDP-Sciences},
volume = {39},
number = {4},
year = {2005},
doi = {10.1051/m2an:2005034},
zbl = {1078.35008},
mrnumber = {2165679},
language = {en},
url = {http://www.numdam.org/item/M2AN_2005__39_4_781_0/}
}
Merlet, Benoit; Pierre, Morgan. Moving mesh for the axisymmetric harmonic map flow. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Tome 39 (2005) no. 4, pp. 781-796. doi : 10.1051/m2an:2005034. http://www.numdam.org/item/M2AN_2005__39_4_781_0/
[1] F. Alouges and M. Pierre, Mesh optimization for singular axisymmetric harmonic maps from the disc into the sphere. Numer. Math. To appear. | MR 2194821 | Zbl 1088.65106
[2] F. Bethuel, J.-M. Coron, J.-M. Ghidaglia and A. Soyeur, Heat flows and relaxed energies for harmonic maps, in Nonlinear diffusion equations and their equilibrium states, 3 (Gregynog, 1989), Birkhäuser Boston, Boston, MA. Progr. Nonlinear Differential Equations Appl. 7 (1992) 99-109. | Zbl 0795.35053
[3] M. Bertsch, R. Dal Passo and R. Van Der Hout, Nonuniqueness for the heat flow of harmonic maps on the disk. Arch. Rational Mech. Anal. 161 (2002) 93-112. | Zbl 1006.35050
[4] H. Brezis and J.-M. Coron, Large solutions for harmonic maps in two dimensions. Comm. Math. Phys. 92 (1983) 203-215. | Zbl 0532.58006
[5] N. Carlson and K. Miller, Design and application of a gradient-weighted moving finite element code. I. In one dimension. SIAM J. Sci. Comput. 19 (1998) 728-765. | Zbl 0911.65087
[6] K.-C. Chang, Heat flow and boundary value problem for harmonic maps. Ann. Inst. H. Poincaré Anal. Non Linéaire 6 (1989) 363-395. | EuDML 78184 | Numdam | Zbl 0687.58004
[7] J. Eells and J. Sampson, Harmonic mappings of Riemannian manifolds. Amer. J. Math. 86 (1964) 109-160. | Zbl 0122.40102
[8] A. Freire, Uniqueness for the harmonic map flow from surfaces to general targets. Comment. Math. Helv. 70 (1995) 310-338. | EuDML 140371 | Zbl 0831.58018
[9] A. Freire, Uniqueness for the harmonic map flow in two dimensions. Calc. Var. Partial Differential Equations 3 (1995) 95-105. | Zbl 0814.35057
[10] F. Hülsemann and Y. Tourigny, A new moving mesh algorithm for the finite element solution of variational problems. SIAM J. Numer. Anal. 35 (1998) 1416-1438. | Zbl 0913.65059
[11] M. Pierre, Weak BV convergence of moving finite elements for singular axisymmetric harmonic maps. SIAM J. Numer. Anal. To appear. | MR 2182135 | Zbl 1109.65103
[12] E. Polak, Algorithms and consistent approximations, Optimization, Applied Mathematical Sciences 124 (1997), Springer-Verlag, New York. | MR 1454128 | Zbl 0899.90148
[13] J. Qing, On singularities of the heat flow for harmonic maps from surfaces into spheres. Comm. Anal. Geom. 3 (1995) 297-315. | Zbl 0868.58021
[14] S. Rippa and B. Schiff, Minimum energy triangulations for elliptic problems. Comput. Methods Appl. Mech. Engrg. 84 (1990) 257-274. | Zbl 0742.65083
[15] M. Struwe, The evolution of harmonic maps, in Proceedings of the International Congress of Mathematicians, Vol. I, II (Kyoto, 1990). Math. Soc. Japan (1991) 1197-1203. | Zbl 0744.58011
[16] P. Topping, Reverse bubbling and nonuniqueness in the harmonic map flow. Internat. Math. Res. Notices 10 (2002) 505-520. | Zbl 1003.58014 | 2021-04-11 12:04:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3510940670967102, "perplexity": 2680.333427094719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038062492.5/warc/CC-MAIN-20210411115126-20210411145126-00023.warc.gz"} |
https://seqan.readthedocs.io/en/seqan-v1.4.2/HowTo/GenerateSeqAnKnimeNodes.html | # Generating SeqAn KNIME Nodes¶
Learning Objective
You will learn how to import applications written in SeqAn into the KNIME Eclipse plugin. After completing this tutorial, you will be able to use self made applications in KNIME workflows.
Difficulty
Very basic
Duration
1 h
Prerequisites
KNIME SDK
You can download it from the KNIME Download Site (at the end of the page). We will use Version 2.8. (We assume that you have installed it to $HOME/eclipse_knime_2.8.0, but it could be anywhere.)

git

For downloading the latest GenericKnimeNodes.

Apache Ant

The Generic KNIME Plugins project uses Apache Ant as the build system. On Linux and Mac, you should be able to install it through your package manager. For Windows, see the Apache Ant Downloads.

We will generate a simple SeqAn KNIME node from a SeqAn app that reads a FASTQ file from disk and just writes it back. We start by installing the necessary software. Afterwards, we explain which steps are required in order to prepare a SeqAn app to be used in KNIME, and finally, we show how to import the app into KNIME. The following section provides some more information on the plugin structure and where the necessary information is stored. Note that this tutorial is mainly written for MacOS and Linux users, but Windows users should also be able to follow through.

## Preparation: Downloading GenericKnimeNodes¶

We will work in a new directory knime_node (we will assume that the directory is directly in your $HOME for the rest of the tutorial).
knime_node # git clone git://github.com/genericworkflownodes/GenericKnimeNodes.git
## Preparation: Installing KNIME File Handling¶
We need to install support for file handling nodes in KNIME. For this, open the window for installing Eclipse plugins; in the program’s main menu: Help > Install New Software....
Here, enter http://www.knime.org/update/2.8/ into the Work with: field, enter file into the search box, and finally select KNIME File Handling Nodes in the list. Then, click Next and follow through with the installation of the plugin. When done, Eclipse must be restarted.
## Generating KNIME Nodes for SeqAn Apps¶
You can generate a workflow plugin directory for the SeqAn apps using the prepare_workflow_plugin target.
In order for your application to turn into a KNIME node, you have to add the line:
set (SEQAN_CTD_EXECUTABLES ${SEQAN_CTD_EXECUTABLES} <my_app> CACHE INTERNAL "")

to the end of the CMakeLists.txt file of your application.

The following example will demonstrate the creation of a SeqAn app and its registration as a KNIME node.

~ # svn co http://svn.seqan.de/seqan/trunk seqan-trunk
~ # cd seqan-trunk
~ # ./util/bin/skel.py app knime_node sandbox/my_sandbox

Now open the file seqan-trunk/sandbox/my_sandbox/apps/knime_node/knime_node.cpp and replace its content with the one found in seqan-trunk/core/demos/knime_node.cpp. The code implements the reading of a read file and its storage somewhere on the disk. In order to register the app knime_node, you simply add the line

set (SEQAN_CTD_EXECUTABLES ${SEQAN_CTD_EXECUTABLES} knime_node CACHE INTERNAL "")
to seqan-trunk/sandbox/my_sandbox/apps/knime_node/CMakeLists.txt.
Then, you can generate the Knime Nodes/Eclipse plugin. First, change to the directory GenericKnimeNodes that we cloned using git earlier. We then execute ant and pass the variables knime.sdk with the path to the KNIME SDK that you downloaded earlier and plugin.dir with the path of our plugin directory.
~ # mkdir -p seqan-trunk-build/release
~ # cd seqan-trunk-build/release
release # cmake ../../seqan-trunk
release # make prepare_workflow_plugin
release # cd ~/knime_node/GenericKnimeNodes
GenericKnimeNodes # ant -Dknime.sdk=${HOME}/eclipse_knime_2.8.0 \
    -Dplugin.dir=${HOME}/seqan-trunk-build/release/workflow_plugin_dir
The generated files are within the generated_plugin directory of the directory GenericKnimeNodes.
If you ran into problems, you may copy the file my_sandbox.zip, which contains a fully functional sandbox with the knime_node app and the adjusted CMakeList.txt file. You still have to call ant though.
## Importing the Generated Projects into Eclipse¶
In the main menu, go to File > Import.... In the Import window, select General > Existing Project Into Workspace.
In the next dialog, click Browse... next to Select root directory.
Then, select the directory of your “GenericWorkflowNodes” checkout. The final dialog should then look as follows.
Clicking finish will import (1) the GKN classes themselves and (2) your generated plugin’s classes.
Now, the packages of the GKN classes and your plugin show up in the left Package Explorer pane of Eclipse.
Tip
Synchronizing ant build result with Eclipse.
Since the code generation happens outside of Eclipse, there are often problems caused by Eclipse not recognizing updates in generated .java files. After each call to ant, you should clean all built files in all projects by selecting the menu entries Project > Clean..., selecting Clean all projects, and then clicking OK.
Then, select all projects in the Package Explorer, right-click and select Refresh.
Tip
You might get a warning in one of the KNIME files. In order to remove it you need to download KNIME's test environment, but you can just ignore the error in our case.
## Launching Eclipse with our Nodes¶
Finally, we have to launch KNIME with our plugin. We have to create a run configuration for this. Select Run > Run Configurations....
In the Run Configurations window, select Eclipse Application on the left, then click the small New launch configuration icon on the top left (both marked in the following screenshot). Now, set the Name field to “KNIME”, select Run an application and select org.knime.product.KNIME_APPLICATION in the drop down menu. Finally, click Run.
Your tool will show up in the tool selector in Community Nodes.
Important
Sometimes KNIME complains about the Java version you are using. In that case, you can use Java 1.6, as shown in Choosing The JRE Version.
Important
If you are running a MacOS you might need to add -Xms40m -Xmx512M -XX:MaxPermSize=256m -Xdock:icon=../Resources/Eclipse.icns -XstartOnFirstThread -Dorg.eclipse.swt.internal.carbon.smallFonts -server to the VM argument box of your Run Configuration.
You should now be able to use the created node in a KNIME workflow. The following sections provide additional information about the structure of the plugin and where the crucial information is stored.
## Plugin Overview¶
KNIME nodes are shipped as Eclipse plugins. The GenericKnimeNodes (GWN) package provides the infrastructure to automatically generate such nodes from the description of their command line. The description of the command line is kept in XML files called Common Tool Descriptor (CTD) files. The input of the GWN package is a directory tree with the following structure.
plugin_dir
│
├── plugin.properties
│
├── descriptors (place your ctd files and mime.types here)
│
├── payload (place your binaries here)
│
├── icons (the icons to be used must be here)
│
├── DESCRIPTION (A short description of the project)
│
├── LICENSE (Licensing information of the project)
│
└── COPYRIGHT (Copyright information of the project)
plugin.properties
File with the plugin configuration.
descriptors
Directory with the CTD files and a mime.types file. This mime.types file contains a mapping between MIME types and file extensions. There is one CTD file called ${app_name}.ctd.

payload

ZIP archives with the binaries are located here. This directory has to be present even if it is empty. Also, you need a file binaries.ini in this directory, which can be empty or contain environment variable definitions as name=value lines.

icons

Some icons: a file category.png (15x15 px) for categories in the KNIME tool tree; a file splash.png (50x50 px) with an icon to display in the KNIME splash screen; and one icon for each app, called ${app_name}.png.
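As a purely illustrative example (the variable names below are invented, not part of any SeqAn plugin), a binaries.ini containing name=value lines might look like:

```ini
; optional environment variables made visible to the shipped binaries
MYTOOL_HOME=payload/bin
MYTOOL_THREADS=4
```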
DESCRIPTION
A text file with your project’s description.
LICENSE

A file with the license of the project.

COPYRIGHT

A file with copyright information for the project.
The GWN project provides tools to convert such a plugin directory into an Eclipse plugin. This plugin can then be launched together with KNIME. The following picture illustrates the process.
## Anatomy of a Plugin Directory¶
You can download a ZIP archive of the resulting project from the attached file workflow_plugin_dir.zip. We will ignore the contents of icons, DESCRIPTION, LICENSE, and COPYRIGHT here. You can see all relevant details by inspecting the ZIP archive.
### The file plugin.properties¶
The content of the file plugin.properties is as follows:
# the package of the plugin
pluginPackage=de.seqan
# the name of the plugin
pluginName=SeqAn
# the version of the plugin
pluginVersion=1.5.0.201309051220
# the path (starting from KNIMEs Community Nodes node)
nodeRepositoyRoot=community
executor=com.genericworkflownodes.knime.execution.impl.LocalToolExecutor
commandGenerator=com.genericworkflownodes.knime.execution.impl.CLICommandGenerator
When creating your own plugin directory, you only have to update the first three properties:
pluginPackage
A Java package path to use for the Eclipse package.
pluginName
A CamelCase name of the plugin.
pluginVersion
Version of the Eclipse plugin.
### The file descriptors/mime.types
The contents of the file are shown below. Each line contains the definition of a MIME type. The name of the MIME type is followed (separated by spaces) by the file extensions associated with the file type. There must be no ambiguous mappings, i.e. the same extension must not be registered for both application/x-fasta and application/x-fastq.
```
application/x-fasta fa fasta
application/x-fastq fq fastq
application/x-sam sam
application/x-bam bam
```
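The "no ambiguous mappings" rule is mechanical to check. Below is a small, hypothetical validator (my own sketch, not part of the GWN tooling; the function name is made up) that parses mime.types text and rejects an extension claimed by two different MIME types:

```python
def parse_mime_types(text):
    """Map file extensions to MIME types; reject ambiguous mappings."""
    ext_to_mime = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        mime, *exts = line.split()
        for ext in exts:
            if ext in ext_to_mime and ext_to_mime[ext] != mime:
                raise ValueError(f"extension {ext!r} is mapped to both "
                                 f"{ext_to_mime[ext]!r} and {mime!r}")
            ext_to_mime[ext] = mime
    return ext_to_mime

mapping = parse_mime_types("""\
application/x-fasta fa fasta
application/x-fastq fq fastq
application/x-sam sam
application/x-bam bam
""")
print(mapping["fasta"])  # application/x-fasta
```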
### The file descriptors/samtools_sort_bam.ctd
This file describes the SortBam tool for sorting BAM files. We do not describe the files descriptors/samtools_sam_to_bam.ctd and descriptors/samtools_bam_to_sam.ctd in the same detail, as you can interpolate from here.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<tool name="KnimeNode" version="0.1" docurl="http://www.seqan.de" category="" >
<executableName>knime_node</executableName>
<description>This is a very simple KNIME node providing an input and output port.</description>
<manual>This is a very simple KNIME node providing an input and output port. The code should be modified such that the node does something useful
</manual>
<cli>
<clielement optionIdentifier="--write-ctd-file-ext" isList="false">
<mapping referenceName="knime_node.write-ctd-file-ext" />
</clielement>
<clielement optionIdentifier="--arg-1-file-ext" isList="false">
<mapping referenceName="knime_node.arg-1-file-ext" />
</clielement>
<clielement optionIdentifier="--outputFile" isList="false">
<mapping referenceName="knime_node.outputFile" />
</clielement>
<clielement optionIdentifier="--outputFile-file-ext" isList="false">
<mapping referenceName="knime_node.outputFile-file-ext" />
</clielement>
<clielement optionIdentifier="--quiet" isList="false">
<mapping referenceName="knime_node.quiet" />
</clielement>
<clielement optionIdentifier="--verbose" isList="false">
<mapping referenceName="knime_node.verbose" />
</clielement>
<clielement optionIdentifier="--very-verbose" isList="false">
<mapping referenceName="knime_node.very-verbose" />
</clielement>
<!-- Following clielements are arguments. You should consider providing a help text to ease understanding. -->
<clielement optionIdentifier="" isList="false">
<mapping referenceName="knime_node.argument-0" />
</clielement>
</cli>
<PARAMETERS version="1.6.2" xsi:noNamespaceSchemaLocation="http://open-ms.sourceforge.net/schemas/Param_1_6_2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<NODE name="knime_node" description="This is a very simple KNIME node providing an input and output port.">
<ITEM name="write-ctd-file-ext" value="" type="string" description="Override file extension for --write-ctd" required="false" advanced="true" tags="file-ext-override,gkn-ignore" />
<ITEM name="arg-1-file-ext" value="" type="string" description="Override file extension for argument 1" restrictions="fastq,fq" required="false" advanced="true" tags="file-ext-override" />
<ITEM name="outputFile" value="result.fastq" type="output-file" description="Name of the multi-FASTA output." supported_formats="*.fastq,*.fq" required="false" advanced="false" />
<ITEM name="outputFile-file-ext" value="" type="string" description="Override file extension for --outputFile" restrictions="fastq,fq" required="false" advanced="true" tags="file-ext-override,gkn-ignore" />
<ITEM name="quiet" value="false" type="string" description="Set verbosity to a minimum." restrictions="true,false" required="false" advanced="false" />
<ITEM name="verbose" value="false" type="string" description="Enable verbose output." restrictions="true,false" required="false" advanced="false" />
<ITEM name="very-verbose" value="false" type="string" description="Enable very verbose output." restrictions="true,false" required="false" advanced="false" />
<ITEM name="argument-0" value="" type="input-file" description="" supported_formats="*.fastq,*.fq" required="true" advanced="false" />
</NODE>
</PARAMETERS>
</tool>
```
Here is a description of the tags and the attributes:
/tool
The root tag.
/tool@name
The CamelCase name of the tool as shown in KNIME and part of the class name.
/tool@version
The version of the tool.
/tool@category
The path to the tool’s category.
/tool/executableName
The name of the executable in the payload ZIP’s bin dir.
/tool/description
Description of the tool.
/tool/manual
Long description for the tool.
/tool/docurl
URL to the tool’s documentation.
/tool/cli
Container for the <clielement> tags. These tags describe the command line options and arguments of the tool. The command line options and arguments can be mapped to parameters which are configurable through the UI. The parameters are stored in /tool/PARAMETERS.
/tool/cli/clielement
There is one entry for each command line argument and option.
/tool/cli/clielement@optionIdentifier
The identifier of the option on the command line. For example, for the -l option of ls, this is -l.
/tool/cli/clielement@isList
Whether or not the parameter is a list and multiple values are possible. One of true and false.
/tool/cli/clielement/mapping
Provides the mapping between a CLI element and a PARAMETER.
/tool/cli/clielement/mapping@referenceName
The path of the parameter. The parameter <ITEM>s in /tool/PARAMETERS are stored in nested <NODE> tags and this gives the path to the specific parameter.
/tool/PARAMETERS
Container for the <NODE> and <ITEM> tags. The <PARAMETERS> tag is in a different namespace and provides its own XSI.
/tool/PARAMETERS@version
Format version of the <PARAMETERS> section.
/tool/PARAMETERS/.../NODE
A node in the parameter tree. You can use such nodes to organize the parameters in a hierarchical fashion.
/tool/PARAMETERS/.../NODE@name
Name of the parameter section.
/tool/PARAMETERS/.../NODE@description
Documentation of the parameter section.
/tool/PARAMETERS/.../ITEM@advanced
Boolean that marks an option as advanced.
/tool/PARAMETERS/.../ITEM
Description of one command line option or argument.
/tool/PARAMETERS/.../ITEM@name
Name of the option.
/tool/PARAMETERS/.../ITEM@value
Default value of the option. When a default value is given, it is passed to the program, regardless of whether the user touched the default value or not.
/tool/PARAMETERS/.../ITEM@type
Type of the parameter. Can be one of string, int, double, input-file, output-file, input-prefix, or output-prefix. Booleans are encoded as string with the restrictions attribute set to "true,false".
/tool/PARAMETERS/.../ITEM@required
Boolean that states whether the parameter is required or not.
/tool/PARAMETERS/.../ITEM@description
Documentation for the user.
/tool/PARAMETERS/.../ITEM@supported_formats
A list of supported file formats. Example: "*.bam,*.sam".
/tool/PARAMETERS/.../ITEM@restrictions
In case of int or double types, the restrictions have the form min:, :max, min:max and give the smallest and/or largest number a value can have. In the case of string types, restrictions gives the list of allowed values, e.g. one,two,three. If the type is string and the restriction field equals "true,false", then the parameter is a boolean and set in case true is selected in the GUI. A good example for this would be the -l flag of the ls program.
Tip
If a <clielement> provides an empty optionIdentifier, then it is a positional argument without a flag (examples of parameters with flags are -n 1 and --number 1).
If a <clielement> does not provide a <mapping>, then it is passed regardless of whether it has been configured or not.
The samtools_sort_bam tool from above does not provide any configurable options but only two arguments. These are by convention called argument-0 and argument-1 but could have any name.
Also, we always call the program with view -f as the first two command line arguments since we do not provide a mapping for these arguments.
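To make the clielement/mapping relationship concrete, here is a hypothetical Python sketch (my own code, not part of the GKN tooling) that reads a minimal CTD with xml.etree, resolves each <mapping> against the <ITEM> values, and assembles the resulting command line. The CTD below is a made-up toy; namespace attributes on <PARAMETERS> are omitted for brevity.

```python
import xml.etree.ElementTree as ET

CTD = """<tool name="Demo" version="0.1">
  <executableName>demo</executableName>
  <cli>
    <clielement optionIdentifier="--outputFile" isList="false">
      <mapping referenceName="demo.outputFile"/>
    </clielement>
    <clielement optionIdentifier="" isList="false">
      <mapping referenceName="demo.argument-0"/>
    </clielement>
  </cli>
  <PARAMETERS>
    <NODE name="demo" description="">
      <ITEM name="outputFile" value="result.fastq" type="output-file"/>
      <ITEM name="argument-0" value="reads.fastq" type="input-file"/>
    </NODE>
  </PARAMETERS>
</tool>"""

root = ET.fromstring(CTD)

# collect ITEM values keyed by "nodename.itemname", mirroring referenceName paths
values = {}
for node in root.iter("NODE"):
    for item in node.findall("ITEM"):
        values[f"{node.get('name')}.{item.get('name')}"] = item.get("value")

argv = [root.findtext("executableName")]
for cli in root.iter("clielement"):
    ref = cli.find("mapping").get("referenceName")
    opt = cli.get("optionIdentifier")
    if opt:                      # flagged option: "--flag value"
        argv += [opt, values[ref]]
    else:                        # empty optionIdentifier: positional argument
        argv.append(values[ref])

print(argv)  # ['demo', '--outputFile', 'result.fastq', 'reads.fastq']
```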
The directory payload contains ZIP files with the executable tool binaries. There is one ZIP file for each platform (Linux, Windows, and Mac OS X) and each architecture (32 bit and 64 bit). The names of the files are binaries_${plat}_${arch}.zip where ${plat} is one of lnx, win, or mac, and ${arch} is one of 32 and 64.
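The naming scheme expands to exactly six archive names; a one-liner to generate (or verify the presence of) all of them:

```python
# expand binaries_${plat}_${arch}.zip over all platform/architecture pairs
expected = [f"binaries_{plat}_{arch}.zip"
            for plat in ("lnx", "win", "mac")
            for arch in ("32", "64")]
print(expected)
# ['binaries_lnx_32.zip', 'binaries_lnx_64.zip', 'binaries_win_32.zip',
#  'binaries_win_64.zip', 'binaries_mac_32.zip', 'binaries_mac_64.zip']
```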
https://anuragarnab.github.io/instances.html | Abstract: Traditional Scene Understanding problems such as Object Detection and Semantic Segmentation have made breakthroughs in recent years due to the adoption of deep learning. However, the former task is not able to localise objects at a pixel level, and the latter task has no notion of different instances of objects of the same class. We focus on the task of Instance Segmentation which recognises and localises objects down to a pixel level. Our model is based on a deep neural network trained for semantic segmentation. This network incorporates a Conditional Random Field with end-to-end trainable higher order potentials based on object detector outputs. This allows us to reason about instances from an initial, category-level semantic segmentation. Our simple method effectively leverages the great progress recently made in semantic segmentation and object detection. The accurate instance-level segmentations that our network produces is reflected by the considerable improvements obtained over previous work at high $$AP^r$$ IoU thresholds. 
| 2022-08-19 04:18:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3666735887527466, "perplexity": 853.560118835347}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573623.4/warc/CC-MAIN-20220819035957-20220819065957-00009.warc.gz"} |
https://math.stackexchange.com/questions/830516/partial-fractions-decomposition-of-the-gamma-function | # Partial Fractions Decomposition of the Gamma Function
I'm currently dealing with a problem my professor raised (since I just studied the Mittag-Leffler's Partial Fractions Theorem). The problem is to derive a partial fractions decomposition of the Gamma Function that displays its (simple) poles and principal parts.
So in my textbook, the Gamma Function is defined as the following limit:
$$\lim_{n \to \infty}\frac{n!\ n^z}{z(z+1)...(z+n)}$$
where $z$ is an arbitrary complex number that is not $0, -1, -2,...$
With the residue of each pole $z_v=-v,\ v=0,1,2,...$ as
$$a_{-1}=\frac{(-1)^v}{v!}$$
We have the principal parts of the Gamma Function at each pole $z_v$ as
$$h_v(z)=\frac{(-1)^v}{v!\ (z+v)}$$
And so it is sufficient to show that
$$\sum_{v=1}^{\infty} \left[h_v(z)-g_v(z) \right]$$ is uniformly convergent on $|z|\leq R,\ R>0$, where $g_v(z)$ is defined to be the first few finite terms of the series expansion of $h_v(z)$ around $z=0$.
By considering $v$ large enough so that $|z_v|=v>2R$ and taking $g_v(z)=0$, we have that for $|z|\leq R$
$$\left|\frac{(-1)^v}{v!\ (z+v)} \right|=\frac{1}{v!\ |z+v|}<\frac{2}{v!\ v}$$
So the series above converges uniformly and hence there exists an entire function $G(z)$ such that
$$\Gamma(z) = G(z)+\frac{1}{z} + \sum_{v=1}^{\infty}\frac{(-1)^v}{v!\ (z+v)}$$
And now I am stuck at finding $G(z)$. Anyone can provide some help?
• You can identify your function from dlmf.nist.gov/5.9#E4, but I have no clue how to give a proof with your lim/product definition of $\Gamma$. With the integral definition you can find a proof in Lebedev's Special functions Ch.1.1 Jun 11 '14 at 14:04
• Hmm I just need a clue about the function $G(z)$... Jun 11 '14 at 14:27
• Maybe it is a bit late: In your notation the NIST/Lebedev formula is $$\Gamma(z)=\int_{1}^{\infty}t^{z-1}e^{-t}dt+\sum_{v=0}^{\infty}\frac{(-1)^{v}}{(z+v)v!}$$ and $G(z) = \int_{1}^{\infty}t^{z-1}e^{-t}dt$ is the function you seek. According to Lebedev (who refers to Titchmarsh, The Theory of Functions) the function $G(z)$ is entire. Jun 12 '14 at 6:52 | 2021-12-01 15:51:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9604445099830627, "perplexity": 143.189338233606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360803.6/warc/CC-MAIN-20211201143545-20211201173545-00195.warc.gz"} |
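The decomposition quoted in the last comment is easy to verify numerically. The sketch below is my own (not from the thread) and uses only the standard library: Simpson's rule on a truncated interval stands in for the integral defining $G(z)$, and the truncation limits (upper bound 40, 60 series terms) are arbitrary choices that are far more than enough at this precision. At $z = 1/2$ the result should match $\Gamma(1/2) = \sqrt{\pi}$:

```python
import math

def upper_gamma(z, upper=40.0, n=20000):
    """Simpson's rule for G(z) = int_1^inf t^(z-1) e^(-t) dt, tail cut at `upper`."""
    h = (upper - 1.0) / n          # n must be even for Simpson's rule
    f = lambda t: t ** (z - 1.0) * math.exp(-t)
    s = f(1.0) + f(upper)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(1.0 + i * h)
    return s * h / 3.0

def gamma_via_mittag_leffler(z, terms=60):
    """G(z) plus the principal-part series  sum_v (-1)^v / (v! (z+v))."""
    series = sum((-1) ** v / (math.factorial(v) * (z + v)) for v in range(terms))
    return upper_gamma(z) + series

print(gamma_via_mittag_leffler(0.5), math.sqrt(math.pi))  # both approx. 1.7724538509
```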
https://mathematica.stackexchange.com/questions/133546/how-to-reparametrize-a-curve-by-arclength-using-mathematica?noredirect=1 | # How to reparametrize a curve by arclength using Mathematica? [duplicate]
There are some formulas in Differential Geometry that require a curve to be parametrized by arc length. Suppose we have a curve $\alpha(t)$, given by
alpha[t_] := {f1[t], f2[t], f3[t]}.
Now, my question is: if I wanted to reparametrize by arc length the procedure would be:
1. Compute $\alpha'(t)$ and integrate $s(t) = \int_0^t |\alpha'(\tau)| \, \mathrm{d}\tau$
2. Invert $s(t)$ to obtain $t = t(s)$.
3. Define a new curve $\beta(s) = \alpha(t(s))$.
How could I do this with Mathematica? There are procedures there, like inversion, which I'm unsure how to use here.
On the other hand since this is a quite common use case I believe there might be some easier way to do with Mathematica.
How could I do this? How can I get a curve and reparametrize by arc length with Mathematica?
## marked as duplicate by Szabolcs, Yves Klett, m_goldberg, Feyre, MarcoB Dec 16 '16 at 9:02
• Related – corey979 Dec 14 '16 at 23:55
• Would you be satisfied with a numerical solution, i.e. representing $t(s)$ as an InterpolatingFunction? Or do you need an analytic form for $t(s)$? – Jason B. Dec 15 '16 at 0:48
• "There are some formulas in Differential Geometry that require a curve to be parametrized by arc length." - not necessarily; those formulae can in most cases be modified for general parameters. Note that most integrals do not have neat closed forms, much less their inverses, so you'll be using NDSolve[] most of the time, as alluded by Jason's comment. See this related question. – J. M. is away Dec 15 '16 at 1:29
I post this to illustrate some functions, e.g. ArcLength and Interpolation, that may be helpful. In the following a "three-petalled rose" is used to show 2 parametrizations (polar: green, arclength: red). The arc length parametrization is approximated by interpolation.
```mathematica
r[t_] := RotationMatrix[Pi/4, {1, 0, 0}].{Cos[t] Sin[3 t],
   Sin[t] Sin[3 t], 0}
arc[t_] := ArcLength[r[u], {u, 0, t}];
if[n_, step_] :=
 Interpolation[Table[{arc[j], r[j][[n]]}, {j, 0, 2 Pi, step}]]
{xs, ys, zs} = if[#, 0.1] & /@ Range[3];
ra[u_] := {xs[u], ys[u], zs[u]};
anim = Table[Show[ParametricPlot3D[r[t], {t, 0, 2 Pi}],
   Graphics3D[{Red, PointSize[0.04], Point[ra[p]]}]],
  {p, 0, 0.95 arc[2 Pi], 0.05}];
anim2 = Table[Show[ParametricPlot3D[r[t], {t, 0, 2 Pi}],
   Graphics3D[{Green, PointSize[0.04], Point[r[p]]}]],
  {p, 0, 2 Pi, 0.05}];
```
Polar parametrization:
Arclength parametrization:
It is not perfect. Comparing 'speeds' along the curve:
```mathematica
Plot[Evaluate[Sqrt[D[r[t], t].D[r[t], t]]], {t, 0, 2 Pi}]
Plot[Evaluate[Sqrt[D[ra[t], t].D[ra[t], t]]], {t, 0, 0.95 arc[2 Pi]},
 PlotRange -> {0, 1.5}]
```
https://www.iacr.org/cryptodb/data/author.php?authorkey=6331 | ## CryptoDB
### Ruben Niederhagen
#### Publications
2020 TCHES
We present and evaluate a custom extension to the RISC-V instruction set for finite field arithmetic. The result serves as a very compact approach to software-hardware co-design of PQC implementations in the context of small embedded processors such as smartcards. The extension provides instructions that implement finite field operations with subsequent reduction of the result. As small finite fields are used in various PQC schemes, such instructions can provide a considerable speedup for an otherwise software-based implementation. Furthermore, we create a prototype implementation of the presented instructions for the extendable VexRiscv core, integrate the result into a chip design, and evaluate the design on two different FPGA platforms. The effectiveness of the extension is evaluated by using the instructions to optimize the Kyber and NewHope key-encapsulation schemes. To that end, we also present an optimized software implementation for the standard RISC-V instruction set for the polynomial arithmetic underlying those schemes, which serves as basis for comparison. Both variants are tuned on an assembler level to optimally use the processor pipelines of contemporary RISC-V CPUs. The result shows a speedup for the polynomial arithmetic of up to 85% over the basic software implementation. Using the custom instructions drastically reduces the code and data size of the implementation without introducing runtime-performance penalties at a small cost in circuit size. When used in the selected schemes, the custom instructions can be used to replace a full general purpose multiplier to achieve very compact implementations.
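As a concrete picture of what a fused "finite field operation with subsequent reduction" instruction computes, here is a Python model of Barrett reduction modulo Kyber's prime q = 3329. This is my own illustrative sketch of one common reduction technique; the actual semantics of the extension's instructions are defined in the paper, and whether it uses Barrett specifically is not stated here.

```python
Q = 3329                      # Kyber's modulus
M = (1 << 26) // Q            # precomputed Barrett constant floor(2^26 / Q)

def mulred(a, b):
    """Multiply two residues and reduce mod Q without a runtime division."""
    p = a * b                 # p < Q^2 < 2^26, so the Barrett estimate applies
    t = (p * M) >> 26         # approximate quotient
    r = p - t * Q
    while r >= Q:             # at most one correction subtraction here
        r -= Q
    return r

print(mulred(3000, 3000), (3000 * 3000) % Q)  # identical values
```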
2020 ASIACRYPT
This paper presents an attack based on side-channel information and information set decoding (ISD) on the code-based Niederreiter cryptosystem and an evaluation of the practicality of the attack using an electromagnetic side channel. We start by directly adapting the timing side-channel plaintext-recovery attack by Shoufan et al. from 2010 to the constant-time implementation of the Niederreiter cryptosystem as used in the official FPGA-implementation of the NIST finalist “Classic McEliece”. We then enhance our attack using ISD and a new technique that we call iterative chunking to further significantly reduce the number of required side-channel measurements. We theoretically show that our attack improvements have a significant impact on reducing the number of required side-channel measurements. For example, for the 256-bit security parameter set kem/mceliece6960119 of “Classic McEliece”, we improve the basic attack that requires 5415 measurements to less than 562 measurements on average to mount a successful plaintext-recovery attack. Further reductions can be achieved at the price of increasing the cost of the ISD computations. We confirm our findings by practically mounting the attack on the official FPGA-implementation of “Classic McEliece” for all proposed parameter sets.
2020 TCHES
This paper proposes two different methods to perform NTT-based polynomial multiplication in polynomial rings that do not naturally support such a multiplication. We demonstrate these methods on the NTRU Prime key-encapsulation mechanism (KEM) proposed by Bernstein, Chuengsatiansup, Lange, and Vredendaal, which uses a polynomial ring that is, by design, not amenable to use with NTT. One of our approaches is using Good’s trick and focuses on speed and supporting more than one parameter set with a single implementation. The other approach is using a mixed radix NTT and focuses on the use of smaller multipliers and less memory. On a ARM Cortex-M4 microcontroller, we show that our three NTT-based implementations, one based on Good’s trick and two mixed radix NTTs, provide between 32% and 17% faster polynomial multiplication. For the parameter-set ntrulpr761, this results in between 16% and 9% faster total operations (sum of key generation, encapsulation, and decapsulation) and requires between 15% and 39% less memory than the current state-of-the-art NTRU Prime implementation on this platform, which is using Toom-Cook-based polynomial multiplication.
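For readers unfamiliar with NTT-based multiplication, the core identity (transform, multiply pointwise, transform back) can be shown in a few lines over a toy field: p = 17, n = 8, with 9 a primitive 8th root of unity mod 17. This is my own demonstration using a naive O(n^2) transform for clarity; it is not the paper's Good's-trick or mixed-radix machinery, and the parameters are nothing like NTRU Prime's.

```python
P, N, W = 17, 8, 9            # 9 has multiplicative order 8 mod 17
W_INV = pow(W, -1, P)
N_INV = pow(N, -1, P)

def ntt(a, w):
    """Naive number-theoretic transform with root w (O(n^2) for clarity)."""
    return [sum(a[j] * pow(w, i * j, P) for j in range(N)) % P
            for i in range(N)]

def cyclic_mul_ntt(a, b):
    """Multiply a, b mod (x^N - 1) over GF(P) via pointwise products."""
    fa, fb = ntt(a, W), ntt(b, W)
    fc = [(x * y) % P for x, y in zip(fa, fb)]
    return [(c * N_INV) % P for c in ntt(fc, W_INV)]

def cyclic_mul_schoolbook(a, b):
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % P
    return c

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
print(cyclic_mul_ntt(a, b) == cyclic_mul_schoolbook(a, b))  # True
```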
2017 CHES
This paper presents a post-quantum secure, efficient, and tunable FPGA implementation of the key-generation algorithm for the Niederreiter cryptosystem using binary Goppa codes. Our key-generator implementation requires as few as 896,052 cycles to produce both public and private portions of a key, and can achieve an estimated frequency Fmax of over 240 MHz when synthesized for Stratix V FPGAs. To the best of our knowledge, this work is the first hardware-based implementation that works with parameters equivalent to, or exceeding, the recommended 128-bit “post-quantum security” level. The key generator can produce a key pair for parameters $m=13$, $t=119$, and $n=6960$ in only 3.7 ms when no systemization failure occurs, and in $3.5 \cdot 3.7$ ms on average. To achieve such performance, we implemented an optimized and parameterized Gaussian systemizer for matrix systemization, which works for any large-sized matrix over any binary field $\text {GF}(2^m)$. Our work also presents an FPGA-based implementation of the Gao-Mateer additive FFT, which only takes about 1000 clock cycles to finish the evaluation of a degree-119 polynomial at $2^{13}$ data points. The Verilog HDL code of our key generator is parameterized and partly code-generated using Python and Sage. It can be synthesized for different parameters, not just the ones shown in this paper. We tested the design using a Sage reference implementation, iVerilog simulation, and on real FPGA hardware.
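The "Gaussian systemizer" mentioned above brings a binary matrix into systematic form [I | T] by row reduction over GF(2), and "systemization failure" is the case where no pivot can be found for some column. Here is my own bit-packed toy model of that idea in Python (rows as integers, column j stored at bit ncols-1-j); it only illustrates the algorithm, not the hardware design.

```python
def systemize(rows, ncols):
    """Row-reduce a GF(2) matrix (rows as ints, bit ncols-1-j = column j)
    to the form [I | T]; return None on systemization failure."""
    rows = list(rows)
    n = len(rows)
    for col in range(n):
        bit = 1 << (ncols - 1 - col)
        piv = next((i for i in range(col, n) if rows[i] & bit), None)
        if piv is None:
            return None                   # failure: no pivot in this column
        rows[col], rows[piv] = rows[piv], rows[col]
        for i in range(n):
            if i != col and rows[i] & bit:
                rows[i] ^= rows[col]      # XOR = row addition over GF(2)
    return rows

# 3x5 example: the left 3x3 block becomes the identity
m = [0b11001, 0b01110, 0b11111]
print([f"{r:05b}" for r in systemize(m, 5)])  # ['10001', '01000', '00110']
```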
2015 EPRINT
2015 EPRINT
2015 EPRINT
2015 EUROCRYPT
2014 EPRINT
2012 CHES
2010 EPRINT
This paper describes an implementation of Pollard's rho algorithm to compute the elliptic curve discrete logarithm for the Synergistic Processor Elements of the Cell Broadband Engine Architecture. Our implementation targets the elliptic curve discrete logarithm problem defined in the Certicom ECC2K-130 challenge. We compare a bitsliced implementation to a non-bitsliced implementation and describe several optimization techniques for both approaches. In particular, we address the question whether normal-basis or polynomial-basis representation of field elements leads to better performance. Using our software, the ECC2K-130 challenge can be solved in one year using the Synergistic Processor Units of less than 2700 Sony Playstation~3 gaming consoles.
2010 EPRINT
This paper presents new software speed records for the computation of cryptographic pairings. More specifically, we present details of an implementation which computes the optimal ate pairing on a 256-bit Barreto-Naehrig curve in only 4,379,912 cycles on one core of an Intel Core 2 Quad Q9550 processor. This speed is achieved by combining 1.) state-of-the-art high-level optimization techniques, 2.) a new representation of elements in the underlying finite fields which makes use of the special modulus arising from the Barreto-Naehrig curve construction, and 3.) implementing arithmetic in this representation using the double-precision floating-point SIMD instructions of the AMD64 architecture.
2010 EPRINT
We analyze how fast we can solve general systems of multivariate equations of various low degrees over \GF{2}; this is a well known hard problem which is important both in itself and as part of many types of algebraic cryptanalysis. Compared to the standard exhaustive-search technique, our improved approach is more efficient both asymptotically and practically. We implemented several optimized versions of our techniques on CPUs and GPUs. Modern graphic cards allows our technique to run more than 10 times faster than the most powerful CPU available. Today, we can solve 48+ quadratic equations in 48 binary variables on a NVIDIA GTX 295 video card (USD 500) in 21 minutes. With this level of performance, solving systems of equations supposed to ensure a security level of 64 bits turns out to be feasible in practice with a modest budget. This is a clear demonstration of the power of GPUs in solving many types of combinatorial and cryptanalytic problems.
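The "standard exhaustive-search technique" this abstract improves on is, at its core, enumerating all 2^n assignments and testing each equation. A minimal Python reference at toy scale (my own sketch; the paper's contribution is precisely the CPU/GPU machinery that beats this naive loop):

```python
from itertools import product

def solve_gf2_quadratic(n, equations):
    """equations: list of (quad, lin, const) with quad a set of (i, j) index
    pairs, lin a set of variable indices, const in {0, 1}; an equation reads
    sum of x_i*x_j over quad + sum of x_i over lin + const = 0 over GF(2).
    Returns all satisfying assignments by brute force over 2^n candidates."""
    def eval_eq(eq, x):
        quad, lin, const = eq
        v = const
        for i, j in quad:
            v ^= x[i] & x[j]
        for i in lin:
            v ^= x[i]
        return v
    return [x for x in product((0, 1), repeat=n)
            if all(eval_eq(eq, x) == 0 for eq in equations)]

# system over GF(2):  x0*x1 + x2 + 1 = 0  and  x0 + x1 = 0
eqs = [({(0, 1)}, {2}, 1), (set(), {0, 1}, 0)]
print(solve_gf2_quadratic(3, eqs))  # [(0, 0, 1), (1, 1, 0)]
```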
2010 CHES
https://nullrend.com/date/2020/09/ | # Month: September 2020
## The fame, hah
Only spammers seem to like my content, pffft.
## JK you’ll be poor even if you attend
Schools often run deficits in normal times; in 2019, nearly 1,000 private colleges were already borderline insolvent. Covid will cause many to shutter for good. It is accounting, not epidemiology, that drives university administrators to push for a rapid return to business as usual, effectively demanding that faculty and staff sacrifice their lives for the financial health of their employer.
You can attend college, the price is death.
Or you could not attend college, in which case the price is poverty.
## Skyway
nullrend posted a photo:
They’re really working fast on getting the Skyway done at the Library
## Someone should do a Dungeons and Dragons campaign where the final boss is a giant Elder Dragon. A…
Someone should do a Dungeons and Dragons campaign where the final boss is a giant Elder Dragon. A dragon so big you actually have to fight its fleas, the size of wolves, on your way to its head.
Then it turns out it’s being a menace because it’s got cavities in its teeth. Off you go to play dentist.
https://nullrend.tumblr.com/post/628968117853372416
## No change, no peace
As the historian Barry A. Crouch recounts in The Dance of Freedom, Ruby warned that the formerly enslaved were beset by the “fiendish lawlessness of the whites who murder and outrage the free people with the same indifference as displayed in the killing of snakes or other venomous reptiles,” and that “terrorism engendered by the brutal and murderous acts of the inhabitants, mostly rebels,” was preventing the freedmen from so much as building schools.
The Orange Maggot calls people who support BLM “thugs”, “criminals”, “terrorists”, saying he’ll impose Law and Order however necessary.
White people have always been the ones to terrorize their communities, and those of people they don’t deem acceptable.
The cold cultural war heats up.
## Closing quickly, for that matter.
Social networks are universally more restrictive than web pages but also more fun in significant ways, chief amongst them being that more people can participate. What if the rest of the web had that simplicity and immediacy, but without the centralization? What if we could start over?
Mozilla is knowingly walking away from any of these options because they’re bitter they could not come to dominate the Web after Firefox helped bring about the downfall of Internet Explorer. Big Tech will not support a reimagining of what the web could be since it will mean less profit. Can’t have that in a capitalist society, now can we?
There’s hope now that the Servo engine is cut loose, but the time window to avoid having a technological cycle (about 30 years or so) be dominated by corporations is closing.
## thegreenwolf: ladimcbeth: rgr-pop: psychosomatic86: tristiko…
I FOUND IT GUYS I SPENT HALF AN HOUR LOOKING FOR THIS VIDEO AND ITS HERE
Always reblog peent.
*before clicking play*: IS THIS WHAT i THINK IT IS???
*clicks play*: IT ISSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
omg!! omg
Forever reblog.
Heaven let your eent shine down.
https://nullrend.tumblr.com/post/628404999154794496
## Lines
nullrend posted a photo:
## Black Sheep Eat Street
nullrend posted a photo:
I really wanted some pizza today. The vehicular shenanigans are extra.
## Green-tinged sunlight
nullrend posted a photo:
This couch is great for napping if you manage to fall asleep. This morning doesn’t seem to be one of those days when sleep comes easily. | 2021-10-25 04:42:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30649203062057495, "perplexity": 8101.888562792013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587623.1/warc/CC-MAIN-20211025030510-20211025060510-00552.warc.gz"} |
https://keio.pure.elsevier.com/ja/publications/artinmazur-zeta-functions-of-generalized-beta-transformations | Artin–Mazur zeta functions of generalized beta-transformations
Abstract
In this paper, we study the Artin–Mazur zeta function of a generalization of the well-known β-transformation introduced by Góra [Invariant densities for generalized β-maps. Ergodic Theory Dynam. Systems 27 (2007), 1583–1598]. We show that the Artin–Mazur zeta function can be extended to a meromorphic function via an expansion of 1 defined by using the transformation. As an application, we relate its analytic properties to the algebraic properties of β.
Original language: English · Pages: 85–103 · Number of pages: 19 · Journal: Kyushu Journal of Mathematics · Volume: 71 · Issue: 1 · DOI: https://doi.org/10.2206/kyushujm.71.85 · Published: 2017
• Mathematics (general)
https://www.zbmath.org/?q=an%3A1215.05097 | # zbMATH — the first resource for mathematics
Connectivity of iterated line graphs. (English) Zbl 1215.05097
Summary: Let $$k\geq 0$$ be an integer and $$L^k(G)$$ be the $$k$$th iterated line graph of a graph $$G$$. Niepel and Knor proved that if $$G$$ is a 4-connected graph, then $$\kappa (L^{2}(G))\geq 4\delta (G) - 6$$. We show that the connectivity of $$G$$ can be relaxed. In fact, we prove in this note that if $$G$$ is an essentially 4-edge-connected and 3-connected graph, then $$\kappa (L^{2}(G))\geq 4\delta (G) - 6$$. Similar bounds are obtained for essentially 4-edge-connected and 2-connected (1-connected) graphs.
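Since $$\kappa(H)\leq\delta(H)$$ for every graph $$H$$, the bound above forces $$\delta(L^2(G))\geq 4\delta(G)-6$$ as well — a cheap property to spot-check by machine. A minimal sketch (the choice of $$K_5$$ as the test graph is ours, not the paper's):

```python
from itertools import combinations

def line_graph(edges):
    """Vertices of L(G) are the edges of G (by index);
    two are adjacent iff the underlying edges share an endpoint."""
    es = [frozenset(e) for e in edges]
    return [(i, j) for i, j in combinations(range(len(es)), 2) if es[i] & es[j]]

def min_degree(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return min(deg.values())

K5 = list(combinations(range(5), 2))   # 4-connected, delta(K5) = 4
L2 = line_graph(line_graph(K5))
assert min_degree(L2) >= 4 * 4 - 6     # necessary, since kappa <= delta
```

For $$K_5$$ the bound is in fact tight: $$L^2(K_5)$$ is 10-regular.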
##### MSC:
05C40 Connectivity
05C76 Graph operations (line graphs, products, etc.)
##### References:
[1] Bondy, J.A.; Murty, U.S.R., Graph theory with applications, (1976), Macmillan London, Elsevier, New York · Zbl 1134.05001
[2] Chartrand, G.; Stewart, M.J., The connectivity of line graphs, Math. Ann., 182, 170-174, (1969) · Zbl 0167.52203
[3] Knor, M.; Niepel, Ľ., Connectivity of iterated line graphs, Discrete Appl. Math., 125, 255-266, (2003) · Zbl 1009.05086
http://physics.stackexchange.com/questions/69840/closed-form-for-shape-tension-of-an-elastic-cable-slung-between-two-points | # Closed form for shape/tension of an elastic cable slung between two points
Given the 2D coordinates of two points, $a$ and $b$, between which an elastic cable of known length, $l$, mass per unit length, $m$, and the spring constant, $e$, is slung, I need to compute the shape of the cable, and also the horizontal tension, $t$, in the cable.
So far I have the equations for the x and y coordinates of the cable, parameterized by $p$, which is the distance along the unstretched cable: $$f_x(p) = \frac{t}{mg} \sinh^{-1}\left(\frac{mgp}{t}\right) + \frac{tp}{e} + c_x.\\ f_y(p) = \sqrt{\left(\frac{t}{mg}\right)^2 + p^2} + \frac{mgp^2}{2e} + c_y.\\$$ where $c_x$ and $c_y$ are constants, which leads me to the following triplet of simultaneous equations: $$f_x(q) - f_x(r) = a_x - b_x.\\ f_y(q) - f_y(r) = a_y - b_y.\\ |q-r| = l.$$ in three unknowns, $t$, $q$ and $r$ (given that the constants cancel), where $q$ and $r$ are the values of the parameter $p$ at the points $a$ and $b$ respectively.
How would you compute those unknowns, and can it be done in closed form?
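No closed form is apparent, but numerically the system collapses nicely: eliminate one unknown via $q = r + l$ (taking $q > r$, which consumes $|q-r|=l$) and run Newton's method on the remaining two equations in $(t, r)$. A rough pure-Python sketch — the parameter values, starting guesses, default $g$, and the positivity clamp are illustrative assumptions, not from the question:

```python
import math

def solve_cable(a, b, l, m, e, g=9.81, t0=50.0, tol=1e-10):
    """Find horizontal tension t and endpoint parameters q = r + l, r
    by Newton iteration with a finite-difference Jacobian."""
    mg = m * g

    def fx(p, t):
        return (t / mg) * math.asinh(mg * p / t) + t * p / e

    def fy(p, t):
        return math.sqrt((t / mg) ** 2 + p ** 2) + mg * p ** 2 / (2 * e)

    def F(u):                      # residuals of the two remaining equations
        t, r = u
        q = r + l
        return [fx(q, t) - fx(r, t) - (a[0] - b[0]),
                fy(q, t) - fy(r, t) - (a[1] - b[1])]

    u = [t0, -l / 2.0]             # guess: moderate tension, mid-slung cable
    for _ in range(100):
        f = F(u)
        if max(abs(v) for v in f) < tol:
            break
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):         # forward-difference Jacobian, column j
            h = 1e-7 * max(1.0, abs(u[j]))
            up = list(u)
            up[j] += h
            fp = F(up)
            for i in range(2):
                J[i][j] = (fp[i] - f[i]) / h
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        u = [u[0] + (-f[0] * J[1][1] + f[1] * J[0][1]) / det,
             u[1] + (-f[1] * J[0][0] + f[0] * J[1][0]) / det]
        u[0] = max(u[0], 1e-6)     # keep the tension positive
    t, r = u
    return t, r + l, r             # t, q, r

```

For instance `solve_cable(a=(4, 0), b=(-4, 0), l=10, m=1, e=1e5)` recovers the symmetric case $r = -l/2$; asymmetric endpoints just change the starting guess you should supply.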
Does $t$ depend on the position? Is it the tension at some extremity? – fffred Jul 2 at 20:34
– Qmechanic Jul 3 at 0:13
fffred: No, the value $t$ is constant along the cable. It is the $x$ (horizontal) component of the tension at any point in the cable. – user664303 Jul 3 at 8:54
Qmechanic: Thanks. I googled "catenary" before posting, and this led to what I have already. – user664303 Jul 3 at 8:59
http://www.numdam.org/item/?id=CM_1991__79_3_279_0 | CCC posets of perfect trees
Compositio Mathematica, Volume 79 (1991) no. 3, pp. 279-294.
@article{CM_1991__79_3_279_0,
author = {Velickovic, Boban},
title = {CCC posets of perfect trees},
journal = {Compositio Mathematica},
pages = {279--294},
volume = {79},
number = {3},
year = {1991},
zbl = {0735.03023},
mrnumber = {1121140},
language = {en},
url = {http://www.numdam.org/item/CM_1991__79_3_279_0/}
}
Velickovic, Boban. CCC posets of perfect trees. Compositio Mathematica, Volume 79 (1991) no. 3, pp. 279-294. http://www.numdam.org/item/CM_1991__79_3_279_0/
https://handwiki.org/wiki/Finance:Foreign_exchange_market | # Finance:Foreign exchange market
*Global decentralized trading of international currencies*
The foreign exchange market (Forex, FX, or currency market) is a global decentralized or over-the-counter (OTC) market for the trading of currencies. This market determines foreign exchange rates for every currency. It includes all aspects of buying, selling and exchanging currencies at current or determined prices. In terms of trading volume, it is by far the largest market in the world, followed by the credit market.[1]
The main participants in this market are the larger international banks. Financial centers around the world function as anchors of trading between a wide range of multiple types of buyers and sellers around the clock, with the exception of weekends. Since currencies are always traded in pairs, the foreign exchange market does not set a currency's absolute value but rather determines its relative value by setting the market price of one currency if paid for with another; for example, US$1 is worth some amount of CAD, CHF, JPY, etc.

The foreign exchange market works through financial institutions and operates on several levels. Behind the scenes, banks turn to a smaller number of financial firms known as "dealers", who are involved in large quantities of foreign exchange trading. Most foreign exchange dealers are banks, so this behind-the-scenes market is sometimes called the "interbank market" (although a few insurance companies and other kinds of financial firms are involved). Trades between foreign exchange dealers can be very large, involving hundreds of millions of dollars. Because of the sovereignty issue when involving two currencies, Forex has little (if any) supervisory entity regulating its actions.

The foreign exchange market assists international trade and investments by enabling currency conversion. For example, it permits a business in the United States to import goods from European Union member states, especially Eurozone members, and pay euros, even though its income is in United States dollars. It also supports direct speculation and evaluation relative to the value of currencies and the carry trade speculation, based on the differential interest rate between two currencies.[2] In a typical foreign exchange transaction, a party purchases some quantity of one currency by paying with some quantity of another currency.

The modern foreign exchange market began forming during the 1970s.
This followed three decades of government restrictions on foreign exchange transactions under the Bretton Woods system of monetary management, which set out the rules for commercial and financial relations among the world's major industrial states after World War II. Countries gradually switched to floating exchange rates from the previous exchange rate regime, which remained fixed per the Bretton Woods system.

The foreign exchange market is unique because of the following characteristics:

• its huge trading volume, representing the largest asset class in the world leading to high liquidity;
• its geographical dispersion;
• its continuous operation: 24 hours a day except for weekends, i.e., trading from 22:00 GMT on Sunday (Sydney) until 22:00 GMT Friday (New York);
• the variety of factors that affect exchange rates;
• the low margins of relative profit compared with other markets of fixed income; and
• the use of leverage to enhance profit and loss margins and with respect to account size.

As such, it has been referred to as the market closest to the ideal of perfect competition, notwithstanding currency intervention by central banks.

According to the Bank for International Settlements, the preliminary global results from the 2019 Triennial Central Bank Survey of Foreign Exchange and OTC Derivatives Markets Activity show that trading in foreign exchange markets averaged $6.6 trillion per day in April 2019. This is up from $5.1 trillion in April 2016. Measured by value, foreign exchange swaps were traded more than any other instrument in April 2019, at $3.2 trillion per day, followed by spot trading at $2 trillion.[3]
## History
### Ancient
Currency trading and exchange first occurred in ancient times.[4] Money-changers (people helping others to change money and also taking a commission or charging a fee) were living in the Holy Land in the times of the Talmudic writings (Biblical times). These people (sometimes called "kollybistẻs") used city stalls, and at feast times the Temple's Court of the Gentiles instead.[5] Money-changers were also the silversmiths and/or goldsmiths[6] of more recent ancient times.
During the 4th century AD, the Byzantine government kept a monopoly on the exchange of currency.[7]
Papyrus PCZ I 59021 (c. 259/8 BC) shows the occurrence of exchange of coinage in Ancient Egypt.[8]
Currency and exchange were important elements of trade in the ancient world, enabling people to buy and sell items like food, pottery, and raw materials.[9] If a Greek coin held more gold than an Egyptian coin due to its size or content, then a merchant could barter fewer Greek gold coins for more Egyptian ones, or for more material goods. This is why, at some point in their history, most world currencies in circulation today had a value fixed to a specific quantity of a recognized standard like silver and gold.
### Medieval and later
During the 15th century, the Medici family were required to open banks at foreign locations in order to exchange currencies to act on behalf of textile merchants.[10][11] To facilitate trade, the bank created the nostro (from Italian, this translates to "ours") account book which contained two columned entries showing amounts of foreign and local currencies; information pertaining to the keeping of an account with a foreign bank.[12][13][14][15] During the 17th (or 18th) century, Amsterdam maintained an active Forex market.[16] In 1704, foreign exchange took place between agents acting in the interests of the Kingdom of England and the County of Holland.[17]
### Early modern
Alex. Brown & Sons traded foreign currencies around 1850 and was a leading currency trader in the USA.[18] In 1880, J.M. do Espírito Santo de Silva (Banco Espírito Santo) applied for and was given permission to engage in a foreign exchange trading business.[19][20]
The year 1880 is considered by at least one source to be the beginning of modern foreign exchange: the gold standard began in that year.[21]
Prior to the First World War, there was a much more limited control of international trade. Motivated by the onset of war, countries abandoned the gold standard monetary system.[22]
### Modern to post-modern
From 1899 to 1913, holdings of countries' foreign exchange increased at an annual rate of 10.8%, while holdings of gold increased at an annual rate of 6.3% between 1903 and 1913.[23]
At the end of 1913, nearly half of the world's foreign exchange was conducted using the pound sterling.[24] The number of foreign banks operating within the boundaries of London increased from 3 in 1860 to 71 in 1913. In 1902, there were just two London foreign exchange brokers.[25] At the start of the 20th century, trade in currencies was most active in Paris, New York City and Berlin; Britain remained largely uninvolved until 1914. Between 1919 and 1922, the number of foreign exchange brokers in London increased to 17; and in 1924, there were 40 firms operating for the purposes of exchange.[26]
During the 1920s, the Kleinwort family were known as the leaders of the foreign exchange market, while Japheth, Montagu & Co. and Seligman still warrant recognition as significant FX traders.[27] The trade in London began to resemble its modern manifestation. By 1928, Forex trade was integral to the financial functioning of the city. Continental exchange controls, plus other factors in Europe and Latin America, hampered any attempt at wholesale prosperity from trade for those of 1930s London.[28]
#### After World War II
In 1944, the Bretton Woods Accord was signed, allowing currencies to fluctuate within a range of ±1% from the currency's par exchange rate.[29] In Japan, the Foreign Exchange Bank Law was introduced in 1954. As a result, the Bank of Tokyo became a center of foreign exchange by September 1954. Between 1954 and 1959, Japanese law was changed to allow foreign exchange dealings in many more Western currencies.[30]
U.S. President Richard Nixon is credited with ending the Bretton Woods Accord and fixed rates of exchange, eventually resulting in a free-floating currency system. After the Accord ended in 1971,[31] the Smithsonian Agreement allowed rates to fluctuate by up to ±2%. In 1961–62, the volume of foreign operations by the U.S. Federal Reserve was relatively low.[32][33] Those involved in controlling exchange rates found the boundaries of the Agreement were not realistic and abandoned it in March 1973; since none of the major currencies was thereafter maintained with a capacity for conversion to gold, organizations relied instead on reserves of currency.[34][35] From 1970 to 1973, the volume of trading in the market increased three-fold.[36][37][38] At some time (according to Gandolfo, during February–March 1973) some of the markets were "split", and a two-tier currency market with dual currency rates was subsequently introduced. This was abolished in March 1974.[39][40][41]
Reuters introduced computer monitors during June 1973, replacing the telephones and telex used previously for trading quotes.[42]
#### Markets close
Due to the ultimate ineffectiveness of the Bretton Woods Accord and the European Joint Float, the forex markets were forced to close at some point between 1972 and March 1973.[43] The largest purchase of US dollars recorded up to 1976 was when the West German government achieved an almost 3 billion dollar acquisition (a figure given as 2.75 billion in total by The Statesman: Volume 18, 1974). This event indicated the impossibility of balancing exchange rates by the measures of control then in use, and the monetary system and the foreign exchange markets in West Germany and other countries within Europe closed for two weeks during February and/or March 1973 (Giersch, Paqué & Schmieding state the markets closed after a purchase of "7.5 million Dmarks"; Brawley states "... Exchange markets had to be closed. When they re-opened ... March 1", that is, a large purchase occurred after the close).[44][45][46][47]
#### After 1973
In developed nations, state control of foreign exchange trading ended in 1973 when complete floating and relatively free market conditions of modern times began.[48] Other sources claim that the first time a currency pair was traded by U.S. retail customers was during 1982, with additional currency pairs becoming available by the next year.[49][50]
On 1 January 1981, as part of changes beginning during 1978, the People's Bank of China allowed certain domestic "enterprises" to participate in foreign exchange trading.[51][52] Sometime during 1981, the South Korean government ended Forex controls and allowed free trade to occur for the first time. During 1988, the country's government accepted the IMF quota for international trade.[53]
Intervention by European banks (especially the Bundesbank) influenced the Forex market on 27 February 1985.[54] The greatest proportion of all trades worldwide during 1987 were within the United Kingdom (slightly over one quarter). The United States had the second highest involvement in trading.[55]
During 1991, Iran changed international agreements with some countries from oil-barter to foreign exchange.[56]
## Market size and liquidity
Main foreign exchange market turnover, 1988–2007, measured in billions of USD.
The foreign exchange market is the most liquid financial market in the world. Traders include governments and central banks, commercial banks, other institutional investors and financial institutions, currency speculators, other commercial corporations, and individuals. According to the 2019 Triennial Central Bank Survey, coordinated by the Bank for International Settlements, average daily turnover was $6.6 trillion in April 2019 (compared to $1.9 trillion in 2004).[3] Of this $6.6 trillion, $2 trillion was spot transactions and $4.6 trillion was traded in outright forwards, swaps, and other derivatives.

Foreign exchange is traded in an over-the-counter market where brokers/dealers negotiate directly with one another, so there is no central exchange or clearing house. The biggest geographic trading center is the United Kingdom, primarily London. In April 2019, trading in the United Kingdom accounted for 43.1% of the total, making it by far the most important center for foreign exchange trading in the world. Owing to London's dominance in the market, a particular currency's quoted price is usually the London market price. For instance, when the International Monetary Fund calculates the value of its special drawing rights every day, it uses the London market prices at noon that day. Trading in the United States accounted for 16.5%, Singapore and Hong Kong together for 7.6%, and Japan for 4.5%.[3]

Turnover of exchange-traded foreign exchange futures and options grew rapidly in 2004–2013, reaching $145 billion in April 2013 (double the turnover recorded in April 2007).[57] As of April 2019, exchange-traded currency derivatives represent 2% of OTC foreign exchange turnover. Foreign exchange futures contracts were introduced in 1972 at the Chicago Mercantile Exchange and are traded more actively than most other futures contracts.
Most developed countries permit the trading of derivative products (such as futures and options on futures) on their exchanges. All these developed countries already have fully convertible capital accounts. Some governments of emerging markets do not allow foreign exchange derivative products on their exchanges because they have capital controls. The use of derivatives is growing in many emerging economies.[58] Countries such as South Korea, South Africa, and India have established currency futures exchanges, despite having some capital controls.
### Money transfer/remittance companies and bureaux de change
Money transfer companies/remittance companies perform high-volume low-value transfers, generally by economic migrants back to their home country. In 2007, the Aite Group estimated that there were $369 billion of remittances (an increase of 8% on the previous year). The four largest foreign markets (India, China, Mexico, and the Philippines) receive $95 billion. The largest and best-known provider is Western Union with 345,000 agents globally, followed by UAE Exchange. Bureaux de change or currency transfer companies provide low-value foreign exchange services for travelers. These are typically located at airports and stations or at tourist locations and allow physical notes to be exchanged from one currency to another. They access foreign exchange markets via banks or non-bank foreign exchange companies.
There is no unified or centrally cleared market for the majority of trades, and there is very little cross-border regulation. Due to the over-the-counter (OTC) nature of currency markets, there are rather a number of interconnected marketplaces, where different currency instruments are traded. This implies that there is not a single exchange rate but rather a number of different rates (prices), depending on what bank or market maker is trading, and where it is. In practice, the rates are quite close due to arbitrage. Due to London's dominance in the market, a particular currency's quoted price is usually the London market price. Major trading exchanges include Electronic Broking Services (EBS) and Thomson Reuters Dealing, while major banks also offer trading systems. A joint venture of the Chicago Mercantile Exchange and Reuters, called Fxmarketspace, opened in 2007 and aspired to, but failed in, the role of a central market clearing mechanism.
The main trading centers are London and New York City, though Tokyo, Hong Kong, and Singapore are all important centers as well. Banks throughout the world participate. Currency trading happens continuously throughout the day; as the Asian trading session ends, the European session begins, followed by the North American session and then back to the Asian session.
Fluctuations in exchange rates are usually caused by actual monetary flows as well as by expectations of changes in monetary flows. These are caused by changes in gross domestic product (GDP) growth, inflation (purchasing power parity theory), interest rates (interest rate parity, Domestic Fisher effect, International Fisher effect), budget and trade deficits or surpluses, large cross-border M&A deals and other macroeconomic conditions. Major news is released publicly, often on scheduled dates, so many people have access to the same news at the same time. However, large banks have an important advantage; they can see their customers' order flow.
Currencies are traded against one another in pairs. Each currency pair thus constitutes an individual trading product and is traditionally noted XXXYYY or XXX/YYY, where XXX and YYY are the ISO 4217 international three-letter code of the currencies involved. The first currency (XXX) is the base currency that is quoted relative to the second currency (YYY), called the counter currency (or quote currency). For instance, the quotation EURUSD (EUR/USD) 1.5465 is the price of the Euro expressed in US dollars, meaning 1 euro = 1.5465 dollars. The market convention is to quote most exchange rates against the USD with the US dollar as the base currency (e.g. USDJPY, USDCAD, USDCHF). The exceptions are the British pound (GBP), Australian dollar (AUD), the New Zealand dollar (NZD) and the euro (EUR) where the USD is the counter currency (e.g. GBPUSD, AUDUSD, NZDUSD, EURUSD).
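The quoting convention above is easy to get backwards, so a tiny helper that makes the direction explicit can be worth having (the rate and currencies here are the illustrative figures from the text, not live quotes):

```python
def convert(amount: float, pair: str, rate: float, frm: str, to: str) -> float:
    """Convert `amount` of currency `frm` into `to` using a single XXXYYY quote,
    where `rate` is the price of one unit of the base XXX in the counter YYY."""
    base, counter = pair[:3], pair[3:]
    if (frm, to) == (base, counter):
        return amount * rate       # sell base, receive counter
    if (frm, to) == (counter, base):
        return amount / rate       # implied inverse quote: YYYXXX = 1 / XXXYYY
    raise ValueError(f"{pair} does not connect {frm} and {to}")

# EURUSD 1.5465 (the quote in the text): 1 euro = 1.5465 dollars
assert abs(convert(100, "EURUSD", 1.5465, "EUR", "USD") - 154.65) < 1e-9
```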
The factors affecting XXX will affect both XXXYYY and XXXZZZ. This causes a positive currency correlation between XXXYYY and XXXZZZ.
On the spot market, according to the 2019 Triennial Survey, the most heavily traded bilateral currency pairs were:
• EURUSD: 24.0%
• USDJPY: 13.2%
• GBPUSD (also called cable): 9.6%
The U.S. currency was involved in 88.3% of transactions, followed by the euro (32.3%), the yen (16.8%), and sterling (12.8%). Volume percentages for all individual currencies should add up to 200%, as each transaction involves two currencies.
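The 200% identity is just double counting: every trade contributes its volume to both legs. A sketch with a toy trade blotter (volumes are the spot-market shares quoted above, reused as invented trade sizes):

```python
from collections import Counter

def currency_shares(trades):
    """trades: list of (pair, volume); each trade counts toward BOTH currencies."""
    vol = Counter()
    total = 0.0
    for pair, v in trades:
        vol[pair[:3]] += v          # base leg
        vol[pair[3:]] += v          # counter leg
        total += v
    return {ccy: 100.0 * v / total for ccy, v in vol.items()}

blotter = [("EURUSD", 24.0), ("USDJPY", 13.2), ("GBPUSD", 9.6)]
shares = currency_shares(blotter)
assert abs(sum(shares.values()) - 200.0) < 1e-9   # two currencies per trade
```

In this toy blotter every trade has a dollar leg, so the USD share comes out at 100%.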
Trading in the euro has grown considerably since the currency's creation in January 1999, and how long the foreign exchange market will remain dollar-centered is open to debate. Until recently, trading the euro versus a non-European currency ZZZ would have usually involved two trades: EURUSD and USDZZZ. The exception to this is EURJPY, which is an established traded currency pair in the interbank spot market.
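The two-trade route described above is plain rate multiplication: the synthetic EURZZZ cross rate is EURUSD × USDZZZ. A one-line sketch with made-up (non-market) rates:

```python
def cross_rate(eur_usd: float, usd_zzz: float) -> float:
    """EURZZZ synthesized from the two dollar legs: EUR -> USD -> ZZZ."""
    return eur_usd * usd_zzz

# illustrative rates only: EURUSD 1.10 and USDJPY 150 imply EURJPY 165
assert abs(cross_rate(1.10, 150.0) - 165.0) < 1e-9
```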
## Determinants of exchange rates
Main page: Finance:Exchange rate
In a fixed exchange rate regime, exchange rates are decided by the government, while a number of theories have been proposed to explain (and predict) the fluctuations in exchange rates in a floating exchange rate regime, including:
• International parity conditions: Relative purchasing power parity, interest rate parity, Domestic Fisher effect, International Fisher effect. To some extent the above theories provide logical explanation for the fluctuations in exchange rates, yet these theories falter as they are based on challengeable assumptions (e.g., free flow of goods, services, and capital) which seldom hold true in the real world.
• Balance of payments model: This model, however, focuses largely on tradable goods and services, ignoring the increasing role of global capital flows. It failed to provide any explanation for the continuous appreciation of the US dollar during the 1980s and most of the 1990s, despite the soaring US current account deficit.
• Asset market model: views currencies as an important asset class for constructing investment portfolios. Asset prices are influenced mostly by people's willingness to hold the existing quantities of assets, which in turn depends on their expectations on the future worth of these assets. The asset market model of exchange rate determination states that “the exchange rate between two currencies represents the price that just balances the relative supplies of, and demand for, assets denominated in those currencies.”
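Of the parity conditions listed above, covered interest rate parity is the most mechanical: the forward rate must equal F = S · (1 + i_domestic) / (1 + i_foreign) (with S quoted domestic-per-foreign), or riskless arbitrage is possible. A sketch with invented rates:

```python
def forward_rate(spot: float, i_dom: float, i_for: float, years: float = 1.0) -> float:
    """Covered interest parity: forward price of one unit of foreign currency,
    quoted domestic-per-foreign, assuming simple annual compounding."""
    return spot * (1 + i_dom * years) / (1 + i_for * years)

# invented figures: spot 1.2500 domestic/foreign, 3% domestic vs 1% foreign rates
f = forward_rate(1.25, 0.03, 0.01)
assert f > 1.25   # higher domestic yield => foreign currency at a forward premium
```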
None of the models developed so far succeeds in explaining exchange rates and volatility over longer time frames. For shorter time frames (less than a few days), algorithms can be devised to predict prices. It is understood from the above models that many macroeconomic factors affect the exchange rates, and in the end currency prices are a result of the dual forces of supply and demand. The world's currency markets can be viewed as a huge melting pot: in a large and ever-changing mix of current events, supply and demand factors are constantly shifting, and the price of one currency in relation to another shifts accordingly. No other market encompasses (and distills) as much of what is going on in the world at any given time as foreign exchange.[70]
Supply and demand for any given currency, and thus its value, are not influenced by any single element, but rather by several. These elements generally fall into three categories: economic factors, political conditions and market psychology.
### Economic factors
Economic factors include: (a) economic policy, disseminated by government agencies and central banks, (b) economic conditions, generally revealed through economic reports, and other economic indicators.
• Economic policy comprises government fiscal policy (budget/spending practices) and monetary policy (the means by which a government's central bank influences the supply and "cost" of money, which is reflected by the level of interest rates).
• Government budget deficits or surpluses: The market usually reacts negatively to widening government budget deficits, and positively to narrowing budget deficits. The impact is reflected in the value of a country's currency.
• Balance of trade levels and trends: The trade flow between countries illustrates the demand for goods and services, which in turn indicates demand for a country's currency to conduct trade. Surpluses and deficits in trade of goods and services reflect the competitiveness of a nation's economy. For example, trade deficits may have a negative impact on a nation's currency.
• Inflation levels and trends: Typically a currency will lose value if there is a high level of inflation in the country or if inflation levels are perceived to be rising. This is because inflation erodes purchasing power, thus demand, for that particular currency. However, a currency may sometimes strengthen when inflation rises because of expectations that the central bank will raise short-term interest rates to combat rising inflation.
• Economic growth and health: Reports such as GDP, employment levels, retail sales, capacity utilization and others, detail the levels of a country's economic growth and health. Generally, the more healthy and robust a country's economy, the better its currency will perform, and the more demand for it there will be.
• Productivity of an economy: Increasing productivity in an economy should positively influence the value of its currency. Its effects are more prominent if the increase is in the traded sector.[71]
### Political conditions
Internal, regional, and international political conditions and events can have a profound effect on currency markets.
All exchange rates are susceptible to political instability and anticipations about the new ruling party. Political upheaval and instability can have a negative impact on a nation's economy. For example, destabilization of coalition governments in Pakistan and Thailand can negatively affect the value of their currencies. Similarly, in a country experiencing financial difficulties, the rise of a political faction that is perceived to be fiscally responsible can have the opposite effect. Also, events in one country in a region may spur positive/negative interest in a neighboring country and, in the process, affect its currency.
### Market psychology
Market psychology and trader perceptions influence the foreign exchange market in a variety of ways:
• Flights to quality: Unsettling international events can lead to a "flight-to-quality", a type of capital flight whereby investors move their assets to a perceived "safe haven". There will be a greater demand, thus a higher price, for currencies perceived as stronger over their relatively weaker counterparts. The US dollar, Swiss franc and gold have been traditional safe havens during times of political or economic uncertainty.[72]
• Long-term trends: Currency markets often move in visible long-term trends. Although currencies do not have an annual growing season like physical commodities, business cycles do make themselves felt. Cycle analysis looks at longer-term price trends that may rise from economic or political trends.[73]
• "Buy the rumor, sell the fact": This market truism can apply to many currency situations. It is the tendency for the price of a currency to reflect the impact of a particular action before it occurs and, when the anticipated event comes to pass, react in exactly the opposite direction. This may also be referred to as a market being "oversold" or "overbought".[74] To buy the rumor or sell the fact can also be an example of the cognitive bias known as anchoring, when investors focus too much on the relevance of outside events to currency prices.
• Economic numbers: While economic numbers can certainly reflect economic policy, some reports and numbers take on a talisman-like effect: the number itself becomes important to market psychology and may have an immediate impact on short-term market moves. "What to watch" can change over time. In recent years, for example, money supply, employment, trade balance figures and inflation numbers have all taken turns in the spotlight.
• Technical trading considerations: As in other markets, the accumulated price movements in a currency pair such as EUR/USD can form apparent patterns that traders may attempt to use. Many traders study price charts in order to identify such patterns.[75]
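One simple statistic that chart-oriented traders compute is a moving average. The sketch below (Python, with hypothetical prices; a generic illustration, not a method endorsed by the article) shows the basic computation behind a short/long moving-average comparison.

```python
def sma(prices, window):
    """Simple moving average over each trailing `window` of prices."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

# Hypothetical EUR/USD closes; a short SMA crossing above a long SMA is
# often read as an upward-trend signal (purely illustrative).
closes = [1.10, 1.11, 1.12, 1.11, 1.13, 1.14, 1.15]
short = sma(closes, 2)
long_ = sma(closes, 4)
```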
## Financial instruments
### Spot
Main page: Finance:Foreign exchange spot
A spot transaction is a two-day delivery transaction (except in the case of trades between the US dollar, Canadian dollar, Turkish lira, euro and Russian ruble, which settle the next business day), as opposed to the futures contracts, which are usually three months. This trade represents a “direct exchange” between two currencies, has the shortest time frame, involves cash rather than a contract, and interest is not included in the agreed-upon transaction. Spot trading is one of the most common types of forex trading. Often, a forex broker will charge a small fee to the client to roll-over the expiring transaction into a new identical transaction for a continuation of the trade. This roll-over fee is known as the "swap" fee.
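The two-business-day convention can be sketched as follows (Python; a simplification that skips weekends but ignores the currency-specific holiday calendars that real spot settlement also observes).

```python
from datetime import date, timedelta

def spot_value_date(trade_date, lag_days=2):
    """Roll forward `lag_days` business days, skipping weekends.

    Simplified sketch: actual spot settlement also skips holidays in
    both currencies' financial centers, which this ignores.
    """
    d = trade_date
    remaining = lag_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon=0 .. Fri=4
            remaining -= 1
    return d

# A Thursday trade settles the following Monday under T+2.
print(spot_value_date(date(2024, 3, 7)))  # → 2024-03-11
```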
### Forward
One way to deal with the foreign exchange risk is to engage in a forward transaction. In this transaction, money does not actually change hands until some agreed upon future date. A buyer and seller agree on an exchange rate for any date in the future, and the transaction occurs on that date, regardless of what the market rates are then. The duration of the trade can be one day, a few days, months or years. The date is usually decided by both parties, and the forward contract is then negotiated and agreed upon.
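Forward rates are commonly quoted so that no riskless profit is available from the interest-rate differential between the two currencies (covered interest rate parity, a standard textbook relation rather than something stated above). A minimal sketch with illustrative rates:

```python
def forward_rate(spot, r_domestic, r_foreign, years):
    """Covered interest rate parity (simple-interest textbook form):
    the forward rate that equates borrowing in one currency with
    lending in the other.  `spot` is domestic units per foreign unit.
    """
    return spot * (1 + r_domestic * years) / (1 + r_foreign * years)

# Illustrative: 1-year forward with 5% domestic vs 3% foreign rates;
# the higher-rate currency trades at a forward discount.
f = forward_rate(1.00, 0.05, 0.03, 1.0)
```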
### Non-deliverable forward (NDF)
Forex banks, ECNs, and prime brokers offer NDF contracts, which are derivatives that have no real deliverability. NDFs are popular for currencies with restrictions such as the Argentinian peso. In fact, a forex hedger can only hedge such risks with NDFs, as currencies such as the Argentinian peso cannot be traded on open markets like major currencies.[76]
### Swap
Main page: Finance:Foreign exchange swap
The most common type of forward transaction is the foreign exchange swap. In a swap, two parties exchange currencies for a certain length of time and agree to reverse the transaction at a later date. These are not standardized contracts and are not traded through an exchange. A deposit is often required in order to hold the position open until the transaction is completed.
### Futures
Main page: Finance:Currency future
Futures are standardized forward contracts and are usually traded on an exchange created for this purpose. The average contract length is roughly 3 months. Futures contracts are usually inclusive of any interest amounts.
Currency futures contracts are contracts specifying a standard volume of a particular currency to be exchanged on a specific settlement date. Currency futures are thus similar to forward contracts in terms of their obligation, but differ from forward contracts in the way they are traded. In addition, futures are settled daily, removing the credit risk that exists in forwards.[77] They are commonly used by MNCs to hedge their currency positions, and they are traded by speculators who hope to capitalize on their expectations of exchange rate movements.
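The daily settlement mechanism can be sketched as follows (Python; the 125,000 EUR contract size follows the common CME convention but is used here purely as an example).

```python
def daily_settlement(entry_price, daily_closes, contract_size, n_contracts=1):
    """Daily mark-to-market cash flows for a long currency-futures
    position: each day the holder receives (or pays) the change in the
    settlement price, so losses never accumulate as credit exposure.
    """
    flows = []
    prev = entry_price
    for close in daily_closes:
        flows.append((close - prev) * contract_size * n_contracts)
        prev = close
    return flows

# Illustrative EUR futures bought at 1.1000; three daily settlements.
flows = daily_settlement(1.1000, [1.1020, 1.0990, 1.1010], 125_000)
```

Note how the flows telescope: their sum equals the total price move times the contract size, but the counterparty never owes more than one day's move.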
### Option
Main page: Finance:Foreign exchange option
A foreign exchange option (commonly shortened to just FX option) is a derivative where the owner has the right but not the obligation to exchange money denominated in one currency into another currency at a pre-agreed exchange rate on a specified date. The FX options market is the deepest, largest and most liquid market for options of any kind in the world.
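At expiry, the owner's right-but-not-obligation translates into the familiar one-sided payoff; a minimal sketch with a hypothetical strike and notional:

```python
def fx_call_payoff(spot_at_expiry, strike, notional):
    """Payoff of a vanilla FX call: the right (not the obligation) to
    buy the foreign currency at `strike`, exercised only when it is
    favourable to do so."""
    return max(spot_at_expiry - strike, 0.0) * notional

# Illustrative: call on EUR/USD struck at 1.10, notional 1,000,000 EUR.
in_the_money = fx_call_payoff(1.15, 1.10, 1_000_000)   # exercised
out_of_money = fx_call_payoff(1.05, 1.10, 1_000_000)   # lapses worthless
```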
## Speculation
Controversy about currency speculators and their effect on currency devaluations and national economies recurs regularly. Economists, such as Milton Friedman, have argued that speculators ultimately are a stabilizing influence on the market, and that stabilizing speculation performs the important function of providing a market for hedgers and transferring risk from those people who don't wish to bear it, to those who do.[78] Other economists, such as Joseph Stiglitz, consider this argument to be based more on politics and a free market philosophy than on economics.[79]
Large hedge funds and other well capitalized "position traders" are the main professional speculators. According to some economists, individual traders could act as "noise traders" and have a more destabilizing role than larger and better informed actors.[80]
Currency speculation is considered a highly suspect activity in many countries. While investment in traditional financial instruments like bonds or stocks often is considered to contribute positively to economic growth by providing capital, currency speculation does not; according to this view, it is simply gambling that often interferes with economic policy. For example, in 1992, currency speculation forced Sweden's central bank, the Riksbank, to raise interest rates for a few days to 500% per annum, and later to devalue the krona.[81] Mahathir Mohamad, one of the former Prime Ministers of Malaysia, is one well-known proponent of this view. He blamed the devaluation of the Malaysian ringgit in 1997 on George Soros and other speculators.
Gregory Millman reports on an opposing view, comparing speculators to "vigilantes" who simply help "enforce" international agreements and anticipate the effects of basic economic "laws" in order to profit.[82] In this view, countries may develop unsustainable economic bubbles or otherwise mishandle their national economies, and foreign exchange speculators made the inevitable collapse happen sooner. A relatively quick collapse might even be preferable to continued economic mishandling, followed by an eventual, larger, collapse. Mahathir Mohamad and other critics of speculation are viewed as trying to deflect the blame from themselves for having caused the unsustainable economic conditions.
## Risk aversion
Fig. 1: The MSCI World Index of Equities fell while the US dollar index rose
Risk aversion is a kind of trading behavior exhibited by the foreign exchange market when a potentially adverse event happens that may affect market conditions. This behavior is caused when risk averse traders liquidate their positions in risky assets and shift the funds to less risky assets due to uncertainty.[83]
In the context of the foreign exchange market, traders liquidate their positions in various currencies to take up positions in safe-haven currencies, such as the US dollar.[84] Sometimes, the choice of a safe haven currency is more of a choice based on prevailing sentiments rather than one of economic statistics. An example would be the financial crisis of 2008. The value of equities across the world fell while the US dollar strengthened (see Fig.1). This happened despite the strong focus of the crisis in the US.[85]
Currency carry trade refers to the act of borrowing one currency that has a low interest rate in order to purchase another with a higher interest rate. A large difference in rates can be highly profitable for the trader, especially if high leverage is used. However, with all levered investments this is a double edged sword, and large exchange rate price fluctuations can suddenly swing trades into huge losses.
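A stylized one-period carry-trade calculation (Python, with made-up rates; it ignores margin requirements and funding mechanics) shows both edges of the sword:

```python
def carry_trade_pnl(capital, r_funding, r_target, fx_change, leverage=1.0):
    """One-period carry-trade profit or loss on `capital` of own funds:
    earn the interest-rate differential, gain or lose the percentage
    move in the target currency, all scaled by leverage."""
    carry = r_target - r_funding
    return (carry + fx_change) * leverage * capital

# Illustrative: borrow at 0.5%, invest at 4.5%, 10x leverage.
gain = carry_trade_pnl(100_000, 0.005, 0.045, 0.00, leverage=10)   # flat FX
loss = carry_trade_pnl(100_000, 0.005, 0.045, -0.06, leverage=10)  # 6% drop
```

With the exchange rate flat, 10x leverage turns a 4% rate differential into a 40% return on capital; a 6% adverse move more than wipes that out.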
## References
1. Record, Neil, Currency Overlay (Wiley Finance Series)
2. Global imbalances and destabilizing speculation (2007), UNCTAD Trade and development report 2007 (Chapter 1B).
3. CR Geisst – Encyclopedia of American Business History Infobase Publishing, 1 January 2009 Retrieved 14 July 2012 ISBN:1438109873
4. GW Bromiley – International Standard Bible Encyclopedia: A–D William B. Eerdmans Publishing Company, 13 February 1995 Retrieved 14 July 2012 ISBN:0802837816
5. T Crump – The Phenomenon of Money (Routledge Revivals) Taylor & Francis US, 14 January 2011 Retrieved 14 July 2012 ISBN:0415611873
6. J Hasebroek – Trade and Politics in Ancient Greece Biblo & Tannen Publishers, 1 March 1933 Retrieved 14 July 2012 ISBN:0819601500
7. S von Reden (2007 Senior Lecturer in Ancient History and Classics at the University of Bristol, UK) - Money in Ptolemaic Egypt: From the Macedonian Conquest to the End of the Third Century BC (p.48) Cambridge University Press , 6 December 2007 ISBN:0521852641 [Retrieved 25 March 2015]
8. Mark Cartwright. "Trade in Ancient Greece".
9. RC Smith, I Walter, G DeLong – Global Banking Oxford University Press, 17 January 2012 Retrieved 13 July 2012 ISBN:0195335937
10. (tertiary) – G Vasari – The Lives of the Artists Retrieved 13 July 2012 ISBN:019283410X
11. (page 130 of) Raymond de Roover – The Rise and Decline of the Medici Bank: 1397–1494 Beard Books, 1999 Retrieved 14 July 2012 ISBN:1893122328
12. RA De Roover – The Medici Bank: its organization, management, operations and decline New York University Press, 1948 Retrieved 14 July 2012
13. Cambridge dictionaries online – "nostro account"
14. Oxford dictionaries online – "nostro account"
15. S Homer, Richard E Sylla A History of Interest Rates John Wiley & Sons , 29 August 2005 Retrieved 14 July 2012 ISBN:0471732834
16. T Southcliffe Ashton – An Economic History of England: The 18th Century, Volume 3 Taylor & Francis, 1955 Retrieved 13 July 2012
17. (page 196 of) JW Markham A Financial History of the United States, Volumes 1–2 M.E. Sharpe, 2002 Retrieved 14 July 2012 ISBN:0765607301
18. (page 847) of M Pohl, European Association for Banking History – Handbook on the History of European Banks Edward Elgar Publishing, 1994 Retrieved 14 July 2012
19. (secondary) – [1] Retrieved 13 July 2012
20. S Shamah – A Foreign Exchange Primer ["1880" is within 1.2 Value Terms] John Wiley & Sons, 22 November 2011 Retrieved 27 July 2012 ISBN:1119994896
21. T Hong – Foreign Exchange Control in China: First Edition (Asia Business Law Series Volume 4) Kluwer Law International, 2004 ISBN:9041124268 Retrieved 12 January 2013
22. P Mathias, S Pollard – The Cambridge Economic History of Europe: The industrial economies : the development of economic and social policies Cambridge University Press, 1989 Retrieved 13 July 2012 ISBN:0521225043
23. S Misra, PK Yadav [2]International Business: Text And Cases PHI Learning Pvt. Ltd. 2009 Retrieved 27 July 2012 ISBN:8120336526
24. P. L. Cottrell – Centres and Peripheries in Banking: The Historical Development of Financial Markets Ashgate Publishing, Ltd., 2007 Retrieved 13 July 2012 ISBN:0754661210
25. P. L. Cottrell (p. 75)
26. J Wake – Kleinwort, Benson: The History of Two Families in Banking Oxford University Press, 27 February 1997 Retrieved 13 July 2012 ISBN:0198282990
27. J Atkin – The Foreign Exchange Market Of London: Development Since 1900 Psychology Press, 2005 Retrieved 13 July 2012 ISBN:041534901X
28. Laurence S. Copeland – Exchange Rates and International Finance Pearson Education, 2008 Retrieved 15 July 2012 ISBN:0273710273
29. M Sumiya – A History of Japanese Trade and Industry Policy Oxford University Press, 2000 Retrieved 13 July 2012 ISBN:0198292511
30. RC Smith, I Walter, G DeLong (p.4)
31. AH Meltzer – A History of the Federal Reserve, Volume 2, Book 1; Books 1951–1969 University of Chicago Press, 1 February 2010 Retrieved 14 July 2012 ISBN:0226520013
32. (page 7 "fixed exchange rates" of) DF DeRosa –Options on Foreign Exchange Retrieved 15 July 2012
33. K Butcher – Forex Made Simple: A Beginner's Guide to Foreign Exchange Success John Wiley and Sons, 18 February 2011 Retrieved 13 July 2012 ISBN:0730375250
34. J Madura – International Financial Management, Cengage Learning, 12 October 2011 Retrieved 14 July 2012 ISBN:0538482966
35. N DraKoln – Forex for Small Speculators Enlightened Financial Press, 1 April 2004 Retrieved 13 July 2012 ISBN:0966624580
36. SFO Magazine, RR Wasendorf, Jr.) (INT) – Forex Trading PA Rosenstreich – The Evolution of FX and Emerging Markets Traders Press, 30 June 2009 Retrieved 13 July 2012 ISBN:1934354104
37. J Jagerson, SW Hansen – All About Forex Trading McGraw-Hill Professional, 17 June 2011 Retrieved 13 July 2012 ISBN:007176822X
38. Franz Pick Pick's currency yearbook 1977 – Retrieved 15 July 2012
39. page 70 of Swoboda
40. G Gandolfo – International Finance and Open-Economy Macroeconomics Springer, 2002 Retrieved 15 July 2012 ISBN:3540434593
41. City of London: The History Random House , 31 December 2011 Retrieved 15 July 2012 ISBN:1448114721
42. "Thursday was aborted by news of a record assault on the dollar that forced the closing of most foreign exchange markets." in The outlook: Volume 45, published by Standard and Poor's Corporation – 1972 – Retrieved 15 July 2012 → [3]
43. H Giersch, K-H Paqué, H Schmieding – The Fading Miracle: Four Decades of Market Economy in Germany Cambridge University Press, 10 November 1994 Retrieved 15 July 2012 ISBN:0521358698
44. International Center for Monetary and Banking Studies, AK Swoboda – Capital Movements and Their Control: Proceedings of the Second Conference of the International Center for Monetary and Banking Studies BRILL, 1976 Retrieved 15 July 2012 ISBN:902860295X
45. ( -p. 332 of ) MR Brawley – Power, Money, And Trade: Decisions That Shape Global Economic Relations University of Toronto Press, 2005 Retrieved 15 July 2012 ISBN:1551116839
46. "... forced to close for several days in mid-1972, ... The foreign exchange markets were closed again on two occasions at the beginning of 1973,.. " in H-J Rüstow New paths to full employment: the failure of orthodox economic theory Macmillan, 1991 Retrieved 15 July 2012 → [4]
47. Chen, James (2009). Essentials of Foreign Exchange Trading. ISBN 0470464003. Retrieved 15 November 2016.
48. Hicks, Alan (2000). Managing Currency Risk Using Foreign Exchange Options. ISBN 1855734915. Retrieved 15 November 2016.
49. Johnson, G. G. (1985). Formulation of Exchange Rate Policies in Adjustment Programs. ISBN 0939934507. Retrieved 15 November 2016.
50. JA Dorn – China in the New Millennium: Market Reforms and Social Development Cato Institute, 1998 Retrieved 14 July 2012 ISBN:1882577612
51. B Laurens, H Mehran, M Quintyn, T Nordman – Monetary and Exchange System Reforms in China: An Experiment in Gradualism International Monetary Fund, 26 September 1996 Retrieved 14 July 2012 ISBN:1452766126
52. Y-I Chung – South Korea in the Fast Lane: Economic Development and Capital Formation Oxford University Press, 20 July 2007 Retrieved 14 July 2012 ISBN:0195325451
53. KM Dominguez, JA Frankel – Does Foreign Exchange Intervention Work? Peterson Institute for International Economics, 1993 Retrieved 14 July 2012 ISBN:0881321044
54. (page 211 – [source BIS 2007]) H Van Den Berg – International Finance and Open-Economy Macroeconomics: Theory, History, and Policy World Scientific, 31 August 2010 Retrieved 14 July 2012 ISBN:9814293512
55. PJ Quirk Issues in International Exchange and Payments Systems International Monetary Fund, 13 April 1995 Retrieved 14 July 2012 ISBN:1557754802
56. "Report on global foreign exchange market activity in 2013" (PDF). Basel, Switzerland: Bank for International Settlements. September 2013. p. 12.
57. "Derivatives in emerging markets", the Bank for International Settlements, 13 December 2010
58. "The \$4 trillion question: what explains FX growth since the 2007 survey?", the Bank for International Settlements, 13 December 2010
59. "Triennial Central Bank Survey Foreign exchange turnover in April 2016" (PDF). Basel, Switzerland: Bank for International Settlements. September 2016.
60. Gabriele Galati, Michael Melvin (December 2004). "Why has FX trading surged? Explaining the 2004 triennial survey". Bank for International Settlements.
61. Alan Greenspan, The Roots of the Mortgage Crisis: Bubbles cannot be safely defused by monetary policy before the speculative fever breaks on its own. , the Wall Street Journal, 12 December 2007
62. McKay, Peter A. (26 July 2005). "Scammers Operating on Periphery Of CFTC's Domain Lure Little Guy With Fantastic Promises of Profits". The Wall Street Journal.
63. The Sunday Times (London), 16 July 2006
64. "Safe Haven Currency". Financial Glossary (Reuters).
65. John J. Murphy, Technical Analysis of the Financial Markets (New York Institute of Finance, 1999), pp. 343–375.
66. Sam Y. Cross, All About the Foreign Exchange Market in the United States, Federal Reserve Bank of New York (1998), chapter 11, pp. 113–115.
67. Gelet, Joseph (2016). Splitting Pennies. Elite E Services. ISBN:9781533331090.
68. Arlie O. Petters; Xiaoying Dong (17 June 2016). An Introduction to Mathematical Finance with Applications: Understanding and Building Financial Intuition. Springer. pp. 345–. ISBN 978-1-4939-3783-7.
69. Michael A. S. Guth, "Profitable Destabilizing Speculation," Chapter 1 in Michael A. S. Guth, Speculative behavior and the operation of competitive markets under uncertainty, Avebury Ashgate Publishing, Aldorshot, England (1994), ISBN:1-85628-985-0.
70. What I Learned at the World Economic Crisis Joseph Stiglitz, The New Republic, 17 April 2000, reprinted at GlobalPolicy.org
71. Lawrence Summers and Summers VP (1989) 'When financial markets work too well: a Cautious case for a securities transaction tax' Journal of financial services
72. Redburn, Tom (17 September 1992). "But Don't Rush Out to Buy Kronor: Sweden's 500% Gamble". The New York Times.
73. Gregory J. Millman, Around the World on a Trillion Dollars a Day, Bantam Press, New York, 1995.
74. "Risk Averse". Investopedia.
75. Moon, Angela (5 February 2010). "Global markets – US stocks rebound, dollar gains on risk aversion". Reuters.
76. Stewart, Heather (9 April 2008). "IMF says US crisis is 'largest financial shock since Great Depression'". The Guardian (London). | 2022-01-17 22:00:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26927968859672546, "perplexity": 6926.213625313649}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300624.10/warc/CC-MAIN-20220117212242-20220118002242-00488.warc.gz"} |
http://mathhelpforum.com/discrete-math/147201-how-many-five-digit-integers-10-000-99-999-divisible-5-a-print.html | # How many five digit integers (10,000-99,999) are divisible by 5?
• Jun 1st 2010, 12:31 AM
taurus
How many five digit integers (10,000-99,999) are divisible by 5?
When I did it I got 2000*10-2000=18000. But I was marked wrong and can't see where I went wrong.
• Jun 1st 2010, 12:52 AM
undefined
Quote:
Originally Posted by taurus
When I did it I got 2000*10-2000=18000. But I was marked wrong and can't see where I went wrong.
The answer is indeed 18000. I don't know why it got marked wrong.
• Jun 1st 2010, 02:08 AM
simplependulum
The minimum is $10000$ and the maximum is $99995$, so the total number is:
$\frac{99995-10000}{5} + 1$
$= 18000$
• Jun 1st 2010, 02:41 AM
Quote:
Originally Posted by taurus
When I did it I got 2000*10-2000=18000. But I was marked wrong and can't see where I went wrong.
Maybe it was not the final answer that was marked wrong.
Possibly a certain method was being examined to arrive at the answer.
You either calculated the amount of non-zero numbers up to 100,000 divisible by 5
and got 20,000, by dividing 100,000 by 5 as every 5th number is a multiple of 5....
then you subtracted the amount of numbers up to 10,000 divisible by 5 and got 2,000.
Subtracting these gives 18,000.
Unfortunately, this is the amount of non-zero numbers from 10,001 to 100,000 inclusive that are divisible by 5.
It is of course good enough, but doesn't answer the question directly.
The examiner may have thought this is what you did.
Then again, you may have done the following...
Your answer is good enough because you can start from 0, thereby finding the amount of numbers from 0 to 99,999 divisible by 5
and the amount of numbers from 0 to 9,999 divisible by 5.
Maybe the examiner didn't think through your method.
From 10,000 to 99,999 the numbers divisible by 5 end in 0 or 5.
The first (most significant) digit can be any of nine from 1 to 9.
The 2nd digit can be any of 10.
The 3rd digit can be any of 10.
The 4th digit can be any of 10.
The 5th digit can be either of 2.
That's 9(10)(10)(10)2=18,000 | 2017-11-18 04:39:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5468441843986511, "perplexity": 751.577272473325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804610.37/warc/CC-MAIN-20171118040756-20171118060756-00462.warc.gz"} |
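A quick brute-force check (a Python sketch, not part of the original thread) confirms both counting arguments above:

```python
# Count five-digit multiples of 5 directly, then compare against the
# digit-by-digit argument: 9 choices * 10 * 10 * 10 * 2 endings.
count = sum(1 for n in range(10_000, 100_000) if n % 5 == 0)
print(count)                           # → 18000
print(9 * 10 * 10 * 10 * 2 == count)  # → True
```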
http://mymathforum.com/algebra/339392-i-don-t-think-there-enough-information-here.html | My Math Forum I don't think there is enough information here...
Algebra Pre-Algebra and Basic Algebra Math Forum
#1, March 5th, 2017, 09:51 PM (Senior Member): At a diving competition, divers jump off a 64 foot high cliff. Solve for how long it will take the diver to hit the water. Show your work and explain the steps you used to solve. Correct me if I'm wrong, but I feel like you would need more information (i.e., the velocity of the dive) to solve this. If I'm wrong, could someone please help me with this? Thanks in advance!
#2, March 5th, 2017, 09:52 PM (Senior Member): assume $v_0=0$
#3, March 11th, 2017, 02:36 PM (Newbie): Think this might be what you're wanting to do: h(t) = 16t^2, where h is height and t^2 is time squared, so plug in the dive distance of 64 feet and solve for t.
#4, March 11th, 2017, 03:16 PM (Math Team): $h(t) = 64-16t^2$
#5, March 12th, 2017, 04:00 AM (Math Team): If the initial velocity has zero vertical component, it plays no role.
#6, March 12th, 2017, 10:52 AM (Newbie): Here's the answer I get, with initial velocity of zero, assuming zero resistance and seconds for time (the diver accelerates due to the constant force of gravity): h(t) = 16t^2; 64t = 16 t^2; 64/16 = t^2 / t; 4 = t
#7, March 12th, 2017, 11:01 AM (Global Moderator): No. With h(t) = 64 - 16t², one gets h(t) = 0 when t = 2.
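A quick sanity check of the moderator's answer (a Python sketch, not from the thread itself): with the diver dropping from rest, the height is h(t) = 64 - 16t² in feet and seconds, so the water is reached when 16t² = 64.

```python
import math

# Solve 64 - 16*t**2 = 0 for the positive root.
t_hit = math.sqrt(64 / 16)
print(t_hit)  # → 2.0
```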
https://optimization-online.org/2007/05/ | ## Some Relations Between Facets of Low- and High-Dimensional Group Problems
In this paper, we introduce an operation that creates families of facet-defining inequalities for high-dimensional infinite group problems using facet-defining inequalities of lower-dimensional group problems. We call this family sequential-merge inequalities because they are produced by applying two group cuts one after the other and because the resultant inequality depends on the order of the … Read more
## An Integer Programming Approach to the Path Selection Problems
We consider two types of path selection problems defined on arc-capacitated networks. Given an arc-capacitated network and a set of selected ordered pairs of nodes (commodity) each of which has a demand quantity, the first problem is to select a subset of commodities and setup one path for each chosen commodity to maximize profit, while … Read more
## A Computational Analysis of Lower Bounds for Big Bucket Production Planning Problems
In this paper, we analyze a variety of approaches to obtain lower bounds for multi-level production planning problems with big bucket capacities, i.e., problems in which multiple items compete for the same resources. We give an extensive survey of both known and new methods, and also establish relationships between some of these methods that, to … Read more
## Convex duality and entropy-based moment closure: Characterizing degenerate densities
A common method for constructing a function from a finite set of moments is to solve a constrained minimization problem. The idea is to find, among all functions with the given moments, that function which minimizes a physically motivated, strictly convex functional. In the kinetic theory of gases, this functional is the kinetic entropy; the … Read more
## The Value of Information in Inventory Management
Inventory management traditionally assumes the precise knowledge of the underlying demand distribution and a risk-neutral manager. New product introduction does not fit this framework because (i) not enough information is available to compute probabilities and (ii) managers are generally risk-averse. In this work, we analyze the value of information for two-stage inventory management in a … Read more
## The Value of Information in the Newsvendor Problem
In this work, we investigate the value of information when the decision-maker knows whether a perishable product will be in high, moderate or low demand before placing his order. We derive optimality conditions for the probability of the baseline scenario under symmetric distributions and analyze the impact of the cost parameters on simulation experiments. Our … Read more
## Self-concordant Tree and Decomposition Based Interior Point Methods for Stochastic Convex Optimization Problem
We consider barrier problems associated with two- and multistage stochastic convex optimization problems. We show that the barrier recourse functions at any stage form a self-concordant family with respect to the barrier parameter. We also show that the complexity value of the first-stage problem increases additively with the number of stages and scenarios. We … Read more
## On the Extension of a Mehrotra-Type Algorithm for Semidefinite Optimization
It has been shown in various papers that most interior-point algorithms and their analysis can be generalized to semidefinite optimization. This paper presents an extension of the recent variant of Mehrotra’s predictor-corrector algorithm that was proposed by Salahi et al. (2005) for linear optimization problems. Based on the NT (Nesterov and Todd 1997) direction as … Read more
## A Coordinate Gradient Descent Method for Linearly Constrained Smooth Optimization and Support Vector Machines Training
Support vector machines (SVMs) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. … Read more
## On the probabilistic complexity of finding an approximate solution for linear programming
We consider the problem of finding an $\epsilon-$optimal solution of a standard linear program with real data, i.e., of finding a feasible point at which the objective function value differs by at most $\epsilon$ from the optimal value. In the worst-case scenario the best complexity result to date guarantees that such a point is obtained … Read more | 2023-03-31 22:06:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.590363621711731, "perplexity": 537.7259865725803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949689.58/warc/CC-MAIN-20230331210803-20230401000803-00534.warc.gz"} |
https://socratic.org/questions/how-do-you-differentiate-g-x-6x-9-sin-3x-using-the-product-rule | # How do you differentiate g(x) = (6x+9)sin(3x) using the product rule?
Oct 20, 2017
$g ' \left(x\right) = 3 \left(6 x + 9\right) \cos \left(3 x\right) + 6 \sin \left(3 x\right)$
#### Explanation:
First, we recall the product rule:
$f \left(x\right) = a \cdot b$
$f ' \left(x\right) = a \cdot \frac{\mathrm{da}}{\mathrm{dx}} \cdot b + a \cdot \frac{\mathrm{db}}{\mathrm{dx}}$... more precisely,
$f ' \left(x\right) = a \cdot \frac{\mathrm{db}}{\mathrm{dx}} + b \cdot \frac{\mathrm{da}}{\mathrm{dx}}$
From the question above, we can think of $\left(6 x + 9\right)$ as $a$ and $\sin \left(3 x\right)$ as $b$. So we have the following:
$g ' \left(x\right) = \left(6 x + 9\right) \cdot \frac{d}{\mathrm{dx}} \sin \left(3 x\right) + \sin \left(3 x\right) \cdot \frac{d}{\mathrm{dx}} \left(6 x + 9\right)$
$= \left(6 x + 9\right) \cdot \cos \left(3 x\right) \cdot \frac{d}{\mathrm{dx}} \left(3 x\right) + \sin \left(3 x\right) \cdot 6$
$= 3 \left(6 x + 9\right) \cos \left(3 x\right) + 6 \sin \left(3 x\right)$ | 2019-02-16 11:07:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9926081895828247, "perplexity": 829.5713784231859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480272.15/warc/CC-MAIN-20190216105514-20190216131514-00313.warc.gz"} |
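As an independent numerical sanity check (not part of the original answer), the closed form can be compared against a central-difference approximation of the derivative:

```python
import math

# Compare a central-difference approximation of g'(x) with the
# closed form 3(6x+9)cos(3x) + 6sin(3x) derived above.
def g(x):
    return (6*x + 9) * math.sin(3*x)

def g_prime(x):
    return 3*(6*x + 9)*math.cos(3*x) + 6*math.sin(3*x)

h = 1e-6
for x in (0.0, 0.7, 2.5):
    numeric = (g(x + h) - g(x - h)) / (2 * h)
    assert abs(numeric - g_prime(x)) < 1e-5
print("derivative formula verified at sample points")
```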
http://webhome.cs.uvic.ca/~ruskey/classes/Knuth4A/ass3_2015.html | # Assignment #3
## Theme: boolean evaluation, broadword computation.
Assigned: Oct. 13, 2015.
Due: Oct. 27, 2015. Reminder: homework should be in LaTeX (or TeX). Due at the beginning of class on the due date.
• Use one of Knuth's SAT solvers to determine the value of the van der Waerden number $W(4,4)$ (from the table on page 5 of 7.2.2.2 on Satisfiability). Explain how you determined the value, showing the output from the solver.
• Find a circuit for addition mod 3 like that in exercise 60 of section 7.1.2 (pg. 129), except use 11 to encode 2 instead of 10. Your circuit should use the same number of gates as the one in Knuth's solution. Explain why it is correct; e.g., by using truth tables.
• Look up the IEEE 754 floating point standard, particularly the representation of 64-bit floating point numbers. Your computer undoubtedly implements this standard. Write a C program that extracts the exponent from such numbers in order to compute $\rho x$. This is somewhat like implementing equation (55) on page 142, but in C instead of MMIX, and with $\rho x$ instead of $\lambda x$. Test your program by showing its output on the numbers $x = 2^{20}+1,2^{20}+2, \ldots, 2^{20}+2^5$. Turn in your program source and output (text files are ok here).
• Prove that $\nu x\ =\ x - \left\lfloor \frac{x}{2} \right\rfloor - \left\lfloor \frac{x}{4} \right\rfloor - \left\lfloor \frac{x}{8} \right\rfloor - \cdots$
• Start thinking about a suitable project. Turn in a one paragraph description of a potential project. | 2018-12-10 12:12:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5071945190429688, "perplexity": 909.5396046850561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823322.49/warc/CC-MAIN-20181210101954-20181210123454-00481.warc.gz"} |
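The floating-point trick in the third item can be sketched in Python before committing to C (the assignment itself asks for C; this is only an illustration of the idea). When a positive power of two is converted to an IEEE 754 binary64 double, the biased exponent field in bits 62..52 holds its base-2 logarithm plus 1023, so isolating the lowest set bit of x with x & -x and reading off that field yields $\rho x$, the number of trailing zero bits:

```python
import struct

def rho(x: int) -> int:
    # Isolate the lowest set bit; for x > 0 this is a power of two
    # that is exactly representable as a double.
    lowbit = x & -x
    # Reinterpret the double's bit pattern as a 64-bit integer and
    # unbias the 11-bit exponent field (bias 1023).
    bits, = struct.unpack('<Q', struct.pack('<d', float(lowbit)))
    return ((bits >> 52) & 0x7FF) - 1023

for x in range(2**20 + 1, 2**20 + 2**5 + 1):
    print(x, rho(x))
```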
https://gmatclub.com/forum/there-are-5-rock-songs-6-pop-songs-and-3-jazz-how-many-different-57548.html
# There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different
Senior Manager
Joined: 01 May 2004
Posts: 330
Location: USA
There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
06 Jul 2004, 20:47
There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different albums can be formed using the above repertoire if the albums should contain at least one Rock song and one Pop song?
A. 15,624
B. 16,384
C. 6,144
D. 384
E. 240
Math Expert
Joined: 02 Sep 2009
Posts: 49303
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
19 Mar 2015, 08:54
Mayanksharma85 wrote:
can somebody please explain why ''4. The number of combination without Pop and Rock songs: Nr=2^3'' was not subtracted from the overall combinations.
There are 3 Jazz songs, each of which can either be included in the album or not, so 2 options per song. Hence, there can be a total of 2^3 = 8 different jazz song combinations for the album. Notice that those 8 combinations include the one combination where none of the jazz songs is included.
Similarly, for the 5 Rock songs there are 2^5 combinations. Since 2^5 also includes the one case in which there are 0 rock songs, we should subtract that case (the album must contain at least one Rock song), giving 2^5 - 1.
For the 6 Pop songs there are 2^6 combinations: 2^6 also includes the one case in which there are 0 pop songs, so we subtract that case (the album must contain at least one Pop song), giving 2^6 - 1.
Total = 2^3*(2^5 - 1)(2^6 - 1) = 15,624.
Hope it's clear.
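For anyone who wants to double-check the answer, a short brute-force enumeration over all 2^14 song subsets (added for illustration, not from the original post) reproduces 15,624:

```python
from itertools import product

# Each of the 14 songs (5 rock, 6 pop, 3 jazz) is either on the album
# or not; keep only subsets with at least one rock and one pop song.
count = 0
for choice in product([0, 1], repeat=14):
    rock, pop = choice[:5], choice[5:11]   # the last 3 slots are jazz
    if any(rock) and any(pop):
        count += 1
print(count)  # 15624
```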
Intern
Joined: 04 Aug 2003
Posts: 30
Location: Texas
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
07 Jul 2004, 09:06
If there are 5 rock songs, there are 2^5 ways to make a combination. But there should be at least 1 rock song, so the total number of rock combinations is 2^5 - 1.
The same reasoning applies to the pop songs.
But the album can be formed without any jazz song, so the jazz factor is just 2^3.
((2^5)-1) * ((2^6)-1) * 2^3
Ans: 15624
##### General Discussion
CEO
Joined: 17 Nov 2007
Posts: 3458
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
26 Dec 2007, 04:53
A
There are 5 Rock songs, 6 Pop songs, and 3 Jazz songs: R=5, P=6, J=3
N - the number of combinations with at least one Rock song and one Pop song
1. Total number of combinations: Nt=2^(5+6+3)=2^14
2. The number of combinations without Rock songs: Nr=2^(6+3)=2^9
3. The number of combinations without Pop songs: Np=2^(5+3)=2^8
4. The number of combinations without Pop and Rock songs: Nrp=2^3
5. N=Nt-Nr-Np+Nrp=2^14-2^9-2^8+2^3=16384-512-256+8=15624
http://www.gmatclub.com/forum/t57169 - a similar approach.
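The inclusion-exclusion arithmetic above is easy to verify mechanically (a quick check added for illustration, not part of the original post):

```python
# Inclusion-exclusion over all subsets of the 14 songs: subtract the
# subsets missing rock and the subsets missing pop, then add back the
# jazz-only subsets, which were subtracted twice.
total   = 2 ** 14        # any subset of all 5 + 6 + 3 songs
no_rock = 2 ** (6 + 3)   # subsets built from pop and jazz only
no_pop  = 2 ** (5 + 3)   # subsets built from rock and jazz only
no_both = 2 ** 3         # subsets built from jazz only
print(total - no_rock - no_pop + no_both)  # 15624
```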
VP
Joined: 28 Dec 2005
Posts: 1472
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
26 Dec 2007, 17:59
Do we not need to know how many songs constitute an 'album' for this question?
And walker, can you explain the logic behind 2^x for the total number of combinations?
CEO
Joined: 17 Nov 2007
Posts: 3458
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
26 Dec 2007, 23:59
pmenon wrote:
And walker, can you explain the logic between 2^x for total number of combinations ?
1. We have n songs: S={1,2,3,4,....,n}
2. Each song may either be included in or excluded from a list (two possibilities).
3. So we can picture our list of songs as a={1,0,0,1,1,0,1,0,....,1}, where 1 = in the list and 0 = out of the list.
4. How many lists can we compose? N=2*2*2*......*2=2^n
Manager
Joined: 01 Nov 2007
Posts: 98
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
21 Mar 2008, 00:57
walker, thanks a lot for perfect explanation!
CEO
Joined: 17 Nov 2007
Posts: 3458
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
21 Mar 2008, 01:00
It is a great example in which the use of the nCk and nPk formulas is inappropriate.
Intern
Joined: 07 May 2014
Posts: 13
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many [#permalink]
19 Mar 2015, 08:41
Can somebody please explain why ''4. The number of combination without Pop and Rock songs: Nr=2^3'' was not subtracted from the overall combinations?
Intern
Joined: 07 May 2014
Posts: 13
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
19 Mar 2015, 10:45
it is, thanks Bunuel.
Manager
Joined: 01 Apr 2015
Posts: 54
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
10 Apr 2015, 07:38
Bunuel, just a small correction: in the 3rd paragraph of your answer, shouldn't it say pop songs ("for 6 pop songs") instead of rock?
Intern
Joined: 06 Jun 2014
Posts: 47
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
05 May 2015, 10:29
Bunuel, I have a doubt about this question. As I understand it, the solution of 15,624 albums means that even a single song can make an album, correct? If that is the case, shouldn't that be explicitly stated in the prompt? The way I approached this question, I assumed that each album needs to have one rock, one pop and one jazz song, so basically one album consists of 3 songs, of which at least one rock and one pop song need to be on the album. That is my confusion. Could you briefly address my issue here? Thanks a lot.
e-GMAT Representative
Joined: 04 Jan 2015
Posts: 2007
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
08 May 2015, 00:59
kzivrev wrote: [...]
Hi kzivrev,
The question places a restriction of having at least 1 rock song and 1 pop song in an album. So an album can have a minimum of 2 songs; that is the case when there is 1 rock song, 1 pop song and 0 jazz songs in the album.
Since the question does not place any restriction on having a jazz song in the album, an album can be without a jazz song. Hence, your assumption of a minimum of 3 songs in an album is not valid.
If you observe the solution $$2^3*(2^5 - 1)(2^6 - 1) = 15,624$$: here 1 case has been subtracted from each of $$2^5$$ and $$2^6$$ to eliminate the possibility of having 0 rock songs or 0 pop songs, respectively, in an album. There is no such elimination for jazz songs, as there is no restriction requiring a jazz song in the album.
Hope it's clear!
Regards
Harsh
Intern
Joined: 13 Mar 2011
Posts: 21
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
22 Mar 2016, 06:49
Bunuel wrote:
[...] Total = 2^3*(2^5 - 1)(2^6 - 1) = 15,624.
This way of solving the problem is a little difficult to comprehend. The solution 2^5 * 2^6 * 2^3 - 2^8 - 2^9 + 2^3 looks easier to understand. However, I am still bewildered. Could you please help me figure it out? Why do we add 2^3 (the jazz-only albums) instead of subtracting it? Why aren't we subtracting albums that contain only Rock or only Pop songs?
Math Expert
Joined: 02 Aug 2009
Posts: 6800
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
22 Mar 2016, 07:03
leeto wrote:
This way of solving the problem a little bit difficult to comprehend. The solution 2^5 * 2^6 * 2^3 - 2^8 - 2^9 + 2^3 looks easier to understand. However, I still bewilder. Please could you help me to figure out. Why we add 2^3 ( only Jazz album ) instead of subtracting them? Why we aren't subtracting albums that contain only Rock and only Pop songs ?
Hi,
When you subtract 2^8, you are removing the combinations of the 5 rock songs and 3 jazz songs (no pop), and
when you subtract 2^9, you are removing the combinations of the 6 pop songs and 3 jazz songs (no rock).
So in both cases the jazz-only combinations are subtracted, i.e., subtracted twice in total; hence we add 2^3 back.
If you expand 2^3*(2^5 - 1)(2^6 - 1), it becomes 2^5 * 2^6 * 2^3 - 2^8 - 2^9 + 2^3.
Intern
Joined: 13 Mar 2011
Posts: 21
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
25 Mar 2016, 04:08
chetan2u wrote: [...]
thanks, I think I got it. Could I ask another question, about how the solution handles albums without songs ("zero-song albums")?
So, do I understand correctly that basically:
2^14 contains one "zero-song album" => $$R^0 P^0 J^0$$
2^9 contains one "zero-song album" too => $$P^0 J^0$$
2^8 contains one "zero-song album" too => $$R^0 J^0$$
2^3 contains one "zero-song album" too => $$J^0$$
So, in the end, we have $$R^0 P^0 J^0 - P^0 J^0 - R^0 J^0 + J^0 = 0$$ => 1 ("zero-song album") - 1 - 1 + 1 = 0 ("zero-song albums").
To sum up, it looks like our final solution handles "zero-song albums" automatically; that is why we didn't need to take any additional steps. Is that correct? (Sorry if the formula looks a little abstract; I hope you get my concern.)
Math Expert
Joined: 02 Aug 2009
Posts: 6800
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
25 Mar 2016, 04:32
leeto wrote: [...]
Yes, your understanding of the concept is correct.
Now that you have realized this, it will be easier to understand the direct formula:
$$2^3*(2^5 - 1)(2^6 - 1)$$
$$(2^6 - 1)$$ counts the ways in which at least one POP song is included: the one case where none were included has been removed.
Similarly, $$(2^5 - 1)$$ counts the ways in which at least one ROCK song is included, again with the one empty case removed.
And $$2^3$$ remains as it is, since it is possible that NO JAZZ song is included.
Intern
Joined: 16 Dec 2013
Posts: 39
Location: United States
GPA: 3.7
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
16 Apr 2016, 06:22
Bunuel wrote: [...]
I understand the method for this problem, but I'm curious as to why (14!/(5!x6!x3!)) - 3! won't work?
Math Expert
Joined: 02 Aug 2009
Posts: 6800
Re: There are 5 Rock songs, 6 Pop songs, and 3 Jazz. How many different [#permalink]
### Show Tags
16 Apr 2016, 06:37
Avinashs87 wrote:
I understand the method to this problem...but curious as to why (14!/(5!x6!x3!)- 3!) won't work?
Hi
Firstly, your formula 14!/(5!6!3!) counts the ways in which 14 songs can be arranged when 5 songs are of one type and 6 and 3 are of the other two types.
It is used, say, when I ask you how many different words you can form from 'PASSION', which is 7!/2!.
Why should you do that here? And what does the 3! stand for?
Rather, it has to be the other way around: you should explain why that formula should work.
Maybe then someone can find the error.
http://ec.citizendium.org/wiki/Euclid%27s_lemma | # Euclid's lemma
In number theory, Euclid's lemma, named after the ancient Greek geometer and number theorist Euclid of Alexandria, states that if a prime number p is a divisor of the product of two integers, ab, then either p is a divisor of a or p is a divisor of b (or both).
Euclid's lemma is used in the proof of the unique factorization theorem, which states that a number cannot have more than one prime factorization.
## Proof
In order to prove Euclid's lemma we will first prove another, unnamed, lemma that will become useful later. This additional lemma is
Lemma 1: Suppose p and q are relatively prime integers and that p|kq for some integer k. Then p|k.
Proof: Because p and q are relatively prime, the Euclidean Algorithm tells us that there exist integers r and s such that 1=gcd(p,q)=rp+sq. Next, since p|kq there exists some integer n such that np=kq. Now write
k = 1·k = (rp+sq)k = rpk + s(kq) = rpk + s(np) = p(rk+sn).
Since rk+sn is an integer, this shows that p|k as desired.
Now we can prove Euclid's lemma. Let a, b, p ${\displaystyle \in \mathbb {Z} }$ with p prime, and suppose that p is a divisor of ab, p|ab. Now let g=gcd(a,p). Since p is prime and g divides it, either g=p or g=1. In the first case, p divides a by the definition of the gcd, so we are done. In the second case we have that a and p are relatively prime and that p|ba, so by Lemma 1, p divides b. Thus in either case p divides (at least) one of a and b. Note that it is of course possible for p to divide both a and b, the simplest example of which is the case a=b=p.
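As a quick numerical sanity check (not a substitute for the proof), the lemma can be verified over small cases. This sketch is illustrative only; the function name is made up, and the check also shows that primality of p matters, since composite moduli can fail:

```python
def euclids_lemma_holds(p, a, b):
    """If p divides a*b, check that p divides a or p divides b."""
    if (a * b) % p != 0:
        return True  # hypothesis not satisfied; the lemma claims nothing here
    return a % p == 0 or b % p == 0

# Spot-check over small primes and integer pairs
primes = [2, 3, 5, 7, 11, 13]
all_hold = all(euclids_lemma_holds(p, a, b)
               for p in primes
               for a in range(1, 60)
               for b in range(1, 60))
# all_hold is True for every prime p tested

# For a composite modulus the conclusion can fail: 4 | 2*6, but 4 divides neither 2 nor 6
composite_fails = not euclids_lemma_holds(4, 2, 6)
```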
https://stats.libretexts.org/Bookshelves/Applied_Statistics/Book%3A_Answering_Questions_with_Data_-__Introductory_Statistics_for_Psychology_Students_(Crump)/01%3A_Why_Statistics/1.03%3A_Statistics_in_Psychology | 1.3: Statistics in Psychology
I hope that the discussion above helped explain why science in general is so focused on statistics. But I’m guessing that you have a lot more questions about what role statistics plays in psychology, and specifically why psychology classes always devote so many lectures to stats. So here’s my attempt to answer a few of them…
• Why does psychology have so much statistics?
To be perfectly honest, there’s a few different reasons, some of which are better than others. The most important reason is that psychology is a statistical science. What I mean by that is that the “things” that we study are people. Real, complicated, gloriously messy, infuriatingly perverse people. The “things” of physics include objects like electrons, and while there are all sorts of complexities that arise in physics, electrons don’t have minds of their own. They don’t have opinions, they don’t differ from each other in weird and arbitrary ways, they don’t get bored in the middle of an experiment, and they don’t get angry at the experimenter and then deliberately try to sabotage the data set. At a fundamental level psychology is harder than physics.
Basically, we teach statistics to you as psychologists because you need to be better at stats than physicists. There’s actually a saying used sometimes in physics, to the effect that “if your experiment needs statistics, you should have done a better experiment”. They have the luxury of being able to say that because their objects of study are pathetically simple in comparison to the vast mess that confronts social scientists. It’s not just psychology, really: most social sciences are desperately reliant on statistics. Not because we’re bad experimenters, but because we’ve picked a harder problem to solve. We teach you stats because you really, really need it.
• Can’t someone else do the statistics?
To some extent, but not completely. It’s true that you don’t need to become a fully trained statistician just to do psychology, but you do need to reach a certain level of statistical competence. In my view, there’s three reasons that every psychological researcher ought to be able to do basic statistics:
1. There’s the fundamental reason: statistics is deeply intertwined with research design. If you want to be good at designing psychological studies, you need to at least understand the basics of stats.
2. If you want to be good at the psychological side of the research, then you need to be able to understand the psychological literature, right? But almost every paper in the psychological literature reports the results of statistical analyses. So if you really want to understand the psychology, you need to be able to understand what other people did with their data. And that means understanding a certain amount of statistics.
3. There’s a big practical problem with being dependent on other people to do all your statistics: statistical analysis is expensive. In almost any real life situation where you want to do psychological research, the cruel facts will be that you don’t have enough money to afford a statistician. So the economics of the situation mean that you have to be pretty self-sufficient.
Note that a lot of these reasons generalize beyond researchers. If you want to be a practicing psychologist and stay on top of the field, it helps to be able to read the scientific literature, which relies pretty heavily on statistics.
• I don’t care about jobs, research, or clinical work. Do I need statistics?
Okay, now you’re just messing with me. Still, I think it should matter to you too. Statistics should matter to you in the same way that statistics should matter to everyone: we live in the 21st century, and data are everywhere. Frankly, given the world in which we live these days, a basic knowledge of statistics is pretty damn close to a survival tool! Which is the topic of the next section…
1.3: Statistics in Psychology is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Matthew J. C. Crump via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
https://www.gradesaver.com/textbooks/math/calculus/calculus-10th-edition/chapter-p-p-2-linear-models-and-rates-of-change-exercises-page-18/76 | ## Calculus 10th Edition
Published by Brooks Cole
# Chapter P - P.2 - Linear Models and Rates of Change - Exercises: 76
Linear equation: $C=0.51x+200$

Answer to the question: It will cost the company $269.87 if a sales representative drives 137 miles on any given day.

#### Work Step by Step

If $x$ is the number of miles driven, multiply the number of miles driven by the amount paid per mile ($0.51), and then add the $200 paid per day.
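The same computation can be checked in a few lines of Python (the function name and defaults are just for illustration):

```python
def daily_cost(miles, rate_per_mile=0.51, fixed_per_day=200.0):
    """Cost model C = 0.51x + 200: per-mile rate plus a fixed daily fee."""
    return rate_per_mile * miles + fixed_per_day

cost = daily_cost(137)
# 0.51 * 137 + 200 = 69.87 + 200, so cost rounds to 269.87
```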
https://revisionworld.com/a2-level-level-revision/maths/statistics/binomial-distribution | ## Binomial Distribution
If a discrete random variable X has the following probability density function (p.d.f.), it is said to have a binomial distribution:
• P(X = x) = nCx q^(n-x) p^x, where q = 1 - p
p can be considered as the probability of a success, and q the probability of a failure.
Note: nCr (“n choose r”) is more commonly written $\binom{n}{r}$, but I shall use the former because it is easier to write on a computer. It means the number of ways of choosing r objects from a collection of n objects (see permutations and combinations).
If a random variable X has a binomial distribution, we write X ~ B(n, p) (~ means ‘has distribution…’).
n and p are known as the parameters of the distribution (n can be any integer greater than 0 and p can be any number between 0 and 1). All random variables with a binomial distribution have the above p.d.f., but may have different parameters (different values for n and p).
Example
A coin is thrown 10 times. Find the probability density function for X, where X is the random variable representing the number of heads obtained.
The probability of throwing a head is ½ and the probability of throwing a tail is ½. Therefore, the probability of throwing 8 tails is (½)^8.
If we throw 2 heads and 8 tails, we could have thrown them HTTTTTTHTT, or TTHTHTTTTT, or in a number of other ways. In fact, the total number of ways of throwing 2 heads and 8 tails is 10C2 (see the permutations and combinations section).
Hence the probability of throwing 2 heads and 8 tails is 10C2 × (½)^2 × (½)^8. As you can see this has a Binomial distribution, where n = 10, p = ½.
You can see, therefore, that the p.d.f. is going to be: P(X = x) = 10Cx (½)^(10-x) (½)^x.
From this, we can work out the probability of throwing, for example, 3 heads (put x = 3).
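For instance, the p.d.f. can be evaluated directly with Python's `math.comb` for nCx (a sketch; the function name is made up). Putting x = 3 gives the probability of exactly 3 heads:

```python
from math import comb

def binom_pmf(x, n, p):
    """P(X = x) = nCx * p^x * (1 - p)^(n - x) for X ~ B(n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

prob_3_heads = binom_pmf(3, 10, 0.5)
# 10C3 * (1/2)^10 = 120/1024 = 0.1171875
```

As a sanity check, summing the p.d.f. over x = 0, …, 10 gives 1, as it must for any probability distribution.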
Expectation and Variance
If X ~ B(n,p), then the expectation and variance is given by:
• E(X) = np
• Var(X) = npq
Example
In the above example, what is the expected number of heads thrown?
E(X) = np
Now in the above example, p = probability of throwing a head = ½ . n = number of throws = 10
Hence expected number of heads = 5.
This is what you would expect: if you throw a coin 10 times you would expect 5 heads and 5 tails on average.
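The formulas E(X) = np and Var(X) = npq can be confirmed by summing over the p.d.f. directly, reusing the coin example with n = 10, p = ½ (a sketch for illustration):

```python
from math import comb

n, p = 10, 0.5
q = 1 - p

# Full p.d.f. of X ~ B(10, 1/2)
pmf = {x: comb(n, x) * p**x * q**(n - x) for x in range(n + 1)}

mean = sum(x * px for x, px in pmf.items())                  # E(X) = sum x*P(X=x)
variance = sum((x - mean) ** 2 * px for x, px in pmf.items())  # Var(X) = E[(X-mean)^2]
# mean == n*p == 5.0 and variance == n*p*q == 2.5
```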
https://math.stackexchange.com/questions/2721756/how-to-pronounce-this-notation | # How to pronounce this notation
Reading this paper: https://arxiv.org/pdf/1608.04644.pdf, the term $$L_\infty$$ is referenced.
How is $L_\infty$ pronounced? Is it just "L infinity"?
https://eccc.weizmann.ac.il/keyword/18339/ | Under the auspices of the Computational Complexity Foundation (CCF)
Reports tagged with De Morgan Formula:
TR21-002 | 8th January 2021
Pooya Hatami, William Hoza, Avishay Tal, Roei Tell
#### Fooling Constant-Depth Threshold Circuits
We present new constructions of pseudorandom generators (PRGs) for two of the most widely-studied non-uniform circuit classes in complexity theory. Our main result is a construction of the first non-trivial PRG for linear threshold (LTF) circuits of arbitrary constant depth and super-linear size. This PRG fools circuits with depth $d\in\mathbb{N}$ ... more >>>
ISSN 1433-8092 | Imprint
http://www.hpmuseum.org/forum/showthread.php?mode=linear&tid=8797&pid=79993 | newRPL: Alpha demo 0.9 released [UPDATED 2017-10-25]
09-22-2017, 09:49 PM (This post was last modified: 09-22-2017 09:50 PM by Neve.)
Post: #141
Neve Member Posts: 219 Joined: Oct 2014
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-22-2017 09:32 PM)Luigi Vampa Wrote: Gents, "mea culpa", I started this little mess. I should have read Neve's posts more carefully. Arrrgh, sometimes I have a big-mouth :.!
Big thanks :0)
PS: Dear Claudio, I am so sorry :./
Don’t worry Luigi, I’m good with this. No need to apologize (at least not to me.)
I also agree that this argument is totally useless, childish and counterproductive.
That’s the reason I’m not even answering and arguing anymore about it.
We all have made our point of view clear. Let’s leave it at that.
I hope my suggestions will be heard in the future as I would really like to be able to install it and finally use it.
I’m sure I’m not the only one to think that.
Think positive!
Engineer & Senior IT Executive
Tall-Key HP41CL, CV, CX, 82162A Printer, 82143A Printer, 82160A HP-IL, 2 Card-Readers, Modules, Wand, HP50g.
09-22-2017, 09:57 PM
Post: #142
Claudio L. Senior Member Posts: 1,458 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
Excellent, back to the technical stuff...
(09-22-2017 08:29 PM)Neve Wrote: Nope, that I am definitely not indeed. I’m a system engineer and IT executive. I do like many things. But what I do not like is the interface, which I find too crowded with two menu lines that, to me, are useless. I understand how it can be appealing to you or to some people. But on a screen that is already not that large, it’s not something I appreciate. Especially when you get older and smaller fonts are becoming harder to read under low light conditions. Under normal conditions when you actually want smaller fonts, I can’t use flags to make the change, because these flags are not available.
I think you misunderstood. There's no flag to change the font, there's several commands. The fonts have names, not flag numbers. There's no "mini font" and "large font" like there used to be. Now there's two 5-pixel fonts, two 6-pixel fonts, one 7-pixel font and four 8-pixel fonts you can pick and choose freely to use on every part of the screen. That's what's in ROM; users can also modify and use their own fonts freely. You can install/uninstall any font like on a PC and use it independently on any part of the screen.
You can have different fonts for: First level of stack, other levels of the stack, menus, status area, command line editor, plots and applications (Forms, soon to be).
You don't like the second menu? No problem, just press On-VAR for a second and the second menu is gone. Not so crowded anymore.
Menu font too small for you? Just use the 7-pixel font for the menus, looks great.
Menus too black for you? flags -15 and -16 will turn them white.
(09-22-2017 08:29 PM)Neve Wrote: Speaking of flags (but not only), as a general rule, I don’t like when options are removed and things are imposed. I like to have the ability to choose. If you want to make something better, and you have in more ways than I can list, don’t impose what works “for you”, or what doesn’t bother you, on others if you want to reach a broader public. Remember that, as a developer, you need to adapt your work to the need of the users not ask the user to get use to whatever vision you have of what they need. That sounds more like the Apple/Steve Jobs approach: “tell us what you don’t need, and will make sure you’ll need and impose them”
You are again misinformed. Flags that were removed were replaced with more powerful functionality. For example, on the 50g you can control a few modes to display numbers. On newRPL you have full control:
There's 3 formats active at all times, big numbers, normal numbers and small numbers. You can define the format freely for the 3 formats (they can also be the same), you can have big numbers presented in SCI or ENG format, while smaller numbers displayed normally, and tiny numbers back to SCI format. You control the number of digits, separator characters, everything you can possibly change on a number's presentation. But... not with flags, there's a SETNFMT command that takes the format in a much better way.
Want another example? Coordinate system: RECT/POLAR. The flag no longer exist, because it's a property of the numbers/vectors themselves. You can have complex numbers in rectangular or polar coordinates simultaneously on the stack. Convert to/from polar/rect affects only the number/vector you want to convert, not all of them.
More freedom of choice, not less.
Another one? Base BIN/HEX/OCT/DEC no longer exists as flags. An integer number can have any base, it's a property of the number, not a system-wide flag. So you can be doing calculations in hexa, then throw in a number in binary and every number stays in the base you typed it. Again, more choice, not less.
I'm not sure which flags you refer to when you say newRPL imposes things (I'd like more specific examples), but I think you just got it all backwards. I'm with you when it comes to more freedom, and I don't think newRPL imposes much of anything, it's an environment as open or more than the original. The whole point is to improve on the original, not make it worse.
(09-22-2017 08:29 PM)Neve Wrote: That includes the keyboard changes. Because, on top of having to learn (again) a somewhat different system rebuilt from the ground up, I don’t need nor want to have to remember what keys does what, just because they are now wired differently. That, to me, is another huge minus.
In other words: Freedom and flexibility.
I hope this all makes sense.
Makes a lot of sense, but it's like you are talking about a different project.
The keyboard is quite faithful to the original layout, with one exception: The 6 keys now used for the second menu.
The main relocation was the STO key, moved to HIST. That's what everybody struggles with (myself included, but not anymore).
The other keys are of no consequence, since there's no APPS menu, no FILER, etc. in newRPL (as of now, there might be a filer in the future)
Everything else in those keys was moved to the cursor keys. Now you can define the selection and do cut/copy/paste with the cursors, as well as UPDIR and HOME.
Defining the selection with the cursors is more natural, like on a PC, and efficient (usually you need to move the cursor to the position where the selection starts, so why not mark the block with the same key?).
By the way, cut/copy/paste is way more powerful than the 50g, works on the stack, the interactive stack and the editor, so you can put an object from the stack on the clipboard, then paste it in the editor as text and vice versa. And I keep repeating the same thing: more freedom.
Everything else in the keyboard stayed the same, all the symbols, trig and hyperbolic functions, even the UNITS menu at number 6, etc.
The Alphanumeric mode was revamped but it's still the old Alpha mode with alpha lock selected and a few visual hints.
So once you remember where STO is, everything else is roughly the same, save that you now have 6 keys dedicated to the second menu.
There's new functionality in some keys, which is not painted on the keyboard but that doesn't count as "relocation". For example now you have 8-level stack undo/redo on the left cursor which is extremely useful. You can undo menus as well with the UNDO key (each menu separately) many levels, unlike the old single-level hold-PREV. More freedom, more freedom.
So the key assignment is not as crazy as you paint it, it's mainly STO and the cursors that moved. I use a stock 50g and one with newRPL side by side every day and I can switch back and forth with no trouble. I do miss the stack undo on the stock 50g, it's one of the features that I use most.
In the end: like you said, freedom and flexibility.
09-22-2017, 10:25 PM (This post was last modified: 09-22-2017 10:26 PM by Neve.)
Post: #143
Neve Member Posts: 219 Joined: Oct 2014
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-22-2017 09:57 PM)Claudio L. Wrote: Excellent, back to the technical stuff...
So the key assignment is not as crazy as you paint it, it's mainly STO and the cursors that moved. I use a stock 50g and one with newRPL side by side every day and I can switch back and forth with no trouble. I do miss the stack undo on the stock 50g, it's one of the features that I use most.
In the end: like you said, freedom and flexibility.
Ok. I’ll give it another try with your suggestions. I hope my mind will change for the better!
As for the stack undo, I hope you’ll implement that in the future.
So where did the HIST go key?
PS: Even though I do love the 50g for being the most “powerful” real RPN pocket computer out there I do have to admit that my 2 main calculators are an HP41C (soon to become a CL) and an HP41CX, which are my preferred calculators of all times. Hell, I grew up with these!
On a side note, would it be possible to have a mode where the stack would act like the one on the 41? For example, on a 41 typing 25->Enter->* would give you 625.0000. But I guess that wouldn’t be RPL anymore....
09-22-2017, 10:33 PM
Post: #144
Eric Rechlin Member Posts: 186 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
One advantage of the way flags work is that not only do they let you set values for configurations, but they also allow you to programmatically query what the current configuration is.
I apologize in advance if what I am about to ask is irrelevant (I haven't yet had the time to figure out how to use newRPL!), but does NewRPL provide the ability to programmatically determine things like the current font size and the current default base? If so, then there is absolutely no need for flags for those like the 49/50 have. If not, then that might be an area where some improvement could be made.
Also, after I installed newRPL 0.9a on Windows 10, the only thing it added to its Start menu group was the Uninstall shortcut -- it didn't create a shortcut to run the actual newRPL program from. I don't know if this is a problem with my system or a bug in the installer, but I thought I'd point it out.
09-23-2017, 12:17 AM
Post: #145
Claudio L. Senior Member Posts: 1,458 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-22-2017 10:25 PM)Neve Wrote: Ok. I’ll give it another try with your suggestions. I hope my mind will change for the better!
As for the stack undo, I hope you’ll implement that in the future.
So where did the HIST go key?
Hehe, another confusion. I meant I miss the stack undo on the stock firmware when I use my other 50g, because when I use newRPL I use it all the time, just press left on the stack to go to the previous state of the stack, right-shift and left will do REDO. It's handy.
I always thought HIST was a wasted key, so there's no HIST for now. If somebody feels the need for it, can request it and will be added. What I do miss is CMD sometimes (to recall the last 4 things you typed) so it will be implemented (no time frame). UNDO on the HIST key is now the menu UNDO (rs-HIST does one menu, rs-hold-HIST does the other), while the stack UNDO stays at the left cursor, more accessible and intuitive, you just "go back".
(09-22-2017 10:25 PM)Neve Wrote: PS: Even though I do love the 50g for being the most “powerful” real RPN pocket computer out there I do have to admit that my 2 main calculators are an HP41C (soon to become a CL) and an HP41CX, which are my preferred calculators of all times. Hell, I grew up with these!
On a side note, would it be possible to have a mode where the stack would act like the one on the 41? For example, on a 41 typing 25->Enter->* would give you 625.0000. But I guess that wouldn’t be RPL anymore....
On an RPL machine, pressing Enter compiles the text in the command line and puts the result in the stack. The text is gone at this point so * has only one argument to work with.
While there could be a way of "fooling" the system to recreate the RPN effect, storing the 25 on a temporary place or something, the behavior is not really defined when you put more than one object on the command line. If you type 1 2 3, then Enter, *, what is * supposed to do? { 1 2 3 } * { 1 2 3 } ? or 2*3? or 3*3?. This is the only excuse why an RPL machine can't work like an RPN does.
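The classic behavior being contrasted here can be illustrated with a toy model of the 4-level RPN stack (a simplified sketch with made-up names, ignoring stack-lift-disable subtleties): on an RPN machine ENTER duplicates X, which is why 25 ENTER * gives 625.

```python
class RPNStack:
    """Toy 4-level RPN stack: registers X, Y, Z, T (index 0 is X)."""

    def __init__(self):
        self.regs = [0.0, 0.0, 0.0, 0.0]

    def number(self, value):
        # Keying in a number lifts the stack and places the value in X
        self.regs = [float(value)] + self.regs[:3]

    def enter(self):
        # ENTER duplicates X into Y; the old T is lost off the top
        self.regs = [self.regs[0]] + self.regs[:3]

    def multiply(self):
        # A binary op consumes X and Y; T replicates downward
        x, y, z, t = self.regs
        self.regs = [y * x, z, t, t]

calc = RPNStack()
calc.number(25)
calc.enter()
calc.multiply()
# calc.regs[0] == 625.0
```

On an RPL machine, by contrast, there is no fixed 4-level stack and ENTER only compiles the command line, which is exactly the ambiguity described above.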
But, I guess a classic RPN mode could be done, limiting the stack depth, and limiting input to a single number at a time. Actually, newRPL is an open core, a single library could define an entire RPN calculator, with all its opcodes. All it needs is an accompanying GUI to match the RPN mode.
At one time I looked at the source code of the WP34s, to see how hard it would be to put it into a newRPL library, and have a WP34 mode inside newRPL. Before you get your hopes too high, it would be a lot of work, but an integrated project like this could be just what the community needs.
Imagine one firmware running on popular hardware (now 50g, perhaps Prime in the future), that can mimic the best RPN and the best RPL calculator. You choose the mode of operation depending on what you are doing that day, just hit one key and you are in the other mode.
For developers, it means less divided efforts, we all work on the same core and make improvements to both modes.
But for now I'm just dreaming out loud... newRPL is still incomplete, it needs to be completed before we can think of implementing those "extras".
09-23-2017, 12:27 AM
Post: #146
Claudio L. Senior Member Posts: 1,458 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-22-2017 10:33 PM)Eric Rechlin Wrote: One advantage of the way flags work is that not only do they let you set values for configurations, but they also allow you to programmatically query what the current configuration is.
I apologize in advance if what I am about to ask is irrelevant (I haven't yet had the time to figure out how to use newRPL!), but does NewRPL provide the ability to programmatically determine things like the current font size and the current default base? If so, then there is absolutely no need for flags for those like the 49/50 have. If not, then that might be an area where some improvement could be made.
Completely relevant. Everything that's not done with flags has commands to set and get the configuration, including number format, fonts, etc. As a matter of fact, there's no GUI yet to select anything, so everything *is* programmatic at this point.
Now you mentioned the default base: there's no default base, each number can have its own base, the system will not add it for you. You specifically have to give a base: #3h.
(09-22-2017 10:33 PM)Eric Rechlin Wrote: Also, after I installed newRPL 0.9a on Windows 10, the only thing it added to its Start menu group was the Uninstall shortcut -- it didn't create a shortcut to run the actual newRPL program from. I don't know if this is a problem with my system or a bug in the installer, but I thought I'd point it out.
That's strange. I created the installer using the open source Excelsior installer, and tested it before uploading. It created a desktop icon, and inside the menu group there's the uninstaller and the actual program. I'm also using Windows 10 on this machine. Perhaps uninstall and try again?
If not, I'll perhaps have to start using different installer-creation software.
By the way, I forgot to mention to everybody reading this thread that I updated both the simulator and the ROMs to build 916, and named it Alpha 0.9a.
It's far more stable than 0.9 so I thought to make an official release.
09-23-2017, 01:14 AM
Post: #147
Neve Member Posts: 219 Joined: Oct 2014
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-23-2017 12:17 AM)Claudio L. Wrote: Hehe, another confusion. I meant I miss the stack undo on the stock firmware when I use my other 50g, because when I use newRPL I use it all the time, just press left on the stack to go to the previous state of the stack, right-shift and left will do REDO. It's handy.
Fair enough.
(09-23-2017 12:17 AM)Claudio L. Wrote: I always thought HIST was a wasted key, so there's no HIST for now. If somebody feels the need for it, can request it and will be added.
I don’t feel that way about the HIST key, and I like keys to behave according to what’s actually printed on them.
(09-23-2017 12:17 AM)Claudio L. Wrote: What I do miss is CMD sometimes (to recall the last 4 things you typed) so it will be implemented (no time frame).
Yes, that would be nice.
(09-23-2017 12:17 AM)Claudio L. Wrote: UNDO on the HIST key is now the menu UNDO (rs-HIST does one menu, rs-hold-HIST does the other), while the stack UNDO stays at the left cursor, more accessible and intuitive, you just "go back".
I’ll have to see how this works to forge myself a better opinion...
(09-23-2017 12:17 AM)Claudio L. Wrote: On an RPL machine, pressing Enter compiles the text in the command line and puts the result in the stack. The text is gone at this point so * has only one argument to work with.
While there could be a way of "fooling" the system to recreate the RPN effect, storing the 25 in a temporary place or something, the behavior is not really defined when you put more than one object on the command line. If you type 1 2 3, then Enter, *, what is * supposed to do? { 1 2 3 } * { 1 2 3 }? Or 2*3? Or 3*3? This is the only reason why an RPL machine can't work the way an RPN one does.
But, I guess a classic RPN mode could be done, limiting the stack depth, and limiting input to a single number at a time. Actually, newRPL is an open core, a single library could define an entire RPN calculator, with all its opcodes. All it needs is an accompanying GUI to match the RPN mode.
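The Enter-semantics difference described above can be sketched in a few lines of Python (a toy model, not newRPL code; `rpn` and `rpl` are hypothetical helpers):

```python
def rpn(keys):
    """Classic RPN, simplified: every keystroke acts on the stack
    immediately; ENTER terminates digit entry and leaves a copy behind
    (stack lift), which is what makes '25 ENTER *' square a number."""
    stack = []
    for k in keys:
        if k == "ENTER":
            stack.append(stack[-1])          # duplicate the just-typed X
        elif k == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(int(k))             # number entry
    return stack

def rpl(command_line, op):
    """RPL: ENTER first compiles the *whole* command line into objects;
    an operator pressed afterwards only sees the resulting stack."""
    stack = [int(tok) for tok in command_line.split()]   # ENTER
    if op == "*":
        if len(stack) < 2:
            raise IndexError("Too Few Arguments")        # RPL-style error
        b, a = stack.pop(), stack.pop()
        stack.append(a * b)
    return stack

print(rpn(["25", "ENTER", "*"]))   # [625] -- the RPN squaring trick
print(rpl("1 2 3", "*"))           # [1, 6] -- '*' only sees the top two
```

Under this toy model, `25 ENTER *` squares the number on an RPN machine but raises the equivalent of `Too Few Arguments` on an RPL one.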
At one time I looked at the source code of the WP34s, to see how hard it would be to put it into a newRPL library, and have a WP34 mode inside newRPL. Before you get your hopes too high, it would be a lot of work, but an integrated project like this could be just what the community needs.
Imagine one firmware running on popular hardware (now 50g, perhaps Prime in the future), that can mimic the best RPN and the best RPL calculator. You choose the mode of operation depending on what you are doing that day, just hit one key and you are in the other mode.
For developers, it means less divided efforts, we all work on the same core and make improvements to both modes.
But for now I'm just dreaming out loud... newRPL is still incomplete, it needs to be completed before we can think of implementing those "extras".
I can see you’re moving in the right direction!! That would definitely be awesome to have, let’s say, an entire HP-41 with all its modules running on a 50g or on a Prime!!!
That would be the next best thing!!!
Engineer & Senior IT Executive
Tall-Key HP41CL, CV, CX, 82162A Printer, 82143A Printer, 82160A HP-IL, 2 Card-Readers, Modules, Wand, HP50g.
09-23-2017, 04:50 AM
Post: #148
Didier Lachieze Senior Member Posts: 1,101 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-23-2017 01:14 AM)Neve Wrote: That would be definitely awesome to have, let’s say, an entire HP41 with all its modules running on a 50g or on a Prime!!!
Do you know that you can run HP-41X on your 50G?
09-23-2017, 05:33 AM
Post: #149
Neve Member Posts: 219 Joined: Oct 2014
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-23-2017 04:50 AM)Didier Lachieze Wrote:
(09-23-2017 01:14 AM)Neve Wrote: That would be definitely awesome to have, let’s say, an entire HP41 with all its modules running on a 50g or on a Prime!!!
Do you know that you can run HP-41X on your 50G?
Are you the developer of this emulator?
I don’t really need it as I own an hp41 already, which has been my calculator of choice going on 33 years, but it’s nice to know.
Thank You.
09-23-2017, 05:44 AM
Post: #150
Didier Lachieze Senior Member Posts: 1,101 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-23-2017 05:33 AM)Neve Wrote: Are you the developer of this emulator?
No. But the developer is a member of this forum and participates in discussions from time to time.
09-23-2017, 06:01 AM (This post was last modified: 09-23-2017 06:02 AM by Neve.)
Post: #151
Neve Member Posts: 219 Joined: Oct 2014
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-23-2017 05:44 AM)Didier Lachieze Wrote:
(09-23-2017 05:33 AM)Neve Wrote: Are you the developer of this emulator?
No. But the developer is a member of this forum and participates in discussions from time to time.
Cool
09-23-2017, 01:42 PM (This post was last modified: 09-23-2017 01:43 PM by Claudio L..)
Post: #152
Claudio L. Senior Member Posts: 1,458 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-22-2017 10:33 PM)Eric Rechlin Wrote: Also, after I installed newRPL 0.9a on Windows 10, the only thing it added to its Start menu group was the Uninstall shortcut -- it didn't create a shortcut to run the actual newRPL program from. I don't know if this is a problem with my system or a bug in the installer, but I thought I'd point it out.
I just confirmed this on a different Windows 10 machine that was allowed to update to the latest version. Not only were those shortcuts not initially created as they are on my other Windows 10 machine, but the entire group "newRPL Desktop", which contains only the uninstaller, gets removed from the Start menu after I see it for the first time. I disabled the real-time virus protection to see if it was removing it for a reason, but no, same effect. And if I manually go to the folder where Start menu groups are stored, the newRPL Desktop group isn't there, as if it had been deleted on purpose by some "dark magic". Perhaps because the application isn't digitally signed? But there are no logs of Windows Defender removing anything, and it allows the application to install and lets you use it if you manually create the shortcut. It just annoys the user by not letting the installer create a shortcut, and removes it from the Start menu on purpose? The fact that a human being coded that behavior into the OS is just out of this world.
I also found this thread on Microsoft's own website, which is very old but seems unresolved.
The laptop I used to create the installer wasn't allowed to do latest updates because of hardware driver issues (Win10 removes and replaces the touchpad drivers with a non-functioning one), and the installer works perfectly there.
I'll double check, perhaps the Excelsior installer team found a workaround already. Stay tuned.
09-23-2017, 02:29 PM
Post: #153
Claudio L. Senior Member Posts: 1,458 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-23-2017 12:17 AM)Claudio L. Wrote: I always thought HIST was a wasted key, so there's no HIST for now. If somebody feels the need for it, can request it and will be added.
I don't think I expressed myself clearly in the sentence above. HIST in RPN mode invokes the interactive stack, which is also on the UP key, so it's a wasted key. newRPL already has the interactive stack on the UP cursor, no need for another shortcut. The HIST functionality I mentioned that is not implemented is the one that gets activated in algebraic mode, where you can go back through the stack (which contains both what you typed and the results) and bring that back to the edit line. What I mentioned might be implemented, if people want it, is a more interactive version of CMD, more like HIST: preserving the last few entered command lines (default 8?, user-selectable) and providing a way to interactively select one of them and bring it back to the command line to re-type it. That could be useful (though a big memory hog if you edit large programs repeatedly).
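The CMD/HIST idea sketched above — keep the last few entered command lines, with a user-selectable depth defaulting to 8 — is essentially a bounded ring buffer. A minimal Python sketch (hypothetical design, not newRPL code):

```python
from collections import deque

class CommandHistory:
    """Bounded history of entered command lines; oldest entries fall off."""

    def __init__(self, depth=8):
        self._lines = deque(maxlen=depth)    # ring buffer

    def record(self, line):
        if line:                             # skip empty command lines
            self._lines.append(line)

    def recall(self, n=1):
        """Return the n-th most recent line (1 = last entered)."""
        return self._lines[-n]

hist = CommandHistory(depth=4)
for cmd in ["1 2 +", "DUP", "SWAP", "<< DROP >>", "3 NEG"]:
    hist.record(cmd)

print(hist.recall(1))       # 3 NEG
print(list(hist._lines))    # the oldest line, '1 2 +', was discarded
```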
09-24-2017, 04:58 PM
Post: #154
The Shadow Member Posts: 191 Joined: Jan 2014
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
I have to agree that HIST is a wasted key on the stock 50g - in fact, I long ago put a user key over it to something actually useful. (A menu with a bunch of list commands that I use frequently.)
09-25-2017, 10:37 AM
Post: #155
Nigel (UK) Senior Member Posts: 326 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
NEG quirk:
• Enter
Code:
<< 23 45 67 >>
onto the stack.
• Use downarrow to bring it into the command line for editing.
• Move the cursor to the "2" and press +/-.
• A minus sign appears after the 67!
• After this, everything works fine.
Is this intentional? Probably not!
Nigel (UK)
09-25-2017, 12:59 PM (This post was last modified: 09-25-2017 01:28 PM by Claudio L..)
Post: #156
Claudio L. Senior Member Posts: 1,458 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-25-2017 10:37 AM)Nigel (UK) Wrote: NEG quirk:
• Enter
Code:
<< 23 45 67 >>
onto the stack.
• Use downarrow to bring it into the command line for editing.
• Move the cursor to the "2" and press +/-.
• A minus sign appears after the 67!
• After this, everything works fine.
Is this intentional? Probably not!
Nigel (UK)
Hmmmm... seems like the +/- key is not multiline ready. I think it's counting the offset from the start of text, but then applying it to the current line. Thanks for the report, will be fixed ASAP.
EDIT: The above was wrong. There was a bug that assigned a pointer to a string that later moved in memory, without re-reading the pointer. It's fixed, will come out in the next update. Good catch.
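For anyone curious what "a pointer to a string that later moved in memory" looks like as a bug class: in C, resizing a buffer may move it, leaving any saved pointer dangling. Python's buffer protocol makes the equivalent mistake visible — resizing a `bytearray` while a `memoryview` still points into it raises `BufferError` instead of silently leaving a stale reference (an analogy only, not the actual newRPL bug):

```python
buf = bytearray(b"<< 23 45 67 >>")   # the editor's text buffer
view = memoryview(buf)               # like keeping a raw pointer into it

try:
    buf.extend(b" NEG")              # resizing may move the storage...
    refused = False
except BufferError as err:
    refused = True                   # ...so CPython refuses while a view is live
    print("refused:", err)

view.release()                       # drop the 'pointer'; resizing is allowed again
buf.extend(b" NEG")
print(buf.decode())
```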
09-25-2017, 07:52 PM
Post: #157
Joe Horn Senior Member Posts: 1,425 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-24-2017 04:58 PM)The Shadow Wrote: I have to agree that HIST is a wasted key on the stock 50g...
I think the only time it's needed (with no alternative available) is in this specific scenario: You start the interactive stack, then hit EDIT, then want to ECHO something onto the command line from elsewhere on the stack. The only way to do that is to press HIST. ... I think. But I'd be happy to be informed otherwise.
Quote:... in fact, I long ago put a user key over it to something actually useful. (A menu with a bunch of list commands that I use frequently.)
I use ECHO so infrequently that I did the same as you, but my assignment there is UNDO (more accurately, TakeOver -41.3 KEYEVAL) because I use UNDO a lot.
X<> c
-Joe-
09-26-2017, 12:03 AM
Post: #158
Claudio L. Senior Member Posts: 1,458 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-25-2017 07:52 PM)Joe Horn Wrote:
(09-24-2017 04:58 PM)The Shadow Wrote: I have to agree that HIST is a wasted key on the stock 50g...
I think the only time it's needed (with no alternative available) is in this specific scenario: You start the interactive stack, then hit EDIT, then want to ECHO something onto the command line from elsewhere on the stack. The only way to do that is to press HIST. ... I think. But I'd be happy to be informed otherwise.
I didn't know that! I had to take the calc and test it. Sadly, it proves I never used it before in all these years.
But this whole discussion helped identify a couple of use cases that are missing in newRPL's current interactive stack:
* Editing an item in place is not supported (yet). For the time being, to edit an object you have to drop it to level 1, exit the interactive stack and edit it, then use the interactive stack to move it back into place. This is cumbersome; EDIT needs to be implemented soon.
* The ECHO use case, where you can echo stack items to the edit line is also not implemented. The current workaround is to enter the interactive stack, select one or more objects and copy them to the clipboard. Then you can paste into the editor.
I guess I need to figure out a way to open the interactive stack from the command line, and vice versa, which is not as straightforward as it might seem.
09-26-2017, 04:58 PM
Post: #159
The Shadow Member Posts: 191 Joined: Jan 2014
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
(09-26-2017 12:03 AM)Claudio L. Wrote:
(09-25-2017 07:52 PM)Joe Horn Wrote: I think the only time it's needed (with no alternative available) is in this specific scenario: You start the interactive stack, then hit EDIT, then want to ECHO something onto the command line from elsewhere on the stack. The only way to do that is to press HIST. ... I think. But I'd be happy to be informed otherwise.
I didn't know that!
I didn't either! It doesn't seem like something one would use often, but it is definitely handy.
09-27-2017, 12:09 PM
Post: #160
compsystems Senior Member Posts: 1,114 Joined: Dec 2013
RE: newRPL: Alpha demo 0.9 released [UPDATED 2017-09-15]
Hello, could someone please create an article about newRPL, highlighting the differences from RPL?
https://en.wikipedia.org/wiki/newRPL_(pr..._language)
The good enough is the enemy of the excellent.
https://ora.ox.ac.uk/objects/uuid:09dad8c8-ab57-4029-9e3f-2cc0acb95402 | Report
### Subsumption of concepts in DL FL0 for (cyclic) terminologies with respect to descriptive semantics is PSPACE-complete
Abstract:
We prove the PSPACE-completeness of the subsumption problem for (cyclic) terminologies with respect to descriptive semantics in a simple Description Logic $\mathcal{FL}_0$, which allows for conjunctions and universal value restrictions only, thus solving a problem that had been open for more than ten years.
### Authors
Publisher:
Max-Planck-Institut für Informatik
ISSN:
0946-011X
UUID:
Local pid:
cs:880
Deposit date:
2015-03-31 | 2022-07-01 10:50:00 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9765961170196533, "perplexity": 9608.820239574461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103940327.51/warc/CC-MAIN-20220701095156-20220701125156-00442.warc.gz"} |
http://koreascience.or.kr/article/JAKO201821142174043.page | # Ingestion Dose Evaluation of Korean Based on Dynamic Model in a Severe Accident
• Kwon, Dahye (Department of Nuclear Engineering, Hanyang University) ;
• Hwang, Won-Tae (Korea Atomic Energy Research Institute) ;
• Jae, Moosung (Department of Nuclear Engineering, Hanyang University)
• Accepted : 2018.05.04
• Published : 2018.06.30
#### Abstract
Background: In terms of Level 3 probabilistic safety assessment (Level 3 PSA), ingestion of food that has been exposed to radioactive materials is important for assessing the intermediate- and long-term radiological dose. Because the ingestion dose depends considerably on the agricultural and dietary characteristics of each country, the reliability of the assessment results may be diminished if the characteristics of a foreign country are used instead. Thus, this study evaluates and analyzes the ingestion dose of Koreans during a severe accident by fully considering the available agricultural and dietary characteristics of Korea.

Materials and Methods: This study uses COMIDA2, a program based on a dynamic food chain model. Parameters appropriate to Korean characteristics were set so that the ingestion dose specific to Koreans could be evaluated. The results were analyzed by accident date and food category with regard to $^{137}$Cs.

Results and Discussion: The dose and the contribution of each food category showed distinct differences depending on the accident date. In particular, the ingestion dose during the first and second years differed considerably by accident date. After the third year, however, the effect of foliar absorption was negligible, and the doses followed the order of the root-uptake rates of the food categories.

Conclusion: In this study, the agricultural and dietary characteristics of Korea were analyzed, and the ingestion dose of Koreans during a severe accident was evaluated using COMIDA2. By considering these country-specific characteristics, the results of this study should contribute significantly to the reliability of the Level 3 PSA.
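As a rough illustration of the quantities involved (this is not the COMIDA2 dynamic food chain model, which tracks the full soil-plant-animal transfer; the half-life and the ICRP-72 adult ingestion dose coefficient below are assumed literature values, and the activity and intake figures are invented for the example):

```python
import math

# Assumed literature values (not taken from the paper):
HALF_LIFE_CS137_Y = 30.17      # 137Cs half-life in years
DOSE_COEFF_SV_PER_BQ = 1.3e-8  # ICRP-72 adult ingestion dose coefficient, 137Cs

def activity(a0_bq_per_kg, years):
    """Decay-corrected activity concentration after `years`."""
    return a0_bq_per_kg * math.exp(-math.log(2) * years / HALF_LIFE_CS137_Y)

def ingestion_dose_sv(a_bq_per_kg, intake_kg_per_y):
    """Committed effective dose from one year of intake of one food."""
    return a_bq_per_kg * intake_kg_per_y * DOSE_COEFF_SV_PER_BQ

# Invented example: a food at 1000 Bq/kg at deposition, consumed a year
# later at 50 kg per year.
a1 = activity(1000.0, 1.0)
print(round(a1, 1), "Bq/kg")
print(ingestion_dose_sv(a1, 50.0), "Sv")
```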
#### References
1. Abbott ML, Rood AS. COMIDA: a radionuclide food chain model for acute fallout deposition. Health Phys. 1994;66(1):3-33. https://doi.org/10.1097/00004032-199401000-00001
2. Hwang WT, Cho GS, Han MH. Development of a dynamic food chain model DYNACON and its application to Korean agricultural conditions. J. Nucl. Sci. Technol. 1998;35(6):454-461. https://doi.org/10.1080/18811248.1998.9733888
3. Korea Atomic Energy Research Institute. A Development of computer code for evaluating internal radiation dose through ingestion and inhalation pathways. KAERI/RR-998/90. 1990;152-157.
4. International Atomic Energy Agency. Handbook of parameter values for the prediction of radionuclide transfer in terrestrial and freshwater environments. IAEA Technical Reports Series no.472. 2010;147.
5. Idaho National Engineering and Environmental Laboratory. COMIDA input parameters and sample input files, Appendix A. NUREG/CR-6613. 1998;15-16.
6. International Commission on Radiation Protection. Age-dependent doses to the members of the public from intake of radionuclides-part 5 compilation of ingestion and inhalation coefficients. ICRP Publication 72, Ann. ICRP 26(1). 1995;1-91. | 2021-02-25 08:37:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5245130658149719, "perplexity": 6779.5947725565875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350846.9/warc/CC-MAIN-20210225065836-20210225095836-00037.warc.gz"} |
http://motls.blogspot.com/2012/05/iran-agw-useless-summits-in-baghdad.html | ## Friday, May 25, 2012 ... /////
### Iran, AGW: useless summits in Baghdad, Bonn
...and African migrants in Israel...
In recent days, two major cities starting with a "B" witnessed futile negotiations about important topics.
In Bonn, the ex-capital of West Germany, some bureaucrats were trying to prepare a post-Kyoto agreement to regulate the greenhouse gases that could be signed in Durban in December 2012. Surprisingly for them, they found out that:
Rich-poor divide reopens at UN climate talks
Some poor countries were promised by the environmentalist i.e. Marxist activists that they would be given piles of wealth after the civilized countries are deconstructed with the help of the global warming lies.
Suddenly, some Western negotiators realized at least the fact that whether or not it is a good idea to reduce the CO2 emissions, the CO2 emissions can't decrease if the developing countries will keep on developing: the CO2 production would simply shift to the currently developing world.
Of course, the poor folks just wanted the money and some of them wanted to damage the West: these were the only reasons why they would support the climate change hysteria a few years ago. They don't have the slightest interest in hurting themselves. So the talks can't lead anywhere.
One should try to appreciate what kind of money is proposed to be wasted on these insane policies. Greece is an astronomical black hole that eats a hundred billion euros of foreign "loans" (wasted donations) every year. I don't have to explain to you – especially if you possess some stocks – how devastating the impact of this small, unstable, and irrelevant piece of land on the world economy already is.
However, the carbon regulation policies already eat more than that. In the background, something is hurting the global economy at least as intensely as Greece, and no one talks about it. Of course, these policies haven't achieved any reductions of the CO2 emissions yet – not even a reduction of the exponential growth rate of these emissions. To do so, the expenses would have to increase by at least an order of magnitude. That's financially equivalent to imagining that all of Spain, Portugal, and Italy will become exactly as hopeless as Greece. Try to visualize how the world economy would behave in that case; yet some people have no problem deliberately proposing such suicidal policies. (While these policies already eat hundreds of billions of dollars from the world economy every year, the negotiators don't know where to find a few million dollars for their next conference.)
And this would only start to make a "detectable" impact on CO2 emissions. Maybe.
Global warming is being praised for saving the once-rare British Argus butterfly (and pretty much all other species in the world) from extinction. What a horror. To compensate for this fact, green activists repeat that global warming drives polar bears – whose number has quadrupled in recent 50 years – to extinction.
Needless to say, even if the growth of CO2 emissions were visibly slowed down or reverted, there won't be any observable and demonstrable influences of this "success" on the world climate, at least not for the next 50 years, and even if there were such influences, they wouldn't have a positive sign.
The people who still discuss carbon regulation policies in 2012 are insane psychopaths and must be treated in this way.
Instead of being given beds in asylums, these psychopaths improve our lives with stories about warming by 3.5 °C and similar stories every day. Note that not only is an error margin absent (the error margin of all such figures in the IPCC report is of order 100 percent, so it is really absurd to list two significant figures); they don't even say over what period of time the temperature increment could be 3.5 °C, or what the probability is that this speculation "could" turn out to be right (be sure that it's nearly zero for any time scale shorter than two centuries).
Rational reasoning has totally evaporated from these segments of the society. These negotiators, the activists pumping hormones into the movement, and the would-be journalists who hype all this nonsense in the media are dangerous lunatics.
Incidentally, if you want some good news, Bavaria's stock exchange will end the trading of carbon indulgences next month as the prices dropped 60% in a year and the trading volumes converged to practically zero.
Iran
There are of course other dangerous lunatics in the world, too. Baghdad, the Iraqi capital, has seen another round of useless and failed talks between Iran and the Western powers that want to stop the ever more dangerous enrichment of uranium in the Persian nuclear facilities. Today, the U.N. will announce that Iran has beaten its previous record and enriched the uranium up to 27 percent.
Make no mistake about it: there are lots of lunatics in Iran, starting with Ali Khamenei, the bigot-in-chief who officially calls himself the supreme leader. Some of them literally believe that Allah, the virtual bigot-in-chief, will give them all the virgins and that he demands that they eliminate the infidels. On the other hand, I must say that the rumors that Iran is thinking rationally are based on a rational core, too.
There is of course a lot of civilian, non-religious, non-military activity in Persia. It's a country that has been Westernized to a large extent, especially during the Shah's reign. Much of it hasn't evaporated yet. However, I am not talking just about semi-sensible semi-socialist industrialists or scientists who work at random places of Persia (some of whom I know).
I am talking about their negotiators, too.
It seems to me that they have totally understood the emptiness of much of Western politics, its inability to see the most obvious things, its focus on form instead of substance. So they sent Saeed Jalili, a top-tier Persian security bureaucrat, to Baghdad. I think this guy in particular may be more rational than many of his Western counterparts. His job is simple: to have a nice time with third-rate politicians such as the EU foreign minister Catherine Ashton and assure her that everything is fine and we may talk and talk and talk. We may talk next month in Moscow, too. It's so pleasant.
Meanwhile, the Iranians know very well that any delay is Persia's incremental victory. The reason is simple: the centrifuges are running. The research that allows Persia to develop and install ever more dangerous missiles with ever more dangerous warheads is recording some progress every month, too.
It seems to me that lots of people similar to Catherine Ashton simply have no clue. They're always ready to be led into thinking that the problem may be delayed by another month or another year and we're making progress towards security. Except that a rational observer, much like the Iranian religious bigots, sees very clearly that the progress is zero and, when the developments in Iran are counted, it's negative (=positive from the Islamic Republic's vantage point) after every new round of negotiations.
People like Jalili are capable of dancing with their counterparts such as Ashton in circles. Ashton enjoys the dancing so she believes that she's moving forward. But she's not. She's rotating in circles and the Persian centrifuges are doing the same thing. The only difference is that by rotating in circles, the Persian centrifuges are pushing the Iranian power-thirsty bigots forward while Ashton's dances with Jalili don't move us forward. One may actually see that Ashton herself has moved backwards; Iranians noticed that Ashton had a more conservative, Islamist-pleasing wardrobe than she had last time. Will she wear a burqah in Moscow next month?
Cross the Jordan River [the river of all the hopes], a courageous enough 1968 song celebrating emigration of Czechs after the 1968 Soviet Occupation (although formally talking about the ancient Jewish exodus), by Ms Helena Vondráčková who gradually became a pillar of the pro-Brezhnev totalitarian entertainment industry (but who made a big comeback after the 1989 Velvet Revolution, anyway). Funnily enough, the "peasant" with the mule e.g. around 1:33 is Mr Waldemar Matuška, a singer who really did emigrate in the 1980s. Ms Helena's fate was very different from that of her fellow singer, Ms Marta Kubišová, a much stronger moral character who was really harassed by the communists, had to work as a clerk in a vegetable shop, and couldn't protect her youth and image so well... I propose the song as an anthem for the Israeli (and American?) pilots who will be given the task to bomb Iran.
America has declared that it is ready to strike Iran, and I don't believe there is any room left for anything other than a military solution at this moment. Persia should be urged to evacuate the vicinities of the labs, especially in Qom, which may have to be treated with thermonuclear devices due to the stubborn, annoying, and dodgy fortification of the facility, and Obama should distribute the orders. If this operation "just" delayed the Iranian nuclear warheads by 5 years, it would be an amazing success that should be repeated regularly every few years.
Israel: immigration
Meanwhile, the ordinary people in Israel aren't thinking about Iran too much. Instead, what they see are African migrants. An Iranian nuclear bomb sent to Israel would be very visible but it doesn't mean that there can't exist much more gradual but possibly more harmful processes that may harm Israel – and, analogously, others.
What happened with Africa and Israel?
Last year, the West failed to protect Hosni Mubarak, one of the most enlightened leaders of an Arab country. In fact, most of the people in the West didn't even have the will to do so. Several groups of Islamic bigots (groups that, fortunately, dislike each other as well) took over Egypt, together with lots of anarchy. In particular, the Sinai Peninsula is a mess, too.
Lots of African migrants from Eritrea, Sudan, and a few other countries are using this chaotic land adjacent to Israel – with the help of local Bedouins – to penetrate into the most advanced country in the region. Of course that for most of them, the reasons are purely economical. In this respect, Israel faces the very same immigration problems as those we know from many Western countries. As the crime rate goes up (rapes etc.) and there are many other problems, strong words are being used. The Zionist dream is disappearing, and so on.
Israel is trying to erect a physical barrier on the border with the Sinai Peninsula (African workers are sometimes employed for the hard work) but it's not something you can do within an hour. A thousand new immigrants enter the Jewish state every day, sending the total number to 60,000 or so.
Let me say something. I believe that a civilized country should be able to deal with a problematic minority that represents 1% of the population. Many other countries are forced to solve similar or larger problems. So if the increase stopped, I do believe that decent Israelis should stop their hysteria about the Africans, too. I agree that the inability to tolerate 1% of a traditionally poorer race is a symptom of racism. On the other hand, one should introduce some policies that will guarantee that the percentage won't grow substantially above 1%. The Israeli Arabs already represent a sufficiently large source of problems for the country and Israel just can't afford "too much more" of such problems.
Worries about visible things – such as the nuclear Holocaust in the Middle East – may be popular and attractive for our imagination. But we shouldn't forget that there are many other, less spectacular and gradually creeping events and trends that may screw our lives and our civilization equally or more efficiently. So people should stop fighting (and wasting time and money on) virtual problems such as global warming and they should start to seriously discuss genuine problems such as destructive technologies in the hands of uncontrollable bigots or the uncontrollable inflow of illegal immigrants into various countries.
And that's the memo.
http://meetings.aps.org/Meeting/DFD09/SessionIndex3/?SessionEventID=113093 | ### Session EE: Biofluids III: General III - Flows and Diseases
Chair: James Brasseur, Pennsylvania State University
Room: 101E
#### EE.00001: Probing protein mechanical stability with controlled shear flows
Jonathan Dusting, Lorna Ashton, Justin Leontini, Ewan Blanch, Stavroula Balabani (Sunday, November 22, 2009, 4:15PM - 4:28PM)

Understanding and controlling protein aggregation or misfolding is of both fundamental and medical interest. The structural changes experienced by proteins in response to forces such as those generated within flows have not been well characterised, despite the importance of mechanics in many biological processes. By monitoring the structural conformation of proteins in different concentric cylinder flows using Raman spectroscopy we have quantified the relative stability of $\beta$-sheet dominated proteins compared with those containing a greater proportion of $\alpha$-helix. To ensure that the fluid stresses are quantified accurately, a combined DNS and PIV approach has been undertaken for flow cell characterisation across the full range of operating Re. This is important for practical concentric cylinder geometries where the shear components are non-zero and spatially dependent, with the peak stresses located near the endwalls. Furthermore, recirculation regions appear well below the critical Reynolds number for Taylor vortex formation.

#### EE.00002: A quasi-one-dimensional model for collapsible channel oscillations
Draga Pihler-Puzovic, Timothy Pedley (Sunday, November 22, 2009, 4:28PM - 4:41PM)

A fluid driven rapidly through a flexible tube exhibits self-excited oscillations. To model this phenomenon, we consider 2D high-Re laminar flow of a Newtonian incompressible fluid through a collapsible channel. The channel has a section of an otherwise rigid wall replaced by a membrane with inertia, under longitudinal tension, with no bending stiffness and subject to the external pressure. Based on the analysis by Pedley and Stephanoff (\emph{JFM}, 85), membrane motion is coupled to the time-dependent behaviour of the core flow through a modified KdV equation. We focus on the importance of membrane inertia for the system. The stability of the problem is studied numerically. In the parameter regimes of interest the computations reveal transitional behaviour: an initially small perturbation of the system decays in an oscillatory manner, but beyond a certain time higher-frequency oscillations start dominating and the system diverges. At the same time a switching from mode one, in which the flexible wall has a single extremum, to higher modes with multiple extrema is observed. These results are discussed with respect to previous computations for 2D collapsible channels.

#### EE.00003: Phasic Relationships among Hemodynamic Properties of Pulsatile Flow in Microcirculations
Jung Yeop Lee, Sang Joon Lee (Sunday, November 22, 2009, 4:41PM - 4:54PM)

Pulsatile blood flows in \textit{omphalo-mesenteric} arteries of HH-stage 18 chicken embryos are measured using a time-resolved particle image velocimetry (PIV) technique to obtain hemodynamic information in microcirculations and compare hemodynamic properties of pulsatile blood flows. Due to the intrinsic features of pulsatile flow and the complicated vessel network of the microcirculation, an \textit{out-of-phase} motion of blood occurs in nearby vessel segments of bifurcations. This is mainly attributed to the morphological characteristics and peripheral resistance of the vasculature. The \textit{out-of-phase} motion is quantitatively identified using the one-dimensional temporal cross-correlation function. This cross-correlation function is extended to establish the phasic relationships among hemodynamic properties such as velocity, shear rate, and acceleration. Velocity and shear rate are almost \textit{in phase}, as predicted theoretically. On the other hand, velocity (or shear rate) is almost 180$^{\circ}$ \textit{out of phase} with acceleration, which is considerably larger than the theoretically predicted value.
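The one-dimensional temporal cross-correlation used here to quantify phase relationships can be sketched generically (synthetic signals and a made-up function name, not the authors' code):

```python
import numpy as np

def phase_lag_deg(x, y, dt, period):
    """Phase of y relative to x, in degrees of one cycle, from the peak
    of the temporal cross-correlation of the mean-removed signals."""
    x = x - x.mean()
    y = y - y.mean()
    corr = np.correlate(y, x, mode="full")        # lag axis: -(N-1)..(N-1)
    lag = (np.argmax(corr) - (len(x) - 1)) * dt   # positive lag: y trails x
    return 360.0 * lag / period

# synthetic check: y lags x by a quarter period, i.e. 90 degrees
period = 1.0
t = np.linspace(0.0, 100.0, 20000, endpoint=False)
x = np.sin(2 * np.pi * t / period)
y = np.sin(2 * np.pi * t / period - np.pi / 2)
print(round(phase_lag_deg(x, y, t[1] - t[0], period), 6))  # -> 90.0
```

With the lag convention of `np.correlate` used above, a positive result means the second signal trails the first, matching the "out-of-phase" terminology in the abstract.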
#### EE.00004: Numerical Study on Flows of Red Blood Cells with Liposome-Encapsulated Hemoglobin at Microvascular Bifurcation
Toru Hyakutake, Shigeki Tani, Yuki Akagi, Takeshi Matsumoto, Shinichiro Yanase (Sunday, November 22, 2009, 4:54PM - 5:07PM)

Flow analysis at a microvascular bifurcation after partial replacement of red blood cells (RBCs) with liposome-encapsulated hemoglobin (LEH) was performed using the lattice Boltzmann method. A two-dimensional bifurcation model with a parent vessel and daughter branch was considered, and the distributions of the RBC, LEH, and oxygen fluxes were calculated. The immersed boundary method was employed to incorporate the fluid–membrane interaction between the flow field and deformable RBCs. When only RBCs flow into the daughter branches with unevenly distributed flows, plasma separation occurred and the RBC flow to the lower-flow branch was disproportionately decreased. On the other hand, when half of the RBCs are replaced by LEH, the biasing of RBC flow was enhanced whereas LEH flowed favorably into the lower-flow branch, because many LEH particles within the parent vessel are suspended in the plasma layer, where no RBCs exist. Consequently, the branched oxygen fluxes became nearly proportional to flows. These results indicate that LEH facilitates oxygen supply to branches that are inaccessible to RBCs.

#### EE.00005: A Numerical Computation Model for Low-Density Lipoprotein (LDL) Aggregation and Deposition in the Human Artery
Yongli Zhao, Shaobiao Cai, Albert Ratner (Sunday, November 22, 2009, 5:07PM - 5:20PM)

Cholesterol-caused cardiovascular events are commonly seen in human lives. These events are primarily believed to be caused by the build-up of particles like low-density lipoprotein (LDL). When a large number of LDL particles circulate in the blood, they can gradually build up in the inner walls of the arteries. A thick, hard deposit plaque can be formed together with other substances. This type of plaque may clog those arteries and cause vascular problems. Clinical evidence suggests that LDL is related to cardiovascular events and that the progression of coronary heart disease is due to its aggregation and deposition. This study presents an investigation of LDL aggregation and deposition based on particulate flow. A soft-sphere based particulate computational flow model is developed to represent LDL suspended in plasma. The transport, collision and adhesion phenomena of LDL particles are simulated to examine the physics involved in aggregation and deposition. A multiple-time-step discrete-element approach is presented for efficiently simulating large numbers of LDL particles and their interactions. The roles that the quality and quantity of LDL play in the process of aggregation and deposition are determined. The study provides a new perspective for improving the understanding of the fundamentals as related to these particle-caused cardiovascular events.

#### EE.00006: A Comprehensive Fluid Dynamic-Diffusion Model of Blood Microcirculation with Focus on Sickle Cell Disease
Francois Le Floch, Wesley L. Harris (Sunday, November 22, 2009, 5:20PM - 5:33PM)

A novel methodology has been developed to address sickle cell disease, based on highly descriptive mathematical models for blood flow in the capillaries. Our investigations focus on the coupling between oxygen delivery and red blood cell dynamics, which is crucial to understanding sickle cell crises and is unique to this blood disease. The main part of our work is an extensive study of blood dynamics through simulations of red cells deforming within the capillary vessels, and relies on the use of a large mathematical system of equations describing oxygen transfer, blood plasma dynamics and red cell membrane mechanics. This model is expected to lead to the development of new research strategies for sickle cell disease. Our simulation model could be used not only to assess currently researched remedies, but also to spur innovative research initiatives, based on our study of the physical properties coupled in sickle cell disease.

#### EE.00007: Investigating the fluid mechanics behind red blood cell-induced lateral platelet motion
Lindsay Crowl Erickson, Aaron Fogelson (Sunday, November 22, 2009, 5:33PM - 5:46PM)

Platelets play an essential role in blood clotting; they adhere to damaged tissue and release chemicals that activate other platelets. Yet in order to adhere, platelets must first come into contact with the injured vessel wall. Under arterial flow conditions, platelets have an enhanced concentration near blood vessel walls. This non-uniform cell distribution depends on the fluid dynamics of blood as a heterogeneous medium. We use a parallelized lattice Boltzmann-immersed boundary method to solve the flow dynamics of red cells and platelets in a periodic 2D vessel with no-slip boundary conditions. Red cells are treated as biconcave immersed boundary objects with isotropic Skalak membrane tension and an internal viscosity five times that of the surrounding plasma. Using this method we analyze the influence of shear rate, hematocrit, and red cell membrane properties on lateral platelet motion. We find that the effective diffusion of platelets is significantly lower near the vessel wall compared to the center of the vessel. Insight gained from this work could lead to significant improvements to current models for platelet adhesion where the presence of red blood cells is neglected due to computational intensity.

#### EE.00008: A Spatial-Temporal Model of Platelet Deposition and Blood Coagulation Under Flow
Karin Leiderman Gregg, Aaron Fogelson (Sunday, November 22, 2009, 5:46PM - 5:59PM)

In the event of a vascular injury, a blood clot will form to prevent bleeding. This response involves two intertwined processes: platelet aggregation and coagulation.
Activated platelets are critical to coagulation in that they provide localized reactive surfaces on which many of the coagulation reactions occur. The final product from the coagulation cascade directly couples the coagulation system to platelet aggregation by acting as a strong activator of platelets and cleaving blood-borne fibrinogen into fibrin, which then forms a mesh to help stabilize platelet aggregates. Together, the fibrin mesh and the platelet aggregates comprise a blood clot, which in some cases can grow to occlusive diameters. Transport of coagulation proteins to and from the vicinity of the injury is controlled largely by the dynamics of the blood flow. It is crucial to learn how blood flow affects the growth of clots, and how the growing masses, in turn, feed back and affect the fluid motion. We have developed the first spatial-temporal model of platelet deposition and blood coagulation under flow that includes detailed descriptions of the coagulation biochemistry, chemical activation and deposition of blood platelets, as well as the two-way interaction between the fluid dynamics and the growing platelet mass.

#### EE.00009: Enhancement of Absorption by Micro-Mixing induced by Villi Motion
Yanxing Wang, James Brasseur, Gino Banco (Sunday, November 22, 2009, 5:59PM - 6:12PM)

Motions of surface villi create microscale flows that can couple with lumen-scale eddies to enhance absorption at the epithelium of the small intestine. Using a multigrid strategy within the lattice-Boltzmann framework, we model a macro-scale cavity flow with microscale "villi" in pendular motion on the lower surface and evaluate the couplings between macro- and micro-scale fluid motions, scalar mixing, and uptake of passive scalar at the villi surface. We study the influences of pendular frequency, villous length, and villous groupings on absorption rate. The basic mechanism underlying the enhancement of absorption rate by a villous-induced "micro-mixing layer" (MML) is the microscale "pumping" of low-concentration fluid from between groups of villi coupled with the return of high-concentration fluid into the villi groups from the macroscale flow. The MML couples with the macroscale eddies through a diffusion layer that separates micro and macro mixed layers. The absorption rate increases with frequency of villi oscillation due to enhanced vertical pumping. We discover a critical villus length above which absorption rate increases significantly. The absorption is influenced by villus groupings in a complex way due to the interference between vertical and horizontal geometry vs. MML scales. We conclude that optimized villi motility can enhance absorption and may underlie an explanation for the existence of villi in the gut. [Supported by NSF]

#### EE.00010: Multiscale modeling of blood flow in cerebral malaria
Dmitry Fedosov, Bruce Caswell, George Karniadakis (Sunday, November 22, 2009, 6:12PM - 6:25PM)

The main characteristics of the malaria disease are progressive changes in red blood cell (RBC) mechanical properties and geometry, and its cytoadhesion to the vascular endothelium. Malaria-infected RBCs become considerably stiff compared to healthy ones, and may bind to the vascular endothelium of arterioles and venules. This leads to a significant reduction of blood flow, and eventual vessel obstruction. Due to the non-trivial adhesive dynamics of malaria-infected RBCs and the formation of obstructions, blood flow in cerebral malaria is extremely complex. Here, we employ multiscale modeling to couple nanometer scales at the binding level, micrometer scales at the cell level and millimeter scales at the arteriole level. Blood flow in cerebral malaria is modeled using a coarse-grained RBC model developed in our group. The RBC adhesion is simulated based on the stochastic bond formation/breakage model, which is validated against recent experiments.
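Several abstracts above (e.g. EE.00007) characterize transport through an effective diffusion coefficient. A generic estimator is the slope of the mean-squared displacement; the sketch below (our illustration, not any of the authors' codes) checks it on a plain Brownian walk with known D:

```python
import numpy as np

def effective_diffusion(positions, dt):
    """Estimate a diffusion coefficient from trajectories via the
    mean-squared-displacement law MSD(t) ~ 2*d*D*t (d spatial dimensions).
    positions: array of shape (n_steps, n_particles, d)."""
    disp = positions - positions[0]                 # displacement from start
    msd = (disp ** 2).sum(axis=-1).mean(axis=1)     # average over particles
    t = dt * np.arange(len(msd))
    d = positions.shape[-1]
    slope = np.polyfit(t[1:], msd[1:], 1)[0]        # linear fit, skipping t=0
    return slope / (2 * d)

# 1D Brownian walk with known D = 0.5: Gaussian steps with std sqrt(2*D*dt)
rng = np.random.default_rng(0)
dt, D = 0.01, 0.5
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(5000, 200, 1))
traj = np.cumsum(steps, axis=0)
print(effective_diffusion(traj, dt))  # approximately 0.5
```

In a wall-bounded flow like EE.00007's, one would compute this separately for wall-adjacent and centerline particle populations to see the reported near-wall reduction.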
https://tex.stackexchange.com/questions/185031/package-inputenc-error-unicode-char-from-bibliography-file | Package inputenc Error: Unicode char -> from bibliography file [closed]
Recently I added a name in my bibliography file with the letter É. When I tried to compile, TeXstudio gave me the following error:
Package inputenc Error: Unicode char \u8:É. not set up for use with LaTeX N.~Nisan, T.~Roughgarden, É.
My first step was to change the letter to E in the bibliography file (JabRef), but the error did not go away. Even after deleting the complete entry from the bibliography I still get the same error.
My Latex document contains the following bibliography formatting:
\usepackage[comma, sort&compress]{natbib} % Use the natbib reference package - read up on this to edit the reference style; if you want text (e.g. Smith et al., 2012) for the in-text references (instead of numbers), remove 'numbers' which was after square
%Underneath should help with citing an article's title....
\def\mybibtexdatabase{Bibliography}
\usepackage{usebib}
\newbibfield{title}
\bibinput{\mybibtexdatabase}
In what way can I get rid of the error message?
edit:
I found out that the file main.bbl (my main latex file is main.tex) still contained the bibliography entry that gave the error. When I deleted the entry from that file everything was able to compile again.
This solves my question.
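As background, the accent itself can be written in ASCII-safe TeX markup in the .bib file, so the entry compiles under any inputenc setting and survives a regenerated .bbl. The error message suggests the citation is the Nisan/Roughgarden/Tardos/Vazirani book; the entry below is a hypothetical sketch, not the asker's actual entry:

```latex
% {\'E} produces É without requiring UTF-8 input.
@book{agt2007,
  author    = {N. Nisan and T. Roughgarden and {\'E}. Tardos and V. V. Vazirani},
  title     = {Algorithmic Game Theory},
  publisher = {Cambridge University Press},
  year      = {2007},
}
```

Alternatively, keep the raw character and add \usepackage[utf8]{inputenc} to the preamble (unnecessary since the 2018 LaTeX release, where UTF-8 became the default input encoding).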
closed as off-topic by cfr, Ian Thompson, percusse, jubobs, Jesse Jun 15 '14 at 22:24
• This question does not fall within the scope of TeX, LaTeX or related typesetting systems as defined in the help center.
If this question can be reworded to fit the rules in the help center, please edit the question.
• Welcome to TeX.SX! You can have a look at our starter guide to familiarize yourself further with our format. We'd like to keep answers separate from questions, so you should write a separate answer instead of editing your answer into the question. Self-answers are perfectly admissible, and a well-written answer may earn you additional reputation. – Martin Schröder Jun 15 '14 at 21:42
• possible duplicate of Strange persisting error, even after removing the code. – cfr Jun 15 '14 at 21:44
• Please, add a minimal working example (MWE) showing the problem; in particular the bibliographic entry that causes the problem. – egreg Jun 15 '14 at 21:49
• This question appears to be off-topic because it is about a user-specific file issue that is resolved by creating the .bbl file again. – percusse Jun 15 '14 at 22:16
• Compiling with biblatex+biber would allow any unicode character… – Bernard Jun 15 '14 at 22:53
https://www.jottr.org/2020/09/21/detect-when-the-random-number-generator-was-used/ | If you ever need to figure out if a function call in R generated a random number or not, here is a simple trick that you can use in an interactive R session. Add the following to your ~/.Rprofile(*):
if (interactive()) {
  invisible(addTaskCallback(local({
    last <- .GlobalEnv$.Random.seed

    function(...) {
      curr <- .GlobalEnv$.Random.seed
      if (!identical(curr, last)) {
        msg <- "TRACKER: .Random.seed changed"
        if (requireNamespace("crayon", quietly = TRUE)) msg <- crayon::blurred(msg)
        message(msg)
        last <<- curr
      }
      TRUE
    }
  }), name = "RNG tracker"))
}
It works by checking whether or not the state of the random number generator (RNG), that is, .Random.seed in the global environment, has changed. If it has, a note is produced. For example,
> sum(1:100)
[1] 5050
> runif(1)
[1] 0.280737
TRACKER: .Random.seed changed
>
It is not always obvious that a function generates random numbers internally. For instance, the rank() function may or may not update the RNG state, depending on the ties.method argument, as illustrated in the following example:
> x <- c(1, 4, 3, 2)
> rank(x)
[1] 1.0 2.5 2.5 4.0
> rank(x, ties.method = "random")
[1] 1 3 2 4
TRACKER: .Random.seed changed
>
For some functions, it may even depend on the input data whether or not random numbers are generated, e.g.
> y <- matrixStats::rowRanks(matrix(c(1,2,2), nrow=2, ncol=3), ties.method = "random")
TRACKER: .Random.seed changed
> y <- matrixStats::rowRanks(matrix(c(1,2,3), nrow=2, ncol=3), ties.method = "random")
>
I have this RNG tracker enabled all the time to learn about functions that unexpectedly draw random numbers internally, which can be important to know when you run statistical analysis in parallel.
As a bonus, if you have the crayon package installed, the RNG tracker will output the note with a style that is less intrusive.
(*) If you use the startup package, you can add it to a new file ~/.Rprofile.d/interactive=TRUE/rng_tracker.R. To learn more about the startup package, have a look at the blog posts on startup.
EDIT 2020-09-23: Changed the message prefix from ‘NOTE:’ to ‘TRACKER:’.
https://iucrdata.iucr.org/x/issues/2022/06/00/wm4166/index.html | ## metal-organic compounds
IUCrDATA
ISSN: 2414-3146
## trans-Carbonylchloridobis(triethylphosphane-κP)platinum(II) tetrafluoridoborate
aDepartment of Chemistry, Fordham University, 441 East Fordham Road, Bronx, NY 10458, USA
*Correspondence e-mail: pcorfield@fordham.edu
(Received 30 May 2022; accepted 8 June 2022; online 10 June 2022)
The chemical formulation of the title compound was established as trans-[PtCl(CO){P(C2H5)3}2]BF4 by single-crystal X-ray analysis, in contrast to the five-coordinate tetrafluoroethylene complex that had been anticipated. The compound had been prepared by reaction of trans-PtHCl(P(C2H5)3)2 with C2F4 in the absence of air, and the presence of the carbonyl group was not suspected. The square-planar cations and BF4 anions are linked by C—H⋯F and C—H⋯O interactions into thick wavy (010) sheets. The present crystal-structure refinement is based on the original intensity data recorded in 1967.
Chemical scheme
### Structure description
A low-yield product in the reaction of trans-PtHCl(P(C2H5)3)2 with C2F4 in the absence of air was originally postulated to be a five-coordinate platinum complex, PtHCl(π-C2F4)(P(C2H5)3)2 (Clark & Tsang, 1967), and the crystal-structure determination was undertaken at that time in view of the then current interest in five-coordination and of π-complexes. As described in Clark et al. (1967), the preliminary crystal-structure model showed no evidence of five-coordination, nor of the presence of a π-bonded tetrafluoroethylene group. Instead, a four-coordinated, cationic PtII complex was indicated, with a carbonyl group as the fourth ligand, isoelectronic with Vaska's compound, IrCl(CO)(PR3)2 (Vaska & DiLuzio, 1961). The presence of a carbonyl group was completely unexpected, as the reaction had been carried out in a vacuum line, in the absence of oxygen. This was the first reported molecular structure of a platinum carbonyl at the time, according to our database analysis below. The strong carbonyl vibrational band in the infrared spectrum was mistaken for the anticipated Pt—H band. Evidently, the carbonyl oxygen atom had been extracted from the Pyrex glassware by the tetrafluoroethylene reagent. That reaction vessels are not always as inert as they are expected to be is the subject of a recent review by Nielsen & Pedersen (2022) in which formation of the title compound in this paper is one of several examples of fluorine compounds reacting with glassware.
The crystal structure refinement based on the original X-ray intensity data recorded in 1967 is now presented here, because no atomic coordinates were given in the original report (Clark et al., 1967) or deposited with the Cambridge Structural Database (CSD; Groom et al., 2016). The square-planar platinum(II) cation and a tetrafluoridoborate anion are shown in Fig. 1. As can be seen, the cation has an approximate mirror plane of symmetry that extends to the conformations of the ethyl groups. The Pt—CO bond length is 1.812 (17) Å, Pt—Cl is 2.301 (4) Å, and the Pt—P bond lengths are 2.341 (5) and 2.348 (5) Å. The P—Pt—CO angles average 92.9 (8)° while the Cl—Pt—P angles average 87.2 (2)°. The trans angles P—Pt—P and Cl—Pt—C are 174.10 (17)° and 177.0 (12)°, respectively, with the slight distortions from linearity tending towards a flattened tetrahedron rather than a flattened square pyramid. Each of the triethylphosphine groups has one ethyl group in the trans conformation and two in the gauche conformation.
Figure 1 View of the molecular entities showing the atomic numbering and displacement ellipsoids at the 50% probability level.
Packing diagrams showing views down the b and c axes are shown in Fig. 2a and 2b. There are close contacts between each tetrafluoridoborate anion and the ethyl groups of three neighboring cations with putative C—H⋯F hydrogen bonds, as listed in Table 1. The Hirshfeld dnorm plot for the BF4 anion shown in Fig. 3 was produced with CrystalExplorer (Spackman et al., 2021) and indicates a close contact near F2, probably due to the C13—H13⋯F2 hydrogen bond, which seems to be the strongest C—H⋯F bond. The chlorido and carbonyl ligands do not have close intermolecular contacts, perhaps because they are shielded by the gauche conformations of the neighboring ethyl groups. A putative weak C—H⋯O hydrogen bond is listed in Table 1 and shown in red in Fig. 3. The hydrogen bonds listed join cations and anions into thick wavy (010) sheets, as can be seen in Fig. 2b.
Table 1. Hydrogen-bond geometry (Å, °)

| D—H⋯A | D—H | H⋯A | D⋯A | D—H⋯A |
| --- | --- | --- | --- | --- |
| C5—H5B⋯O1(i) | 0.97 | 2.75 | 3.45 (2) | 129 |
| C13—H13B⋯F2(ii) | 0.96 | 2.43 | 3.27 (3) | 147 |
| C4—H4A⋯F2 | 0.97 | 2.56 | 3.48 (3) | 159 |
| C7—H7A⋯F4(iii) | 0.97 | 2.67 | 3.46 (3) | 140 |
| C11—H11B⋯F3 | 0.96 | 2.75 | 3.47 (3) | 133 |
| C6—H6A⋯F1(ii) | 0.97 | 2.81 | 3.67 (4) | 147 |
Symmetry codes: (i) ; (ii) ; (iii) .
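The geometry in Table 1 follows from elementary vector algebra once Cartesian atomic coordinates are available (fractional coordinates from a CIF must first be transformed using the cell parameters). A minimal sketch with made-up coordinates, not the deposited ones:

```python
import numpy as np

def hbond_geometry(d, h, a):
    """Return the D—H, H⋯A, D⋯A distances and the D—H⋯A angle (deg)
    for donor D, hydrogen H and acceptor A given in Cartesian coordinates."""
    d, h, a = (np.asarray(p, dtype=float) for p in (d, h, a))
    dh = np.linalg.norm(h - d)
    ha = np.linalg.norm(a - h)
    da = np.linalg.norm(a - d)
    # the D—H⋯A angle is measured at H, between the H->D and H->A directions
    u, v = d - h, a - h
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    angle = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return dh, ha, da, angle

# sanity check with collinear, made-up points: the angle must be 180 degrees
dh, ha, da, ang = hbond_geometry((0, 0, 0), (0.97, 0, 0), (3.0, 0, 0))
print(round(ang, 1))  # -> 180.0
```

For this orthorhombic structure the fractional-to-Cartesian transformation is simply multiplication by the cell lengths a, b, c given in Table 2.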
Figure 2 Projections of the structure down the b axis (a) and c axis (b), with arbitrary sphere sizes for the atoms. The reference cation and anion have Pt and B atoms identified. Putative C—H⋯O and C—H⋯F hydrogen bonds are shown as red and green dashed lines, respectively.
Figure 3 Hirshfeld dnorm surface for the BF4− anion, showing the red area that indicates close contacts for F2.
### Database analysis
From the time the preliminary structure of this compound was published in 1967, crystal and molecular structures of a wide variety of platinum carbonyl complexes have been reported, ranging from metal clusters through monomeric complexes as in this case. All 662 structures found with the 'PtCO' search fragment in the CSD database, with all filters removed except for 'single-crystal structure', except the present one (TEPPTC), are dated 1968 or after. All but 20 of these structures have only one CO group coordinating to the PtII atom while the rest have just two coordinating carbonyl groups, except for the [Pt(CO)4]2+ cation reported by Willner et al. (2001) in entry QEZTEU. The mean Pt—CO distance for the 603 structures with coordinates given is 1.860 Å, with a wide range of 1.680 to 2.095 Å. It is interesting that the presence of phosphine ligands tends to lead to longer Pt—CO distances, while the presence of a Cl ligand tends to lead to shorter Pt—CO distances. Thus, in the 35 entries in the above structures that have two PR3 groups attached to the PtII atom as well as the CO group, the mean Pt—C distance is 1.910 Å, with a narrow range of 1.855–1.965 Å, while for the 36 entries that have a Cl as well as a carbonyl ligand, the mean Pt—CO distance is 1.837 Å with a range of 1.753 to 1.901 Å. In the latter case, the Pt—CO distance seems insensitive to whether the Cl atom is cis or trans to the CO group. These tendencies must oppose each other in the present structure, leading to the Pt—CO distance of 1.812 (17) Å. Entry GEYBOB (Rusakov et al., 1988) has the same cation as in the present structure, but the anion is BF3Cl and there is a solvent molecule in the crystal. The shape of the cation is very similar to that of the present structure, with similar distortions of the angles from 90° and a Pt—CO bond length of 1.846 Å.
### Synthesis and crystallization
A sample supplied by Dr H. C. Clark had been synthesized as described in Clark & Tsang (1967). Crystals suitable for X-ray analysis were obtained by recrystallization of the sample from methyl acetate.
### Refinement
With the early automatic diffractometer that was used to collect the original X-ray intensity data in 1967, it was not customary to obtain a set of Friedel pairs of reflections in the case of a non-centrosymmetric structure. In this case, however, due to the polar space group and the poor scattering by the small crystal, data were collected over the whole sphere of reflection up to θ = 20°; in addition, data were recollected over four quadrants for the weaker reflections at higher angles. Initial absorption corrections using a Gaussian grid were inconclusive (perhaps due to a programming error), so for the final refinements an overall absorption correction using the tensor analysis in XABS2 (Parkin et al., 1995) was used. Hydrogen atoms were constrained, with C—H distances of 0.97 Å and 0.96 Å for CH2 and CH3 groups, respectively, and Uiso(H) = 1.5Ueq(C). Anisotropic temperature factors for the carbonyl CO atoms required tight restraints. While the displacement ellipsoids for the fluorine atoms are large, probably indicating some disorder for the BF4 anion (Fig. 1), initial refinements of a disordered model were not successful and the disordered model was not pursued. There is indeed some residual electron density in the neighborhood of the BF4 anion, but only one of the 20 highest electron-density peaks in the final difference-Fourier map is near this group. Crystal data, data collection and structure refinement details are summarized in Table 2.
Table 2. Experimental details
Crystal data
Chemical formula: [PtCl(C6H15P)2(CO)]BF4
Mr: 581.66
Crystal system, space group: Orthorhombic, Pca21
Temperature (K): 293
a, b, c (Å): 16.012 (8), 9.171 (4), 14.966 (7)
V (Å3): 2197.7 (18)
Z: 4
Radiation type: Mo Kα
μ (mm−1): 6.68
Crystal size (mm): 0.12 × 0.10 × 0.08

Data collection
Diffractometer: Picker, punched card control
Absorption correction: empirical (using intensity measurements) (XABS2; Parkin et al., 1995)
Tmin, Tmax: 0.55, 0.81
No. of measured, independent and observed [I > 2σ(I)] reflections: 7773, 3180, 2437
Rint: 0.062
(sin θ/λ)max (Å−1): 0.596

Refinement
R[F2 > 2σ(F2)], wR(F2), S: 0.041, 0.090, 0.92
No. of reflections: 3180
No. of parameters: 214
No. of restraints: 61
H-atom treatment: H-atom parameters constrained
Δρmax, Δρmin (e Å−3): 0.64, −0.76
Absolute structure: Flack x determined using 961 quotients [(I+)−(I−)]/[(I+)+(I−)] (Parsons et al., 2013)
Absolute structure parameter: 0.000 (14)

Computer programs: PICK (local program by J. A. Ibers), PICKOUT (local program by R. J. Doedens) and EQUIV (local program by J. A. Ibers), local version of FORDAP, SHELXL (Sheldrick, 2015), ORTEPIII (Burnett & Johnson, 1996; Farrugia, 2012) and publCIF (Westrip, 2010).
### Structural data
Computing details
Data collection: PICK (local program by J. A. Ibers); cell refinement: PICK (local program by J. A. Ibers); data reduction: PICKOUT (local program by R. J. Doedens) and EQUIV (local program by J. A. Ibers); program(s) used to solve structure: Local version of FORDAP; program(s) used to refine structure: SHELXL (Sheldrick, 2015); molecular graphics: ORTEPIII (Burnett & Johnson, 1996; Farrugia, 2012); software used to prepare material for publication: publCIF (Westrip, 2010).
trans-Carbonylchloridobis(triethylphosphane-κP)platinum(II) tetrafluoridoborate
Crystal data
[PtCl(C6H15P)2(CO)]BF4
Mr = 581.66
Orthorhombic, Pca21
a = 16.012 (8) Å
b = 9.171 (4) Å
c = 14.966 (7) Å
V = 2197.7 (18) Å3
Z = 4
F(000) = 1128
Dx = 1.758 Mg m−3
Dm = 1.734 (4) Mg m−3
Dm measured by flotation in CH3I/CCl4
Mo Kα radiation, λ = 0.7107 Å
Cell parameters from 16 reflections
θ = 3.7–14.1°
µ = 6.68 mm−1
T = 293 K
Needle, colorless
0.12 × 0.10 × 0.08 mm
Data collection
Picker, punched card control diffractometer
Radiation source: sealed X-ray tube
θ/2θ scans
Absorption correction: empirical (using intensity measurements) (XABS2; Parkin et al., 1995)
Tmin = 0.55, Tmax = 0.81
7773 measured reflections
3180 independent reflections
2437 reflections with I > 2σ(I)
Rint = 0.062
θmax = 25.1°, θmin = 2.2°
h = 0→19
k = 0→10
l = −17→17
3 standard reflections every 250 reflections
intensity decay: 8(2)
Refinement
Refinement on F2
Least-squares matrix: full
R[F2 > 2σ(F2)] = 0.041
wR(F2) = 0.090
S = 0.92
3180 reflections
214 parameters
61 restraints
Primary atom site location: heavy-atom method
Secondary atom site location: difference Fourier map
Hydrogen site location: inferred from neighbouring sites
H-atom parameters constrained
w = 1/[σ2(Fo2)] where P = (Fo2 + 2Fc2)/3
(Δ/σ)max = 0.002
Δρmax = 0.64 e Å−3
Δρmin = −0.76 e Å−3
Absolute structure: Flack x determined using 961 quotients [(I+)-(I-)]/[(I+)+(I-)] (Parsons et al., 2013)
Absolute structure parameter: 0.000 (14)
Special details
Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.
Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å2)
x y z Uiso*/Ueq Pt 0.11894 (3) 0.19055 (6) 0.49793 (9) 0.0540 (2) CL 0.0065 (3) 0.0347 (5) 0.5053 (8) 0.0887 (17) P1 0.1894 (3) 0.0057 (5) 0.4207 (4) 0.0580 (13) P2 0.0355 (3) 0.3627 (6) 0.5723 (3) 0.0606 (14) C1 0.2098 (10) 0.3084 (18) 0.497 (3) 0.074 (5) O1 0.2670 (8) 0.3789 (15) 0.498 (2) 0.114 (5) C2 0.1949 (12) −0.1601 (17) 0.4862 (17) 0.068 (5) H2A 0.138889 −0.191260 0.502025 0.102* H2B 0.220505 −0.236749 0.450938 0.102* C3 0.1344 (12) −0.048 (2) 0.3209 (13) 0.081 (6) H3A 0.167832 −0.120101 0.289379 0.122* H3B 0.082483 −0.095089 0.338078 0.122* C4 0.2941 (10) 0.045 (2) 0.3857 (11) 0.061 (5) H4A 0.293095 0.132343 0.349114 0.092* H4B 0.327255 0.066627 0.438343 0.092* C5 −0.0571 (12) 0.405 (2) 0.510 (2) 0.089 (7) H5A −0.087502 0.315295 0.499106 0.133* H5B −0.092341 0.467457 0.545982 0.133* C6 0.0892 (12) 0.5313 (19) 0.5972 (14) 0.071 (6) H6A 0.136834 0.509449 0.634974 0.107* H6B 0.110475 0.571661 0.541807 0.107* C7 −0.0026 (13) 0.295 (2) 0.6757 (12) 0.074 (6) H7A −0.035167 0.370900 0.704742 0.111* H7B −0.039380 0.213328 0.664405 0.111* C8 0.2449 (16) −0.136 (2) 0.5696 (14) 0.100 (8) H8A 0.228595 −0.205293 0.614220 0.150* H8B 0.235030 −0.038809 0.591484 0.150* H8C 0.303262 −0.147316 0.556527 0.150* C9 0.1148 (15) 0.076 (3) 0.2576 (15) 0.117 (9) H9A 0.087111 0.039006 0.205509 0.175* H9B 0.165842 0.123467 0.240124 0.175* H9C 0.079213 0.145449 0.287034 0.175* C10 0.3376 (13) −0.075 (2) 0.3335 (15) 0.095 (7) H10A 0.394947 −0.048653 0.323932 0.143* H10B 0.310361 −0.087509 0.276862 0.143* H10C 0.334969 −0.164404 0.366633 0.143* C11 −0.0435 (15) 0.477 (3) 0.4237 (19) 0.141 (12) H11A −0.093656 0.471759 0.388736 0.212* H11B 0.001072 0.428995 0.392480 0.212* H11C −0.028989 0.577227 0.433552 0.212* C12 0.0369 (16) 0.646 (2) 0.6429 (17) 0.112 (9) H12A 0.069086 0.733174 0.649995 0.167* H12B 0.019712 0.610676 0.700464 0.167* H12C −0.011595 0.666085 0.607203 0.167* C13 0.0687 (15) 0.246 (3) 0.7400 (15) 0.100 (8) H13A 0.046541 0.232547 0.799019 0.150* H13B 0.111629 0.318759 
0.741295 0.150* H13C 0.091808 0.155328 0.719096 0.150* B 0.2121 (17) 0.498 (3) 0.272 (2) 0.064 (7) F1 0.259 (3) 0.602 (3) 0.275 (3) 0.287 (18) F2 0.2511 (16) 0.3870 (19) 0.2964 (13) 0.180 (9) F3 0.1531 (14) 0.523 (3) 0.3305 (18) 0.267 (15) F4 0.1865 (18) 0.509 (4) 0.197 (2) 0.278 (16)
Atomic displacement parameters (Å2)
U11 U22 U33 U12 U13 U23 Pt 0.0466 (3) 0.0576 (3) 0.0578 (3) −0.0066 (3) −0.0007 (10) 0.0042 (9) CL 0.061 (2) 0.076 (3) 0.128 (5) −0.019 (2) 0.008 (6) −0.008 (6) P1 0.051 (3) 0.067 (3) 0.057 (3) −0.003 (3) 0.000 (3) 0.006 (3) P2 0.059 (3) 0.063 (3) 0.060 (3) 0.002 (3) 0.001 (3) 0.018 (3) C1 0.064 (9) 0.070 (10) 0.087 (11) −0.005 (9) 0.030 (18) 0.020 (18) O1 0.090 (9) 0.122 (11) 0.131 (11) −0.045 (9) 0.004 (19) −0.057 (19) C2 0.083 (11) 0.066 (11) 0.055 (13) 0.008 (9) 0.019 (12) 0.030 (12) C3 0.083 (15) 0.083 (14) 0.078 (14) 0.012 (13) −0.037 (12) −0.013 (11) C4 0.049 (11) 0.080 (12) 0.055 (12) −0.007 (10) 0.009 (9) −0.003 (10) C5 0.091 (13) 0.097 (14) 0.079 (18) 0.032 (11) −0.012 (16) 0.022 (16) C6 0.078 (14) 0.059 (12) 0.077 (15) −0.020 (10) 0.034 (11) 0.001 (10) C7 0.084 (13) 0.078 (14) 0.060 (12) 0.015 (12) 0.035 (9) 0.025 (12) C8 0.122 (19) 0.105 (18) 0.072 (14) 0.003 (16) −0.009 (13) 0.035 (13) C9 0.16 (2) 0.106 (18) 0.082 (16) 0.038 (17) −0.057 (16) −0.005 (13) C10 0.099 (17) 0.081 (15) 0.105 (17) 0.021 (13) 0.019 (13) −0.007 (13) C11 0.102 (19) 0.22 (3) 0.102 (19) 0.05 (2) −0.014 (16) 0.08 (2) C12 0.13 (2) 0.077 (16) 0.12 (2) 0.016 (14) 0.062 (17) 0.017 (14) C13 0.110 (17) 0.12 (2) 0.066 (14) 0.001 (16) 0.002 (12) 0.026 (14) B 0.061 (13) 0.044 (13) 0.087 (17) −0.012 (11) −0.011 (13) −0.006 (13) F1 0.37 (4) 0.19 (2) 0.30 (4) −0.11 (3) 0.10 (3) −0.08 (2) F2 0.189 (18) 0.128 (13) 0.22 (2) 0.060 (15) −0.006 (19) 0.034 (14) F3 0.18 (2) 0.38 (4) 0.24 (3) 0.07 (2) 0.123 (19) 0.13 (2) F4 0.25 (3) 0.39 (4) 0.20 (2) 0.14 (3) −0.08 (2) −0.04 (2)
Geometric parameters (Å, °)
Pt—C1 1.813 (18) C7—C13 1.56 (3) Pt—CL 2.301 (4) C7—H7A 0.9700 Pt—P1 2.341 (5) C7—H7B 0.9700 Pt—P2 2.348 (5) C8—H8A 0.9600 P1—C2 1.812 (17) C8—H8B 0.9600 P1—C3 1.804 (18) C8—H8C 0.9600 P1—C4 1.794 (16) C9—H9A 0.9600 P2—C5 1.80 (2) C9—H9B 0.9600 P2—C6 1.808 (18) C9—H9C 0.9600 P2—C7 1.775 (17) C10—H10A 0.9600 C1—O1 1.120 (17) C10—H10B 0.9600 C2—C8 1.50 (3) C10—H10C 0.9600 C2—H2A 0.9700 C11—H11A 0.9600 C2—H2B 0.9700 C11—H11B 0.9600 C3—C9 1.52 (3) C11—H11C 0.9600 C3—H3A 0.9700 C12—H12A 0.9600 C3—H3B 0.9700 C12—H12B 0.9600 C4—C10 1.52 (2) C12—H12C 0.9600 C4—H4A 0.9700 C13—H13A 0.9600 C4—H4B 0.9700 C13—H13B 0.9600 C5—C11 1.46 (4) C13—H13C 0.9600 C5—H5A 0.9700 B—F1 1.21 (3) C5—H5B 0.9700 B—F2 1.25 (3) C6—C12 1.51 (2) B—F3 1.30 (3) C6—H6A 0.9700 B—F4 1.20 (3) C6—H6B 0.9700 C1—Pt—CL 177.0 (12) C12—C6—H6A 108.5 C1—Pt—P1 92.4 (8) C12—C6—H6B 108.5 C1—Pt—P2 93.3 (8) H6A—C6—H6B 107.5 CL—Pt—P1 87.2 (2) C13—C7—H7A 109.0 CL—Pt—P2 87.1 (2) C13—C7—H7B 109.0 P1—Pt—P2 174.10 (17) H7A—C7—H7B 107.8 O1—C1—Pt 178 (3) C2—C8—H8A 109.5 C2—P1—Pt 111.4 (8) C2—C8—H8B 109.5 C3—P1—Pt 111.9 (7) C2—C8—H8C 109.5 C4—P1—Pt 116.7 (6) H8A—C8—H8B 109.5 C2—P1—C4 106.4 (9) H8A—C8—H8C 109.5 C2—P1—C3 103.9 (11) H8B—C8—H8C 109.5 C3—P1—C4 105.6 (10) C3—C9—H9A 109.5 C5—P2—Pt 111.5 (10) C3—C9—H9B 109.5 C6—P2—Pt 113.7 (6) C3—C9—H9C 109.5 C7—P2—Pt 112.0 (7) H9A—C9—H9B 109.5 C5—P2—C6 108.3 (10) H9A—C9—H9C 109.5 C5—P2—C7 104.3 (12) H9B—C9—H9C 109.5 C6—P2—C7 106.4 (10) C4—C10—H10A 109.5 P1—C2—C8 110.5 (14) C4—C10—H10B 109.5 P1—C3—C9 114.2 (16) C4—C10—H10C 109.5 P1—C4—C10 115.7 (14) H10A—C10—H10B 109.5 P2—C5—C11 115.7 (17) H10A—C10—H10C 109.5 P2—C6—C12 115.1 (14) H10B—C10—H10C 109.5 P2—C7—C13 112.8 (14) C5—C11—H11A 109.5 P1—C2—H2A 109.5 C5—C11—H11B 109.5 P1—C2—H2B 109.5 C5—C11—H11C 109.5 P1—C3—H3A 108.7 H11A—C11—H11B 109.5 P1—C3—H3B 108.7 H11A—C11—H11C 109.5 P1—C4—H4A 108.4 H11B—C11—H11C 109.5 P1—C4—H4B 108.4 C6—C12—H12A 109.5 P2—C5—H5A 108.4 C6—C12—H12B 109.5 P2—C5—H5B 108.4 C6—C12—H12C 109.5 P2—C6—H6A 108.5 
H12A—C12—H12B 109.5 P2—C6—H6B 108.5 H12A—C12—H12C 109.5 P2—C7—H7A 109.0 H12B—C12—H12C 109.5 P2—C7—H7B 109.0 C7—C13—H13A 109.5 C8—C2—H2A 109.5 C7—C13—H13B 109.5 C8—C2—H2B 109.5 C7—C13—H13C 109.5 H2A—C2—H2B 108.1 H13A—C13—H13B 109.5 C9—C3—H3A 108.7 H13A—C13—H13C 109.5 C9—C3—H3B 108.7 H13B—C13—H13C 109.5 H3A—C3—H3B 107.6 F4—B—F3 111 (3) C10—C4—H4A 108.4 F4—B—F2 120 (3) C10—C4—H4B 108.4 F3—B—F2 108 (3) H4A—C4—H4B 107.4 F4—B—F1 101 (4) C11—C5—H5A 108.4 F3—B—F1 107 (3) C11—C5—H5B 108.4 F2—B—F1 109 (3) H5A—C5—H5B 107.4
Hydrogen-bond geometry (Å, °)

| D—H···A | D—H | H···A | D···A | D—H···A |
|---|---|---|---|---|
| C5—H5B···O1i | 0.97 | 2.75 | 3.45 (2) | 129 |
| C13—H13B···F2ii | 0.96 | 2.43 | 3.27 (3) | 147 |
| C4—H4A···F2 | 0.97 | 2.56 | 3.48 (3) | 159 |
| C7—H7A···F4iii | 0.97 | 2.67 | 3.46 (3) | 140 |
| C11—H11B···F3 | 0.96 | 2.75 | 3.47 (3) | 133 |
| C6—H6A···F1ii | 0.97 | 2.81 | 3.67 (4) | 147 |
Symmetry codes: (i) x−1/2, −y+1, z; (ii) −x+1/2, y, z+1/2; (iii) −x, −y+1, z+1/2.
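The distances and angles tabulated above follow from the refined atomic positions. As an illustrative sketch (not part of the published analysis), the geometry of a single D—H···A contact can be computed from Cartesian coordinates as below; note that the fractional coordinates listed earlier would first have to be converted to Cartesian ones using the cell parameters, and the coordinates in the example are hypothetical points, not atoms of this structure.

```python
import numpy as np

def hbond_geometry(d, h, a):
    """Given Cartesian coordinates of donor D, hydrogen H and acceptor A,
    return the D—H, H···A and D···A distances and the D—H···A angle (deg)."""
    d, h, a = (np.asarray(p, dtype=float) for p in (d, h, a))
    dh = np.linalg.norm(h - d)
    ha = np.linalg.norm(a - h)
    da = np.linalg.norm(a - d)
    # Angle at H between the H->D and H->A directions
    cos_t = np.dot(d - h, a - h) / (dh * ha)
    angle = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return dh, ha, da, angle

# Hypothetical collinear arrangement: D—H···A angle of 180 degrees.
print(hbond_geometry((0, 0, 0), (1, 0, 0), (3, 0, 0)))
```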
### Acknowledgements
I am deeply grateful to the late James A. Ibers, who suggested this problem and submitted the earlier communication on the structure.
### Funding information
Funding for this research was provided by: National Science Foundation.
### References
Burnett, M. N. & Johnson, C. K. (1996). ORTEPIII. Report ORNL6895. Oak Ridge National Laboratory, Tennessee, USA.
Clark, H. C., Corfield, P. W. R., Dixon, K. R. & Ibers, J. A. (1967). J. Am. Chem. Soc. 89, 3360–3361.
Clark, H. C. & Tsang, W. S. (1967). J. Am. Chem. Soc. 89, 529–533.
Farrugia, L. J. (2012). J. Appl. Cryst. 45, 849–854.
Groom, C. R., Bruno, I. J., Lightfoot, M. P. & Ward, S. C. (2016). Acta Cryst. B72, 171–179.
Nielsen, M. M. & Pedersen, C. M. (2022). Chem. Sci. 13, 6181–6196.
Parkin, S., Moezzi, B. & Hope, H. (1995). J. Appl. Cryst. 28, 53–56.
Parsons, S., Flack, H. D. & Wagner, T. (2013). Acta Cryst. B69, 249–259.
Rusakov, S. L., Lisyak, T. V., Apalkova, G. M., Gusev, A. I., Kharitonov, Y. Y. & Kolomnikov, I. S. (1988). Koord. Khim. 14, 229–233.
Sheldrick, G. M. (2015). Acta Cryst. C71, 3–8.
Spackman, P. R., Turner, M. J., McKinnon, J. J., Wolff, S. K., Grimwood, D. J., Jayatilaka, D. & Spackman, M. A. (2021). J. Appl. Cryst. 54, 1006–1011.
Vaska, L. & DiLuzio, J. W. (1961). J. Am. Chem. Soc. 83, 2784–2785.
Westrip, S. P. (2010). J. Appl. Cryst. 43, 920–925.
Willner, H., Bodenbinder, M., Bröchler, R., Hwang, G., Rettig, S. J., Trotter, J., von Ahsen, B., Westphal, U., Jonas, V., Thiel, W. & Aubke, F. (2001). J. Am. Chem. Soc. 123, 588–602.
This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.
IUCrDATA
ISSN: 2414-3146 | 2022-08-07 15:19:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31979459524154663, "perplexity": 14044.418310955785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570651.49/warc/CC-MAIN-20220807150925-20220807180925-00293.warc.gz"} |
# If $\alpha+\beta=-2$ and $\alpha^3+\beta^3=-56$, then the quadratic equation whose roots are $\alpha$ and $\beta$ is
$\begin{array}{ll} (1)\;x^2+2x+16=0 & \quad (2)\;x^2+2x-16=0 \\ (3)\;x^2+2x-12=0 & \quad (4)\;x^2+2x-8=0 \end{array}$
## 1 Answer
$(4)\;x^2+2x-8=0$
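The answer follows from the identity $\alpha^3+\beta^3=(\alpha+\beta)^3-3\alpha\beta(\alpha+\beta)$:

```latex
-56 = (-2)^3 - 3\alpha\beta(-2) = -8 + 6\alpha\beta
\quad\Rightarrow\quad \alpha\beta = -8.
```

So the required quadratic is $x^2-(\alpha+\beta)x+\alpha\beta = x^2+2x-8=0$, which is option (4).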
answered Nov 7, 2013
https://www.nag.com/numeric/nl/nagdoc_27/flhtml/d03/d03pjf.html | # NAG FL Interfaced03pjf (dim1_parab_dae_coll_old)d03pja (dim1_parab_dae_coll)
## 1 Purpose
d03pjf/d03pja integrates a system of linear or nonlinear parabolic partial differential equations (PDEs), in one space variable with scope for coupled ordinary differential equations (ODEs). The spatial discretization is performed using a Chebyshev ${C}^{0}$ collocation method, and the method of lines is employed to reduce the PDEs to a system of ODEs. The resulting system is solved using a backward differentiation formula (BDF) method or a Theta method (switching between Newton's method and functional iteration).
d03pja is a version of d03pjf that has additional arguments in order to make it safe for use in multithreaded applications (see Section 5).
## 2 Specification
### 2.1 Specification for d03pjf
Fortran Interface
Subroutine d03pjf ( npde, m, ts, tout, pdedef, bndary, u, nbkpts, xbkpts, npoly, npts, x, nv, odedef, nxi, xi, neqn, uvinit, rtol, atol, itol, norm, laopt, algopt, rsave, lrsave, isave, lisave, itask, itrace, ind, ifail)
Integer, Intent (In) :: npde, m, nbkpts, npoly, npts, nv, nxi, neqn, itol, lrsave, lisave, itask, itrace
Integer, Intent (Inout) :: isave(lisave), ind, ifail
Real (Kind=nag_wp), Intent (In) :: tout, xbkpts(nbkpts), xi(nxi), rtol(*), atol(*), algopt(30)
Real (Kind=nag_wp), Intent (Inout) :: ts, u(neqn), rsave(lrsave)
Real (Kind=nag_wp), Intent (Out) :: x(npts)
Character (1), Intent (In) :: norm, laopt
External :: pdedef, bndary, odedef, uvinit
#include <nag.h>
void d03pjf_ (const Integer *npde, const Integer *m, double *ts, const double *tout, void (NAG_CALL *pdedef)(const Integer *npde, const double *t, const double x[], const Integer *nptl, const double u[], const double ux[], const Integer *nv, const double v[], const double vdot[], double p[], double q[], double r[], Integer *ires),void (NAG_CALL *bndary)(const Integer *npde, const double *t, const double u[], const double ux[], const Integer *nv, const double v[], const double vdot[], const Integer *ibnd, double beta[], double gamma[], Integer *ires),double u[], const Integer *nbkpts, const double xbkpts[], const Integer *npoly, const Integer *npts, double x[], const Integer *nv, void (NAG_CALL *odedef)(const Integer *npde, const double *t, const Integer *nv, const double v[], const double vdot[], const Integer *nxi, const double xi[], const double ucp[], const double ucpx[], const double rcp[], const double ucpt[], const double ucptx[], double f[], Integer *ires),const Integer *nxi, const double xi[], const Integer *neqn, void (NAG_CALL *uvinit)(const Integer *npde, const Integer *npts, const double x[], double u[], const Integer *nv, double v[]),const double rtol[], const double atol[], const Integer *itol, const char *norm, const char *laopt, const double algopt[], double rsave[], const Integer *lrsave, Integer isave[], const Integer *lisave, const Integer *itask, const Integer *itrace, Integer *ind, Integer *ifail, const Charlen length_norm, const Charlen length_laopt)
### 2.2 Specification for d03pja
Fortran Interface
Subroutine d03pja ( npde, m, ts, tout, pdedef, bndary, u, nbkpts, xbkpts, npoly, npts, x, nv, odedef, nxi, xi, neqn, uvinit, rtol, atol, itol, norm, laopt, algopt, rsave, lrsave, isave, lisave, itask, itrace, ind, iuser, ruser, cwsav, lwsav, iwsav, rwsav, ifail)
Integer, Intent (In) :: npde, m, nbkpts, npoly, npts, nv, nxi, neqn, itol, lrsave, lisave, itask, itrace
Integer, Intent (Inout) :: isave(lisave), ind, iuser(*), iwsav(505), ifail
Real (Kind=nag_wp), Intent (In) :: tout, xbkpts(nbkpts), xi(nxi), rtol(*), atol(*), algopt(30)
Real (Kind=nag_wp), Intent (Inout) :: ts, u(neqn), rsave(lrsave), ruser(*), rwsav(1100)
Real (Kind=nag_wp), Intent (Out) :: x(npts)
Logical, Intent (Inout) :: lwsav(100)
Character (1), Intent (In) :: norm, laopt
Character (80), Intent (InOut) :: cwsav(10)
External :: pdedef, bndary, odedef, uvinit
#include <nag.h>
void d03pja_ (const Integer *npde, const Integer *m, double *ts, const double *tout, void (NAG_CALL *pdedef)(const Integer *npde, const double *t, const double x[], const Integer *nptl, const double u[], const double ux[], const Integer *nv, const double v[], const double vdot[], double p[], double q[], double r[], Integer *ires, Integer iuser[], double ruser[]),void (NAG_CALL *bndary)(const Integer *npde, const double *t, const double u[], const double ux[], const Integer *nv, const double v[], const double vdot[], const Integer *ibnd, double beta[], double gamma[], Integer *ires, Integer iuser[], double ruser[]),double u[], const Integer *nbkpts, const double xbkpts[], const Integer *npoly, const Integer *npts, double x[], const Integer *nv, void (NAG_CALL *odedef)(const Integer *npde, const double *t, const Integer *nv, const double v[], const double vdot[], const Integer *nxi, const double xi[], const double ucp[], const double ucpx[], const double rcp[], const double ucpt[], const double ucptx[], double f[], Integer *ires, Integer iuser[], double ruser[]),const Integer *nxi, const double xi[], const Integer *neqn, void (NAG_CALL *uvinit)(const Integer *npde, const Integer *npts, const double x[], double u[], const Integer *nv, double v[], Integer iuser[], double ruser[]),const double rtol[], const double atol[], const Integer *itol, const char *norm, const char *laopt, const double algopt[], double rsave[], const Integer *lrsave, Integer isave[], const Integer *lisave, const Integer *itask, const Integer *itrace, Integer *ind, Integer iuser[], double ruser[], char cwsav[], logical lwsav[], Integer iwsav[], double rwsav[], Integer *ifail, const Charlen length_norm, const Charlen length_laopt, const Charlen length_cwsav)
## 3 Description
d03pjf/d03pja integrates the system of parabolic-elliptic equations and coupled ODEs
$\sum_{j=1}^{\mathrm{npde}} P_{i,j}\, \frac{\partial U_j}{\partial t} + Q_i = x^{-m} \frac{\partial}{\partial x}\left(x^m R_i\right), \quad i=1,2,\dots,\mathrm{npde}, \quad a\le x\le b, \ t\ge t_0,$ (1)
$F_i\left(t, V, \dot{V}, \xi, U^*, U_x^*, R^*, U_t^*, U_{xt}^*\right) = 0, \quad i=1,2,\dots,\mathrm{nv},$ (2)
where (1) defines the PDE part and (2) generalizes the coupled ODE part of the problem.
In (1), ${P}_{i,j}$ and ${R}_{i}$ depend on $x$, $t$, $U$, ${U}_{x}$, and $V$; ${Q}_{i}$ depends on $x$, $t$, $U$, ${U}_{x}$, $V$ and linearly on $\stackrel{.}{V}$. The vector $U$ is the set of PDE solution values
$U(x,t) = \left[U_1(x,t),\dots,U_{\mathrm{npde}}(x,t)\right]^{\mathrm{T}},$
and the vector ${U}_{x}$ is the partial derivative with respect to $x$. Note that ${P}_{i,j}$, ${Q}_{i}$ and ${R}_{i}$ must not depend on $\frac{\partial U}{\partial t}$. The vector $V$ is the set of ODE solution values
$V(t) = \left[V_1(t),\dots,V_{\mathrm{nv}}(t)\right]^{\mathrm{T}},$
and $\stackrel{.}{V}$ denotes its derivative with respect to time.
In (2), $\xi$ represents a vector of ${n}_{\xi }$ spatial coupling points at which the ODEs are coupled to the PDEs. These points may or may not be equal to some of the PDE spatial mesh points. ${U}^{*}$, ${U}_{x}^{*}$, ${R}^{*}$, ${U}_{t}^{*}$ and ${U}_{xt}^{*}$ are the functions $U$, ${U}_{x}$, $R$, ${U}_{t}$ and ${U}_{xt}$ evaluated at these coupling points. Each ${F}_{i}$ may only depend linearly on time derivatives. Hence the equation (2) may be written more precisely as
$F = G - A\dot{V} - B\left[\begin{array}{c} U_t^* \\ U_{xt}^* \end{array}\right],$ (3)
where $F={\left[{F}_{1},\dots ,{F}_{{\mathbf{nv}}}\right]}^{\mathrm{T}}$, $G$ is a vector of length nv, $A$ is an nv by nv matrix, $B$ is an nv by $\left({n}_{\xi }×{\mathbf{npde}}\right)$ matrix and the entries in $G$, $A$ and $B$ may depend on $t$, $\xi$, ${U}^{*}$, ${U}_{x}^{*}$ and $V$. In practice you need only supply a vector of information to define the ODEs and not the matrices $A$ and $B$. (See Section 5 for the specification of odedef.)
The integration in time is from ${t}_{0}$ to ${t}_{\mathrm{out}}$, over the space interval $a\le x\le b$, where $a={x}_{1}$ and $b={x}_{{\mathbf{nbkpts}}}$ are the leftmost and rightmost of a user-defined set of break-points ${x}_{1},{x}_{2},\dots ,{x}_{{\mathbf{nbkpts}}}$. The coordinate system in space is defined by the value of $m$; $m=0$ for Cartesian coordinates, $m=1$ for cylindrical polar coordinates and $m=2$ for spherical polar coordinates.
The PDE system which is defined by the functions ${P}_{i,j}$, ${Q}_{i}$ and ${R}_{i}$ must be specified in pdedef.
The initial values of the functions $U\left(x,t\right)$ and $V\left(t\right)$ must be given at $t={t}_{0}$. These values are calculated in uvinit.
The functions ${R}_{i}$ which may be thought of as fluxes, are also used in the definition of the boundary conditions. The boundary conditions must have the form
$\beta_i(x,t)\, R_i\left(x,t,U,U_x,V\right) = \gamma_i\left(x,t,U,U_x,V,\dot{V}\right), \quad i=1,2,\dots,\mathrm{npde},$ (4)
where $x=a$ or $x=b$. The functions ${\gamma }_{i}$ may only depend linearly on $\stackrel{.}{V}$.
The boundary conditions must be specified in bndary.
The algebraic-differential equation system which is defined by the functions ${F}_{i}$ must be specified in odedef. You must also specify the coupling points $\xi$ in the array xi. Thus, the problem is subject to the following restrictions:
1. (i) in (1), ${\stackrel{.}{V}}_{\mathit{j}}\left(t\right)$, for $\mathit{j}=1,2,\dots ,{\mathbf{nv}}$, may only appear linearly in the functions ${Q}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$, with a similar restriction for $\gamma$;
2. (ii) ${P}_{\mathit{i},j}$ and the flux ${R}_{\mathit{i}}$ must not depend on any time derivatives;
3. (iii) ${t}_{0}<{t}_{\mathrm{out}}$, so that integration is in the forward direction;
4. (iv) the evaluation of the functions ${P}_{i,j}$, ${Q}_{i}$ and ${R}_{i}$ is done at both the break-points and internally selected points for each element in turn, that is ${P}_{i,j}$, ${Q}_{i}$ and ${R}_{i}$ are evaluated twice at each break-point. Any discontinuities in these functions must therefore be at one or more of the mesh points;
5. (v) at least one of the functions ${P}_{i,j}$ must be nonzero so that there is a time derivative present in the PDE problem;
6. (vi) if $m>0$ and ${x}_{1}=0.0$, which is the left boundary point, then it must be ensured that the PDE solution is bounded at this point. This can be done either by specifying the solution at $x=0.0$ or by specifying a zero flux there, that is ${\beta }_{i}=1.0$ and ${\gamma }_{i}=0.0$.
The parabolic equations are approximated by a system of ODEs in time for the values of ${U}_{i}$ at the mesh points. This ODE system is obtained by approximating the PDE solution between each pair of break-points by a Chebyshev polynomial of degree npoly. The interval between each pair of break-points is treated by d03pjf/d03pja as an element, and on this element, a polynomial and its space and time derivatives are made to satisfy the system of PDEs at ${\mathbf{npoly}}-1$ spatial points, which are chosen internally by the code and the break-points. The user-defined break-points and the internally selected points together define the mesh. The smallest value that npoly can take is one, in which case, the solution is approximated by piecewise linear polynomials between consecutive break-points and the method is similar to an ordinary finite element method.
In total there are $\left({\mathbf{nbkpts}}-1\right)×{\mathbf{npoly}}+1$ mesh points in the spatial direction, and ${\mathbf{npde}}×\left(\left({\mathbf{nbkpts}}-1\right)×{\mathbf{npoly}}+1\right)+{\mathbf{nv}}$ ODEs in the time direction; one ODE at each break-point for each PDE component, ${\mathbf{npoly}}-1$ ODEs for each PDE component between each pair of break-points, and nv coupled ODEs. The system is then integrated forwards in time using a Backward Differentiation Formula (BDF) method or a Theta method.
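The discretization strategy just described, in which the PDEs are reduced to a system of ODEs on a spatial mesh and the ODEs are then integrated with a BDF method, is the classical method of lines. The following self-contained Python sketch illustrates the same idea for the model problem $u_t = u_{xx}$ on $[0,1]$ with zero Dirichlet boundary conditions; it uses a second-order finite-difference Laplacian and first-order BDF (backward Euler) in place of the routine's Chebyshev ${C}^{0}$ collocation and variable-order BDF, and it does not call the NAG routine itself.

```python
import numpy as np

def heat_mol_bdf1(n=64, t_end=0.1, dt=1.0e-3):
    """Method of lines for u_t = u_xx on [0,1], u(0,t) = u(1,t) = 0:
    discretize in space, then integrate the ODE system with BDF1."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    u = np.sin(np.pi * x[1:-1])            # initial data at interior nodes
    # Second-order central-difference Laplacian (Dirichlet boundaries)
    L = (np.diag(-2.0 * np.ones(n - 1))
         + np.diag(np.ones(n - 2), 1)
         + np.diag(np.ones(n - 2), -1)) / h**2
    # BDF1 (backward Euler) step: (I - dt*L) u^{k+1} = u^k
    A = np.eye(n - 1) - dt * L
    for _ in range(int(round(t_end / dt))):
        u = np.linalg.solve(A, u)
    return x, np.concatenate(([0.0], u, [0.0]))
```

For the single-mode initial data used here the numerical solution can be compared with the exact decay $\mathrm{e}^{-\pi^2 t}\sin(\pi x)$, which it reproduces to a few parts in a thousand at these step sizes.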
## 4 References
Berzins M (1990) Developments in the NAG Library software for parabolic equations Scientific Software Systems (eds J C Mason and M G Cox) 59–72 Chapman and Hall
Berzins M and Dew P M (1991) Algorithm 690: Chebyshev polynomial software for elliptic-parabolic systems of PDEs ACM Trans. Math. Software 17 178–206
Berzins M, Dew P M and Furzeland R M (1988) Software tools for time-dependent equations in simulation and optimization of large systems Proc. IMA Conf. Simulation and Optimization (ed A J Osiadcz) 35–50 Clarendon Press, Oxford
Berzins M and Furzeland R M (1992) An adaptive theta method for the solution of stiff and nonstiff differential equations Appl. Numer. Math. 9 1–19
Zaturska N B, Drazin P G and Banks W H H (1988) On the flow of a viscous fluid driven along a channel by a suction at porous walls Fluid Dynamics Research 4
## 5 Arguments
1: $\mathbf{npde}$Integer Input
On entry: the number of PDEs to be solved.
Constraint: ${\mathbf{npde}}\ge 1$.
2: $\mathbf{m}$Integer Input
On entry: the coordinate system used:
${\mathbf{m}}=0$
Indicates Cartesian coordinates.
${\mathbf{m}}=1$
Indicates cylindrical polar coordinates.
${\mathbf{m}}=2$
Indicates spherical polar coordinates.
Constraint: ${\mathbf{m}}=0$, $1$ or $2$.
3: $\mathbf{ts}$Real (Kind=nag_wp) Input/Output
On entry: the initial value of the independent variable $t$.
On exit: the value of $t$ corresponding to the solution values in u. Normally ${\mathbf{ts}}={\mathbf{tout}}$.
Constraint: ${\mathbf{ts}}<{\mathbf{tout}}$.
4: $\mathbf{tout}$Real (Kind=nag_wp) Input
On entry: the final value of $t$ to which the integration is to be carried out.
5: $\mathbf{pdedef}$Subroutine, supplied by the user. External Procedure
pdedef must compute the functions ${P}_{i,j}$, ${Q}_{i}$ and ${R}_{i}$ which define the system of PDEs. The functions may depend on $x$, $t$, $U$, ${U}_{x}$ and $V$; ${Q}_{i}$ may depend linearly on $\stackrel{.}{V}$. The functions must be evaluated at a set of points.
The specification of pdedef for d03pjf is:
Fortran Interface
Subroutine pdedef ( npde, t, x, nptl, u, ux, nv, v, vdot, p, q, r, ires)
Integer, Intent (In) :: npde, nptl, nv
Integer, Intent (Inout) :: ires
Real (Kind=nag_wp), Intent (In) :: t, x(nptl), u(npde,nptl), ux(npde,nptl), v(nv), vdot(nv)
Real (Kind=nag_wp), Intent (Out) :: p(npde,npde,nptl), q(npde,nptl), r(npde,nptl)
void pdedef_ (const Integer *npde, const double *t, const double x[], const Integer *nptl, const double u[], const double ux[], const Integer *nv, const double v[], const double vdot[], double p[], double q[], double r[], Integer *ires)
The specification of pdedef for d03pja is:
Fortran Interface
Subroutine pdedef ( npde, t, x, nptl, u, ux, nv, v, vdot, p, q, r, ires, iuser, ruser)
Integer, Intent (In) :: npde, nptl, nv
Integer, Intent (Inout) :: ires, iuser(*)
Real (Kind=nag_wp), Intent (In) :: t, x(nptl), u(npde,nptl), ux(npde,nptl), v(nv), vdot(nv)
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Real (Kind=nag_wp), Intent (Out) :: p(npde,npde,nptl), q(npde,nptl), r(npde,nptl)
void pdedef_ (const Integer *npde, const double *t, const double x[], const Integer *nptl, const double u[], const double ux[], const Integer *nv, const double v[], const double vdot[], double p[], double q[], double r[], Integer *ires, Integer iuser[], double ruser[])
1: $\mathbf{npde}$Integer Input
On entry: the number of PDEs in the system.
2: $\mathbf{t}$Real (Kind=nag_wp) Input
On entry: the current value of the independent variable $t$.
3: $\mathbf{x}\left({\mathbf{nptl}}\right)$Real (Kind=nag_wp) array Input
On entry: contains a set of mesh points at which ${P}_{i,j}$, ${Q}_{i}$ and ${R}_{i}$ are to be evaluated. ${\mathbf{x}}\left(1\right)$ and ${\mathbf{x}}\left({\mathbf{nptl}}\right)$ contain successive user-supplied break-points and the elements of the array will satisfy ${\mathbf{x}}\left(1\right)<{\mathbf{x}}\left(2\right)<\cdots <{\mathbf{x}}\left({\mathbf{nptl}}\right)$.
4: $\mathbf{nptl}$Integer Input
On entry: the number of points at which evaluations are required (the value of ${\mathbf{npoly}}+1$).
5: $\mathbf{u}\left({\mathbf{npde}},{\mathbf{nptl}}\right)$Real (Kind=nag_wp) array Input
On entry: ${\mathbf{u}}\left(\mathit{i},\mathit{j}\right)$ contains the value of the component ${U}_{\mathit{i}}\left(x,t\right)$ where $x={\mathbf{x}}\left(\mathit{j}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nptl}}$.
6: $\mathbf{ux}\left({\mathbf{npde}},{\mathbf{nptl}}\right)$Real (Kind=nag_wp) array Input
On entry: ${\mathbf{ux}}\left(\mathit{i},\mathit{j}\right)$ contains the value of the component $\frac{\partial {U}_{\mathit{i}}\left(x,t\right)}{\partial x}$ where $x={\mathbf{x}}\left(\mathit{j}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nptl}}$.
7: $\mathbf{nv}$Integer Input
On entry: the number of coupled ODEs in the system.
8: $\mathbf{v}\left({\mathbf{nv}}\right)$Real (Kind=nag_wp) array Input
On entry: if ${\mathbf{nv}}>0$, ${\mathbf{v}}\left(\mathit{i}\right)$ contains the value of the component ${V}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nv}}$.
9: $\mathbf{vdot}\left({\mathbf{nv}}\right)$Real (Kind=nag_wp) array Input
On entry: if ${\mathbf{nv}}>0$, ${\mathbf{vdot}}\left(\mathit{i}\right)$ contains the value of component ${\stackrel{.}{V}}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nv}}$.
Note: ${\stackrel{.}{V}}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nv}}$, may only appear linearly in ${Q}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,{\mathbf{npde}}$.
10: $\mathbf{p}\left({\mathbf{npde}},{\mathbf{npde}},{\mathbf{nptl}}\right)$Real (Kind=nag_wp) array Output
On exit: ${\mathbf{p}}\left(\mathit{i},\mathit{j},\mathit{k}\right)$ must be set to the value of ${P}_{\mathit{i},\mathit{j}}\left(x,t,U,{U}_{x},V\right)$ where $x={\mathbf{x}}\left(\mathit{k}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$, $\mathit{j}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{k}=1,2,\dots ,{\mathbf{nptl}}$.
11: $\mathbf{q}\left({\mathbf{npde}},{\mathbf{nptl}}\right)$Real (Kind=nag_wp) array Output
On exit: ${\mathbf{q}}\left(\mathit{i},\mathit{j}\right)$ must be set to the value of ${Q}_{\mathit{i}}\left(x,t,U,{U}_{x},V,\stackrel{.}{V}\right)$ where $x={\mathbf{x}}\left(\mathit{j}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nptl}}$.
12: $\mathbf{r}\left({\mathbf{npde}},{\mathbf{nptl}}\right)$Real (Kind=nag_wp) array Output
On exit: ${\mathbf{r}}\left(\mathit{i},\mathit{j}\right)$ must be set to the value of ${R}_{\mathit{i}}\left(x,t,U,{U}_{x},V\right)$ where $x={\mathbf{x}}\left(\mathit{j}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nptl}}$.
13: $\mathbf{ires}$Integer Input/Output
On entry: set to $-1$ or $1$.
On exit: should usually remain unchanged. However, you may set ires to force the integration routine to take certain actions as described below:
${\mathbf{ires}}=2$
Indicates to the integrator that control should be passed back immediately to the calling (sub)routine with the error indicator set to ${\mathbf{ifail}}={\mathbf{6}}$.
${\mathbf{ires}}=3$
Indicates to the integrator that the current time step should be abandoned and a smaller time step used instead. You may wish to set ${\mathbf{ires}}=3$ when a physically meaningless input or output value has been generated. If you consecutively set ${\mathbf{ires}}=3$, d03pjf/d03pja returns to the calling subroutine with the error indicator set to ${\mathbf{ifail}}={\mathbf{4}}$.
Note: the following are additional arguments for specific use with d03pja. Users of d03pjf therefore need not read the remainder of this description.
14: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
15: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
pdedef is called with the arguments iuser and ruser as supplied to d03pjf/d03pja. You should use the arrays iuser and ruser to supply information to pdedef.
pdedef must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which d03pjf/d03pja is called. Arguments denoted as Input must not be changed by this procedure.
Note: pdedef should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by d03pjf/d03pja. If your code inadvertently does return any NaNs or infinities, d03pjf/d03pja is likely to produce unexpected results.
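As an illustration of the callback contract above (this sketch is not part of the Library documentation; the `Integer` typedef is an assumption about the platform's NAG integer width), a minimal pdedef for the single heat equation ${U}_{t}={U}_{xx}$ with ${\mathbf{npde}}=1$ sets ${P}_{1,1}=1$, ${Q}_{1}=0$ and ${R}_{1}={U}_{x}$ at every point, remembering that the two-dimensional arrays are Fortran column-major:

```c
/* Illustrative pdedef for the single heat equation U_t = U_xx
   (npde = 1, no coupled ODEs), for which P(1,1) = 1, Q(1) = 0 and
   R(1) = U_x at every evaluation point.  The 2-D Fortran arrays are
   column-major: element (i,j) of u(npde,nptl) is u[j*npde + i] with
   0-based i and j in C. */
typedef long Integer;  /* assumption: width of the NAG Integer type */

void pdedef_(const Integer *npde, const double *t, const double x[],
             const Integer *nptl, const double u[], const double ux[],
             const Integer *nv, const double v[], const double vdot[],
             double p[], double q[], double r[], Integer *ires)
{
    (void)t; (void)x; (void)u; (void)nv; (void)v; (void)vdot; (void)ires;
    for (Integer j = 0; j < *nptl; ++j) {
        p[j * (*npde) * (*npde)] = 1.0;             /* P(1,1) at point j */
        q[j * (*npde)]           = 0.0;             /* Q(1)   at point j */
        r[j * (*npde)]           = ux[j * (*npde)]; /* R(1) = U_x        */
    }
}
```

A production pdedef would of course fill all npde × npde entries of p at each point; here only the single entry exists.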
6: $\mathbf{bndary}$Subroutine, supplied by the user. External Procedure
bndary must compute the functions ${\beta }_{i}$ and ${\gamma }_{i}$ which define the boundary conditions as in equation (4).
The specification of bndary for d03pjf is:
Fortran Interface
Subroutine bndary ( npde, t, u, ux, nv, v, vdot, ibnd, beta, gamma, ires)
Integer, Intent (In) :: npde, nv, ibnd
Integer, Intent (Inout) :: ires
Real (Kind=nag_wp), Intent (In) :: t, u(npde), ux(npde), v(nv), vdot(nv)
Real (Kind=nag_wp), Intent (Out) :: beta(npde), gamma(npde)
void bndary_ (const Integer *npde, const double *t, const double u[], const double ux[], const Integer *nv, const double v[], const double vdot[], const Integer *ibnd, double beta[], double gamma[], Integer *ires)
The specification of bndary for d03pja is:
Fortran Interface
Subroutine bndary ( npde, t, u, ux, nv, v, vdot, ibnd, beta, gamma, ires, iuser, ruser)
Integer, Intent (In) :: npde, nv, ibnd
Integer, Intent (Inout) :: ires, iuser(*)
Real (Kind=nag_wp), Intent (In) :: t, u(npde), ux(npde), v(nv), vdot(nv)
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Real (Kind=nag_wp), Intent (Out) :: beta(npde), gamma(npde)
void bndary_ (const Integer *npde, const double *t, const double u[], const double ux[], const Integer *nv, const double v[], const double vdot[], const Integer *ibnd, double beta[], double gamma[], Integer *ires, Integer iuser[], double ruser[])
1: $\mathbf{npde}$Integer Input
On entry: the number of PDEs in the system.
2: $\mathbf{t}$Real (Kind=nag_wp) Input
On entry: the current value of the independent variable $t$.
3: $\mathbf{u}\left({\mathbf{npde}}\right)$Real (Kind=nag_wp) array Input
On entry: ${\mathbf{u}}\left(\mathit{i}\right)$ contains the value of the component ${U}_{\mathit{i}}\left(x,t\right)$ at the boundary specified by ibnd, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$.
4: $\mathbf{ux}\left({\mathbf{npde}}\right)$Real (Kind=nag_wp) array Input
On entry: ${\mathbf{ux}}\left(\mathit{i}\right)$ contains the value of the component $\frac{\partial {U}_{\mathit{i}}\left(x,t\right)}{\partial x}$ at the boundary specified by ibnd, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$.
5: $\mathbf{nv}$Integer Input
On entry: the number of coupled ODEs in the system.
6: $\mathbf{v}\left({\mathbf{nv}}\right)$Real (Kind=nag_wp) array Input
On entry: if ${\mathbf{nv}}>0$, ${\mathbf{v}}\left(\mathit{i}\right)$ contains the value of the component ${V}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nv}}$.
7: $\mathbf{vdot}\left({\mathbf{nv}}\right)$Real (Kind=nag_wp) array Input
On entry: if ${\mathbf{nv}}>0$, ${\mathbf{vdot}}\left(\mathit{i}\right)$ contains the value of component ${\stackrel{.}{V}}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nv}}$.
Note: ${\stackrel{.}{V}}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nv}}$, may only appear linearly in ${Q}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,{\mathbf{npde}}$.
8: $\mathbf{ibnd}$Integer Input
On entry: specifies which boundary conditions are to be evaluated.
${\mathbf{ibnd}}=0$
bndary must set up the coefficients of the left-hand boundary, $x=a$.
${\mathbf{ibnd}}\ne 0$
bndary must set up the coefficients of the right-hand boundary, $x=b$.
9: $\mathbf{beta}\left({\mathbf{npde}}\right)$Real (Kind=nag_wp) array Output
On exit: ${\mathbf{beta}}\left(\mathit{i}\right)$ must be set to the value of ${\beta }_{\mathit{i}}\left(x,t\right)$ at the boundary specified by ibnd, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$.
10: $\mathbf{gamma}\left({\mathbf{npde}}\right)$Real (Kind=nag_wp) array Output
On exit: ${\mathbf{gamma}}\left(\mathit{i}\right)$ must be set to the value of ${\gamma }_{\mathit{i}}\left(x,t,U,{U}_{x},V,\stackrel{.}{V}\right)$ at the boundary specified by ibnd, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$.
11: $\mathbf{ires}$Integer Input/Output
On entry: set to $-1$ or $1$.
On exit: should usually remain unchanged. However, you may set ires to force the integration routine to take certain actions as described below:
${\mathbf{ires}}=2$
Indicates to the integrator that control should be passed back immediately to the calling (sub)routine with the error indicator set to ${\mathbf{ifail}}={\mathbf{6}}$.
${\mathbf{ires}}=3$
Indicates to the integrator that the current time step should be abandoned and a smaller time step used instead. You may wish to set ${\mathbf{ires}}=3$ when a physically meaningless input or output value has been generated. If you consecutively set ${\mathbf{ires}}=3$, d03pjf/d03pja returns to the calling subroutine with the error indicator set to ${\mathbf{ifail}}={\mathbf{4}}$.
Note: the following are additional arguments for specific use with d03pja. Users of d03pjf therefore need not read the remainder of this description.
12: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
13: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
bndary is called with the arguments iuser and ruser as supplied to d03pjf/d03pja. You should use the arrays iuser and ruser to supply information to bndary.
bndary must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which d03pjf/d03pja is called. Arguments denoted as Input must not be changed by this procedure.
Note: bndary should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by d03pjf/d03pja. If your code inadvertently does return any NaNs or infinities, d03pjf/d03pja is likely to produce unexpected results.
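The boundary callback can be sketched as follows (illustrative only, not from the Library documentation; the `Integer` typedef and the particular boundary values are assumptions). It imposes time-independent Dirichlet values on a single PDE by setting ${\beta }_{1}=0$ and ${\gamma }_{1}$ to the residual $U-\left(\text{target value}\right)$ at whichever boundary ibnd selects:

```c
/* Illustrative bndary for a single PDE (npde = 1) with fixed Dirichlet
   values U(a,t) = 1 and U(b,t) = 0: beta is set to zero and gamma to
   the residual U - (target value) at the boundary selected by ibnd. */
typedef long Integer;  /* assumption: width of the NAG Integer type */

void bndary_(const Integer *npde, const double *t, const double u[],
             const double ux[], const Integer *nv, const double v[],
             const double vdot[], const Integer *ibnd,
             double beta[], double gamma[], Integer *ires)
{
    (void)npde; (void)t; (void)ux; (void)nv; (void)v; (void)vdot; (void)ires;
    beta[0] = 0.0;
    if (*ibnd == 0)
        gamma[0] = u[0] - 1.0;  /* left-hand boundary, x = a  */
    else
        gamma[0] = u[0] - 0.0;  /* right-hand boundary, x = b */
}
```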
7: $\mathbf{u}\left({\mathbf{neqn}}\right)$Real (Kind=nag_wp) array Input/Output
On entry: if ${\mathbf{ind}}=1$ the value of u must be unchanged from the previous call.
On exit: the computed solution ${U}_{\mathit{i}}\left({x}_{\mathit{j}},t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{npts}}$, and ${V}_{\mathit{k}}\left(t\right)$, for $\mathit{k}=1,2,\dots ,{\mathbf{nv}}$, evaluated at $t={\mathbf{ts}}$, as follows:
• ${\mathbf{u}}\left({\mathbf{npde}}×\left(\mathit{j}-1\right)+\mathit{i}\right)$ contain ${U}_{\mathit{i}}\left({x}_{\mathit{j}},t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{npts}}$, and
• ${\mathbf{u}}\left({\mathbf{npts}}×{\mathbf{npde}}+\mathit{i}\right)$ contain ${V}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nv}}$.
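The flattened layout described above can be captured in two small index helpers (illustrative sketch; indices i, j, k are 1-based as in the documentation, and the helpers return 0-based offsets for C-style access):

```c
/* Index helpers for the flattened solution vector u(neqn):
   PDE component i at mesh point j occupies position npde*(j-1)+i and
   ODE component k occupies position npts*npde+k (both 1-based, as in
   the documentation).  The helpers return 0-based offsets for C. */
typedef long Integer;

Integer u_pde_index(Integer npde, Integer i, Integer j)
{
    return npde * (j - 1) + i - 1;   /* U_i(x_j, t) */
}

Integer u_ode_index(Integer npde, Integer npts, Integer k)
{
    return npts * npde + k - 1;      /* V_k(t) */
}
```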
8: $\mathbf{nbkpts}$Integer Input
On entry: the number of break-points in the interval $\left[a,b\right]$.
Constraint: ${\mathbf{nbkpts}}\ge 2$.
9: $\mathbf{xbkpts}\left({\mathbf{nbkpts}}\right)$Real (Kind=nag_wp) array Input
On entry: the values of the break-points in the space direction. ${\mathbf{xbkpts}}\left(1\right)$ must specify the left-hand boundary, $a$, and ${\mathbf{xbkpts}}\left({\mathbf{nbkpts}}\right)$ must specify the right-hand boundary, $b$.
Constraint: ${\mathbf{xbkpts}}\left(1\right)<{\mathbf{xbkpts}}\left(2\right)<\cdots <{\mathbf{xbkpts}}\left({\mathbf{nbkpts}}\right)$.
10: $\mathbf{npoly}$Integer Input
On entry: the degree of the Chebyshev polynomial to be used in approximating the PDE solution between each pair of break-points.
Constraint: $1\le {\mathbf{npoly}}\le 49$.
11: $\mathbf{npts}$Integer Input
On entry: the number of mesh points in the interval $\left[a,b\right]$.
Constraint: ${\mathbf{npts}}=\left({\mathbf{nbkpts}}-1\right)×{\mathbf{npoly}}+1$.
12: $\mathbf{x}\left({\mathbf{npts}}\right)$Real (Kind=nag_wp) array Output
On exit: the mesh points chosen by d03pjf/d03pja in the spatial direction. The values of x will satisfy ${\mathbf{x}}\left(1\right)<{\mathbf{x}}\left(2\right)<\cdots <{\mathbf{x}}\left({\mathbf{npts}}\right)$.
13: $\mathbf{nv}$Integer Input
On entry: the number of coupled ODE components.
Constraint: ${\mathbf{nv}}\ge 0$.
14: $\mathbf{odedef}$Subroutine, supplied by the NAG Library or the user. External Procedure
odedef must evaluate the functions $F$, which define the system of ODEs, as given in (3).
If you wish to compute the solution of a system of PDEs only (${\mathbf{nv}}=0$), odedef must be the dummy routine d03pck for d03pjf (or d53pck for d03pja). d03pck and d53pck are included in the NAG Library.
The specification of odedef for d03pjf is:
Fortran Interface
Subroutine odedef ( npde, t, nv, v, vdot, nxi, xi, ucp, ucpx, rcp, ucpt, ucptx, f, ires)
Integer, Intent (In) :: npde, nv, nxi
Integer, Intent (Inout) :: ires
Real (Kind=nag_wp), Intent (In) :: t, v(nv), vdot(nv), xi(nxi), ucp(npde,nxi), ucpx(npde,nxi), rcp(npde,nxi), ucpt(npde,nxi), ucptx(npde,nxi)
Real (Kind=nag_wp), Intent (Out) :: f(nv)
void odedef_ (const Integer *npde, const double *t, const Integer *nv, const double v[], const double vdot[], const Integer *nxi, const double xi[], const double ucp[], const double ucpx[], const double rcp[], const double ucpt[], const double ucptx[], double f[], Integer *ires)
The specification of odedef for d03pja is:
Fortran Interface
Subroutine odedef ( npde, t, nv, v, vdot, nxi, xi, ucp, ucpx, rcp, ucpt, ucptx, f, ires, iuser, ruser)
Integer, Intent (In) :: npde, nv, nxi
Integer, Intent (Inout) :: ires, iuser(*)
Real (Kind=nag_wp), Intent (In) :: t, v(nv), vdot(nv), xi(nxi), ucp(npde,nxi), ucpx(npde,nxi), rcp(npde,nxi), ucpt(npde,nxi), ucptx(npde,nxi)
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Real (Kind=nag_wp), Intent (Out) :: f(nv)
void odedef_ (const Integer *npde, const double *t, const Integer *nv, const double v[], const double vdot[], const Integer *nxi, const double xi[], const double ucp[], const double ucpx[], const double rcp[], const double ucpt[], const double ucptx[], double f[], Integer *ires, Integer iuser[], double ruser[])
1: $\mathbf{npde}$Integer Input
On entry: the number of PDEs in the system.
2: $\mathbf{t}$Real (Kind=nag_wp) Input
On entry: the current value of the independent variable $t$.
3: $\mathbf{nv}$Integer Input
On entry: the number of coupled ODEs in the system.
4: $\mathbf{v}\left({\mathbf{nv}}\right)$Real (Kind=nag_wp) array Input
On entry: if ${\mathbf{nv}}>0$, ${\mathbf{v}}\left(\mathit{i}\right)$ contains the value of the component ${V}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nv}}$.
5: $\mathbf{vdot}\left({\mathbf{nv}}\right)$Real (Kind=nag_wp) array Input
On entry: if ${\mathbf{nv}}>0$, ${\mathbf{vdot}}\left(\mathit{i}\right)$ contains the value of component ${\stackrel{.}{V}}_{\mathit{i}}\left(t\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nv}}$.
6: $\mathbf{nxi}$Integer Input
On entry: the number of ODE/PDE coupling points.
7: $\mathbf{xi}\left({\mathbf{nxi}}\right)$Real (Kind=nag_wp) array Input
On entry: if ${\mathbf{nxi}}>0$, ${\mathbf{xi}}\left(\mathit{i}\right)$ contains the ODE/PDE coupling points, ${\xi }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{nxi}}$.
8: $\mathbf{ucp}\left({\mathbf{npde}},{\mathbf{nxi}}\right)$Real (Kind=nag_wp) array Input
On entry: if ${\mathbf{nxi}}>0$, ${\mathbf{ucp}}\left(\mathit{i},\mathit{j}\right)$ contains the value of ${U}_{\mathit{i}}\left(x,t\right)$ at the coupling point $x={\xi }_{\mathit{j}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nxi}}$.
9: $\mathbf{ucpx}\left({\mathbf{npde}},{\mathbf{nxi}}\right)$Real (Kind=nag_wp) array Input
On entry: if ${\mathbf{nxi}}>0$, ${\mathbf{ucpx}}\left(\mathit{i},\mathit{j}\right)$ contains the value of $\frac{\partial {U}_{\mathit{i}}\left(x,t\right)}{\partial x}$ at the coupling point $x={\xi }_{\mathit{j}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nxi}}$.
10: $\mathbf{rcp}\left({\mathbf{npde}},{\mathbf{nxi}}\right)$Real (Kind=nag_wp) array Input
On entry: ${\mathbf{rcp}}\left(\mathit{i},\mathit{j}\right)$ contains the value of the flux ${R}_{\mathit{i}}$ at the coupling point $x={\xi }_{\mathit{j}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nxi}}$.
11: $\mathbf{ucpt}\left({\mathbf{npde}},{\mathbf{nxi}}\right)$Real (Kind=nag_wp) array Input
On entry: if ${\mathbf{nxi}}>0$, ${\mathbf{ucpt}}\left(\mathit{i},\mathit{j}\right)$ contains the value of $\frac{\partial {U}_{\mathit{i}}}{\partial t}$ at the coupling point $x={\xi }_{\mathit{j}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nxi}}$.
12: $\mathbf{ucptx}\left({\mathbf{npde}},{\mathbf{nxi}}\right)$Real (Kind=nag_wp) array Input
On entry: ${\mathbf{ucptx}}\left(\mathit{i},\mathit{j}\right)$ contains the value of $\frac{{\partial }^{2}{U}_{\mathit{i}}}{\partial x\partial t}$ at the coupling point $x={\xi }_{\mathit{j}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nxi}}$.
13: $\mathbf{f}\left({\mathbf{nv}}\right)$Real (Kind=nag_wp) array Output
On exit: ${\mathbf{f}}\left(\mathit{i}\right)$ must contain the $\mathit{i}$th component of $F$, for $\mathit{i}=1,2,\dots ,{\mathbf{nv}}$, where $F$ is defined as
$F=G-A\stackrel{.}{V}-B\left[\begin{array}{c}{U}_{t}^{*}\\ {U}_{xt}^{*}\end{array}\right],$ (5)
or
$F=-A\stackrel{.}{V}-B\left[\begin{array}{c}{U}_{t}^{*}\\ {U}_{xt}^{*}\end{array}\right].$ (6)
The definition of $F$ is determined by the input value of ires.
14: $\mathbf{ires}$Integer Input/Output
On entry: the form of $F$ that must be returned in the array f.
${\mathbf{ires}}=1$
Equation (5) must be used.
${\mathbf{ires}}=-1$
Equation (6) must be used.
On exit: should usually remain unchanged. However, you may reset ires to force the integration routine to take certain actions as described below:
${\mathbf{ires}}=2$
Indicates to the integrator that control should be passed back immediately to the calling (sub)routine with the error indicator set to ${\mathbf{ifail}}={\mathbf{6}}$.
${\mathbf{ires}}=3$
Indicates to the integrator that the current time step should be abandoned and a smaller time step used instead. You may wish to set ${\mathbf{ires}}=3$ when a physically meaningless input or output value has been generated. If you consecutively set ${\mathbf{ires}}=3$, d03pjf/d03pja returns to the calling subroutine with the error indicator set to ${\mathbf{ifail}}={\mathbf{4}}$.
Note: the following are additional arguments for specific use with d03pja. Users of d03pjf therefore need not read the remainder of this description.
15: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
16: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
odedef is called with the arguments iuser and ruser as supplied to d03pjf/d03pja. You should use the arrays iuser and ruser to supply information to odedef.
odedef must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which d03pjf/d03pja is called. Arguments denoted as Input must not be changed by this procedure.
Note: odedef should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by d03pjf/d03pja. If your code inadvertently does return any NaNs or infinities, d03pjf/d03pja is likely to produce unexpected results.
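The two residual forms can be illustrated with a minimal sketch (not from the Library documentation; the `Integer` typedef and the particular ODE are assumptions). Take a single coupled ODE with $G=V$, $A=I$ and $B=0$, i.e. $\stackrel{.}{V}=V$ in residual form: equation (5) then reads $F=V-\stackrel{.}{V}$ and equation (6) reads $F=-\stackrel{.}{V}$:

```c
/* Illustrative odedef for a single coupled ODE with G = V, A = I and
   B = 0 (the ODE Vdot = V in residual form).  Equation (5) gives
   F = V - Vdot (ires = 1); equation (6) gives F = -Vdot (ires = -1). */
typedef long Integer;

void odedef_(const Integer *npde, const double *t, const Integer *nv,
             const double v[], const double vdot[], const Integer *nxi,
             const double xi[], const double ucp[], const double ucpx[],
             const double rcp[], const double ucpt[], const double ucptx[],
             double f[], Integer *ires)
{
    (void)npde; (void)t; (void)nv; (void)nxi; (void)xi;
    (void)ucp; (void)ucpx; (void)rcp; (void)ucpt; (void)ucptx;
    if (*ires == 1)
        f[0] = v[0] - vdot[0];  /* F = G - A*Vdot, equation (5) */
    else
        f[0] = -vdot[0];        /* F = -A*Vdot,    equation (6) */
}
```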
15: $\mathbf{nxi}$Integer Input
On entry: the number of ODE/PDE coupling points.
Constraints:
• if ${\mathbf{nv}}=0$, ${\mathbf{nxi}}=0$;
• if ${\mathbf{nv}}>0$, ${\mathbf{nxi}}\ge 0$.
16: $\mathbf{xi}\left({\mathbf{nxi}}\right)$Real (Kind=nag_wp) array Input
On entry: ${\mathbf{xi}}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nxi}}$, must be set to the ODE/PDE coupling points.
Constraint: ${\mathbf{xbkpts}}\left(1\right)\le {\mathbf{xi}}\left(1\right)<{\mathbf{xi}}\left(2\right)<\cdots <{\mathbf{xi}}\left({\mathbf{nxi}}\right)\le {\mathbf{xbkpts}}\left({\mathbf{nbkpts}}\right)$.
17: $\mathbf{neqn}$Integer Input
On entry: the number of ODEs in the time direction.
Constraint: ${\mathbf{neqn}}={\mathbf{npde}}×{\mathbf{npts}}+{\mathbf{nv}}$.
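The size constraints above (${\mathbf{npts}}=\left({\mathbf{nbkpts}}-1\right)×{\mathbf{npoly}}+1$ and ${\mathbf{neqn}}={\mathbf{npde}}×{\mathbf{npts}}+{\mathbf{nv}}$) are easy to get wrong when setting up a call, so it can be worth computing them from the inputs; a small sketch:

```c
/* Helpers encoding the size constraints stated in the documentation:
   npts = (nbkpts-1)*npoly + 1 and neqn = npde*npts + nv. */
typedef long Integer;

Integer required_npts(Integer nbkpts, Integer npoly)
{
    return (nbkpts - 1) * npoly + 1;
}

Integer required_neqn(Integer npde, Integer npts, Integer nv)
{
    return npde * npts + nv;
}
```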
18: $\mathbf{uvinit}$Subroutine, supplied by the user. External Procedure
uvinit must compute the initial values of the PDE and the ODE components ${U}_{\mathit{i}}\left({x}_{\mathit{j}},{t}_{0}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{npts}}$, and ${V}_{\mathit{k}}\left({t}_{0}\right)$, for $\mathit{k}=1,2,\dots ,{\mathbf{nv}}$.
The specification of uvinit for d03pjf is:
Fortran Interface
Subroutine uvinit ( npde, npts, x, u, nv, v)
Integer, Intent (In) :: npde, npts, nv
Real (Kind=nag_wp), Intent (In) :: x(npts)
Real (Kind=nag_wp), Intent (Out) :: u(npde,npts), v(nv)
void uvinit_ (const Integer *npde, const Integer *npts, const double x[], double u[], const Integer *nv, double v[])
The specification of uvinit for d03pja is:
Fortran Interface
Subroutine uvinit ( npde, npts, x, u, nv, v, iuser, ruser)
Integer, Intent (In) :: npde, npts, nv
Integer, Intent (Inout) :: iuser(*)
Real (Kind=nag_wp), Intent (In) :: x(npts)
Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
Real (Kind=nag_wp), Intent (Out) :: u(npde,npts), v(nv)
void uvinit_ (const Integer *npde, const Integer *npts, const double x[], double u[], const Integer *nv, double v[], Integer iuser[], double ruser[])
1: $\mathbf{npde}$Integer Input
On entry: the number of PDEs in the system.
2: $\mathbf{npts}$Integer Input
On entry: the number of mesh points in the interval $\left[a,b\right]$.
3: $\mathbf{x}\left({\mathbf{npts}}\right)$Real (Kind=nag_wp) array Input
On entry: ${\mathbf{x}}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npts}}$, contains the current values of the space variable ${x}_{\mathit{i}}$.
4: $\mathbf{u}\left({\mathbf{npde}},{\mathbf{npts}}\right)$Real (Kind=nag_wp) array Output
On exit: ${\mathbf{u}}\left(\mathit{i},\mathit{j}\right)$ contains the value of the component ${U}_{\mathit{i}}\left({x}_{\mathit{j}},{t}_{0}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{npde}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{npts}}$.
5: $\mathbf{nv}$Integer Input
On entry: the number of coupled ODEs in the system.
6: $\mathbf{v}\left({\mathbf{nv}}\right)$Real (Kind=nag_wp) array Output
On exit: ${\mathbf{v}}\left(\mathit{i}\right)$ contains the value of component ${V}_{\mathit{i}}\left({t}_{0}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nv}}$.
Note: the following are additional arguments for specific use with d03pja. Users of d03pjf therefore need not read the remainder of this description.
7: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
8: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
uvinit is called with the arguments iuser and ruser as supplied to d03pjf/d03pja. You should use the arrays iuser and ruser to supply information to uvinit.
uvinit must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which d03pjf/d03pja is called. Arguments denoted as Input must not be changed by this procedure.
Note: uvinit should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by d03pjf/d03pja. If your code inadvertently does return any NaNs or infinities, d03pjf/d03pja is likely to produce unexpected results.
19: $\mathbf{rtol}\left(*\right)$Real (Kind=nag_wp) array Input
Note: the dimension of the array rtol must be at least $1$ if ${\mathbf{itol}}=1$ or $2$ and at least ${\mathbf{neqn}}$ if ${\mathbf{itol}}=3$ or $4$.
On entry: the relative local error tolerance.
Constraint: ${\mathbf{rtol}}\left(i\right)\ge 0.0$ for all relevant $i$.
20: $\mathbf{atol}\left(*\right)$Real (Kind=nag_wp) array Input
Note: the dimension of the array atol must be at least $1$ if ${\mathbf{itol}}=1$ or $3$ and at least ${\mathbf{neqn}}$ if ${\mathbf{itol}}=2$ or $4$.
On entry: the absolute local error tolerance.
Constraint: ${\mathbf{atol}}\left(i\right)\ge 0.0$ for all relevant $i$.
Note: corresponding elements of rtol and atol cannot both be $0.0$.
21: $\mathbf{itol}$Integer Input
On entry: a value to indicate the form of the local error test. itol indicates to d03pjf/d03pja whether to interpret either or both of rtol or atol as a vector or scalar. The error test to be satisfied is $‖{e}_{i}/{w}_{i}‖<1.0$, where ${w}_{i}$ is defined as follows:
itol   rtol     atol     ${w}_{i}$
$1$    scalar   scalar   ${\mathbf{rtol}}\left(1\right)×\left|{U}_{i}\right|+{\mathbf{atol}}\left(1\right)$
$2$    scalar   vector   ${\mathbf{rtol}}\left(1\right)×\left|{U}_{i}\right|+{\mathbf{atol}}\left(i\right)$
$3$    vector   scalar   ${\mathbf{rtol}}\left(i\right)×\left|{U}_{i}\right|+{\mathbf{atol}}\left(1\right)$
$4$    vector   vector   ${\mathbf{rtol}}\left(i\right)×\left|{U}_{i}\right|+{\mathbf{atol}}\left(i\right)$
In the above, ${e}_{\mathit{i}}$ denotes the estimated local error for the $\mathit{i}$th component of the coupled PDE/ODE system in time, ${\mathbf{u}}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{neqn}}$.
The choice of norm used is defined by the argument norm.
Constraint: $1\le {\mathbf{itol}}\le 4$.
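The four itol cases in the table reduce to choosing element $i$ or element $1$ of each tolerance array; a sketch of the weight computation (illustrative, with a 0-based index $i$ for C access):

```c
/* The weight w_i from the itol table: rtol is treated as a vector for
   itol = 3 or 4, atol as a vector for itol = 2 or 4; otherwise the
   first (scalar) element is used.  The index i is 0-based here. */
#include <math.h>

double error_weight(int itol, const double rtol[], const double atol[],
                    int i, double ui)
{
    double r = (itol == 3 || itol == 4) ? rtol[i] : rtol[0];
    double a = (itol == 2 || itol == 4) ? atol[i] : atol[0];
    return r * fabs(ui) + a;
}
```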
22: $\mathbf{norm}$Character(1) Input
On entry: the type of norm to be used.
${\mathbf{norm}}=\text{'M'}$
Maximum norm.
${\mathbf{norm}}=\text{'A'}$
Averaged ${L}_{2}$ norm.
If ${{\mathbf{u}}}_{\mathrm{norm}}$ denotes the norm of the vector u of length neqn, then for the averaged ${L}_{2}$ norm
${{\mathbf{u}}}_{\mathrm{norm}}=\sqrt{\frac{1}{{\mathbf{neqn}}}\sum _{i=1}^{{\mathbf{neqn}}}{\left({u}_{i}/{w}_{i}\right)}^{2}},$
while for the maximum norm
${{\mathbf{u}}}_{\mathrm{norm}}=\underset{i}{\mathrm{max}}\left|{u}_{i}/{w}_{i}\right|.$
See the description of itol for the formulation of the weight vector $w$.
Constraint: ${\mathbf{norm}}=\text{'M'}$ or $\text{'A'}$.
23: $\mathbf{laopt}$Character(1) Input
On entry: the type of matrix algebra required.
${\mathbf{laopt}}=\text{'F'}$
Full matrix methods to be used.
${\mathbf{laopt}}=\text{'B'}$
Banded matrix methods to be used.
${\mathbf{laopt}}=\text{'S'}$
Sparse matrix methods to be used.
Constraint: ${\mathbf{laopt}}=\text{'F'}$, $\text{'B'}$ or $\text{'S'}$.
Note: you are recommended to use the banded option when no coupled ODEs are present (i.e., ${\mathbf{nv}}=0$).
24: $\mathbf{algopt}\left(30\right)$Real (Kind=nag_wp) array Input
On entry: may be set to control various options available in the integrator. If you wish to employ all the default options, ${\mathbf{algopt}}\left(1\right)$ should be set to $0.0$. Default values will also be used for any other elements of algopt set to zero. The permissible values, default values, and meanings are as follows:
${\mathbf{algopt}}\left(1\right)$
Selects the ODE integration method to be used. If ${\mathbf{algopt}}\left(1\right)=1.0$, a BDF method is used and if ${\mathbf{algopt}}\left(1\right)=2.0$, a Theta method is used. The default value is ${\mathbf{algopt}}\left(1\right)=1.0$.
If ${\mathbf{algopt}}\left(1\right)=2.0$, ${\mathbf{algopt}}\left(\mathit{i}\right)$, for $\mathit{i}=2,3,4$ are not used.
${\mathbf{algopt}}\left(2\right)$
Specifies the maximum order of the BDF integration formula to be used. ${\mathbf{algopt}}\left(2\right)$ may be $1.0$, $2.0$, $3.0$, $4.0$ or $5.0$. The default value is ${\mathbf{algopt}}\left(2\right)=5.0$.
${\mathbf{algopt}}\left(3\right)$
Specifies what method is to be used to solve the system of nonlinear equations arising on each step of the BDF method. If ${\mathbf{algopt}}\left(3\right)=1.0$ a modified Newton iteration is used and if ${\mathbf{algopt}}\left(3\right)=2.0$ a functional iteration method is used. If functional iteration is selected and the integrator encounters difficulty, there is an automatic switch to the modified Newton iteration. The default value is ${\mathbf{algopt}}\left(3\right)=1.0$.
${\mathbf{algopt}}\left(4\right)$
Specifies whether or not the Petzold error test is to be employed. The Petzold error test results in extra overhead but is more suitable when algebraic equations are present, such as ${P}_{i,\mathit{j}}=0.0$, for $\mathit{j}=1,2,\dots ,{\mathbf{npde}}$, for some $i$ or when there is no ${\stackrel{.}{V}}_{i}\left(t\right)$ dependence in the coupled ODE system. If ${\mathbf{algopt}}\left(4\right)=1.0$, the Petzold test is used. If ${\mathbf{algopt}}\left(4\right)=2.0$, the Petzold test is not used. The default value is ${\mathbf{algopt}}\left(4\right)=1.0$.
If ${\mathbf{algopt}}\left(1\right)=1.0$, ${\mathbf{algopt}}\left(\mathit{i}\right)$, for $\mathit{i}=5,6,7$, are not used.
${\mathbf{algopt}}\left(5\right)$
Specifies the value of Theta to be used in the Theta integration method. $0.51\le {\mathbf{algopt}}\left(5\right)\le 0.99$. The default value is ${\mathbf{algopt}}\left(5\right)=0.55$.
${\mathbf{algopt}}\left(6\right)$
Specifies what method is to be used to solve the system of nonlinear equations arising on each step of the Theta method. If ${\mathbf{algopt}}\left(6\right)=1.0$, a modified Newton iteration is used and if ${\mathbf{algopt}}\left(6\right)=2.0$, a functional iteration method is used. The default value is ${\mathbf{algopt}}\left(6\right)=1.0$.
${\mathbf{algopt}}\left(7\right)$
Specifies whether or not the integrator is allowed to switch automatically between modified Newton and functional iteration methods in order to be more efficient. If ${\mathbf{algopt}}\left(7\right)=1.0$, switching is allowed and if ${\mathbf{algopt}}\left(7\right)=2.0$, switching is not allowed. The default value is ${\mathbf{algopt}}\left(7\right)=1.0$.
${\mathbf{algopt}}\left(11\right)$
Specifies a point in the time direction, ${t}_{\mathrm{crit}}$, beyond which integration must not be attempted. The use of ${t}_{\mathrm{crit}}$ is described under the argument itask. If ${\mathbf{algopt}}\left(1\right)\ne 0.0$, a value of $0.0$ for ${\mathbf{algopt}}\left(11\right)$, say, should be specified even if itask subsequently specifies that ${t}_{\mathrm{crit}}$ will not be used.
${\mathbf{algopt}}\left(12\right)$
Specifies the minimum absolute step size to be allowed in the time integration. If this option is not required, ${\mathbf{algopt}}\left(12\right)$ should be set to $0.0$.
${\mathbf{algopt}}\left(13\right)$
Specifies the maximum absolute step size to be allowed in the time integration. If this option is not required, ${\mathbf{algopt}}\left(13\right)$ should be set to $0.0$.
${\mathbf{algopt}}\left(14\right)$
Specifies the initial step size to be attempted by the integrator. If ${\mathbf{algopt}}\left(14\right)=0.0$, the initial step size is calculated internally.
${\mathbf{algopt}}\left(15\right)$
Specifies the maximum number of steps to be attempted by the integrator in any one call. If ${\mathbf{algopt}}\left(15\right)=0.0$, no limit is imposed.
${\mathbf{algopt}}\left(23\right)$
Specifies what method is to be used to solve the nonlinear equations at the initial point to initialize the values of $U$, ${U}_{t}$, $V$ and $\stackrel{.}{V}$. If ${\mathbf{algopt}}\left(23\right)=1.0$, a modified Newton iteration is used and if ${\mathbf{algopt}}\left(23\right)=2.0$, functional iteration is used. The default value is ${\mathbf{algopt}}\left(23\right)=1.0$.
${\mathbf{algopt}}\left(29\right)$ and ${\mathbf{algopt}}\left(30\right)$ are used only for the sparse matrix algebra option, ${\mathbf{laopt}}=\text{'S'}$.
${\mathbf{algopt}}\left(29\right)$
Governs the choice of pivots during the decomposition of the first Jacobian matrix. It should lie in the range $0.0<{\mathbf{algopt}}\left(29\right)<1.0$, with smaller values biasing the algorithm towards maintaining sparsity at the expense of numerical stability. If ${\mathbf{algopt}}\left(29\right)$ lies outside this range then the default value is used. If the routines regard the Jacobian matrix as numerically singular then increasing ${\mathbf{algopt}}\left(29\right)$ towards $1.0$ may help, but at the cost of increased fill-in. The default value is ${\mathbf{algopt}}\left(29\right)=0.1$.
${\mathbf{algopt}}\left(30\right)$
Is used as a relative pivot threshold during subsequent Jacobian decompositions (see ${\mathbf{algopt}}\left(29\right)$) below which an internal error is invoked. If ${\mathbf{algopt}}\left(30\right)$ is greater than $1.0$ no check is made on the pivot size, and this may be a necessary option if the Jacobian is found to be numerically singular (see ${\mathbf{algopt}}\left(29\right)$). The default value is ${\mathbf{algopt}}\left(30\right)=0.0001$.
25: $\mathbf{rsave}\left({\mathbf{lrsave}}\right)$Real (Kind=nag_wp) array Communication Array
If ${\mathbf{ind}}=0$, rsave need not be set on entry.
If ${\mathbf{ind}}=1$, rsave must be unchanged from the previous call to the routine because it contains required information about the iteration.
26: $\mathbf{lrsave}$Integer Input
On entry: the dimension of the array rsave as declared in the (sub)program from which d03pjf/d03pja is called. Its size depends on the type of matrix algebra selected.
If ${\mathbf{laopt}}=\text{'F'}$, ${\mathbf{lrsave}}\ge {\mathbf{neqn}}×{\mathbf{neqn}}+{\mathbf{neqn}}+\mathit{nwkres}+\mathit{lenode}$.
If ${\mathbf{laopt}}=\text{'B'}$, ${\mathbf{lrsave}}\ge \left(3\mathit{mlu}+1\right)×{\mathbf{neqn}}+\mathit{nwkres}+\mathit{lenode}$.
If ${\mathbf{laopt}}=\text{'S'}$, ${\mathbf{lrsave}}\ge 4{\mathbf{neqn}}+11{\mathbf{neqn}}/2+1+\mathit{nwkres}+\mathit{lenode}$.
Where $\mathit{mlu}$ is the lower or upper half bandwidths such that
for PDE problems only (no coupled ODEs),
$\mathit{mlu}=3{\mathbf{npde}}-1\text{;}$
for coupled PDE/ODE problems,
$\mathit{mlu}={\mathbf{neqn}}-1\text{.}$
Where $\mathit{nwkres}$ is defined by
if ${\mathbf{nv}}>0\text{ and }{\mathbf{nxi}}>0$,
$\mathit{nwkres}=3{\left({\mathbf{npoly}}+1\right)}^{2}+\left({\mathbf{npoly}}+1\right)×\left[{{\mathbf{npde}}}^{2}+6{\mathbf{npde}}+{\mathbf{nbkpts}}+1\right]+8{\mathbf{npde}}+{\mathbf{nxi}}×\left(5{\mathbf{npde}}+1\right)+{\mathbf{nv}}+3\text{;}$
if ${\mathbf{nv}}>0\text{ and }{\mathbf{nxi}}=0$,
$\mathit{nwkres}=3{\left({\mathbf{npoly}}+1\right)}^{2}+\left({\mathbf{npoly}}+1\right)×\left[{{\mathbf{npde}}}^{2}+6{\mathbf{npde}}+{\mathbf{nbkpts}}+1\right]+13{\mathbf{npde}}+{\mathbf{nv}}+4\text{;}$
if ${\mathbf{nv}}=0$,
$\mathit{nwkres}=3{\left({\mathbf{npoly}}+1\right)}^{2}+\left({\mathbf{npoly}}+1\right)×\left[{{\mathbf{npde}}}^{2}+6{\mathbf{npde}}+{\mathbf{nbkpts}}+1\right]+13{\mathbf{npde}}+5\text{.}$
Where $\mathit{lenode}$ is defined by
if the BDF method is used,
$\mathit{lenode}=\left(6+\mathrm{int}\left({\mathbf{algopt}}\left(2\right)\right)\right)×{\mathbf{neqn}}+50\text{;}$
if the Theta method is used,
$\mathit{lenode}=9{\mathbf{neqn}}+50\text{.}$
Note: when ${\mathbf{laopt}}=\text{'S'}$, the value of lrsave may be too small when supplied to the integrator. An estimate of the minimum size of lrsave is printed on the current error message unit if ${\mathbf{itrace}}>0$ and the routine returns with ${\mathbf{ifail}}={\mathbf{15}}$.
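The workspace bounds above are pure integer arithmetic and easy to get wrong by hand. Purely as an illustrative sanity check — in Python, not part of the NAG Fortran interface, and with helper names invented here — the minimum lrsave for a given problem size can be tabulated before the call:

```python
def min_lrsave(laopt, npde, nbkpts, npoly, nv, nxi, bdf=True, maxord=5):
    """Lower bound on lrsave implied by the formulas above (illustrative only).

    maxord plays the role of int(algopt(2)) for the BDF method.
    """
    npts = (nbkpts - 1) * npoly + 1          # mesh points for degree-npoly elements
    neqn = npde * npts + nv                  # total number of ODEs in time
    # nwkres: PDE workspace, three cases depending on nv and nxi
    base = 3 * (npoly + 1) ** 2 + (npoly + 1) * (npde**2 + 6 * npde + nbkpts + 1)
    if nv > 0 and nxi > 0:
        nwkres = base + 8 * npde + nxi * (5 * npde + 1) + nv + 3
    elif nv > 0:
        nwkres = base + 13 * npde + nv + 4
    else:
        nwkres = base + 13 * npde + 5
    # lenode: ODE-integrator workspace (BDF or Theta method)
    lenode = (6 + maxord) * neqn + 50 if bdf else 9 * neqn + 50
    if laopt == 'F':                         # full matrix algebra
        return neqn * neqn + neqn + nwkres + lenode
    if laopt == 'B':                         # banded matrix algebra
        mlu = (neqn - 1) if nv > 0 else 3 * npde - 1
        return (3 * mlu + 1) * neqn + nwkres + lenode
    return 4 * neqn + 11 * neqn // 2 + 1 + nwkres + lenode   # 'S': sparse
```

For instance, with npde = 1, nbkpts = 21, npoly = 3, nv = 1 and nxi = 1 (so neqn = 62) and the default BDF integrator, the full, banded and sparse bounds come out as 4820, 12322 and 1504 respectively; the banded bound is largest here because, with coupled ODEs, the half-bandwidth is neqn − 1.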
27: $\mathbf{isave}\left({\mathbf{lisave}}\right)$Integer array Communication Array
If ${\mathbf{ind}}=0$, isave need not be set on entry.
If ${\mathbf{ind}}=1$, isave must be unchanged from the previous call to the routine because it contains required information about the iteration required for subsequent calls. In particular:
${\mathbf{isave}}\left(1\right)$
Contains the number of steps taken in time.
${\mathbf{isave}}\left(2\right)$
Contains the number of residual evaluations of the resulting ODE system used. One such evaluation involves computing the PDE functions at all the mesh points, as well as one evaluation of the functions in the boundary conditions.
${\mathbf{isave}}\left(3\right)$
Contains the number of Jacobian evaluations performed by the time integrator.
${\mathbf{isave}}\left(4\right)$
Contains the order of the ODE method last used in the time integration.
${\mathbf{isave}}\left(5\right)$
Contains the number of Newton iterations performed by the time integrator. Each iteration involves residual evaluation of the resulting ODE system followed by a back-substitution using the $LU$ decomposition of the Jacobian matrix.
28: $\mathbf{lisave}$Integer Input
On entry: the dimension of the array isave as declared in the (sub)program from which d03pjf/d03pja is called. Its size depends on the type of matrix algebra selected:
• if ${\mathbf{laopt}}=\text{'F'}$, ${\mathbf{lisave}}\ge 24$;
• if ${\mathbf{laopt}}=\text{'B'}$, ${\mathbf{lisave}}\ge {\mathbf{neqn}}+24$;
• if ${\mathbf{laopt}}=\text{'S'}$, ${\mathbf{lisave}}\ge 25×{\mathbf{neqn}}+24$.
Note: when using the sparse option, the value of lisave may be too small when supplied to the integrator. An estimate of the minimum size of lisave is printed on the current error message unit if ${\mathbf{itrace}}>0$ and the routine returns with ${\mathbf{ifail}}={\mathbf{15}}$.
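The lisave lower bound can be sketched the same way (again illustrative Python, not NAG code):

```python
def min_lisave(laopt, neqn):
    # integer workspace lower bounds quoted in the constraints above
    if laopt == 'F':
        return 24                 # full: fixed-size integer workspace
    if laopt == 'B':
        return neqn + 24          # banded
    return 25 * neqn + 24         # 'S': sparse option

# e.g. a problem with neqn = 62 equations:
assert min_lisave('S', 62) == 1574   # sparse needs far more integer workspace
```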
29: $\mathbf{itask}$Integer Input
On entry: specifies the task to be performed by the ODE integrator.
${\mathbf{itask}}=1$
Normal computation of output values u at $t={\mathbf{tout}}$.
${\mathbf{itask}}=2$
One step and return.
${\mathbf{itask}}=3$
Stop at first internal integration point at or beyond $t={\mathbf{tout}}$.
${\mathbf{itask}}=4$
Normal computation of output values u at $t={\mathbf{tout}}$ but without overshooting $t={t}_{\mathrm{crit}}$ where ${t}_{\mathrm{crit}}$ is described under the argument algopt.
${\mathbf{itask}}=5$
Take one step in the time direction and return, without passing ${t}_{\mathrm{crit}}$, where ${t}_{\mathrm{crit}}$ is described under the argument algopt.
Constraint: ${\mathbf{itask}}=1$, $2$, $3$, $4$ or $5$.
30: $\mathbf{itrace}$Integer Input
On entry: the level of trace information required from d03pjf/d03pja and the underlying ODE solver. itrace may take the value $-1$, $0$, $1$, $2$ or $3$.
${\mathbf{itrace}}=-1$
No output is generated.
${\mathbf{itrace}}=0$
Only warning messages from the PDE solver are printed on the current error message unit (see x04aaf).
${\mathbf{itrace}}>0$
Output from the underlying ODE solver is printed on the current advisory message unit (see x04abf). This output contains details of Jacobian entries, the nonlinear iteration and the time integration during the computation of the ODE system.
If ${\mathbf{itrace}}<-1$, $-1$ is assumed and similarly if ${\mathbf{itrace}}>3$, $3$ is assumed.
The advisory messages are given in greater detail as itrace increases. You are advised to set ${\mathbf{itrace}}=0$, unless you are experienced with Sub-chapter D02MN.
31: $\mathbf{ind}$Integer Input/Output
On entry: indicates whether this is a continuation call or a new integration.
${\mathbf{ind}}=0$
Starts or restarts the integration in time.
${\mathbf{ind}}=1$
Continues the integration after an earlier exit from the routine. In this case, only the arguments tout and ifail should be reset between calls to d03pjf/d03pja.
Constraint: ${\mathbf{ind}}=0$ or $1$.
On exit: ${\mathbf{ind}}=1$.
32: $\mathbf{ifail}$Integer Input/Output
Note: for d03pja, ifail does not occur in this position in the argument list. See the additional arguments described below.
On entry: ifail must be set to $0$, $-1$ or $1$. If you are unfamiliar with this argument you should refer to Section 4 in the Introduction to the NAG Library FL Interface for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this argument, the recommended value is $0$. When the value $-1$ or $1$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
Note: the following are additional arguments for specific use with d03pja. Users of d03pjf therefore need not read the remainder of this description.
32: $\mathbf{iuser}\left(*\right)$Integer array User Workspace
33: $\mathbf{ruser}\left(*\right)$Real (Kind=nag_wp) array User Workspace
iuser and ruser are not used by d03pjf/d03pja, but are passed directly to pdedef, bndary, odedef and uvinit and may be used to pass information to these routines.
34: $\mathbf{cwsav}\left(10\right)$Character(80) array Communication Array
35: $\mathbf{lwsav}\left(100\right)$Logical array Communication Array
36: $\mathbf{iwsav}\left(505\right)$Integer array Communication Array
37: $\mathbf{rwsav}\left(1100\right)$Real (Kind=nag_wp) array Communication Array
If ${\mathbf{ind}}=0$, cwsav, lwsav, iwsav and rwsav need not be set on entry.
If ${\mathbf{ind}}=1$, cwsav, lwsav, iwsav and rwsav must be unchanged from the previous call to d03pjf/d03pja.
38: $\mathbf{ifail}$Integer Input/Output
Note: see the argument description for ifail above.
## 6Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
${\mathbf{ifail}}=1$
On entry, ${\mathbf{algopt}}\left(1\right)=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{algopt}}\left(1\right)=0.0$, $1.0$ or $2.0$.
On entry, at least one point in xi lies outside $\left[{\mathbf{xbkpts}}\left(1\right),{\mathbf{xbkpts}}\left({\mathbf{nbkpts}}\right)\right]$: ${\mathbf{xbkpts}}\left(1\right)=〈\mathit{\text{value}}〉$ and ${\mathbf{xbkpts}}\left({\mathbf{nbkpts}}\right)=〈\mathit{\text{value}}〉$.
On entry, $\mathit{i}=〈\mathit{\text{value}}〉$, ${\mathbf{xbkpts}}\left(\mathit{i}\right)=〈\mathit{\text{value}}〉$, $\mathit{j}=〈\mathit{\text{value}}〉$ and ${\mathbf{xbkpts}}\left(\mathit{j}\right)=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{xbkpts}}\left(1\right)<{\mathbf{xbkpts}}\left(2\right)<\cdots <{\mathbf{xbkpts}}\left({\mathbf{nbkpts}}\right)$.
On entry, $\mathit{i}=〈\mathit{\text{value}}〉$, ${\mathbf{xi}}\left(\mathit{i}+1\right)=〈\mathit{\text{value}}〉$ and ${\mathbf{xi}}\left(\mathit{i}\right)=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{xi}}\left(\mathit{i}+1\right)>{\mathbf{xi}}\left(\mathit{i}\right)$.
On entry, $\mathit{i}=〈\mathit{\text{value}}〉$ and ${\mathbf{atol}}\left(\mathit{i}\right)=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{atol}}\left(\mathit{i}\right)\ge 0.0$.
On entry, $\mathit{i}=〈\mathit{\text{value}}〉$ and $\mathit{j}=〈\mathit{\text{value}}〉$.
Constraint: corresponding elements ${\mathbf{atol}}\left(\mathit{i}\right)$ and ${\mathbf{rtol}}\left(\mathit{j}\right)$ cannot both be $0.0$.
On entry, $\mathit{i}=〈\mathit{\text{value}}〉$ and ${\mathbf{rtol}}\left(\mathit{i}\right)=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{rtol}}\left(\mathit{i}\right)\ge 0.0$.
On entry, ${\mathbf{ind}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{ind}}=0$ or $1$.
On entry, ${\mathbf{itask}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{itask}}=1$, $2$, $3$, $4$ or $5$.
On entry, ${\mathbf{itol}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{itol}}=1$, $2$, $3$ or $4$.
On entry, ${\mathbf{laopt}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{laopt}}=\text{'F'}$, $\text{'B'}$ or $\text{'S'}$.
On entry, ${\mathbf{lisave}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{lisave}}\ge 〈\mathit{\text{value}}〉$.
On entry, ${\mathbf{lrsave}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{lrsave}}\ge 〈\mathit{\text{value}}〉$.
On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{m}}=0$, $1$ or $2$.
On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$ and ${\mathbf{xbkpts}}\left(1\right)=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{m}}\le 0$ or ${\mathbf{xbkpts}}\left(1\right)\ge 0.0$
On entry, ${\mathbf{nbkpts}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nbkpts}}\ge 2$.
On entry, ${\mathbf{neqn}}=〈\mathit{\text{value}}〉$, ${\mathbf{npde}}=〈\mathit{\text{value}}〉$, ${\mathbf{npts}}=〈\mathit{\text{value}}〉$ and ${\mathbf{nv}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{neqn}}={\mathbf{npde}}×{\mathbf{npts}}+{\mathbf{nv}}$.
On entry, ${\mathbf{norm}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{norm}}=\text{'A'}$ or $\text{'M'}$.
On entry, ${\mathbf{npde}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{npde}}\ge 1$.
On entry, ${\mathbf{npoly}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{npoly}}\le 49$.
On entry, ${\mathbf{npoly}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{npoly}}\ge 1$.
On entry, ${\mathbf{npts}}=〈\mathit{\text{value}}〉$, ${\mathbf{nbkpts}}=〈\mathit{\text{value}}〉$ and ${\mathbf{npoly}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{npts}}=\left({\mathbf{nbkpts}}-1\right)×{\mathbf{npoly}}+1$.
On entry, ${\mathbf{nv}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nv}}\ge 0$.
On entry, ${\mathbf{nv}}=〈\mathit{\text{value}}〉$ and ${\mathbf{nxi}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nxi}}=0$ when ${\mathbf{nv}}=0$.
On entry, ${\mathbf{nv}}=〈\mathit{\text{value}}〉$ and ${\mathbf{nxi}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nxi}}\ge 0$ when ${\mathbf{nv}}>0$.
On entry, on initial entry ${\mathbf{ind}}=1$.
Constraint: on initial entry ${\mathbf{ind}}=0$.
On entry, ${\mathbf{tout}}=〈\mathit{\text{value}}〉$ and ${\mathbf{ts}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{tout}}>{\mathbf{ts}}$.
On entry, ${\mathbf{tout}}-{\mathbf{ts}}$ is too small: ${\mathbf{tout}}=〈\mathit{\text{value}}〉$ and ${\mathbf{ts}}=〈\mathit{\text{value}}〉$.
${\mathbf{ifail}}=2$
Underlying ODE solver cannot make further progress from the point ts with the supplied values of atol and rtol. ${\mathbf{ts}}=〈\mathit{\text{value}}〉$.
${\mathbf{ifail}}=3$
Repeated errors in an attempted step of underlying ODE solver. Integration was successful as far as ts: ${\mathbf{ts}}=〈\mathit{\text{value}}〉$.
In the underlying ODE solver, there were repeated error test failures on an attempted step, before completing the requested task, but the integration was successful as far as $t={\mathbf{ts}}$. The problem may have a singularity, or the error requirement may be inappropriate.
${\mathbf{ifail}}=4$
In setting up the ODE system an internal auxiliary was unable to initialize the derivative. This could be due to your setting ${\mathbf{ires}}=3$ in pdedef or bndary.
${\mathbf{ifail}}=5$
Singular Jacobian of ODE system. Check problem formulation.
${\mathbf{ifail}}=6$
In evaluating residual of ODE system, ${\mathbf{ires}}=2$ has been set in pdedef, bndary, or odedef. Integration is successful as far as ts: ${\mathbf{ts}}=〈\mathit{\text{value}}〉$.
${\mathbf{ifail}}=7$
atol and rtol were too small to start integration.
${\mathbf{ifail}}=8$
ires set to an invalid value in call to pdedef, bndary, or odedef.
${\mathbf{ifail}}=9$
Serious error in internal call to an auxiliary. Increase itrace for further details.
${\mathbf{ifail}}=10$
Integration completed, but small changes in atol or rtol are unlikely to result in a changed solution.
The required task has been completed, but it is estimated that a small change in atol and rtol is unlikely to produce any change in the computed solution. (Only applies when you are not operating in one step mode, that is when ${\mathbf{itask}}\ne 2$ or $5$.)
${\mathbf{ifail}}=11$
Error during Jacobian formulation for ODE system. Increase itrace for further details.
${\mathbf{ifail}}=12$
In solving ODE system, the maximum number of steps ${\mathbf{algopt}}\left(15\right)$ has been exceeded. ${\mathbf{algopt}}\left(15\right)=〈\mathit{\text{value}}〉$.
${\mathbf{ifail}}=13$
Zero error weights encountered during time integration.
Some error weights ${w}_{i}$ became zero during the time integration (see the description of itol). Pure relative error control (${\mathbf{atol}}\left(i\right)=0.0$) was requested on a variable (the $i$th) which has become zero. The integration was successful as far as $t={\mathbf{ts}}$.
${\mathbf{ifail}}=14$
Flux function appears to depend on time derivatives.
${\mathbf{ifail}}=15$
When using the sparse option lisave or lrsave is too small: ${\mathbf{lisave}}=〈\mathit{\text{value}}〉$, ${\mathbf{lrsave}}=〈\mathit{\text{value}}〉$.
${\mathbf{ifail}}=-99$
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.
## 7Accuracy
d03pjf/d03pja controls the accuracy of the integration in the time direction but not the accuracy of the approximation in space. The spatial accuracy depends on both the number of mesh points and on their distribution in space. In the time integration only the local error over a single step is controlled and so the accuracy over a number of steps cannot be guaranteed. You should therefore test the effect of varying the accuracy argument atol and rtol.
## 8Parallelism and Performance
d03pjf/d03pja is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
d03pjf/d03pja makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
## 9Further Comments
The argument specification allows you to include equations with only first-order derivatives in the space direction but there is no guarantee that the method of integration will be satisfactory for such systems. The position and nature of the boundary conditions in particular are critical in defining a stable problem.
The time taken depends on the complexity of the parabolic system and on the accuracy requested.
## 10Example
This example provides a simple coupled system of one PDE and one ODE.
$V_1^2 \frac{\partial U_1}{\partial t} - x V_1 \dot{V}_1 \frac{\partial U_1}{\partial x} = \frac{\partial^2 U_1}{\partial x^2}, \qquad \dot{V}_1 = V_1 U_1 + \frac{\partial U_1}{\partial x} + 1 + t,$
for $t\in \left[{10}^{-4},0.1×{2}^{i}\right]\text{, }i=1,2,\dots ,5,x\in \left[0,1\right]$.
The left boundary condition at $x=0$ is
$\frac{\partial U_1}{\partial x} = -V_1 \exp(t).$
The right boundary condition at $x=1$ is
$U_1 = -V_1 \dot{V}_1.$
The initial conditions at $t={10}^{-4}$ are defined by the exact solution:
$V_1 = t, \quad\text{and}\quad U_1(x,t) = \exp\{t(1-x)\} - 1.0, \quad x \in [0,1],$
and the coupling point is at ${\xi }_{1}=1.0$.
### 10.1Program Text
Note: the following programs illustrate the use of d03pjf and d03pja.
Program Text (d03pjfe.f90)
Program Text (d03pjae.f90)
### 10.2Program Data
Program Data (d03pjfe.d)
Program Data (d03pjae.d)
### 10.3Program Results
Program Results (d03pjfe.r)
Program Results (d03pjae.r) | 2021-10-16 21:36:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 768, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8833543658256531, "perplexity": 3482.574164914285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585025.23/warc/CC-MAIN-20211016200444-20211016230444-00685.warc.gz"} |
https://www.semanticscholar.org/paper/Spectra-of-length-and-area-in-(2-%2B-1)-Lorentzian-Freidel-Livine/3a424d369328516455c9ff85c26258f060e218c5 | # Spectra of length and area in (2 + 1) Lorentzian loop quantum gravity
@article{Freidel2003SpectraOL,
title={Spectra of length and area in (2 + 1) Lorentzian loop quantum gravity},
author={Laurent Freidel and Etera R. Livine and Carlo Rovelli},
journal={Classical and Quantum Gravity},
year={2003},
volume={20},
pages={1463-1478}
}
• Published 18 December 2002
• Physics
• Classical and Quantum Gravity
We study the spectrum of the length and area operators in Lorentzian loop quantum gravity, in 2 + 1 spacetime dimensions. We find that the spectrum of spacelike intervals is continuous, whereas the spectrum of timelike intervals is discrete. This result contradicts the expectation that spacelike intervals are always discrete. On the other hand, it is consistent with the results of the spin foam quantization of the same theory.
Papers citing this work include:

- Loop quantum cosmology in 2+1 dimension
- In search of fundamental discreteness in (2 + 1)-dimensional quantum gravity (2009)
- Spectra of geometric operators in three-dimensional loop quantum gravity: From discrete to continuous (2014)
- Quantum Gravity in 2 + 1 Dimensions: The Case of a Closed Universe (S. Carlip, Living Reviews in Relativity, 2005)
- Towards a Covariant Loop Quantum Gravity
- Time level splitting in quantum Chern–Simons gravity
- The Matrix Elements of Area Operator in (2+1) Euclidean Loop Quantum Gravity (2021)
- Abelian 2+1D loop quantum gravity coupled to a scalar field (C. Charles, General Relativity and Gravitation, 2019)
- The Entropy of BTZ Black Hole from Loop Quantum Gravity
- (2+1)-dimensional loop quantum cosmology of Bianchi I models (2016)
http://lenkiefer.com/2017/10/26/predicting-recessions-with-dynamic-model-averaging/ | # Forecasting recessions with dynamic model averaging
## We go into the vasty deep, dipping our toes ever so slightly into the dark waters of macroeconometric forecasting. Here we use dynamic model averaging to forecast recessions with R.
HERE THE LITERATURE IS VASTY DEEP. In this post we’ll dip our toes, ever so slightly, into the dark waters of macroeconometric forecasting. I’ve been studying some techniques and want to try them out. I’m still at the learning and exploring stage, but let’s do it together.
In this post we’ll conduct an exercise in forecasting U.S. recessions using several approaches. Per usual we’ll do it with R and I’ll include code so you can follow along.
### Longer than usual disclaimer
I’m not recommending any of these techniques and don’t guarantee any results if you try them. The exercises here are merely for learning and don’t represent my views (or the views of my employer) about the likelihood of recessions. The results have not gone through peer review; I haven’t fully reviewed the techniques or code, and these results aren’t necessarily suitable for any purpose.
# Background
## Literature
We’re not going to review the literature in any detail. A large portion of modern macroeconomics studies business cycles, and there’s a huge literature on forecasting techniques. But if you’re interested in background I’ll provide three important links directly related to today’s investigation. Later if things go great, I’ll have time to write up some more thoughts on these papers.
1. See this FEDS Notes on “Which market indicators best forecast recessions”
2. See this paper Online Prediction Under Model Uncertainty via Dynamic Model Averaging: Application to a Cold Rolling Mill
3. See this paper Dynamic Logistic Regression and Dynamic Model Averaging for Binary Classification
## Data
All the data we’ll need will be available through the Saint Louis Federal Reserve’s FRED database. See here for more discussion of using FRED with R.
## R packages
We’ll rely on tidyquant and tibbletime packages for data munging and the dma package for estimation using the dynamic model averaging technique discussed below.
Let’s get to it!
# A very simple model
Let’s follow the basic setup outline in this FEDS note. We want to forecast whether or not the National Bureau of Economic Research (NBER) has declared that a month t falls in a recession. See here for NBER business cycle dates.
We follow their framework (with a slight modification swapping a Logit for a Probit). Let $$Y_t=1$$ indicate that month $$t$$ falls in a recession.
$\text{Logit}(\Pr(Y_t=1 \mid x_{t-h})) = x^{(k)'}_{t-h}\theta^{(k)}_{t-h} \ \ \ \ \ (1)$
where $$\theta^{(k)}_{t-h}$$ is a (possibly) time varying parameter vector and $$x^{(k)'}_{t-h}$$ is a vector of input variables. Forecasts are made $$h$$ periods ahead and the dimension of $$x$$ may vary across $$k$$.
We could (and will in a future post) consider several indicators to help predict a recession and also consider several different forecast horizons $$h$$. In this post we’ll focus on just 2 indicators and a 2 horizons.
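The code in this post is in R, but purely to make the link function in equation (1) concrete, here is a minimal Python sketch. The coefficient values below are invented for illustration — they are not estimates:

```python
import math

def recession_prob(x, theta):
    # invert the Logit link in (1): Pr(Y_t = 1 | x_{t-h}) = 1 / (1 + exp(-x'theta))
    z = sum(xi * ti for xi, ti in zip(x, theta))
    return 1.0 / (1.0 + math.exp(-z))

# x = (intercept, SLOPE, PAYEMS); these theta values are made up for illustration
theta = [-0.5, -1.35, -0.05]
p_flat  = recession_prob([1.0, 0.0, 0.2], theta)   # flat/inverted yield curve
p_steep = recession_prob([1.0, 2.0, 0.2], theta)   # steep yield curve
assert p_flat > p_steep   # a flatter curve implies higher recession odds here
```

With a negative coefficient on SLOPE, a flatter yield curve pushes the fitted recession probability up, which is the usual story told about the curve as a leading indicator.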
## Indicators
We’ll focus on two indicators and see how well they individually or in combination might predict recessions. We’ll use monthly observations on the 3-month percent change in nonfarm payroll employment PAYEMS (adopting the FRED mnemonic) and the slope of the U.S. Treasury yield curve SLOPE, measured by the percentage difference between the 10-year constant maturity Treasury yield and the 3-month Treasury bill rate.
Within the vasty deep of the macroeconometric forecasting literature, employment growth has been a reliable coincident indicator for recessions, while the yield curve is one of the few variables that seems to have some predictive power for recessions, as we shall see.
## Simple logistic regression
Let’s assume the parameter $$\theta$$ remains constant and that there is a single composite model that consists of both PAYEMS and SLOPE. We can estimate it with a logistic regression (we’ll use stargazer to format our output).
# forecast contemporaneously (h=0)
glm.h0<- glm(USREC ~ SLOPE + PAYEMS,family=binomial(link='logit'), data=df4)
# forecast 12 months ahead (h=12)
glm.h1<- glm(REC12 ~ SLOPE + PAYEMS,family=binomial(link='logit'), data=df4)
stargazer::stargazer(glm.h0,glm.h1, title="Forecasting Recessions", type="html",
dep.var.caption = "Recession probability",
dep.var.labels = "Forecast horizon",
column.labels = c("contemporaneous (h=0)", "forecast (h=12)"))
**Forecasting Recessions** (dependent variable: recession probability)

|                   | contemporaneous (h=0) | forecast (h=12)   |
|-------------------|-----------------------|-------------------|
| SLOPE             | -0.964\*\*\* (0.155)  | -1.350\*\*\* (0.143) |
| PAYEMS            | -5.370\*\*\* (0.529)  | -0.046 (0.231)    |
| Constant          | -0.072 (0.230)        | -0.528\*\*\* (0.196) |
| Observations      | 720                   | 720               |
| Log Likelihood    | -127.587              | -217.054          |
| Akaike Inf. Crit. | 261.173               | 440.108           |

Note: \*p<0.1; \*\*p<0.05; \*\*\*p<0.01
### tibbletime for rolling regressions
This regression assumes that the parameters are constant over time. But they might change due to regime shifts or other sources of parameter instability. One way to deal with instability is to run rolling window regressions. Those are super easy thanks to tibbletime.
Let’s estimate 20-year rolling windows and plot the coefficients over time. Here we’ll look at the models for 12-months ahead. See this tibbletime vignette for details on these steps.
# compute a rolling regression
# rolling regression with rollify
rolling_lm <- rollify(.f = function(REC12, SLOPE, PAYEMS) {
  glm(REC12 ~ SLOPE + PAYEMS, family = binomial(link = 'logit'))
},
window = 240,
unlist = FALSE)
df4.tt <- as_tbl_time(df4, index = date) # convert to tibbletime
df4.tt %>% mutate(roll_lm=rolling_lm(REC12 , SLOPE , PAYEMS)) %>%
filter(!is.na(roll_lm)) %>%
mutate(tidied = purrr::map(roll_lm, broom::tidy)) %>%
unnest(tidied) %>%
select(date, term, estimate, std.error, statistic, p.value) -> df4.reg
# plot coefficients
ggplot(data=df4.reg, aes(x=date,y=estimate,color=term))+
geom_line(size=1.05,color="royalblue")+
theme_minimal()+
facet_wrap(~term)+
labs(y="Coefficient",x="",
title="Model for U.S. recession probabilities\n240-month rolling regressions",
caption="@lenkiefer Recession probabilities based on a rolling regression.\nSLOPE: Yield curve slope (10-year minus 3-month U.S. Treasury yield)\nPAYEMS Employment change (3 month %)\nModel: logistic regression for USREC12 = PAYEMS + SLOPE \nModel fit with glm & tibbletime: Davis Vaughan and Matt Dancho (2017).\ntibbletime: Time Aware Tibbles. R package version 0.0.2.
https://CRAN.R-project.org/package=tibbletime")+
geom_hline(yintercept=0,color="black")+
theme(plot.caption=element_text(hjust=0),
plot.subtitle=element_text(face="italic",size=9),
plot.title=element_text(face="bold",size=14))
These estimates show quite a lot of apparent parameter instability. Coefficient signs even switch, and the estimates jump around. Can we do better?
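Part of that jumpiness is mechanical: any window that straddles a regime change blends two sets of parameters. A small illustration in Python — plain least squares on synthetic data with a mid-sample break, not the post's logistic model:

```python
# Rolling OLS slope on data whose true slope switches halfway through the sample.
def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

x = list(range(100))
y = [2.0 * xi if xi < 50 else -1.0 * xi + 150.0 for xi in x]  # slope 2, then -1

window = 30
slopes = [ols_slope(x[t - window:t], y[t - window:t]) for t in range(window, len(x) + 1)]

print(round(slopes[0], 2), round(slopes[-1], 2))  # -> 2.0 -1.0
# windows that span the break give estimates drifting between the two regimes
```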
## Dynamic model averaging
Oh yes we can.
We’d like to have a model that would allow parameters to vary over time. We’d like to discount the past, but not as abruptly as the rolling regressions do.
Turns out the Dynamic Model Averaging approach of McCormick, Raftery and Madigan (implemented in the dma package) is perfect for this exercise.
I hope to explore this in more detail in future, but let’s load it up and try it out. The package is good, but I don’t like the default plotting, so I’ll do some manipulations to get our results tidy and ready for ggplot2.
# fit binary model (contemporaneous)
df5 <- filter(df3, year(date)>1955 & !is.na(USREC))
# convert data to matrix
xvar <- as.matrix(df5 %>% select(SLOPE,PAYEMS))
yvar <- as.matrix(df5$USREC)
# design for models
mmat <- matrix(c(1,0,
                 0,1,
                 1,1), 3, 2, byrow=TRUE)
# fit model h=0
dma.fit0 <- logistic.dma(unname(xvar), yvar, mmat, lambda=0.99, alpha=0.99,
                         autotune=TRUE, initialsamp=120)
df50 <- df5 %>% mutate(yhat0=dma.fit0$yhatdma)
# repeat for h=12
df5 <- filter(df3, year(date)>1955 & !is.na(REC12))
xvar <- as.matrix(df5 %>% select(SLOPE,PAYEMS))
yvar <- as.matrix(df5$REC12)
mmat <- matrix(c(1,0,
                 0,1,
                 1,1), 3, 2, byrow=TRUE)
dma.fit12 <- logistic.dma(unname(xvar), yvar, mmat, lambda=0.99, alpha=0.99,
                          autotune=TRUE, initialsamp=120)
df5 <- df5 %>% mutate(yhat12=dma.fit12$yhatdma)
# combine results
df6<-full_join(df5,df50 %>% select(date,yhat0), by="date")
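Under the hood, the averaging part of dma is a forgetting-factor recursion over model probabilities. Here is a stripped-down sketch in Python of that idea — my reading of the method, not the package's internals (logistic.dma also updates each model's coefficients recursively):

```python
# One step of dynamic model averaging over K candidate models:
# 1) "forget": flatten yesterday's model probabilities with exponent alpha
# 2) reweight by how well each model predicted today's 0/1 outcome
def dma_step(model_probs, pred_probs, y, alpha=0.99):
    flat = [p ** alpha for p in model_probs]
    flat = [p / sum(flat) for p in flat]                 # renormalize
    lik = [q if y == 1 else 1 - q for q in pred_probs]   # Bernoulli likelihood
    post = [f * l for f, l in zip(flat, lik)]
    return [p / sum(post) for p in post]

# three models predicted recession probabilities 0.8, 0.2, 0.5; a recession occurs
probs = dma_step([1/3, 1/3, 1/3], [0.8, 0.2, 0.5], y=1)
print([round(p, 3) for p in probs])  # -> [0.533, 0.133, 0.333]: model 1 gains weight
```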
Then we can plot results:
g.12<-
ggplot(data=df6, aes(x=date+months(12),y=yhat12))+
geom_rect(data=recessions.df, inherit.aes=FALSE,
aes(xmin=Peak, xmax=Trough, ymin=-Inf, ymax=+Inf),
fill='lightblue', alpha=0.5)+theme_minimal()+
geom_line(color="royalblue",size=1.05)+
scale_y_continuous(labels=scales::percent)+
labs(x="",y="recession probability",
title="Estimated U.S. recession probabilities given by dynamic model averaging",
caption="@lenkiefer Recession probabilities based on dynamic model averaging of three models\nforecasting recession 0 and 12-months ahead.\nModel 1: Yield curve slope (10-year minus 3-month U.S. Treasury yield)\nModel 2: Employment change (3 month %)\nModel 3: Yield curve slope and employment change\nModel fit with dma: Tyler H. McCormick, Adrian Raftery and David Madigan (2017).\ndma: Dynamic Model Averaging. R package version 1.3-0. https://CRAN.R-project.org/package=dma")+
geom_hline(yintercept=0,color="black")+
theme(plot.caption=element_text(hjust=0),
plot.subtitle=element_text(face="italic",size=9),
plot.title=element_text(face="bold",size=14))
g.0<-
ggplot(data=df6, aes(x=date,y=yhat0))+
geom_rect(data=recessions.df, inherit.aes=FALSE,
aes(xmin=Peak, xmax=Trough, ymin=-Inf, ymax=+Inf),
fill='lightblue', alpha=0.5)+theme_minimal()+
geom_line(color="red",size=1.05)+
scale_y_continuous(labels=scales::percent)+
labs(x="",y="recession probability",
title="Estimated U.S. recession probabilities given by dynamic model averaging",
caption="@lenkiefer Recession probabilities based on dynamic model averaging of three models\nforecasting recession 0 and 12-months ahead.\nModel 1: Yield curve slope (10-year minus 3-month U.S. Treasury yield)\nModel 2: Employment change (3 month %)\nModel 3: Yield curve slope and employment change\nModel fit with dma: Tyler H. McCormick, Adrian Raftery and David Madigan (2017).\ndma: Dynamic Model Averaging. R package version 1.3-0. https://CRAN.R-project.org/package=dma")+
geom_hline(yintercept=0,color="black")+
theme(plot.caption=element_text(hjust=0),
plot.subtitle=element_text(face="italic",size=9),
plot.title=element_text(face="bold",size=14))
g.rec<-
plot_grid(g.0+labs(caption=""),g.12+labs(title=""),
ncol=1)
g.rec
The plot above shows how well the model predicts recessions contemporaneously (pretty good) and ahead of time (not so well). We might want to add more variables to the model and see how they look. We’ll leave that for a later time.
### Discussion
There’s a lot more to discuss about this approach. I’m still chewing on it, but as I discover more I’ll post more here. In a follow-up post, we’ll dig into the internals of the dma package and try to see exactly what’s going on. See you next time. | 2018-11-20 05:27:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5815714001655579, "perplexity": 7913.8684644155865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746227.72/warc/CC-MAIN-20181120035814-20181120061814-00475.warc.gz"} |
https://www.semanticscholar.org/paper/Non-ergodic-phases-in-strongly-disordered-random-B.L.Altshuler-E.Cuevas/1e4df557e12b2dba87f608d232be0756c6ca1ec4 | # Non-ergodic phases in strongly disordered random regular graphs
@inproceedings{BLAltshuler2016NonergodicPI,
  title={Non-ergodic phases in strongly disordered random regular graphs},
  author={B. L. Altshuler and E. Cuevas and L. B. Ioffe and V. E. Kravtsov},
  year={2016}
}
Published 8 May 2016 · Physics
B. L. Altshuler, E. Cuevas, L. B. Ioffe, and V. E. Kravtsov — Physics Department, Columbia University, 538 West 120th Street, New York, New York 10027, USA; Departamento de Física, Universidad de Murcia, E-30071 Murcia, Spain; CNRS and Université Paris Sud, UMR 8626, LPTMS, Orsay Cedex, F-91405, France; L. D. Landau Institute for Theoretical Physics, Chernogolovka, Russia; Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34151 Trieste, Italy
http://blogs.mathworks.com/community/2012/07/06/the-master-speaks-an-interview-with-cody-champion-bmtran/ | # MATLAB Spoken Here
## The Master Speaks: An Interview with Cody Champion @bmtran
Bryant Tran, better known to the Cody-playing world as @bmtran, has been the leading player on Cody since the week it launched. What is the secret to his Cody-dominating awesomeness? We were intrigued, so we tracked him down and asked him a few questions. Bryant graciously agreed to let us publish the resulting Q & A here.
Want to become a Cody champ yourself? Read and learn…
Q: Tell us a little about yourself and your experience with MATLAB. Where are you now… grad school where and for what? When did you learn MATLAB? When did you first realize you were a major power user?
A: My name is Bryant, and I've been born and raised in Austin, Texas. I did my undergraduate degree at the Cockrell School at The University of Texas at Austin (Go Horns!) in Electrical Engineering, and am currently pursuing a Master of Science in Mechanical Engineering studying Acoustics. I've been using MATLAB for about two years now. I vaguely knew about MATLAB during my undergrad (http://koentmnd.ytmnd.com/), though I was jaded into thinking it was extremely slow and inefficient (1-based indexing had a lot to do with that).
After getting a summer research position when I graduated, I began learning how to use MATLAB for signal processing. At the time, I was still convinced that C was the way to go, so the first advanced item I learned how to do was to use the MEX interface. I eventually realized the error of my ways and slowly began to learn about vectorization and logical indexing through the resident MATLAB guru in my group, and the rest is history! I’m now convinced that MATLAB is one of the best and most accessible programming paradigms in existence for scientific computing.
Q: Do people come to you for MATLAB help?
A: I’m definitely one of the go-to MATLAB “experts” at my workplace. I’m mostly helpful in doing performance analysis and optimization. One time, I was able to speed up a coworkers code from 3 hours to 13 seconds (830x speedup!) primarily through vectorization.
Q: How did you find out about Cody?
A: I was interested in the semiannual MATLAB contests, so I signed up for the mailing list since I kept missing them (I’ve yet to submit a solution to a contest, whoops). When Cody was released, I got an email on that mailing list. It was pretty good timing too since I had, just a couple weeks before, started doing Project Euler in MATLAB.
Q: We were all amazed at how quickly you solved all 96 of our original problems. Tell us about that first night.
A: When I started, I didn’t know it was the first day. I assumed that it had been going on for a while and I just kind of jumped in. I didn’t even realize that there was a scoring system at first; I just wanted to solve all of the problems, especially since the majority were fairly simple. As I said before, I’d been doing Project Euler for a couple weeks before I started Cody, and the momentum just carried. The same goes for the original 96: I just kept solving problems mechanically until I realized that I had solved the majority and I just had to finish it up!
Q: What motivates you to play? Is staying in first place a major factor?
A: I always enjoyed puzzles as a kid, and I'm arguably a pretty good programmer, so it kind of fits. Staying in first place was never really that big of a motivation, but when Alfonso was creeping up on me in March, I realized that I was more competitive than I had originally thought. The biggest motivation for me most of the time was 100% completion, which I had a handle on for a while. This was especially easy since Cody sorts the unsolved problems at the beginning. It started to curb with Matt Fig's Mechanics I, with Robert Canfield's minimization problems, and eventually with Richard Zapor's insanity problems.
Q: You’ve been number one since the first week of the contest. Do you watch the leaderboard to see who’s gaining on you?
A: I do watch the leaderboard just to see if my rank has changed and who is number 2. I’ve since made a Trendy plot of my rank. I wish I could retroactively look up the scores and ranks and plot them.
Q: What does a typical session look like for you? When you’re solving a new problem, do you like to get a good answer in quickly and then tune it steadily into a better answer? Or do you like to knock them down quickly and move on?
A: When I first started and Alfonso Nieto-Castanon was submitting some top-notch solutions, I would try extremely hard to steal the leader spot from him (though I rarely ever did). Recently, I have much less free time since my research picked up, so I just solve the problem, look at the other answers, tweak it maybe once or twice, and then move on.
Q: What makes a good Cody problem? What are some simple mistakes that problem authors should avoid?
A: I think the best advice I could give to someone making a problem is to make a varied and comprehensive test suite. There are a fair number of hackers out there who will exploit weaknesses on that front (myself included). I also really appreciate good examples and references so I can understand the topic more.
Q: What makes a good Cody answer? Obviously Cody wants your answer to be short, but beyond that what makes a beautiful answer?
A: This might get controversial, but I like Cody answers that use exotic MATLAB functions that I either didn’t know about before or didn’t think about using in that way before. I find that they really stretch my understanding of the scope and ability of a lot of the MATLAB functions, which effectively makes MATLAB more powerful to me.
Q: Do you have any favorite problems or favorite answers you’d like to call out?
A: My favorite Cody problems are those that address an interesting topic in an elegant fashion. Not to toot my own horn, but my abacus problem is one of my favorites; the soroban abacus is an iconic but not well-understood math tool, and reading one is a very simple task. I’ve been surprised at how plentiful and varied the solutions are. My simple maze-solving problem is another of my favorites that I’ve generated since it’s a well-explored topic with solid graph theory behind it (and it comes in handy when you need to solve a maze!). I also tend to like any string manipulation problems because I find regular expressions to be cryptic, strange, and beautiful.
My favorite answers are anything that Alfonso Nieto-Castanon has ever submitted. His work is elegant and refined, and I learn a lot by figuring out and copying what he has done.
Q: What are some of your favorite ways to turn a short answer into a really short answer? Do you have any “guilty pleasures” where you know the code won’t win a beauty contest, but it’s great from a Cody point of view?
A: Well the most obvious of these is to eliminate concatenation; str2num is probably the quickest way to really cut down on the size of a program. Another one that comes in handy for me is using interp1 to do indexing since you can do it in one line with few arguments.
Q: Have you learned anything playing Cody that’s useful to you in your “real world” coding?
A: The skills I’ve gotten from Cody mostly extend to my knowledge of the way that MATLAB works as well as the breadth of functions that I’m familiar with. My coding style itself has remained fairly consistent.
Q: The most useful MATLAB function that nobody knows about is _______ .
A: bsxfun. also, matrix multiply – it’s surprising how few people can recognize a matrix multiply in their numerical algorithms, and it’s blazing fast in MATLAB.
Q: If I were a MATLAB function, I’d be _______ .
A: profile – I’m obsessed with optimizing my code at work.
### 7 Responses to “The Master Speaks: An Interview with Cody Champion @bmtran”
1. Aurélien replied on :
Nice article. Another trick to decrease the score is to use ans as the output of the function. It allows you to save 2 points ;-)
A nice feature in Cody would be to receive an email when someone has submitted a leader solution. A watch list button like in the FEx would be nice.
I have a question: when we rescore the solution, the solution map is updated, but are the scores of Cody players also updated? I mean, if you have submitted a solution which is no longer good after rescoring, do you lose 10 points?
Maybe CODY players could also receive an alert if their previous correct solution now fail after a rescoring or after someone has added more test suites.
For me, the trick that I have learned with Cody is to use num2str(...)-'0' to transform a number into a vector:
>> num2str(3249)-'0'
ans =
     3     2     4     9
Aurélien
2. Santosh Kasula replied on :
@Aurélien, thank you for your valuable feedback.
Regarding your question about Cody score, we do update the score if a player’s solution becomes incorrect after the rescore, and player loses 10 points if he has no other correct solution to that problem.
3. Aurélien replied on :
CODY workflow:
@Santosh thanks for your answer. So if my previous solution fails after someone has rescored a problem, does this problem appear again in My Cody as unresolved?
If yes, it is cool (no user action required except to retry another solution later); otherwise I would like to receive an email.
CODY issue:
James (Cody player rank 13) and I found an issue with Cody: we cannot delete our comments. A window asks us if we really want to delete the comment, and then nothing happens once we click "Yes". We tried on different machines without success.
Thanks
Aurélien
4. Santosh C Kasula replied on :
@Aurélien, if your previous solution fails, that problem appears in the unsolved problems list.
Regarding deleting the comment, it is a bug and we are working on a fix for it.
5. Aurélien replied on :
Super. Thanks for everything.
6. Aurélien replied on :
Just one side note: deleting a comment in Trendy (not Cody) works fine.
It would also help if you could explain the error message about "the server has encountered a problem…", which generally occurs when our solution is too long to evaluate.
I also notice on this page that the URL link "Give it a try!" does not work. The new page displayed is: BAD REQUEST
Grizzly/1.9.36
Best,
Aurélien
7. Helen Chen replied on :
@Aurélien – Thanks for both reports. We will look into these messages.
http://stats.stackexchange.com/questions/35038/given-two-sets-of-data-what-could-explain-similar-means-but-different-standard | # Given two sets of data, what could explain similar means but different standard deviations?
Given two sets of data on user activity, both of which appear to follow an exponential distribution, I have calculated the mean and standard deviation in two ways: directly from the raw data, and from a sampling distribution of sample means (sample size = 30, number of samples = 10,000):
### A (size: 627,000):
• Raw -- μ = 45.947, σ = 114.2, σ/√n = 0.14422
### B (size: 3,570):
• Raw -- μ = 46.43, σ = 116.1
Using the above data, it seems that the two means differ by a statistically significant amount, allowing us to say with confidence that the average for B is greater than the average for A.
### A
• Sampling -- μ = 46.174, σ = 21.256
### B
• Sampling -- μ = 46.786, σ = 21.366
Using the sampling data standard deviations, we see that the difference in means (0.612) is much less than the deviation, so it seems that the means do not differ by a statistically significant amount.
So given the above, which is right? Can we say that these data sets differ? If the underlying distribution is exponential, are the above tests even accurate?
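To make the two "deviations" concrete — σ/√N is the standard error of the full-sample mean, while the n = 30 sampling exercise recovers roughly σ/√30 — here is a quick simulation sketch in Python (illustrative parameters, not the actual dataset):

```python
import random
import statistics

random.seed(1)
mu = 46.0                                   # mean of the simulated exponential data
data = [random.expovariate(1 / mu) for _ in range(100_000)]

# standard error of the full-sample mean: sigma / sqrt(N) -- shrinks with N
se_full = statistics.stdev(data) / len(data) ** 0.5

# spread of means of repeated size-30 samples: roughly sigma / sqrt(30)
sample_means = [statistics.mean(random.sample(data, 30)) for _ in range(2000)]
sd_sampling = statistics.stdev(sample_means)

print(round(se_full, 3))      # small -- precision of the overall mean
print(round(sd_sampling, 1))  # much larger -- depends on n = 30, not on N
```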
https://shikokuking.com/6aojd0r/into-function-graph-c2f8ad | A graph is commonly used to give an intuitive picture of a function: it is a visual representation of the function's behavior on an x-y plane. To graph a function, choose some values for the independent variable x, plug them into the function to get a set of ordered pairs (x, f(x)), plot those points, and connect them to match how the points are arranged on the graph. Make sure you have enough points.
A graph represents a function only if every vertical line intersects the graph in at most one point. For example, x → x², x ∈ R, is a many-to-one function: both x = +2 and x = −2 give y = 4.
To find a function's y-intercept, set x = 0 and evaluate. To find the x-intercepts, set the entire function to zero and solve for x. For instance, if a graph shows that the y-intercept is (0, 7), plugging that point into y = 2x + 10 − a gives 7 = 2 × 0 + 10 − a, so a = 3.
The graph of a quadratic function is called a parabola. Parabolas have a "U" shape and may open upward or downward; the most basic parabola has the equation f(x) = x². To graph a quadratic function written in vertex form, find its vertex (h, k). A number added or subtracted inside the parentheses (or other grouping device) of a function creates a horizontal shift, while adding a positive constant outside the function, f(x) + k, shifts the graph up.
Some further plotting notes: an exponential function with base b less than 1 is an exponential decay function; when graphing the tangent function, draw the curve so that it approaches each asymptote without touching or crossing it; two-dimensional plotting in MATLAB requires two vectors, x and y; in R, matplot(x, cbind(y1, y2), type="l") draws two curves evaluated at the same x points in one figure; and on a webpage a function can be plotted with the HTML canvas element, which is supported in all major browsers.
Graphs of Functions. For Whole Class. Answered: Walter Roberson on 21 Feb 2014 Accepted Answer: Dishant Arora. Include your email address to get a message when this question is answered. A graph of a function is a visual representation of a function's behavior on an x-y plane. Get started with the Microsoft Graph SDK for PHP. All the different overlapping graphs will cause you to lost and confused on which graph is which transformation. Functions whose domain are the nonnegative integers, known as sequences, are often defined by recurrence relations.. Use "x" as the variable like this: $\begingroup$ This is ONE function. = (−)! Statistics. >, and the initial condition ! If a positive constant is added to a function, f (x) + k, the graph will shift up. Note that the graph is indeed a function as it passes the vertical line test. 2. wikiHow is where trusted research and expert knowledge come together. The most basic parabola has an equation f(x) = x2. Often a geometric understanding of a problem will lead to a more elegant solution. The canvas element is supported in all major browsers: Firefox, Opera, Safari, Google Chrome, and Microsoft Internet Explorer 9 or newer. Grouping symbols such as parentheses, and helper functions may be arbitrarily deeply nested. This skill will be useful as we progress in our study of mathematics. Choosing the parent function with variables set before each one, and replacing x and y with the respective variables in the table will return a function based on those points. Function Graph Area. We can represent it in many different ways, tho. I have a function with two input variables. matplot(x, cbind(y1,y2),type="l",col=c("red","green"),lty=c(1,1)) use this if y1 and y2 are evaluated at the same x points. In the case of functions of two variables, that is functions whose domain consists of pairs, the graph usually refers to the set of ordered triples where f = z, instead of ⦠Select Certificates & secrets under Manage. 
Choosing the parent function with variables set before each one, and replacing x and y with the respective variables in the table will return a function based on those points. You've already learned the basic trig graphs.But just as you could make the basic quadratic, y = x 2, more complicated, such as y = –(x + 5) 2 – 3, so also trig graphs can be made more complicated.We can transform and translate trig functions, just like you transformed and translated other functions in algebra.. Let's start with the basic sine function, f (t) = sin(t). The graphs of such functions are like exponential growth functions in reverse. Then connect the points to best match how the points are arranged on the graph. Observe my graph passes through â3 on the y-axis. When you write the program on the MATLAB editor or command window, you need to follow the three steps for the graph. By … One period The period of the basic tangent function is π, and the graph will repeat from π to 2π. Now, just as a refresher, a function is really just an association between members of a set that we call the domain and members of the set that we call a range. Select the New client secret button. When you specify a higher number of steps, the graph will appear smoother, but it will take longer to plot. It is usually symbolized as. The most common way to get confused and make mistakes when graphing transformations is attempting to include each transformation on one single graph. You can click-and-drag to move the graph around. This article will provide the necessary information to correctly graph these transformations of functions. Find the relationship between the graph of a function and its inverse. The second input argument specifies the position of the annotation in units normalized to the figure. Calculus. Then, find other points of the function, taking into consideration the axis of symmetry, x=h. 
Exponential decay functions also cross the y-axis at (0, 1), but they go up to the left forever, and crawl along the x-axis to the right.These functions model things that shrink over time, such as the radioactive decay of uranium. This general curved shape is called a parabola The U-shaped graph of any quadratic function defined by f (x) = a x 2 + b x + c, where a, b, and c are real numbers and a ≠ 0. and is shared by the graphs of all quadratic functions. Functions: Hull: First graph: f(x) Derivative Integral But is this the correct answer? 2. In the common case where x and f are real numbers, these pairs are Cartesian coordinates of points in two-dimensional space and thus form a subset of this plane. Important Functions to Plot MATLAB Graph. Graphs help us understand different aspects of the function, which would be difficult to understand by just looking at the function itself. x = + 2, y = x 2 = 4. Identifying transformations allows us to quickly sketch the graph of functions. Points on the functions graph corresponding to relative extreme values are turning points, or points where the function changes from decreasing to increasing or vice versa. It has the unique feature that you can save your work as a URL (website link). Please help us continue to provide you with our trusted how-to guides and videos for free by whitelisting wikiHow on your ad blocker. To prevent that mistake, always draw a new graph after each transformation. Exponential decay functions also cross the y-axis at (0, 1), but they go up to the left forever, and crawl along the x-axis to the right. The parabola can either be in "legs up" or "legs down" orientation. The graphs of such functions are like exponential growth functions in reverse. Similarly, you can plot the graph for other trigonometric functions like cos, tan, cosec, cot, sec⦠Problem 3: How to plot the Exponential Function in MATLAB? The most basic parabola has an equation f(x) = x2. 
To reset the zoom to the original click on the Reset button. By using this website, you agree to our Cookie Policy. If I used ax (or xa) the program just gets confused. For example, a quadratic could be written as y1 = a(x1) 2 + b(x1) + c And it would return a function with values for a, b and c which is closest to running through all points. Link to worksheets used in this section. The vertex of a parabola is its the highest or the lowest point. min.depth<-2 max.depth<-5. Use the matplot function:. Functions and their graphs. Plot[2x, {x,0,4}] Plot[x^2, {x,10,12}] How do I merge these two graphs into one graph without the range {4,10}? Trigonometry. The numbers in this function do the opposite of … Precalculus. This article has been viewed 12,415 times. By using our site, you agree to our. Graphically, if a line parallel to x axis cuts the graph of f(x) at more than one point then f(x) is many-to-one function and if a line parallel to y-axis cuts the graph at more than one place, then it is not a function. Such functions are written in the form f(x – h), where h represents the horizontal shift.. Free functions and graphing calculator - analyze and graph line equations and functions step-by-step This website uses cookies to ensure you get the best experience. In other words, y is the output of f when the input is x. Download free in Windows Store. Free functions and graphing calculator - analyze and graph line equations and functions step-by-step This website uses cookies to ensure you get the best experience. The graph of a function is a visual representation of all of the points on the plane of ( x , f ( x )). function A = myplot(x,y) A = plot(x,y); Basic Math. Note how I used a*x to multiply a and x. Add text anywhere within the figure using the annotation function instead of the text function. If no vertical line can intersect the curve more than once, the graph does represent a function. 
Example: Here is a graph of the functions sin x (green) and cos 3x (blue). Notice that, like the other graphs that had negative exponents, the lines on the graph sort of separate into two different directions. In a function where c is added to the entire function, meaning the function becomes () = +, the basic graph will shift up c units. Viewed 30k times 6. Answer . Let's substitute x = 0 into the equation I just got to check if it's correct. As an exercise find the domains of the above functions and compare with the domains found graphically above. Collapsing nodes to a new Graph, a Function, or a Macro. Link to set up but unworked worksheets used in this section. Download free on iTunes. Microsoft Graph data connect provides a set of tools to streamline secure and scalable delivery of Microsoft Graph data to popular Azure data stores. To create this article, volunteer authors worked to edit and improve it over time. Draw Function Graphs Mathematics / Analysis - Plotter - Calculator 4.0. Download free on Google Play. Inspect the graph to see if any vertical line drawn would intersect the curve more than once. 7 = 2 × 0 + 1 0 â a â a = 3. B. C. 14 Graphing a Function Given its Derivative Graph Graph of Graph of Directions: The function on the left is . in which x is called argument (input) of the function f and y is the image (output) of x under f. The graph of a quadratic function is called a parabola. In a function where c is subtracted from the entire function, meaning the function becomes f ( x ) = f ( x ) â c {\displaystyle f(x)=f(x)-c} , the basic graph will shift down c units. This is a quadratic function which passes through the x-axis at the required points. Mathway. Enter a value in Description and select one of the options for Expires and select Add. Graph on the right. Function Grapher is a full featured Graphing Utility that supports graphing two functions together. Stack Exchange Network. Evaluate the function at 0 to find the y-intercept. 
By signing up you are agreeing to receive emails according to our privacy policy. Functions from Verbal Statements - turning word problems into functions. Which functions inverses are also functions? Some of the most characteristics of a function are its Relative Extreme Values. Microsoft Graph connectors (preview) work in the incoming direction, delivering data external to the Microsoft cloud into Microsoft Graph services and applications, to enhance Microsoft 365 experiences such as Microsoft Search. (Most "text book" math is the wrong way round - it gives you the function first and asks you to plug values into that function.) Shifting a graph horizontally. It's not correct! When you let go of the slider it goes back to the middle so you can zoom more. thus adjusting the coordinates and the equation. Tom Lucas, Bristol. 10 straight line graph challenges for use with computer graph plotting software or a graphical display calculator. Example: Here is a graph of the functions sin x (green) and cos 3x (blue). How can I plot the following 3 functions (i.e. Graph transformations. 
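The quadratic facts above (axis of symmetry x = h, vertex (h, k), x-intercepts from setting the function to zero) can be checked numerically. Here is a small Python sketch using the example f(x) = x² − 6x + 7 mentioned in the text:

```python
import math

# f(x) = a*x^2 + b*x + c, with the text's example f(x) = x^2 - 6x + 7
a, b, c = 1.0, -6.0, 7.0

h = -b / (2 * a)              # axis of symmetry x = h
k = a * h**2 + b * h + c      # vertex (h, k)
disc = b**2 - 4 * a * c       # discriminant
roots = sorted([(-b - math.sqrt(disc)) / (2 * a),
                (-b + math.sqrt(disc)) / (2 * a)])

print(h, k)    # 3.0 -2.0
print(roots)   # x-intercepts 3 - sqrt(2) and 3 + sqrt(2)
```

The vertex (3, −2) confirms that f decreases on (−∞, 3) and increases on (3, ∞).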
https://brilliant.org/problems/combinatorial-straight-lines/

# Combinatorial Straight Lines
Discrete Mathematics Level 4
If the coefficients $$A$$ and $$B$$ of the equation of a straight line $$Ax + By = 0$$ are two distinct digits from the numbers $$0,1,2,3,6,7$$, then the number of distinct straight lines is
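A brute-force enumeration makes the count concrete. The sketch below assumes "two distinct digits" means A ≠ B (with at least one of them nonzero, so that Ax + By = 0 is actually a line), and identifies two coefficient pairs whenever they produce the same line through the origin:

```python
from fractions import Fraction

digits = [0, 1, 2, 3, 6, 7]
lines = set()
for A in digits:
    for B in digits:
        if A == B:
            continue                     # A and B must be distinct digits
        if B == 0:
            lines.add('x = 0')           # Ax = 0 with A != 0: the y-axis
        elif A == 0:
            lines.add('y = 0')           # By = 0 with B != 0: the x-axis
        else:
            lines.add(Fraction(-A, B))   # otherwise the line is y = (-A/B) x
count = len(lines)
print(count)  # 18
```

Every such line passes through the origin, so a line is determined by the ratio A : B; under these assumptions the enumeration reports 18 distinct lines (the 20 ordered nonzero pairs collapse by 4 duplicated slopes, e.g. 1/2 = 3/6, plus the two axes).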
http://mathhelpforum.com/algebra/206803-need-help-understand-how-solve.html

# Thread: need help to understand how to solve!
1. ## need help to understand how to solve!
Consider the function below: g(x) = x^4 - x^3 + x^2 - x
Find g(-14).
2. ## Re: need help to understand how to solve!
Originally Posted by impressu2
Consider the function below: g(x) = x^4 - x^3 + x^2 - x
Find g(-14).
If we have defined the function g(x) = x^4 - x^3 + x^2 - x, then we want to find g(-14):
g(-14) = (-14)^4 - (-14)^3 + (-14)^2 - (-14) = 38416 + 2744 + 196 + 14 = 41370
3. ## Re: need help to understand how to solve!

Thank you!
4. ## Re: need help to understand how to solve!
Originally Posted by impressu2
Consider the function below: g(x) = x^4 - x^3 + x^2 - x
Find g(-14).
$\displaystyle g(x)=x^4-x^3+x^2-x=x^3(x-1)+x(x-1)=(x-1)(x^3+x)$, so $g(-14)=(-14-1)\left((-14)^3+(-14)\right)=(-15)(-2744-14)=41370$.
Oops, I guess I was late, but this is another "method".
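Both posts above evaluate the same polynomial. As a quick cross-check, here is a short Python sketch (a hypothetical helper, not from the thread) that evaluates g with Horner's rule:

```python
def g(x):
    # g(x) = x^4 - x^3 + x^2 - x, rewritten via Horner's rule:
    # g(x) = (((x - 1)*x + 1)*x - 1)*x
    return (((x - 1) * x + 1) * x - 1) * x

print(g(-14))  # 41370, matching both answers in the thread
```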
https://www.physicsforums.com/threads/differentail-eq-for-circle-passing-through-origin.110949/

# Differential Eq. for circle passing through origin
1. Feb 16, 2006
### Isma
can some 1 help me with it plzzz
2. Feb 16, 2006
### Tom Mattson
Staff Emeritus
can u post the qstn plzzz?
3. Feb 16, 2006
### Isma
question is find the differential eq. for circle that passes through origin.
4. Feb 16, 2006
### Tom Mattson
Staff Emeritus
That can't be all there is to it. There are an infinite number of circles that pass through the origin.
5. Feb 16, 2006
### Isma
exactly....i m v confused....thats all wat the question is for assignment i ve to submit tomorrow
i just kno the answer but dont kno anything else
ans: 2ay'' + (y')^3 = 0
6. Feb 16, 2006
### vaishakh
2ay + y^3 = 0.
2a = -y^2
This represents a circle!!!!
7. Feb 17, 2006
### Isma
yesss!!
thx a lot :)
8. Feb 17, 2006
### Isma
but how will we come to this eq. from start if we dint know the DE?
9. Feb 17, 2006
### ziad1985

what is the eq you were given for circle that pass in (0,0) ? is it like this one : $$(x-a)^2 +(y-b)^2 = a^2 +b^2$$
Last edited: Feb 17, 2006
10. Feb 17, 2006
### HallsofIvy
How about writing out the entire problem as it was given?
How can that be an answer when there was no "a" in the original question?
In what sense does that represent a circle?
Did you understand what he meant??
Any circle, that passes through the origin can be written
(x - a)^2 + (y - b)^2 = a^2 + b^2
(I just noticed that ziad1985 said that!)
Differentiating wrt x, 2(x - a) + 2(y - b)y' = 0.
Differentiating again, 2 + 2(y')^2 + 2(y - b)y'' = 0.
Now combine those into an equation that does not have either a or b in it.
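Carrying that elimination through (my own working, not stated in the thread) gives an equation free of both a and b: (x² + y²)y'' + 2(1 + (y')²)(y − xy') = 0. A finite-difference sanity check in Python:

```python
import math

# Candidate DE for all circles through the origin, obtained by eliminating
# a and b from (x-a)^2 + (y-b)^2 = a^2 + b^2 and its two derivatives:
#   (x^2 + y^2) y'' + 2 (1 + y'^2) (y - x y') = 0
def residual(y, x, h=1e-4):
    yp = (y(x + h) - y(x - h)) / (2 * h)            # central-difference y'
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2   # central-difference y''
    return (x**2 + y(x)**2) * ypp + 2 * (1 + yp**2) * (y(x) - x * yp)

# Upper arc of the circle with center (1, 1) and radius sqrt(2),
# which does pass through the origin
arc = lambda x: 1 + math.sqrt(2 - (x - 1)**2)
print(residual(arc, 0.5))   # ~ 0, up to finite-difference error

# Control: a circle of radius 2 centered at (0, 5) misses the origin
off = lambda x: 5 + math.sqrt(4 - x**2)
print(residual(off, 0.5))   # clearly nonzero
```

The residual vanishes (to discretization accuracy) on circles through the origin and not on others, which is what elimination of the two parameters should produce.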
11. Feb 19, 2006
### Isma
actually....this assignment was given just after 1st lecture on DE in class...so i was v much messed up in mind....nd abt that thx heheheh....it made sense at that moment but not in the next 1:)
i really appreciate u helpin me ...i m gonna solve it like that
thx!
https://mathoverflow.net/questions/324214/open-problems-in-sobolev-spaces

# Open problems in Sobolev spaces
What are the open problems in the theory of Sobolev spaces?
I would like to see problems that are yes or no only. Also I would like to see problems with the statements that are short and easy to understand for someone who has a basic knowledge in the theory, say at the level of the book by Evans and Gariepy.
The problems do not have to be well-known ones. Just the problems you think are interesting.
Please post one problem per answer; that will allow people to leave comments related exclusively to that particular problem.
I have been working with Sobolev spaces for most of my adult life and I have some of my favorite problems that I will list below. But I will do it later, because first I would like to see your problems.
Let $$H^{s,p}(\mathbb{R}, \mathbb{C})$$ be the fractional order Sobolev space of scalar valued functions (distributions) over the real line, where $$s\in \mathbb R$$ and $$1<p<\infty$$.
It is a theorem by E. Shamir and R. Strichartz that the indicator function of the half line $$1_{\mathbb{R}_+}$$ (equal to $$1$$ for $$x\geq 0$$ and equal to $$0$$ for $$x<0$$) is a pointwise multiplier on $$H^{s,p}(\mathbb{R}, \mathbb{C})$$ if and only if ($$p'$$ dual exponent) $$- \frac{1}{p'} < s < \frac{1}{p}.$$ This means that $$\|1_{\mathbb{R}_+} \cdot f \|_{H^{s,p}} \leq C \|f\|_{H^{s,p}}$$ for all Schwartz functions $$f$$, with a constant $$C > 0$$ independent of $$f$$. This result is trivial for $$s = 0$$ (reducing to an $$L^p$$-space) but non-trivial for $$s\neq 0$$. Strictly outside this range, because of trace considerations, the inequality cannot hold.
My question regards the case of vector-valued functions. Let $$X$$ be a Banach space and let $$H^{s,p}(\mathbb{R}, X)$$ be the Sobolev space of $$X$$-valued functions (distributions), defined in the same way as in the scalar valued case. We could show the multiplier property of $$1_{\mathbb{R}_+}$$ in the same range as in the scalar-valued case provided the Banach space $$X$$ has the UMD property. See here or here, and here, Section 4 for an elementary proof of this fact. As a rule of thumb, all reflexive standard Banach spaces have UMD. Moreover, all UMD spaces are reflexive. Spaces without UMD are thus $$L^1$$ and $$L^\infty$$.
My question is as follows:
Let $$X$$ be a Banach space. Suppose that the inequality $$\|1_{\mathbb{R}_+} \cdot f \|_{H^{s,p}(\mathbb{R}, X)} \leq C \|f\|_{H^{s,p}(\mathbb{R}, X)}$$ holds true for some $$s\neq 0$$ and some $$1<p<\infty$$, for all $$X$$-valued Schwartz functions $$f$$. Does this imply that $$X$$ has the UMD property?
I find this interesting because $$X$$ has the UMD property if and only if the Hilbert transform is a bounded operator on $$L^p(\mathbb{R}, X)$$, i.e. the signum function is a Fourier multiplier on this space. In other words, $$F^{-1} sgn F$$ is a bounded operator on $$L^p(\mathbb{R}, X)$$ ($$F$$ denoting the Fourier transform).
The pointwise multiplier property is equivalent to the boundedness of $$1_{\mathbb{R}_+} F^{-1}(1+|\cdot|^2)^{s/2} F$$ on $$L^p(\mathbb{R}, X)$$. So, given a positive answer the question, this would imply a new characterization of the boundedness of Hilbert transform in terms of a jump function in the time variable - and not in the frequency variable as in the usual definition.
Let $$E \subset \mathbb R^n$$. For $$f : E \to \mathbb R$$, let $$\|f\|_{L^{m,p}(E)} = \inf\{\|F\|_{L^{m,p}(\mathbb R^n)} : F|_E = f\}.$$ Here $$\| \cdot \|_{L^{m,p}}$$ is the homogeneous Sobolev seminorm $$\|F\|_{L^{m,p}(\mathbb R^n)} = \max\limits_{|\alpha| = m} \|\partial^\alpha F\|_{L^p(\mathbb R^n)}$$ Fefferman, Israel, and Luli have shown that in the case $$p>n$$ there is a linear extension operator $$T : L^{m,p}(E) \to L^{m,p}(\mathbb R^n)$$ such that $$Tf|_E = f$$ and $$\|Tf\|_{L^{m,p}(\mathbb R^n)} \leq C \|f\|_{L^{m,p}(E)}$$, where $$C$$ depends on $$m,n,p$$ only. To emphasize the point, $$C$$ does not depend at all on $$E$$, which can be completely arbitrary.
In principle, a result of this kind makes sense whenever $$p > n/m$$, but as far as I know nothing is known about the case $$p \leq n$$. Fefferman, Israel, and Luli have shown quite a bit more about these operators as well, but even the question of whether linear extension operators of uniformly bounded norm exist is open in the case $$p \leq n$$.
http://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Stress_(mechanics)

# Stress (mechanics)
In continuum mechanics, stress is a physical quantity that expresses the internal forces that neighbouring particles of a continuous material exert on each other, while strain is the measure of the resulting deformation of the material. For example, when a solid vertical bar is supporting an overhead weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a closed container under pressure, each particle gets pushed against by all the surrounding particles. The container walls and the pressure-inducing surface (such as a piston) push against them in (Newtonian) reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress is frequently represented by a lowercase Greek letter sigma (σ).
Stress

Residual stresses inside a plastic protractor are revealed by polarized light.

Common symbols: σ
SI unit: pascal (Pa)
Other units: pound-force per square inch (lbf/in², psi), bar
In SI base units: Pa = kg·m⁻¹·s⁻²
Dimension: M L⁻¹ T⁻²
Strain inside a material may arise by various mechanisms, such as stress as applied by external forces to the bulk material (like gravity) or to its surface (like contact forces, external pressure, or friction). Any strain (deformation) of a solid material generates an internal elastic stress, analogous to the reaction force of a spring, that tends to restore the material to its original non-deformed state. In liquids and gases, only deformations that change the volume generate persistent elastic stress. However, if the deformation changes gradually with time, even in fluids there will usually be some viscous stress, opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress.
Significant stress may exist even when deformation is negligible or non-existent (a common assumption when modeling the flow of water). Stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition, or by external electromagnetic fields (as in piezoelectric and magnetostrictive materials).
The relation between mechanical stress, deformation, and the rate of change of deformation can be quite complicated, although a linear approximation may be adequate in practice if the quantities are sufficiently small. Stress that exceeds certain strength limits of the material will result in permanent deformation (such as plastic flow, fracture, cavitation) or even change its crystal structure and chemical composition.
In some branches of engineering, the term stress is occasionally used in a looser sense as a synonym of "internal force". For example, in the analysis of trusses, it may refer to the total traction or compression force acting on a beam, rather than the force divided by the area of its cross-section.
## History
Since ancient times humans have been consciously aware of stress inside materials. Until the 17th century, the understanding of stress was largely intuitive and empirical; and yet, it resulted in some surprisingly sophisticated technology, like the composite bow and glass blowing.[1]
Over several millennia, architects and builders in particular learned how to put together carefully shaped wood beams and stone blocks to withstand, transmit, and distribute stress in the most effective manner, with ingenious devices such as the capitals, arches, cupolas, trusses and the flying buttresses of Gothic cathedrals.
Ancient and medieval architects did develop some geometrical methods and simple formulas to compute the proper sizes of pillars and beams, but the scientific understanding of stress became possible only after the necessary tools were invented in the 17th and 18th centuries: Galileo Galilei's rigorous experimental method, René Descartes's coordinates and analytic geometry, and Newton's laws of motion and equilibrium and calculus of infinitesimals.[2] With those tools, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model for stress in a homogeneous medium. Cauchy observed that the force across an imaginary surface was a linear function of its normal vector; and, moreover, that it must be a symmetric function (with zero total momentum).
The understanding of stress in liquids started with Newton, who provided a differential formula for friction forces (shear stress) in parallel laminar flow.
## Overview
### Definition
Stress is defined as the force across a "small" boundary per unit area of that boundary, for all orientations of the boundary.[3] Being derived from a fundamental physical quantity (force) and a purely geometrical quantity (area), stress is also a fundamental quantity, like velocity, torque or energy, that can be quantified and analyzed without explicit consideration of the nature of the material or of its physical causes.
Following the basic premises of continuum mechanics, stress is a macroscopic concept. Namely, the particles considered in its definition and analysis should be just small enough to be treated as homogeneous in composition and state, but still large enough to ignore quantum effects and the detailed motions of molecules. Thus, the force between two particles is actually the average of a very large number of atomic forces between their molecules; and physical quantities like mass, velocity, and forces that act through the bulk of three-dimensional bodies, like gravity, are assumed to be smoothly distributed over them.[4]:p.90–106 Depending on the context, one may also assume that the particles are large enough to allow the averaging out of other microscopic features, like the grains of a metal rod or the fibers of a piece of wood.
Quantitatively, the stress is expressed by the Cauchy traction vector T defined as the traction force F between adjacent parts of the material across an imaginary separating surface S, divided by the area of S.[5]:p.41–50 In a fluid at rest the force is perpendicular to the surface, and is the familiar pressure. In a solid, or in a flow of viscous liquid, the force F may not be perpendicular to S; hence the stress across a surface must be regarded as a vector quantity, not a scalar. Moreover, the direction and magnitude generally depend on the orientation of S. Thus the stress state of the material must be described by a tensor, called the (Cauchy) stress tensor, which is a linear function that relates the normal vector n of a surface S to the stress T across S. With respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric matrix of 3×3 real numbers. Even within a homogeneous body, the stress tensor may vary from place to place, and may change over time; therefore, the stress within a material is, in general, a time-varying tensor field.
### Normal and shear stress
In general, the stress T that a particle P applies on another particle Q across a surface S can have any direction relative to S. The vector T may be regarded as the sum of two components: the normal stress (compression or tension) perpendicular to the surface, and the shear stress that is parallel to the surface.
If the normal unit vector n of the surface (pointing from Q towards P) is assumed fixed, the normal component can be expressed by a single number, the dot product T · n. This number will be positive if P is "pulling" on Q (tensile stress), and negative if P is "pushing" against Q (compressive stress). The shear component is then the vector T − (T · n)n.
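This decomposition, T = (T · n)n + [T − (T · n)n], can be sketched numerically; the vectors below are illustrative values, not tied to any particular material:

```python
import numpy as np

# Decomposition of a traction vector into normal and shear parts.
# T and n are illustrative values, not taken from the text.
T = np.array([3.0, 4.0, 5.0])      # stress vector across the surface
n = np.array([0.0, 0.0, 1.0])      # unit normal, pointing from Q towards P

sigma_n = np.dot(T, n)             # signed normal stress: > 0 tensile, < 0 compressive
normal_part = sigma_n * n          # component perpendicular to the surface
shear_part = T - normal_part       # component parallel to the surface

tau = np.linalg.norm(shear_part)   # magnitude of the shear stress
```

The two parts always add back to T, and the shear part is orthogonal to n by construction.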
### Units
The dimension of stress is that of pressure, and therefore its coordinates are commonly measured in the same units as pressure: namely, pascals (Pa, that is, newtons per square metre) in the International System, or pounds per square inch (psi) in the Imperial system. Because mechanical stresses easily exceed a million pascals, the megapascal (MPa) is a common unit of stress.
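A minimal sketch of the unit arithmetic (the 250 MPa value is only illustrative; the psi factor is the standard conversion):

```python
# Stress unit bookkeeping: pascal, megapascal, pound per square inch.
PA_PER_MPA = 1.0e6        # 1 MPa = 10^6 Pa, by definition
PA_PER_PSI = 6894.757     # 1 psi is approximately 6.894757 kPa

stress_mpa = 250.0                    # illustrative stress level
stress_pa = stress_mpa * PA_PER_MPA   # 2.5e8 Pa
stress_psi = stress_pa / PA_PER_PSI   # roughly 3.6e4 psi
```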
### Causes and effects
Stress in a material body may be due to multiple physical causes, including external influences and internal physical processes. Some of these agents (like gravity, changes in temperature and phase, and electromagnetic fields) act on the bulk of the material, varying continuously with position and time. Other agents (like external loads and friction, ambient pressure, and contact forces) may create stresses and forces that are concentrated on certain surfaces, lines, or points; and possibly also on very short time intervals (as in the impulses due to collisions). In active matter, self-propulsion of microscopic particles generates macroscopic stress profiles[7]. In general, the stress distribution in a body is expressed as a piecewise continuous function of space and time.
Conversely, stress is usually correlated with various effects on the material, possibly including changes in physical properties like birefringence, polarization, and permeability. The imposition of stress by an external agent usually creates some strain (deformation) in the material, even if it is too small to be detected. In a solid material, such strain will in turn generate an internal elastic stress, analogous to the reaction force of a stretched spring, tending to restore the material to its original undeformed state. Fluid materials (liquids, gases and plasmas) by definition can only oppose deformations that would change their volume. However, if the deformation is changing with time, even in fluids there will usually be some viscous stress, opposing that change. Such stresses can be either shear or normal in nature. The molecular origin of shear stresses in fluids is given in the article on viscosity; that of normal viscous stresses can be found in Sharma (2019).[8]
The relation between stress and its effects and causes, including deformation and rate of change of deformation, can be quite complicated (although a linear approximation may be adequate in practice if the quantities are small enough). Stress that exceeds certain strength limits of the material will result in permanent deformation (such as plastic flow, fracture, cavitation) or even change its crystal structure and chemical composition.
## Simple stress
In some situations, the stress within a body may adequately be described by a single number, or by a single vector (a number and a direction). Three such simple stress situations, that are often encountered in engineering design, are the uniaxial normal stress, the simple shear stress, and the isotropic normal stress.[9]
### Uniaxial normal stress
A common situation with a simple stress pattern is when a straight rod, with uniform material and cross section, is subjected to tension by opposite forces of magnitude ${\displaystyle F}$ along its axis. If the system is in equilibrium and not changing with time, and the weight of the bar can be neglected, then through each transversal section of the bar the top part must pull on the bottom part with the same force F, acting with continuity through the full cross-sectional area A. Therefore, the stress σ throughout the bar, across any horizontal surface, can be expressed by a single number, calculated from the magnitude of those forces F and the cross-sectional area A.
${\displaystyle \sigma ={\frac {F}{A}}}$
On the other hand, if one imagines the bar being cut along its length, parallel to the axis, there will be no force (hence no stress) between the two halves across the cut.
This type of stress may be called (simple) normal stress or uniaxial stress; specifically, (uniaxial, simple, etc.) tensile stress.[9] If the load is compression on the bar, rather than stretching it, the analysis is the same except that the force F and the stress ${\displaystyle \sigma }$ change sign, and the stress is called compressive stress.
This analysis assumes the stress is evenly distributed over the entire cross-section. In practice, depending on how the bar is attached at the ends and how it was manufactured, this assumption may not be valid. In that case, the value ${\displaystyle \sigma }$ = F/A will be only the average stress, called engineering stress or nominal stress. However, if the bar's length L is many times its diameter D, and it has no gross defects or built-in stress, then the stress can be assumed to be uniformly distributed over any cross-section that is more than a few times D from both ends. (This observation is known as Saint-Venant's principle.)
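As a numerical sketch of σ = F/A (the load and geometry are assumed values, not from the text):

```python
import numpy as np

# Average (engineering) normal stress in an axially loaded round bar.
F = 10_000.0               # axial force, N (positive = tension)
d = 0.02                   # bar diameter, m
A = np.pi * (d / 2) ** 2   # cross-sectional area, about 3.14e-4 m^2

sigma = F / A              # nominal stress, Pa (about 31.8 MPa)
sigma_compressive = -F / A # compression just flips the sign
```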
Normal stress occurs in many other situations besides axial tension and compression. If an elastic bar with uniform and symmetric cross-section is bent in one of its planes of symmetry, the resulting bending stress will still be normal (perpendicular to the cross-section), but will vary over the cross section: the outer part will be under tensile stress, while the inner part will be compressed. Another variant of normal stress is the hoop stress that occurs on the walls of a cylindrical pipe or vessel filled with pressurized fluid.
### Simple shear stress
Another simple type of stress occurs when a uniformly thick layer of elastic material like glue or rubber is firmly attached to two stiff bodies that are pulled in opposite directions by forces parallel to the layer; or a section of a soft metal bar that is being cut by the jaws of a scissors-like tool. Let F be the magnitude of those forces, and M be the midplane of that layer. Just as in the normal stress case, the part of the layer on one side of M must pull the other part with the same force F. Assuming that the direction of the forces is known, the stress across M can be expressed by the single number ${\displaystyle \tau }$, calculated from the magnitude of those forces F and the cross-sectional area A.
${\displaystyle \tau ={\frac {F}{A}}}$
However, unlike normal stress, this simple shear stress is directed parallel to the cross-section considered, rather than perpendicular to it.[9] For any plane S that is perpendicular to the layer, the net internal force across S, and hence the stress, will be zero.
As in the case of an axially loaded bar, in practice the shear stress may not be uniformly distributed over the layer; so, as before, the ratio F/A will only be an average ("nominal", "engineering") stress. However, that average is often sufficient for practical purposes.[10]:p.292 Shear stress is observed also when a cylindrical bar such as a shaft is subjected to opposite torques at its ends. In that case, the shear stress on each cross-section is parallel to the cross-section, but oriented tangentially relative to the axis, and increases with distance from the axis. Significant shear stress occurs in the middle plate (the "web") of I-beams under bending loads, due to the web constraining the end plates ("flanges").
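The shear stress distribution in a twisted circular shaft mentioned above follows the classical elastic torsion formula τ(r) = Tr/J; a sketch with assumed numbers (torque and diameter are illustrative):

```python
import numpy as np

# Shear stress in a circular shaft under torque: zero on the axis,
# maximal at the surface.
T_torque = 500.0                 # applied torque, N·m
d = 0.04                         # shaft diameter, m
J = np.pi * d**4 / 32            # polar second moment of area, m^4

r = np.linspace(0.0, d / 2, 5)   # radii from the axis to the surface
tau = T_torque * r / J           # shear stress profile, Pa

tau_max = tau[-1]                # about 39.8 MPa at the outer surface
```

Note how the stress grows linearly with distance from the axis, exactly as the text describes.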
### Isotropic stress
Another simple type of stress occurs when the material body is under equal compression or tension in all directions. This is the case, for example, in a portion of liquid or gas at rest, whether enclosed in some container or as part of a larger mass of fluid; or inside a cube of elastic material that is being pressed or pulled on all six faces by equal perpendicular forces — provided, in both cases, that the material is homogeneous, without built-in stress, and that the effect of gravity and other external forces can be neglected.
In these situations, the stress across any imaginary internal surface turns out to be equal in magnitude and always directed perpendicularly to the surface independently of the surface's orientation. This type of stress may be called isotropic normal or just isotropic; if it is compressive, it is called hydrostatic pressure or just pressure. Gases by definition cannot withstand tensile stresses, but some liquids may withstand surprisingly large amounts of isotropic tensile stress under some circumstances; see Z-tube.
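In tensor terms (anticipating a later section), an isotropic stress state is −p times the identity, so the traction is purely normal for every surface orientation; a quick sketch with an assumed pressure:

```python
import numpy as np

# Isotropic (hydrostatic) stress: sigma = -p * I, with p an assumed pressure.
p = 2.0e5                     # pressure, Pa
sigma = -p * np.eye(3)

# Across any orientation, the traction equals -p * n: no shear component.
for n in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.6, 0.8])):
    traction = sigma @ n
    shear = traction - (traction @ n) * n   # vanishes for every unit normal n
```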
### Cylinder stresses
Parts with rotational symmetry, such as wheels, axles, pipes, and pillars, are very common in engineering. Often the stress patterns that occur in such parts have rotational or even cylindrical symmetry. The analysis of such cylinder stresses can take advantage of the symmetry to reduce the dimension of the domain and/or of the stress tensor.
## General stress
Often, mechanical bodies experience more than one type of stress at the same time; this is called combined stress. In normal and shear stress, the magnitude of the stress is maximum for surfaces that are perpendicular to a certain direction ${\displaystyle d}$, and zero across any surfaces that are parallel to ${\displaystyle d}$. When the shear stress is zero only across surfaces that are perpendicular to one particular direction, the stress is called biaxial, and can be viewed as the sum of two normal or shear stresses. In the most general case, called triaxial stress, the stress is nonzero across every surface element.
### The Cauchy stress tensor
Combined stresses cannot be described by a single vector. Even if the material is stressed in the same way throughout the volume of the body, the stress across any imaginary surface will depend on the orientation of that surface, in a non-trivial way.
However, Cauchy observed that the stress vector ${\displaystyle T}$ across a surface will always be a linear function of the surface's normal vector ${\displaystyle n}$, the unit-length vector that is perpendicular to it. That is, ${\displaystyle T={\boldsymbol {\sigma }}(n)}$, where the function ${\displaystyle {\boldsymbol {\sigma }}}$ satisfies
${\displaystyle {\boldsymbol {\sigma }}(\alpha u+\beta v)=\alpha {\boldsymbol {\sigma }}(u)+\beta {\boldsymbol {\sigma }}(v)}$
for any vectors ${\displaystyle u,v}$ and any real numbers ${\displaystyle \alpha ,\beta }$. The function ${\displaystyle {\boldsymbol {\sigma }}}$, now called the (Cauchy) stress tensor, completely describes the stress state of a uniformly stressed body. (Today, any linear connection between two physical vector quantities is called a tensor, reflecting Cauchy's original use to describe the "tensions" (stresses) in a material.) In tensor calculus, ${\displaystyle {\boldsymbol {\sigma }}}$ is classified as a second-order tensor of type (0,2).
Like any linear map between vectors, the stress tensor can be represented in any chosen Cartesian coordinate system by a 3×3 matrix of real numbers. Depending on whether the coordinates are numbered ${\displaystyle x_{1},x_{2},x_{3}}$ or named ${\displaystyle x,y,z}$, the matrix may be written as
${\displaystyle {\begin{bmatrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\end{bmatrix}}\quad \quad \quad }$ or ${\displaystyle \quad \quad \quad {\begin{bmatrix}\sigma _{xx}&\sigma _{xy}&\sigma _{xz}\\\sigma _{yx}&\sigma _{yy}&\sigma _{yz}\\\sigma _{zx}&\sigma _{zy}&\sigma _{zz}\\\end{bmatrix}}}$
The stress vector ${\displaystyle T={\boldsymbol {\sigma }}(n)}$ across a surface with normal vector ${\displaystyle n}$ with coordinates ${\displaystyle n_{1},n_{2},n_{3}}$ is then a matrix product ${\displaystyle T=n\cdot {\boldsymbol {\sigma }}={\boldsymbol {\sigma }}^{T}\cdot n^{T}}$ (where the superscript T denotes transposition; see Cauchy stress tensor), that is
${\displaystyle {\begin{bmatrix}T_{1}\\T_{2}\\T_{3}\end{bmatrix}}={\begin{bmatrix}\sigma _{11}&\sigma _{21}&\sigma _{31}\\\sigma _{12}&\sigma _{22}&\sigma _{32}\\\sigma _{13}&\sigma _{23}&\sigma _{33}\end{bmatrix}}{\begin{bmatrix}n_{1}\\n_{2}\\n_{3}\end{bmatrix}}}$
The linear relation between ${\displaystyle T}$ and ${\displaystyle n}$ follows from the fundamental laws of conservation of linear momentum and static equilibrium of forces, and is therefore mathematically exact, for any material and any stress situation. The components of the Cauchy stress tensor at every point in a material satisfy the equilibrium equations (Cauchy’s equations of motion for zero acceleration). Moreover, the principle of conservation of angular momentum implies that the stress tensor is symmetric, that is ${\displaystyle \sigma _{12}=\sigma _{21}}$, ${\displaystyle \sigma _{13}=\sigma _{31}}$, and ${\displaystyle \sigma _{23}=\sigma _{32}}$. Therefore, the stress state of the medium at any point and instant can be specified by only six independent parameters, rather than nine. These may be written
${\displaystyle {\begin{bmatrix}\sigma _{x}&\tau _{xy}&\tau _{xz}\\\tau _{xy}&\sigma _{y}&\tau _{yz}\\\tau _{xz}&\tau _{yz}&\sigma _{z}\end{bmatrix}}}$
where the elements ${\displaystyle \sigma _{x},\sigma _{y},\sigma _{z}}$ are called the orthogonal normal stresses (relative to the chosen coordinate system), and ${\displaystyle \tau _{xy},\tau _{xz},\tau _{yz}}$ the orthogonal shear stresses.
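A numerical sketch of the matrix product T = σ n, using an assumed symmetric stress state (components in MPa, chosen only for illustration):

```python
import numpy as np

# An assumed symmetric Cauchy stress tensor in some chosen frame, MPa.
sigma = np.array([[50.0, 30.0,  0.0],
                  [30.0, -20.0, 0.0],
                  [ 0.0,  0.0, 10.0]])
assert np.allclose(sigma, sigma.T)   # symmetry (angular momentum balance)

n = np.array([1.0, 0.0, 0.0])        # unit normal of the imaginary cut
T = sigma @ n                        # traction across the cut

normal_stress = T @ n                # 50 MPa, tensile
shear_vector = T - normal_stress * n # the shear part of the traction
```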
### Change of coordinates
The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is the Mohr's circle of stress distribution.
As a symmetric 3×3 real matrix, the stress tensor ${\displaystyle {\boldsymbol {\sigma }}}$ has three mutually orthogonal unit-length eigenvectors ${\displaystyle e_{1},e_{2},e_{3}}$ and three real eigenvalues ${\displaystyle \lambda _{1},\lambda _{2},\lambda _{3}}$, such that ${\displaystyle {\boldsymbol {\sigma }}e_{i}=\lambda _{i}e_{i}}$. Therefore, in a coordinate system with axes ${\displaystyle e_{1},e_{2},e_{3}}$, the stress tensor is a diagonal matrix, and has only the three normal components ${\displaystyle \lambda _{1},\lambda _{2},\lambda _{3}}$, called the principal stresses. If the three eigenvalues are equal, the stress is an isotropic compression or tension, always perpendicular to any surface, there is no shear stress, and the tensor is a diagonal matrix in any coordinate frame.
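Computationally, the principal stresses and directions are just an eigendecomposition of the symmetric tensor; a sketch with an illustrative stress state (`numpy.linalg.eigh` is NumPy's symmetric eigensolver):

```python
import numpy as np

# Principal stresses = eigenvalues of the (symmetric) stress tensor,
# principal directions = the corresponding eigenvectors. Values are illustrative.
sigma = np.array([[50.0, 30.0,  0.0],
                  [30.0, -20.0, 0.0],
                  [ 0.0,  0.0, 10.0]])

lam, e = np.linalg.eigh(sigma)    # ascending eigenvalues, orthonormal eigenvectors

# In the principal frame the stress tensor is diagonal.
diag = e.T @ sigma @ e
```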
### Stress as a tensor field
In general, stress is not uniformly distributed over a material body, and may vary with time. Therefore, the stress tensor must be defined for each point and each moment, by considering an infinitesimal particle of the medium surrounding that point, and taking the average stresses in that particle as being the stresses at the point.
### Stress in thin plates
Man-made objects are often made from stock plates of various materials by operations that do not change their essentially two-dimensional character, like cutting, drilling, gentle bending and welding along the edges. The description of stress in such bodies can be simplified by modeling those parts as two-dimensional surfaces rather than three-dimensional bodies.
In that view, one redefines a "particle" as being an infinitesimal patch of the plate's surface, so that the boundary between adjacent particles becomes an infinitesimal line element; both are implicitly extended in the third dimension, normal to (straight through) the plate. "Stress" is then redefined as being a measure of the internal forces between two adjacent "particles" across their common line element, divided by the length of that line. Some components of the stress tensor can be ignored, but since particles are not infinitesimal in the third dimension one can no longer ignore the torque that a particle applies on its neighbors. That torque is modeled as a bending stress that tends to change the curvature of the plate. However, these simplifications may not hold at welds, at sharp bends and creases (where the radius of curvature is comparable to the thickness of the plate).
### Stress in thin beams
The analysis of stress can be considerably simplified also for thin bars, beams or wires of uniform (or smoothly varying) composition and cross-section that are subjected to moderate bending and twisting. For those bodies, one may consider only cross-sections that are perpendicular to the bar's axis, and redefine a "particle" as being a piece of wire with infinitesimal length between two such cross sections. The ordinary stress is then reduced to a scalar (tension or compression of the bar), but one must take into account also a bending stress (that tries to change the bar's curvature, in some direction perpendicular to the axis) and a torsional stress (that tries to twist or un-twist it about its axis).
### Other descriptions of stress
The Cauchy stress tensor is used for stress analysis of material bodies experiencing small deformations where the differences in stress distribution in most cases can be neglected. For large deformations, also called finite deformations, other measures of stress, such as the first and second Piola–Kirchhoff stress tensors, the Biot stress tensor, and the Kirchhoff stress tensor, are required.
Solids, liquids, and gases have stress fields. Static fluids support normal stress but will flow under shear stress. Moving viscous fluids can support shear stress (dynamic pressure). Solids can support both shear and normal stress, with ductile materials failing under shear and brittle materials failing under normal stress. All materials have temperature dependent variations in stress-related properties, and non-Newtonian materials have rate-dependent variations.
## Stress analysis
Stress analysis is a branch of applied physics that covers the determination of the distribution of internal forces in solid objects. It is an essential tool in engineering for the study and design of structures such as tunnels, dams, mechanical parts, and structural frames, under prescribed or expected loads. It is also important in many other disciplines; for example, in geology, to study phenomena like plate tectonics, vulcanism and avalanches; and in biology, to understand the anatomy of living beings.
### Goals and assumptions
Stress analysis is generally concerned with objects and structures that can be assumed to be in macroscopic static equilibrium. By Newton's laws of motion, any external forces applied to such a system must be balanced by internal reaction forces,[11]:p.97 which are almost always surface contact forces between adjacent particles, that is, stress.[5] Since every particle needs to be in equilibrium, this reaction stress will generally propagate from particle to particle, creating a stress distribution throughout the body.
The typical problem in stress analysis is to determine these internal stresses, given the external forces that are acting on the system. The latter may be body forces (such as gravity or magnetic attraction), that act throughout the volume of a material;[12]:p.42–81 or concentrated loads (such as friction between an axle and a bearing, or the weight of a train wheel on a rail), that are imagined to act over a two-dimensional area, or along a line, or at single point.
In stress analysis one normally disregards the physical causes of the forces or the precise nature of the materials. Instead, one assumes that the stresses are related to deformation (and, in non-static problems, to the rate of deformation) of the material by known constitutive equations.[13]
### Methods
Stress analysis may be carried out experimentally, by applying loads to the actual artifact or to a scale model and measuring the resulting stresses, by any of several available methods. This approach is often used for safety certification and monitoring. However, most stress analysis is done by mathematical methods, especially during design. The basic stress analysis problem can be formulated by Euler's equations of motion for continuous bodies (which are consequences of Newton's laws for conservation of linear momentum and angular momentum) and the Euler–Cauchy stress principle, together with the appropriate constitutive equations. Thus one obtains a system of partial differential equations involving the stress tensor field and the strain tensor field, as unknown functions to be determined. The external body forces appear as the independent ("right-hand side") term in the differential equations, while the concentrated forces appear as boundary conditions. The basic stress analysis problem is therefore a boundary-value problem.
Stress analysis for elastic structures is based on the theory of elasticity and infinitesimal strain theory. When the applied loads cause permanent deformation, one must use more complicated constitutive equations, that can account for the physical processes involved (plastic flow, fracture, phase change, etc.).
However, engineered structures are usually designed so that the maximum expected stresses are well within the range of linear elasticity (the generalization of Hooke’s law for continuous media); that is, the deformations caused by internal stresses are linearly related to them. In this case the differential equations that define the stress tensor are linear, and the problem becomes much easier. For one thing, the stress at any point will be a linear function of the loads, too. For small enough stresses, even non-linear systems can usually be assumed to be linear.
Stress analysis is simplified when the physical dimensions and the distribution of loads allow the structure to be treated as one- or two-dimensional. In the analysis of trusses, for example, the stress field may be assumed to be uniform and uniaxial over each member. Then the differential equations reduce to a finite set of equations (usually linear) with finitely many unknowns. In other contexts one may be able to reduce the three-dimensional problem to a two-dimensional one, and/or replace the general stress and strain tensors by simpler models like uniaxial tension/compression, simple shear, etc.
Still, for two- or three-dimensional cases one must solve a partial differential equation problem. Analytical or closed-form solutions to the differential equations can be obtained when the geometry, constitutive relations, and boundary conditions are simple enough. Otherwise one must generally resort to numerical approximations such as the finite element method, the finite difference method, and the boundary element method.
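As a toy illustration of the numerical route, consider a hypothetical one-dimensional bar, fixed at one end and pulled at the other, discretized by finite differences; all values here are assumed, and the exact answer is the uniform stress F/A:

```python
import numpy as np

# 1-D bar: EA * u''(x) = 0 with u(0) = 0 and EA * u'(L) = F.
# The exact solution has uniform stress sigma = E u'(x) = F / A.
L_bar, E, A, F = 1.0, 200e9, 1e-4, 1000.0
n = 50
x = np.linspace(0.0, L_bar, n)
h = x[1] - x[0]

# Assemble the finite-difference system K u = b.
K = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):
    K[i, i - 1:i + 2] = [1.0, -2.0, 1.0]   # u_{i-1} - 2 u_i + u_{i+1} = 0
K[0, 0] = 1.0                              # Dirichlet: u(0) = 0
K[-1, -2:] = [-1.0, 1.0]                   # backward difference for u'(L)
b[-1] = F * h / (E * A)                    # EA * (u_n - u_{n-1}) / h = F

u = np.linalg.solve(K, b)                  # displacement field
sigma = E * np.gradient(u, x)              # recovered stress field, about F/A = 10 MPa
```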
## Alternative measures of stress
Other useful stress measures include the first and second Piola–Kirchhoff stress tensors, the Biot stress tensor, and the Kirchhoff stress tensor.
### Piola–Kirchhoff stress tensor
In the case of finite deformations, the Piola–Kirchhoff stress tensors express the stress relative to the reference configuration. This is in contrast to the Cauchy stress tensor which expresses the stress relative to the present configuration. For infinitesimal deformations and rotations, the Cauchy and Piola–Kirchhoff tensors are identical.
Whereas the Cauchy stress tensor ${\displaystyle {\boldsymbol {\sigma }}}$ relates stresses in the current configuration, the deformation gradient and strain tensors are described by relating the motion to the reference configuration; thus not all tensors describing the state of the material are in either the reference or current configuration. Describing the stress, strain and deformation either in the reference or current configuration would make it easier to define constitutive models (for example, the Cauchy stress tensor varies under a pure rotation, while the deformation strain tensor is invariant; this makes it problematic to define a constitutive model relating a varying tensor to an invariant one, since by definition constitutive models have to be invariant to pure rotations). The 1st Piola–Kirchhoff stress tensor, ${\displaystyle {\boldsymbol {P}}}$, is one possible solution to this problem. It defines a family of tensors, which describe the configuration of the body in either the current or the reference state.
The 1st Piola–Kirchhoff stress tensor, ${\displaystyle {\boldsymbol {P}}}$ relates forces in the present ("spatial") configuration with areas in the reference ("material") configuration.
${\displaystyle {\boldsymbol {P}}=J~{\boldsymbol {\sigma }}~{\boldsymbol {F}}^{-T}~}$
where ${\displaystyle {\boldsymbol {F}}}$ is the deformation gradient and ${\displaystyle J=\det {\boldsymbol {F}}}$ is the Jacobian determinant.
In terms of components with respect to an orthonormal basis, the first Piola–Kirchhoff stress is given by
${\displaystyle P_{iL}=J~\sigma _{ik}~F_{Lk}^{-1}=J~\sigma _{ik}~{\cfrac {\partial X_{L}}{\partial x_{k}}}}$
Because it relates different coordinate systems, the 1st Piola–Kirchhoff stress is a two-point tensor. In general, it is not symmetric. The 1st Piola–Kirchhoff stress is the 3D generalization of the 1D concept of engineering stress.
If the material rotates without a change in stress state (rigid rotation), the components of the 1st Piola–Kirchhoff stress tensor will vary with material orientation.
The 1st Piola–Kirchhoff stress is energy conjugate to the deformation gradient.
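A sketch of the formula P = J σ F⁻ᵀ with an assumed homogeneous stretch (20% axial extension with 10% lateral contraction; the 120 MPa Cauchy stress is illustrative):

```python
import numpy as np

# First Piola–Kirchhoff stress from an assumed deformation and Cauchy stress.
F = np.diag([1.2, 0.9, 0.9])            # deformation gradient (diagonal stretch)
J = np.linalg.det(F)                    # Jacobian (volume ratio), 0.972

sigma = np.diag([120.0, 0.0, 0.0])      # Cauchy (true) stress, MPa

P = J * sigma @ np.linalg.inv(F).T      # P[0,0] = 97.2 MPa

# P[0,0] is the engineering stress: the true stress scaled by
# current area / reference area = (0.9 * 0.9) / 1.
```

This is the 3-D analogue of the 1-D engineering-stress idea mentioned above: force in the current configuration divided by area in the reference configuration.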
#### 2nd Piola–Kirchhoff stress tensor
Whereas the 1st Piola–Kirchhoff stress relates forces in the current configuration to areas in the reference configuration, the 2nd Piola–Kirchhoff stress tensor ${\displaystyle {\boldsymbol {S}}}$ relates forces in the reference configuration to areas in the reference configuration. The force in the reference configuration is obtained via a mapping that preserves the relative relationship between the force direction and the area normal in the reference configuration.
${\displaystyle {\boldsymbol {S}}=J~{\boldsymbol {F}}^{-1}\cdot {\boldsymbol {\sigma }}\cdot {\boldsymbol {F}}^{-T}~.}$
In index notation with respect to an orthonormal basis,
${\displaystyle S_{IL}=J~F_{Ik}^{-1}~F_{Lm}^{-1}~\sigma _{km}=J~{\cfrac {\partial X_{I}}{\partial x_{k}}}~{\cfrac {\partial X_{L}}{\partial x_{m}}}~\sigma _{km}}$
This tensor, a one-point tensor, is symmetric.
If the material rotates without a change in stress state (rigid rotation), the components of the 2nd Piola–Kirchhoff stress tensor remain constant, irrespective of material orientation.
The 2nd Piola–Kirchhoff stress tensor is energy conjugate to the Green–Lagrange finite strain tensor.
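Both claimed properties of the second Piola–Kirchhoff tensor (symmetry, and invariance under a superposed rigid rotation) can be checked numerically; the deformation and stress values below are assumptions for illustration:

```python
import numpy as np

# Second Piola–Kirchhoff stress: S = J * F^{-1} * sigma * F^{-T}.
def second_pk(F, sigma):
    Finv = np.linalg.inv(F)
    return np.linalg.det(F) * Finv @ sigma @ Finv.T

F = np.diag([1.2, 0.9, 0.9])                 # assumed deformation gradient
sigma = np.array([[120.0, 15.0, 0.0],        # assumed symmetric Cauchy stress, MPa
                  [ 15.0, 10.0, 0.0],
                  [  0.0,  0.0, 0.0]])

S = second_pk(F, sigma)                      # symmetric one-point tensor

# Superpose a rigid rotation R about z: F -> R F, sigma -> R sigma R^T.
a = 0.3
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
S_rot = second_pk(R @ F, R @ sigma @ R.T)    # unchanged: S_rot == S
```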
## References
1. Gordon, J.E. (2003). Structures, or, Why things don't fall down (2. Da Capo Press ed.). Cambridge, MA: Da Capo Press. ISBN 0306812835.
2. Jacob Lubliner (2008). "Plasticity Theory" Archived 2010-03-31 at the Wayback Machine (revised edition). Dover Publications. ISBN 0-486-46290-0
3. Wai-Fah Chen and Da-Jian Han (2007), "Plasticity for Structural Engineers". J. Ross Publishing ISBN 1-932159-75-4
4. Peter Chadwick (1999), "Continuum Mechanics: Concise Theory and Problems". Dover Publications, series "Books on Physics". ISBN 0-486-40180-4. pages
5. I-Shih Liu (2002), "Continuum Mechanics". Springer ISBN 3-540-43019-9
6. (2009) The art of making glass. Lamberts Glashütte (LambertsGlas) product brochure. Accessed on 2013-02-08.
7. Marchetti, M. C.; Joanny, J. F.; Ramaswamy, S.; Liverpool, T. B.; Prost, J.; Rao, Madan; Simha, R. Aditi (2013). "Hydrodynamics of soft active matter". Reviews of Modern Physics. 85 (3): 1143–1189. doi:10.1103/RevModPhys.85.1143.
8. Sharma, B and Kumar, R "Estimation of bulk viscosity of dilute gases using a nonequilibrium molecular dynamics approach.", Physical Review E,100, 013309 (2019)
9. Ronald L. Huston and Harold Josephs (2009), "Practical Stress Analysis in Engineering Design". 3rd edition, CRC Press, 634 pages. ISBN 9781574447132
10. Walter D. Pilkey, Orrin H. Pilkey (1974), "Mechanics of solids" (book)
11. Donald Ray Smith and Clifford Truesdell (1993) "An Introduction to Continuum Mechanics after Truesdell and Noll". Springer. ISBN 0-7923-2454-4
12. Fridtjov Irgens (2008), "Continuum Mechanics". Springer. ISBN 3-540-74297-2
13. William S. Slaughter (2012), "The Linearized Theory of Elasticity". Birkhäuser Basel ISBN 978-0-8176-4117-7
# Stochastic Calculus Assignment Writing Service | GAUSSIAN PROCESSES
my-assignmentexpert™ offers stochastic calculus assignment writing: submit your requirements for free, pay only once you are satisfied, and receive a full refund for grades below 80%, so there is nothing to worry about. A professional team of master's and PhD writers delivers every order reliably and on time, with a 100% originality guarantee. my-assignmentexpert™ provides the highest-quality stochastic calculus assignment help, serving North America, Europe, Australia and other regions. On pricing, we take students' financial situations into account and offer the most reasonable rates while guaranteeing quality. Because stochastic calculus assignments vary widely in type and difficulty, and most have no fixed word count, the price of stochastic calculus assignment writing is not fixed; a quote is usually given after an expert has reviewed the assignment requirements, and difficulty and deadline also strongly affect the price.
my-assignmentexpert™ safeguards your study-abroad career and has already built its reputation in economics assignment writing, guaranteeing reliable, high-quality and original calculus writing services. Our experts have extensive experience in stochastic calculus writing, so any related assignment goes without saying.
• Stochastic partial differential equations
• Stochastic control
• Itô integral
• Black–Scholes–Merton option pricing formula
• Fokker–Planck equation
• Brownian motion
## Calculus Assignment Help | Gaussian random variables in $R^{k}$
1. The normal distribution $N=N\left(\mu, \sigma^{2}\right)$ on $R$ with mean $\mu$ and variance $\sigma^{2}$ is defined by
$$N(d x)=\frac{1}{\sigma \sqrt{2 \pi}} \exp \left(-\frac{(x-\mu)^{2}}{2 \sigma^{2}}\right) d x$$
The characteristic function (Fourier transform) of this distribution is given by
$$\hat{N}(t)=\int_{R} e^{i t x} N(d x)=\exp \left(i \mu t-\frac{1}{2} \sigma^{2} t^{2}\right), \quad t \in R$$
In the case of a mean zero normal distribution $N=N\left(0, \sigma^{2}\right)$ this becomes
$$N(d x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-x^{2} / 2 \sigma^{2}} d x, \quad \text { and } \quad \hat{N}(t)=e^{-\sigma^{2} t^{2} / 2}, \quad t \in R$$
and the standard normal distribution $N(0,1)$ satisfies
$$N(0,1)(d x)=\frac{1}{\sqrt{2 \pi}} e^{-x^{2} / 2} d x, \quad \text { and } \quad \widehat{N(0,1)}(t)=e^{-t^{2} / 2}, \quad t \in R .$$
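The closed forms above are easy to sanity-check numerically. The following stand-alone script (my addition, not part of the notes; the integration range and grid size are ad hoc) approximates $E\left(e^{i t X}\right)$ by a midpoint Riemann sum against the density and compares it with $\exp \left(i \mu t-\frac{1}{2} \sigma^{2} t^{2}\right)$:

```python
import cmath
import math

def normal_char_fn(t, mu=0.0, sigma=1.0, lo=-10.0, hi=10.0, n=20000):
    """Approximate E[e^{itX}] for X ~ N(mu, sigma^2) by a midpoint
    Riemann sum of e^{itx} against the normal density."""
    dx = (hi - lo) / n
    total = 0j
    for k in range(n):
        x = lo + (k + 0.5) * dx
        density = math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += cmath.exp(1j * t * x) * density * dx
    return total

# Compare with the closed form exp(i*mu*t - sigma^2 * t^2 / 2):
for t in (0.0, 0.5, 1.0, 2.0):
    approx = normal_char_fn(t, mu=1.0, sigma=2.0, lo=-20.0, hi=20.0)
    exact = cmath.exp(1j * 1.0 * t - 0.5 * (2.0 ** 2) * t ** 2)
    assert abs(approx - exact) < 1e-4, (t, approx, exact)
```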
For $\sigma^{2}=0$ the distribution $N\left(0, \sigma^{2}\right)=N(0,0)$ is not defined by the above density but is interpreted to be the point measure $N(0,0)=\epsilon_{0}$ concentrated at 0. With this interpretation the formula for the characteristic function $\widehat{N(0,0)}(t)=\hat{\epsilon}_{0}(t)=1=e^{-\sigma^{2} t^{2} / 2}$ holds in this case also.
The characteristic function of a random vector $X: \Omega \rightarrow R^{k}$ is defined to be the characteristic function of the distribution $P_{X}$ of $X$, that is, the function
$$F_{X}(t)=\hat{P}_{X}(t)=\int_{R^{k}} e^{i(t, x)} P_{X}(d x)=E\left(e^{i(t, X)}\right), \quad t \in R^{k} .$$
Recall that the components $X_{1}, \ldots, X_{k}$ of the random vector $X=\left(X_{1}, \ldots, X_{k}\right)^{\prime}$ are independent if and only if the joint distribution $P_{X}$ is the product measure $P_{X_{1}} \otimes P_{X_{2}} \otimes \ldots \otimes P_{X_{k}}$. This is easily seen to be equivalent with the factorization
$$F_{X}(t)=F_{X_{1}}\left(t_{1}\right) F_{X_{2}}\left(t_{2}\right) \ldots F_{X_{k}}\left(t_{k}\right), \quad \forall t=\left(t_{1}, t_{2}, \ldots, t_{k}\right)^{\prime} \in R^{k} .$$
Covariance matrix. The $k \times k$-matrix $C$ defined by $C_{i j}=E\left[\left(X_{i}-m_{i}\right)\left(X_{j}-m_{j}\right)\right]$, where $m_{i}=E X_{i}$, is called the covariance matrix $C$ of $X$. Here it is assumed that all relevant expectations exist. Set $m=\left(m_{1}, m_{2}, \ldots, m_{k}\right)^{\prime}$ and note that the matrix $\left(\left(X_{i}-m_{i}\right)\left(X_{j}-m_{j}\right)\right)_{i j}$ can be written as the product $(X-m)(X-m)^{\prime}$ of the column vector $(X-m)$ with the row vector $(X-m)^{\prime}$. Taking expectations entry by entry, we see that the covariance matrix $C$ of $X$ can also be written as $C=E\left[(X-m)(X-m)^{\prime}\right]$ in complete formal analogy to the covariance in the one dimensional case. Clearly $C$ is symmetric. Moreover, for each vector $t=\left(t_{1}, \ldots, t_{k}\right)^{\prime} \in R^{k}$ we have
$$0 \leq \operatorname{Var}\left(t_{1} X_{1}+\ldots+t_{k} X_{k}\right)=\sum_{i j} t_{i} t_{j} \operatorname{Cov}\left(X_{i} X_{j}\right)=\sum_{i j} C_{i j} t_{i} t_{j}=(C t, t)$$
and it follows that the covariance matrix $C$ is positive semidefinite. Let us note the effect of affine transformations on characteristic functions:
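The announced statement is not displayed in this excerpt, although the proof below cites it as (1.a.0). Presumably it is the standard identity: if $X$ is a random vector in $R^{k}$, $A$ an $n \times k$ matrix and $b \in R^{n}$, then
$$F_{A X+b}(t)=E\left(e^{i(t, A X+b)}\right)=e^{i(t, b)} E\left(e^{i\left(A^{\prime} t, X\right)}\right)=e^{i(t, b)} F_{X}\left(A^{\prime} t\right), \quad t \in R^{n},$$
and in particular $F_{A X}(t)=F_{X}\left(A^{\prime} t\right)$, which is exactly the form used with $A=\pi_{G F}$ in the proof of the theorem below.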
## Calculus Assignment Help | Theorem
1.b.0 Theorem. Let $T$ be an index set, $m: T \rightarrow R, C: T \times T \rightarrow R$ functions and assume that the matrix $C_{F}:=(C(s, t))_{s, t \in F}$ is selfadjoint and positive semidefinite, for each finite set $F \subseteq T$.
Then there exists a probability $P$ on the product space $(\Omega, \mathcal{F})=\left(R^{T}, \mathcal{B}^{T}\right)$ such that the coordinate maps $X_{t}: \omega \in \Omega \mapsto X_{t}(\omega)=\omega(t), t \in T$, form a Gaussian process $X=\left(X_{t}\right)_{t \in T}:(\Omega, \mathcal{F}, P) \rightarrow\left(R^{T}, \mathcal{B}^{T}\right)$ with mean function $E\left(X_{t}\right)=m(t)$ and covariance function $\operatorname{Cov}\left(X_{s}, X_{t}\right)=C(s, t), s, t \in T$.
Remark. Our choice of $\Omega$ and $X_{t}$ implies that the process $X:(\Omega, \mathcal{F}) \rightarrow\left(R^{T}, \mathcal{B}^{T}\right)$ is the identity map, that is, the path $t \in T \mapsto X_{t}(\omega)$ is the element $\omega \in R^{T}=\Omega$ itself, for each $\omega \in \Omega$.
Proof. Fix any linear order on $T$ and use it to order vector components and matrix entries consistently. For finite subsets $F \subseteq G \subseteq T$ let
$$\begin{aligned} \pi_{F}: x &=\left(x_{t}\right)_{t \in T} \in \Omega=R^{T} \rightarrow\left(x_{t}\right)_{t \in F} \in R^{F} \quad \text { and } \\ \pi_{G F}: x &=\left(x_{t}\right)_{t \in G} \in R^{G} \rightarrow\left(x_{t}\right)_{t \in F} \in R^{F} \end{aligned}$$
denote the natural projections and set
$$m_{F}=(m(t))_{t \in F} \in R^{F}, \quad C_{F}=(C(s, t))_{s, t \in F} \quad \text { and } \quad X_{F}=\left(X_{t}\right)_{t \in F} .$$
Let $P$ be any probability on $(\Omega, \mathcal{F})=\left(R^{T}, \mathcal{B}^{T}\right)$. Since $X:(\Omega, \mathcal{F}, P) \rightarrow\left(R^{T}, \mathcal{B}^{T}\right)$ is the identity map, the distribution of $X$ on $\left(R^{T}, \mathcal{B}^{T}\right)$ is the measure $P$ itself and $\pi_{F}(P)$ is the joint distribution of $X_{F}=\left(X_{t}\right)_{t \in F}$ on $R^{F}$. Thus $X$ is a Gaussian process with mean function $m$ and covariance function $C$ on the probability space $(\Omega, \mathcal{F}, P)$ if and only if the finite dimensional distribution $\pi_{F}(P)$ is the Gaussian law $N\left(m_{F}, C_{F}\right)$, for each finite subset $F \subseteq T$. By Kolmogoroff's existence theorem (appendix D.5) such a probability measure on $(\Omega, \mathcal{F})=\left(R^{T}, \mathcal{B}^{T}\right)$ exists if and only if the system of Gaussian laws $\left\{N\left(m_{F}, C_{F}\right): F \subseteq T \text { finite }\right\}$ satisfies the consistency condition
$$\pi_{G F}\left(N\left(m_{G}, C_{G}\right)\right)=N\left(m_{F}, C_{F}\right),$$
for all finite subsets $F \subseteq G \subseteq T$. To see that this is true, consider such sets $F$, $G$ and let $W$ be any random vector in $R^{G}$ such that $P_{W}=N\left(m_{G}, C_{G}\right)$. Then $\pi_{G F}\left(N\left(m_{G}, C_{G}\right)\right)=\pi_{G F}\left(P_{W}\right)=P_{\pi_{G F}(W)}$ and it will thus suffice to show that $Y=\pi_{G F}(W)$ is a Gaussian random vector with law $N\left(m_{F}, C_{F}\right)$ in $R^{F}$, that is, with characteristic function
$$F_{Y}(y)=\exp \left(i\left(y, m_{F}\right)-\frac{1}{2}\left(C_{F} y, y\right)\right), \quad y=\left(y_{t}\right)_{t \in F} \in R^{F} .$$
Since $W$ is a Gaussian random vector with law $N\left(m_{G}, C_{G}\right)$ on $R^{G}$, we have
$$F_{W}(x)=\exp \left(i\left(x, m_{G}\right)-\frac{1}{2}\left(C_{G} x, x\right)\right), \quad x=\left(x_{t}\right)_{t \in G} \in R^{G},$$
and consequently (1.a.0), for $y \in R^{F}$,
$$F_{Y}(y)=F_{\pi_{G F}(W)}(y)=F_{W}\left(\pi_{G F}^{\prime} y\right)=\exp \left(i\left(\pi_{G F}^{\prime} y, m_{G}\right)-\frac{1}{2}\left(C_{G} \pi_{G F}^{\prime} y, \pi_{G F}^{\prime} y\right)\right) .$$
Here $\pi_{G F}^{\prime}: R^{F} \rightarrow R^{G}$ is the adjoint map and so $\left(\pi_{G F}^{\prime} y, m_{G}\right)=\left(y, \pi_{G F} m_{G}\right)=\left(y, m_{F}\right)$. Thus it remains to be shown only that $\left(C_{G} \pi_{G F}^{\prime} y, \pi_{G F}^{\prime} y\right)=\left(C_{F} y, y\right)$. Let $y=\left(y_{t}\right)_{t \in F} \in R^{F}$. First we claim that $\pi_{G F}^{\prime} y=z$, where the vector $z=\left(z_{t}\right)_{t \in G} \in R^{G}$ is defined by
$$z_{t}= \begin{cases}y_{t} & \text { if } t \in F \\ 0 & \text { if } t \in G \backslash F\end{cases} \qquad \forall y=\left(y_{t}\right)_{t \in F} \in R^{F} .$$
Indeed, if $x=\left(x_{t}\right)_{t \in G} \in R^{G}$ we have $\left(y, \pi_{G F} x\right)=\sum_{t \in F} y_{t} x_{t}=\sum_{t \in G} z_{t} x_{t}=(z, x)$ and so $z=\pi_{G F}^{\prime} y$. Thus $\left(C_{G} \pi_{G F}^{\prime} y, \pi_{G F}^{\prime} y\right)=\left(C_{G} z, z\right)=\sum_{s, t \in G} C(s, t) z_{s} z_{t}=\sum_{s, t \in F} C(s, t) y_{s} y_{t}=\left(C_{F} y, y\right)$.
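The last two identities are easy to machine-check on a toy index set (a stand-alone sketch, not from the text; the labels and numbers are arbitrary): zero-padding $y \in R^{F}$ to $z=\pi_{G F}^{\prime} y \in R^{G}$ changes neither the quadratic form nor the inner product with the mean.

```python
import random

G = ["a", "b", "c"]          # toy index set
F = ["a", "c"]               # finite subset F of G
m = {"a": 1.0, "b": -2.0, "c": 0.5}

# Build a symmetric positive semidefinite C on G as C = B B' for a random B.
random.seed(0)
B = {s: [random.gauss(0, 1) for _ in range(3)] for s in G}
C = {(s, t): sum(B[s][k] * B[t][k] for k in range(3)) for s in G for t in G}

def quad_form(index_set, vec):
    """(C restricted to index_set) evaluated as a quadratic form."""
    return sum(C[(s, t)] * vec[s] * vec[t] for s in index_set for t in index_set)

y = {"a": 0.7, "c": -1.3}                 # a vector in R^F
z = {t: y.get(t, 0.0) for t in G}         # pi_{GF}' y: y zero-padded into R^G

assert abs(quad_form(G, z) - quad_form(F, y)) < 1e-12   # (C_G z, z) = (C_F y, y)
assert abs(sum(z[t] * m[t] for t in G)
           - sum(y[t] * m[t] for t in F)) < 1e-12       # (z, m_G) = (y, m_F)
```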
http://physics.stackexchange.com/tags/dipole/new | # Tag Info
## New answers tagged dipole
Answer (score 0):
If the total charge of the system is zero, the dipole moment does not depend on the choice of origin, much the same as: if the total momentum of a system is zero, the angular momentum does not depend on the origin of reference. The dipole moment is an intrinsic property of a system (subtract the total charge to zero first); angular momentum is an intrinsic property of a system ...
Answer (score 3):
The electric dipole moment is defined as $$\mathbf{p} = \int \mathbf{r} \; dq$$ In the case of a pair of charges for which both charges are of the same magnitude, the choice of the origin turns out to be irrelevant: $$\mathbf{p} = \mathbf{r_1} q - \mathbf{r_2} q = q(\mathbf{r_1} - \mathbf{r_2}) = q\mathbf{d}$$ where $\mathbf{d}$ is the displacement vector from the negative to the positive charge. However, when ...
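The origin-independence claimed above is easy to verify numerically (a stand-alone sketch, not from the answer; the charge values and positions are arbitrary):

```python
# Dipole moment p = sum_i q_i * r_i, with positions measured from `origin`.
def dipole_moment(charges, origin):
    # charges: list of (q, (x, y, z)) pairs
    return tuple(
        sum(q * (r[k] - origin[k]) for q, r in charges)
        for k in range(3)
    )

# Net charge zero: the moment is the same for any choice of origin.
charges = [(+2.0, (1.0, 0.0, 0.0)), (-2.0, (0.0, 1.0, 0.0))]
p1 = dipole_moment(charges, origin=(0.0, 0.0, 0.0))
p2 = dipole_moment(charges, origin=(5.0, -3.0, 2.0))
assert p1 == p2 == (2.0, -2.0, 0.0)

# Nonzero net charge: the moment shifts when the origin moves.
charges_net = [(+2.0, (1.0, 0.0, 0.0)), (-1.0, (0.0, 1.0, 0.0))]
p1 = dipole_moment(charges_net, origin=(0.0, 0.0, 0.0))
p2 = dipole_moment(charges_net, origin=(5.0, 0.0, 0.0))
assert p1 != p2
```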
Answer (score 1):
If you take a permanent magnet, and place a sheet of paper over it. Now sprinkle iron filings on it, and you pretty much get this diagram. This has been the mainstay of field theory since Faraday's time. A test charge at rest will begin to move in the direction of the field line. Since there is nowhere that it can rest where there is more than one ...
Answer (score 0):
At any point the electric field is the vector sum of the fields from the two charges. So while the fields from $A$ and $B$ are indeed in opposite directions at your point $p$ you just add them (well, subtract their magnitudes since they're in opposite directions) and this gives you the net field. I wouldn't take the field lines too seriously. They are not ...
Answer (score 1):
Dipole $\def\vp{{\vec p}}\def\ve{{\vec e}}\def\l{\left}\def\r{\right}\def\vr{{\vec r}}\def\ph{\varphi}\def\eps{\varepsilon}\def\grad{\operatorname{grad}}\def\vE{{\vec E}}$ $\vp:=\ve Ql$ constant $l\rightarrow 0$, $Q\rightarrow\infty$. \begin{align} \ph(\vr,\vr') &= \lim_{l\rightarrow0}\frac{Ql\ve\cdot\ve}{4\pi\eps_0 l}\l(\frac{1}{|\vr-\vr'-\ve\frac ...
Top 50 recent answers are included
https://www.gamedev.net/forums/topic/356224-producing-intelligence-on-meters/

Producing intelligence on meters?
I recently bought a new book titled "On Intelligence" by Jeff Hawkins. It's a great book. For those who haven't read it, he presents a new theory: that we can't build intelligent machines until we figure out how the brain works. I'm in the third chapter right now, but like anyone else, I began experimenting with it immediately. Following loosely on the book, I wrote the following program. I'll ask my question after you view it.
// Description: Shoot several arrows and tell how
// much arrows he shot from memory.
#include <iostream>
#include <cstdlib>
#include <string>
using namespace std;
class NPC
{
private:
// Memory storage
struct Memory
{
int arrowsShot;
};
public:
Memory memory; // Stores all memory!
NPC()
{
// Reset memory like a newborn baby haha
memory.arrowsShot = 0;
}
void shootArrow()
{
this->memory.arrowsShot += 1; // Increment a knowledge of an arrow shot.
}
int getArrowsShot()
{
return this->memory.arrowsShot;
}
void getArrowsShotDlg()
{
cout << "I shot " << this->memory.arrowsShot << " arrows in the past." << endl;
}
};
int main()
{
NPC N1;
N1.shootArrow();
N1.shootArrow();
N1.shootArrow();
N1.getArrowsShotDlg();
system("pause");
return 0;
}
I'm wondering how machines will decide to do things without the programmer telling them what to do. In this case, I'm telling my NPC to shoot an arrow three times. I shouldn't have to tell it to do anything. I also shouldn't rely on a method like "GetRandomNumberToProduceAIResult()." That's not how we work. If you played The Sims, you'll notice they did things based on meters. If there was no entertainment, and their need for entertainment was high, they'd wave at you until you did something about it. Instead of telling the NPC here what to do, would the NPC be better off deciding what to do based on meters? This is my second day practicing AI-related stuff, and I'm sure this has been a question for years, but it'd be great to hear views other than my own. Thanks, Phil
The only way I can think of is introducing randomness, but then having it learn and adjust the random values. To do that, you will need a fitness function to kind of give a "score" on how well it's doing and a way to relate it to the values. This is how neural nets work, I believe (well, part of them, anyway).
Hmm I'm not sure if you meant this, but it sparked in my head while reading it... each NPC is initialized with random numbers stored in each meter. This way, each NPC is different. Then their meter percentages increment/decrement throughout the day. What would make them drop/increase though? Again, the programmer is at work for the NPCs. I'm not really thinking of just games, but robots too.
My idea in code: (new code is marked in comments)
Each new NPC starts out with a different entertain percentage.
// Description: Shoot several arrows and tell how
// much arrows he shot from memory.
#include <iostream>
#include <cstdlib>
#include <string>
#include <ctime> // NEW

using namespace std;

namespace // NEW
{
    int RANGE_MIN = 0;
    int RANGE_MAX = 100;
}

class NPC
{
private:
    // Memory storage
    struct Memory
    {
        int arrowsShot;
    };

    int entertainMeter;

public:
    Memory memory; // Stores all memory!

    NPC()
    {
        // Reset memory like a newborn baby haha
        memory.arrowsShot = 0;

        // NEW
        entertainMeter = (((double) rand() / (double) RAND_MAX) * RANGE_MAX + RANGE_MIN);
    }

    void shootArrow()
    {
        this->memory.arrowsShot += 1; // Increment a knowledge of an arrow shot.
    }

    int getArrowsShot()
    {
        return this->memory.arrowsShot;
    }

    void getArrowsShotDlg()
    {
        cout << "I shot " << this->memory.arrowsShot << " arrows in the past." << endl;
    }

    int getEntertainPercent() // NEW
    {
        return this->entertainMeter;
    }
};

int main()
{
    srand((unsigned) time(NULL)); // NEW

    NPC N1;
    N1.shootArrow();
    N1.shootArrow();
    N1.shootArrow();
    N1.getArrowsShotDlg();

    cout << N1.getEntertainPercent() << endl; // NEW

    system("pause");
    return 0;
}
You need to build a personality trait database I think. Store things like how easily they get bored (ADHD!), if they need social interaction (programmers go days without seeing another human and are happy, others need constant interaction with humans/pets, etc). There are rules that govern us based on the above, sure in this case they will be programmer based rules. Ours are genetics programmed over generations.
Okay, I made a struct called Meter which holds personality traits & needs, such as entertainment. I guess the game loop could check their current needs. But how do NPCs change these needs on a minute basis? Should it really be me (the programmer) that increments/decrements these needs for them?
Here's some pseudo-code that demonstrates my idea (note: I am going to use a different example which demonstrates my idea better. My idea requires feedback about how well the AI is doing):
#include <map>

class NPC;

enum Action
{
    MOVING,
    EATING,
    // etc
};

class Behavior
{
protected:
    float fitness;
    std::map<Action, float> actions; // The likelihood to do an action.

public:
    void apply(NPC& npc)
    {
        // Randomly perform the actions, based on the likelihood map, and find a fitness.
    }

    void setAction(Action action, float percentage)
    {
        // ... Set the action ...
    }

    float getFitness()
    {
        return fitness;
    }
};

class NPC
{
    Behavior currentBehavior;

public:
    NPC()
    {
        currentBehavior.setAction(MOVING, 0.5);
        currentBehavior.setAction(EATING, 0.5);
        // ...
    }

    void update()
    {
        currentBehavior.apply(*this);

        Behavior newBehavior;
        // Randomize newBehavior.
        newBehavior.apply(*this);

        if (newBehavior.getFitness() > currentBehavior.getFitness())
        {
            currentBehavior = newBehavior;
        }
    }
};
This goes against a few OO design principles, and (ideally) you'd find a way to calculate a better behavior rather than randomly generate it, but that's the general idea. Look up genetic algorithms and neural nets (neural nets kind of build off of GAs, so look at them first).
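A runnable toy version of that loop (not the poster's code; Python instead of C++ just to keep it short, and the fitness function is a made-up stand-in for actually applying the behavior and scoring it):

```python
import random

def fitness(weights):
    # Hypothetical scoring: pretend the NPC does best when it spends
    # about 70% of its time eating.
    return 1.0 - abs(weights["eating"] - 0.7)

def random_behavior():
    eating = random.random()
    return {"eating": eating, "moving": 1.0 - eating}

random.seed(1)
current = random_behavior()
for _ in range(500):                    # the per-frame update() loop
    candidate = random_behavior()
    if fitness(candidate) > fitness(current):
        current = candidate             # keep the better-scoring behavior

# After enough frames the kept behavior is near the (hidden) optimum:
assert fitness(current) > 0.9
```

This is plain random-restart hill climbing; a genetic algorithm would additionally mutate and recombine the kept behaviors instead of drawing fresh random ones each frame.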
You should think of why you need entertainment, food, company, etc.
For example, if the NPC decides there is nothing to watch on the TV but still needs to be entertained, you could make it go see a movie. Of course making the NPC decide whether it likes what is on TV or not is another matter that would be based on how the NPC evolved over its life, yet another problem you will need to address.
Thanks for the good information. It should give me a good start.
Cool :) Check this out...
#include <iostream>
#include <cstdlib>
#include <string>
#include <ctime>

using namespace std;

int randNum(int min, int max)
{
    return (int) (((double) rand() / (double) RAND_MAX) * max + min);
}

class NPC
{
private:
    // Stores the NPC's memory.
    struct Memory
    {
        int arrowsShot;
    };

    // Stores the NPC's needs. 0 is low, 100 is high.
    struct Needs
    {
        double entertainment;
        double hunger;
        double tired;
        double dirty;
    };

    // Private variables.
    std::string name;
    int age;

public:
    Memory memory; // Where memory is stored at!
    Needs needs;   // Stores all the NPC's needs.

    // NPC constructor.
    NPC(std::string name, int age)
    {
        // Initialize general information about the NPC.
        this->name = name;
        this->age = age;

        // Initialize memory and needs.
        memory.arrowsShot = 0;
        needs.entertainment = randNum(1, 100); // Dummy.
        needs.entertainment = randNum(1, 100); // Call randNum() again to get a non-fixed return.
        needs.hunger = randNum(1, 100);
        needs.tired = randNum(1, 100);
        needs.dirty = randNum(1, 100);
    }

    // Shoot an arrow.
    void shootArrow()
    {
        memory.arrowsShot += 1; // Increment arrows shot in memory.
        needs.tired += 0.2;     // Yes, you can get tired by shooting arrows.
    }

    // Output dialog.
    void speak(std::string dialog)
    {
        cout << name << " -> " << dialog << endl;
    }

    // Cleans thyself.
    void takeShower()
    {
        cout << "Taking shower." << endl;
        needs.dirty = 0;
    }

    // Show NPC stats of general information, memory, and needs.
    void showStats()
    {
        cout << "\nName: " << name << endl;
        cout << "Age: " << age << endl;
        cout << "\nMemory:\n-------" << endl;
        cout << "Arrows Shot: " << memory.arrowsShot << endl;
        cout << "\nNeeds:\n------" << endl;
        cout << "Entertainment: " << needs.entertainment << "%" << endl;
        cout << "Hunger: " << needs.hunger << "%" << endl;
        cout << "Tired: " << needs.tired << "%" << endl;
        cout << "Dirty: " << needs.dirty << "%" << endl;
    }

    // Update character.
    void update()
    {
        if (needs.dirty > 97)
            takeShower();

        needs.dirty += 0.2;
    }
};

// Analyze input buffer. (Defined after NPC so the type is complete.)
void sortBuffer(NPC &N1, std::string &buffer)
{
    if (buffer.find("/stats") != std::string::npos)
        N1.showStats();
}

int main()
{
    srand((unsigned) time(NULL)); // Seed rand().

    std::string buffer;
    NPC N1("Kylena", 23);
    N1.showStats();

    for (;;)
    {
        std::cout << ": ";
        std::cin >> buffer;
        sortBuffer(N1, buffer);
        N1.update();
    }

    system("pause");
    return 0;
}
This works well. After every input (or frame in graphics), it updates the NPC now by a small percentage in certain needs. For this, it's just to see if he/she is dirty above 97%. If so, the NPC will take care of it. The only problem I see is passing each NPC object to sortBuffer(). At the end, there will be 100+ NPCs in a basic world, so am I going to pass 100 NPC objects to this method? Is there a better way? I could declare the objects globally to get rid of the passing, but that can often lead to bad results at the end. Thoughts are welcomed as always.
If the command "/stats" is supposed to show the stats of all NPCs then you are better off making a function to do that task.
... // in main
std::cin >> buffer;
if (buffer.find("/stats") != std::string::npos)
    showAllStats();
...

// shows the stats of all NPCs
void showAllStats()
{
    for (int i = 0; i < numNPCs; i++)
        npcs[i].showStats(); // npcs: an array (or vector) holding every NPC object
}
https://www.elastic.co/blog/configure-pipeline-ingest-node

Brewing in Beats: Configure the Ingest Node Pipeline
New community Beat: Cassandrabeat
Cassandrabeat uses Cassandra's nodetool cfstats utility to monitor Cassandra database nodes and lag. Please give it a try and let us know what you think.
Select pipeline for Ingest Node from Beats
By defining the pipeline in Beats, you can dynamically choose the Ingest pipeline per event.
There are three options to define the pipeline. One is to define a single pipeline (under output.elasticsearch.pipeline) for all your events, in which you can access other fields from the event:
output.elasticsearch.pipeline: '%{[fields.example]}'
Another one is to define an array of pipeline rules (under output.elasticsearch.pipelines). A pipeline rule can be introduced when a condition is fulfilled. The condition is defined under when:
output.elasticsearch.pipelines:
  - pipeline: 'ok-pipeline'
    when.range:
      http.code: [200, 299]
  - pipeline: 'verybad-pipeline'
    when.range:
      http.code: [500, 999]
  - pipeline: 'default-pipeline'
Joda compatible date-time formatting in libbeat
The Go standard library way of formatting dates uses examples for layouts. This is an interesting alternative to the way date formatting works in most other languages, but we felt that exposing this to our users would make things more complicated than needed. Not happy with the existing alternatives, Steffen created a new date-time format library, so you are able to use Joda-compatible syntax. For example, you can specify a `YYY.MMM.dd` format to have `2016.Aug.01` as a result.
This makes it possible to have a configuration similar to Logstash when defining, for example, the index-pattern (currently in progress).
Rename redis.index with redis.key
The output.redis.index setting name exists for historical reasons, from the times when all outputs shared the same configuration options. The meaning of the index varies from output to output and in the case of Redis, it's the key name. The PR introduces output.redis.key and deprecates output.redis.index.
In addition, output.file.index is deprecated in favor of output.file.filename.
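For the Redis case, a configuration that used the old name would now be written like this (a sketch; the host list here is hypothetical):

```yaml
output.redis:
  hosts: ["localhost:6379"]
  key: "filebeat"      # was: index: "filebeat", now deprecated
```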
Extend the Kibana dashboards for the System module of Metricbeat
More Kibana dashboards are created for the System module of Metricbeat to serve as an example for your own Kibana dashboards. A navigation bar is added on the left side (see screenshot below) to make it easier to navigate between all the Kibana dashboards created to visualize the data exported by the System module of Metricbeat:
• An Overview with all the exported data types
• Load and CPU statistics
• Memory statistics
• Per process statistics
• Network statistics
• Filesystem statistics
Filebeat: Unmarshal JSON inputs to integer by default
The standard json library in Golang unmarshals the integer values of a json object into floats instead of integers. This led to some unexpected behaviour when using the conditions from processors, as you couldn't easily compare a status code from the JSON object once it had been translated to float64. The PR overwrites the Unmarshal behaviour: it tries to convert the numbers from the json objects to integers first, and if that fails it converts them to floats. This way, 1 unmarshals as int64 and 1.0 as float64.
Metricbeat: Enhance load metrics
A new Metricset called load is exported by Metricbeat instead of exporting the load statistics inside the CPU statistics. With this PR system.cpu.load.1 becomes system.load.1, system.cpu.load.5 becomes system.load.5 and system.cpu.load.15 becomes system.load.15. In addition, the load values divided by the number of cores are exported under system.load.norm.
Filebeat: Fix state remove and sending empty logs
When a very low scan_frequency is set, it could happen that the state of a finished harvester was overwritten by the prospector and the state was never set to Finished. This is now fixed: the prospector only sends a state when the state is set to Finished.
In addition, there is a fix to not send empty log lines.
Community Beats: Create pure Go binaries in packaging by default
The PR makes the Beat generator assume the Beat is pure Go (doesn’t have C dependencies). This simplifies the packaging process and produces fully static binaries by default. It’s still possible to create packages for the Beats that require Cgo, but you need to adjust the Makefile. Use the Beats packer Makefile as an example of the possible features.
Add support for cgroup in gosigar
The PR gives you the ability to ask gosigar for cgroup stats by PID. It returns metrics and limits from the blkio, cpu, cpuacct, and memory subsystems. This is part of a larger effort to build a solution on top of Beats to monitor containers.
https://cs.stackexchange.com/questions/141481/counting-strongly-connected-components-in-a-directed-graph-in-nl

# Counting strongly connected components in a directed graph in $NL$
Define $$K\_SCC = \{ \langle G, k \rangle \,:\, G \text{ has at least } k \text{ strongly connected components} \}$$
I want to show that $$K\_SCC \in NSPACE(\log n)$$, using that $$st-CONN$$ and $$\overline{st-CONN}$$ are both in NL, where $$st-CONN = \{\langle G,s,t \rangle \,:\, \text{there is a path from s to t in G} \}$$.
Would appreciate any help
• Any strongly connected component $C$ is uniquely represented by the vertex $x\in C$ that has the smallest numerical label. Show that you can recognize such vertices in NL, and then you can just count them in increasing order. Jun 17 at 15:22
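The counting idea from this comment can be spelled out in code (a stand-alone illustration, not a log-space algorithm: plain DFS stands in for the NL $st-CONN$ subroutine):

```python
def reachable(adj, s, t):
    """DFS reachability; an NL machine would decide this
    nondeterministically in logarithmic space."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def k_scc(adj, vertices, k):
    """x represents its SCC iff no smaller-labeled vertex lies in the
    same SCC; G has >= k SCCs iff there are >= k representatives."""
    reps = 0
    for x in vertices:
        if not any(reachable(adj, x, y) and reachable(adj, y, x)
                   for y in vertices if y < x):
            reps += 1
    return reps >= k

# Two SCCs: {0, 1} and {2}.
adj = {0: [1], 1: [0, 2], 2: []}
assert k_scc(adj, [0, 1, 2], 2) and not k_scc(adj, [0, 1, 2], 3)
```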
Ask the prover to give you any node from $$k$$ distinct connected components.
You have only to verify that the nodes are not in the same connected components (hence, they are in $$k$$ different components, meaning that $$\langle G, k\rangle \in K_{SCC}$$)
Also, ask for the proof of $$st-CON$$ between any two of them.
Notice that even though the proof is gigantic, at every point in time the verifier will need to only verify a small portion of the proof: only one $$st-CON$$ is being processed at a time, hence the verifier can be constructed in such a way that will require only $$O(\log(n))$$ space.
The pseudocode for the verifier should look similar to this:
• For every $$i\neq j$$ with $$1\le i,j\le k$$, do:
• Take a look at the next $$O(\log(n))$$ bits of proof, and verify $$st-CON$$ for the $$i$$'th and the $$j$$'th nodes
• As the question is stated, $k$ is not constant. It is part of the input. Jun 17 at 18:42
• Oops :p I totally missed this important thing... Jun 17 at 19:01
• When I think about it, since at every point in time the verifier only checks one $st-CON$ proof of size $O(\log(n))$, the verifier can just check the proofs one by one in some predefined order, and the verifier won't use more than $O(\log(n))$ space. Yes, the proof is larger, but if I remember correctly that's allowed as long as the verifier requires small space. Jun 17 at 19:15
• This is true except for how you specify which $k$ nodes you want to show are in different components. This will take $O(k\log(n))$ memory, which is not logarithmic in the size of $k$... Jun 17 at 19:23
• The definition of NL does not allow you to generate a polynomial-size "proof" and subsequently verify it in logarithmic space. This would in fact give you all of NP. See my comment below the question for how to do this correctly. Jun 17 at 19:27
https://math.stackexchange.com/questions/2211651/well-posedness-of-variational-mapping-problem | # Well-posedness of variational mapping problem
Suppose $S_0,S\subseteq\mathbb{R}^3$ are embedded surfaces. For a smooth map $\phi:S_0\rightarrow S$, define an energy $E[\phi]$ by $$E[\phi]:=\int_{S_0}F[\Lambda(d\phi_p)]\,dA(p).$$ Here, $\Lambda(d\phi_p)\equiv (\sigma_1(d\phi_p),\sigma_2(d\phi_p))$ is the set of singular values of the Jacobian of $\phi$ at $p\in S_0$. As an example, if we define $F[\sigma_1,\sigma_2]:=\sigma_1^2+\sigma_2^2$, then $E[\phi]$ is the Dirichlet energy of $\phi$.
Here's my question: Given a smooth map $\phi_0:S_0\rightarrow S$, is there a sufficient condition on $F:\mathbb{R}^2\rightarrow\mathbb{R}$ guaranteeing the existence of a map $\phi:S_0\rightarrow S$ in the homotopy class of $\phi_0$ that locally minimizes $E[\cdot]$?
What I have in mind is the gradient flow of Eells and Sampson, which proves existence of harmonic maps when $S$ has negative Gaussian curvature by starting with an arbitrary $\phi_0$ and flowing along the gradient of the Dirichlet energy to a local optimum. This is an elegant construction, but the drawback is that not all target surfaces $S$ admit a harmonic map $\phi:S_0\rightarrow S$.
My intuition is that Eells and Sampson's construction fails in the presence of positive curvature because $F[\sigma_1,\sigma_2]=\sigma_1^2+\sigma_2^2$ "wants to" pinch points, i.e. reach a Jacobian with singular values as small as possible. But perhaps objectives like the symmetric Dirichlet energy, which appears in computer graphics applications and looks like $F[\sigma_1,\sigma_2]:=\sigma_1^2+\sigma_2^2+\sigma_1^{-2}+\sigma_2^{-2}$, would have better properties, since it diverges as either $\sigma_i\to 0$ and is minimized when $\sigma_1=\sigma_2=1$.
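To make the comparison concrete, here is a small NumPy sketch (names and test values are my own, purely illustrative) evaluating both pointwise densities from the singular values of a Jacobian:

```python
import numpy as np

def dirichlet(J):
    """F[s1, s2] = s1^2 + s2^2  (classical Dirichlet density)."""
    s = np.linalg.svd(J, compute_uv=False)
    return float(np.sum(s**2))

def symmetric_dirichlet(J):
    """F[s1, s2] = s1^2 + s2^2 + s1^-2 + s2^-2; blows up as either
    singular value -> 0, so it penalizes pinching/degenerate maps."""
    s = np.linalg.svd(J, compute_uv=False)
    return float(np.sum(s**2 + s**-2))

J_iso = np.eye(2)               # local isometry: sigma_1 = sigma_2 = 1
J_pinch = np.diag([1.0, 1e-3])  # nearly degenerate (pinching) map

print(symmetric_dirichlet(J_iso))    # 4.0, the pointwise minimum
print(symmetric_dirichlet(J_pinch))  # huge, on the order of 1e6
```

The Dirichlet density rewards shrinking both singular values toward zero, while the symmetric version is minimized exactly at a local isometry, which is the intuition behind expecting it to resist collapse.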
• I think it's very difficult - as soon as you choose a different energy you're dealing with a quasilinear system, and very few of the estimates from HMHF carry over. I've been trying the Hölder approach to a similar problem (looking for harmonic diffeomorphisms using a gradient-like flow for the Dirichlet energy) for a long while now and have only managed to get it working in the absence of curvature - see arxiv.org/abs/1609.08317. It's possible the gradient flow structure could make things easier, but it certainly won't be as easy as Eells-Sampson. – Anthony Carapetis Apr 1 '17 at 10:08
• Hmm, that's too bad! I was hoping somehow things would be easier than Eells-Sampson if we build an objective function $F$ that resists collapse into singularities (e.g. the symmetric Dirichlet energy above, which has an asymptote that "wants to" avoid singular Jacobians). This seems to be the empirical observation in the applied world but we don't have math to back it up! I'll take a look at your paper; if you're curious about the applications we're considering, shoot me an email and I'd be happy to sketch out details! – Justin Solomon Apr 1 '17 at 13:26
https://www.physicsforums.com/threads/find-elementary-matrix-e-such-that-b-ea.270363/ | # Homework Help: Find elementary matrix E such that B=EA
1. Nov 8, 2008
### subopolois
1. The problem statement, all variables and given/known data
I'm having problems with this question; I don't know how they got their answer. The question is: find an elementary matrix E such that B = EA.
A = | -1  2 |        B = |  1  -2 |
    |  0  1 |            |  0   1 |
2. Relevant equations
elementary row operations
3. The attempt at a solution
| -1  2 | 1  0 |   (row 1 × -1)   | 1  -2 | -1  0 |   (row 1 + 2·row 2)   | 1  0 | -1  2 |
|  0  1 | 0  1 |                  | 0   1 |  0   1 |                      | 0  1 |  0  1 |
The answer in my book says it's
E = | -1  0 |
    |  0  1 |
but I don't know how they got that.
2. Nov 8, 2008
### gabbagabbahey
If you have learned about matrix inverses, the solution should be fairly simple...A quick calculation shows that $\text{det}(A) \neq 0$ and so its inverse exists...what do you get when you multiply both sides of the equation $B=EA$ from the right by $A^{-1}$?
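Following gabbagabbahey's hint, a quick NumPy check (my own illustrative script, not part of the thread) confirms the book's answer:

```python
import numpy as np

A = np.array([[-1.0, 2.0],
              [ 0.0, 1.0]])
B = np.array([[ 1.0, -2.0],
              [ 0.0,  1.0]])

# B = E A  =>  E = B A^{-1}, which is valid since det(A) = -1 != 0
E = B @ np.linalg.inv(A)
print(E)  # [[-1.  0.]
          #  [ 0.  1.]]
```

This matches the textbook's E; note that here E happens to be the elementary matrix that multiplies row 1 by -1.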
https://www.hepdata.net/record/ins681233
Measurement of the $t \bar{t}$ production cross section in $p \bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV using kinematic characteristics of lepton + jets events
The collaboration
Phys.Lett.B 626 (2005) 45-54, 2005.
Abstract (data abstract)
Fermilab-Tevatron. Measurement of the TOP TOPBAR production cross section in PBAR P collisions at a centre-of-mass energy of 1.96 TeV using the kinematic characteristics of lepton+jets events. The data, collected between August 2002 and March 2004, come from a sample of integrated luminosity 230 pb^-1 with one charged lepton (e or mu), large missing transverse energy, and at least four jets in the final state. The analysis assumes a top quark mass of 175 GeV.
• #### Table 1
Data from P 6 (C =PREPRINT)
10.17182/hepdata.27001.v1/t1
TTBAR production cross section from the combined electron+jet and muon+jet channels.
https://med.libretexts.org/Courses/Chabot_College/Introduction_to_Nutrition_Science/03%3A_The_Human_Body/3.03%3A_Basic_Biology_Anatomy_and_Physiology | # 3.3: Basic Biology, Anatomy, and Physiology
## The Basic Structural and Functional Unit of Life: The Cell
What distinguishes a living organism from an inanimate object? A living organism conducts self-sustaining biological processes. A cell is the smallest and most basic form of life.
The cell theory incorporates three principles:
1. Cells are the most basic building units of life.
2. All living things are composed of cells.
3. New cells are made from preexisting cells, which divide in two.

Who you are has been determined because of two cells that came together inside your mother's womb. The two cells containing all of your genetic information (DNA) united to begin making new life. Cells divided and differentiated into other cells with specific roles that led to the formation of the body's numerous organs, systems, blood, blood vessels, bone, tissue, and skin. As an adult, you are made up of trillions of cells. Each of your individual cells is a compact and efficient form of life: self-sufficient, yet interdependent upon the other cells within your body to supply its needs.
Independent single-celled organisms must conduct all the basic processes of life. The single-celled organism must take in nutrients (energy capture), excrete wastes, detect and respond to its environment, move, breathe, grow, and reproduce. Even a one-celled organism must be organized to perform these essential processes. Every cell is organized from the atomic level up through progressively larger structures. Oxygen and hydrogen atoms combine to make the molecule water ($$\ce{H2O}$$). Molecules bond together to make bigger macromolecules. The carbon atom is often referred to as the backbone of life because it can readily bond with four other atoms to form long chains and more complex macromolecules. Four macromolecules—carbohydrates, lipids, proteins, and nucleic acids—make up all of the structural and functional units of cells.
Although we defined the cell as the “most basic” unit of life, it is structurally and functionally complex (Figure $$\PageIndex{1}$$). A cell can be thought of as a mini-organism consisting of tiny organs called organelles. The organelles are structural and functional units constructed from several macromolecules bonded together. A typical animal cell contains the following organelles: the nucleus (which houses the genetic material DNA), mitochondria (which generate energy), ribosomes (which produce protein), the endoplasmic reticulum (which is a packaging and transport facility), and the golgi apparatus (which distributes macromolecules). In addition, animal cells contain little digestive pouches, called lysosomes and peroxisomes, which break down macromolecules and destroy foreign invaders. All of the organelles are anchored in the cell’s cytoplasm via a cytoskeleton. The cell’s organelles are isolated from the surrounding environment by a plasma membrane.
Figure $$\PageIndex{1}$$: The Cell Structure. The cell is structurally and functionally complex.
## Tissues, Organs, Organ Systems, and Organisms
Unicellular (single-celled) organisms can function independently, but the cells of multicellular organisms are dependent upon each other and are organized into five different levels in order to coordinate their specific functions and carry out all of life's biological processes (Figure $$\PageIndex{2}$$).
• Cells are the basic structural and functional unit of all life. Examples include red blood cells and nerve cells. There are hundreds of types of cells. All cells in a person contain the same genetic information in DNA. However, each cell only expresses the genetic codes that relate to the cell’s specific structure and function.
• Tissues are groups of cells that share a common structure and function and work together. There are four basic types of human tissues: connective, which connects tissues; epithelial, which lines and protects organs; muscle, which contracts for movement and support; and nerve, which responds and reacts to signals in the environment.
• Organs are a group of tissues arranged in a specific manner to support a common physiological function. Examples include the brain, liver, and heart.
• Organ systems are two or more organs that support a specific physiological function. Examples include the digestive system and central nervous system. There are eleven organ systems in the human body (see Table $$\PageIndex{1}$$).
• An organism is the complete living system capable of conducting all of life’s biological processes.
Figure $$\PageIndex{2}$$: Organization of Life. (CC BY-SA 4.0; Laia Martinez via Wikipedia)
Table $$\PageIndex{1}$$: The Eleven Organ Systems in the Human Body and Their Major Functions
| Organ System | Organ Components | Major Function |
| --- | --- | --- |
| Cardiovascular | heart, blood/lymph vessels, blood, lymph | Transport nutrients and waste products |
| Digestive | mouth, esophagus, stomach, intestines | Digestion and absorption |
| Endocrine | all glands (thyroid, ovaries, pancreas) | Produce and release hormones |
| Lymphatic | tonsils, adenoids, spleen and thymus | A one-way system of vessels that transport lymph throughout the body |
| Immune | white blood cells, lymphatic tissue, marrow | Defend against foreign invaders |
| Integumentary | skin, nails, hair, sweat glands | Protective, body temperature regulation |
| Muscular | skeletal, smooth, and cardiac muscle | Body movement |
| Nervous | brain, spinal cord, nerves | Interprets and responds to stimuli |
| Reproductive | gonads, genitals | Reproduction and sexual characteristics |
| Respiratory | lungs, nose, mouth, throat, trachea | Gas exchange |
| Skeletal | bones, tendons, ligaments, joints | Structure and support |
| Urinary, Excretory | kidneys, bladder, ureters | Waste excretion, water balance |
This page titled 3.3: Basic Biology, Anatomy, and Physiology is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Jennifer Draper, Marie Kainoa Fialkowski Revilla, & Alan Titchenal via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 2023-02-04 21:08:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38608697056770325, "perplexity": 5866.571173607944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500154.33/warc/CC-MAIN-20230204205328-20230204235328-00087.warc.gz"} |
https://www.researcher-app.com/paper/1951229
# On the multi-particle azimuthal angular correlations in $pA$ collisions.
Cheng Zhang, Manyika Kabuswa Davy, Yu Shi, Enke Wang
In the Color Glass Condensate formalism, we evaluate the 3-dipole correlator up to the $\frac{1}{N_c^4}$ order with $N_c$ being the number of colors, and compute the azimuthal cumulant $c_{123}$ for 3-particle productions. In addition, we discuss the patterns appearing in the $n$-dipole formula in terms of $\frac{1}{N_c}$ expansions. This allows us to conjecture the $N_c$ scaling of $c_n\{m\}$, which is cross-checked by our calculation of $c_2\{4\}$ in the dilute limit.
Publisher URL: http://arxiv.org/abs/1901.01778
DOI: arXiv:1901.01778v2
https://www.physicsforums.com/threads/destroying-stars.99754/ | Destroying Stars?
1. Nov 13, 2005
Aurora Firestorm
Hello, all. This isn't entirely a real-life question, but I'd like to know your opinion. I'm a science fiction writer/astronomy lover trying for a "hard sci-fi" approach to the astronomy in my tale -- using as much real science as I can.
So, I have a dilemma. Stars naturally die after fusing the hydrogen in their cores, unless they are massive enough and begin to burn helium (several possible elements later, they die anyway), and become white dwarfs, neutron stars, or black holes. That's the typical routine at the very basics, as I understand it. But, if one wanted to, how could one conceivably destroy a star before the end of its natural "life span" through technological means? Could it be "damaged" in any conceivable way?
Any wild speculation is welcome, because I'm writing a futuristic universe where a lot more is possible than we can do today. Preferably, this technology needs to be quickly moved about and deployed, rather like a weapon of some kind than a complex setup that takes years to construct.
And if somehow, this thread is in the wrong area, please feel free to move it. :)
2. Nov 13, 2005
scott1
We could find some way to decrease the hydrogen or accelerate the reactions, but we probably don't have the technology yet. I think we shouldn't find out, so terrorists won't get any ideas.
3. Nov 14, 2005
WarrenPlatts
Create an artificial black hole and fire it into the Sun. The black hole will then eat up the Sun from within, until there's nothing left but the black hole itself.
4. Nov 14, 2005
dgoodpasture2005
learn how to control time, and fast-forward the star till its death.
5. Nov 14, 2005
dgoodpasture2005
shoot it with a high-powered particle beam coming from a source/laser head the radius of the earth... put a hole in it, and it will collapse on itself or diffuse.
6. Nov 14, 2005
dgoodpasture2005
alter some other large body's course nearby, and make them collide.
7. Nov 14, 2005
dgoodpasture2005
pour a giant bucket of water on it.
8. Nov 14, 2005
Danger
If you could manage to divert enough iron-rich bodies into the core (and it would take one hell of a lot of them), the fusion could theoretically be put out. Iron absorbs heat and neutrons without fusing. (Double-check me on that, Space Tiger.)
9. Nov 14, 2005
SpaceTiger
Staff Emeritus
I know some of these suggestions weren't meant to be taken seriously, but for fun, let's discuss a little of their astrophysics...
One might be able to accelerate the nuclear reactions by introducing vast quantities of some mediator isotope, but that wouldn't destroy the sun, it would just cause it to expand and reach a new equilibrium. If the reactions increased the energy production extremely quickly, one might be able to blow the sun apart before it could equilibrate, but I'm pretty sure there aren't any known isotopes that could do this.
Decreasing the hydrogen content is another option, but since the sun is made mostly of hydrogen, this would be practically equivalent to pulling it apart, piece by piece. The gravitational binding energy of the sun is
$$E \sim \frac{GM^2}{R} \sim 10^{48}~\text{erg}$$
so we won't be fulfilling these energy requirements anytime soon.
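Plugging standard CGS solar values into this estimate (a back-of-envelope check, not from the original post):

```python
G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
M_sun = 1.989e33   # solar mass, g
R_sun = 6.957e10   # solar radius, cm

E_bind = G * M_sun**2 / R_sun
print(f"{E_bind:.1e} erg")   # ~3.8e48 erg, i.e. of order 10^48
```

For comparison, total world energy consumption is around 10^27 erg per year, some twenty orders of magnitude short.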
If we were somehow able to make the sun inert (i.e. stop fusion), then it would seem to have no way to replenish the energy radiated away. Thus, it would cool, the pressure would fall, and the sun would begin to contract. However, this contraction is itself a source of energy -- in fact, they used to think that this was what powered the sun. This means that the sun will live for a while even after burning ceases...about 10 million years.
This one's kinda tricky. A very low mass black hole would have no noticeable effect on the sun, while a very large one (of order the sun's mass) would gravitationally disrupt it -- but we couldn't create such a beast artificially. In the intermediate range, a black hole at the center of the sun might even make it live longer. The reason for this is that the lifetime of the sun depends on the efficiency of its energy source; that is, the more effectively it can turn matter into energy, the longer it can keep itself up from the pull of gravity. Other than matter-antimatter annihilation, accretion onto a black hole is the most efficient source of energy that we know of, converting of order 10% of the rest mass of its fuel into energy. If the star reached a stable equilibrium with the black hole at its center, then slow accretion onto the black hole could maintain the star for a very long time.
There a lot of "ifs" in this one, however. It's not clear what kind of equilibrium (if any) the star would reach with a sizable black hole at its center. The efficiency of an accreting black hole is also extremely uncertain. Finally, depending on the initial mass of the black hole, it may be a problem even getting it to the center without seriously disturbing the star's structure.
Shooting it with a laser beam wouldn't put a hole in it, but it would heat it up. Unfortunately (or fortunately), to have a noticable impact, you'd need an extremely powerful laser beam, powerful enough to provide an energy comparable to the gravitational binding energy I quoted above.
This may be the easiest way to seriously disturb the sun, but it would be difficult to destroy it completely. Any close gravitational interaction with an object of comparable mass would likely strip a significant portion of the sun's envelope. After this, however, the sun would just settle into the main sequence configuration for a star of lower mass.
It you pour on enough of anything, onto the sun, you can shorten its lifespan (water would boil before even reaching the surface and have no special effect). However, to shorten the lifespan to anything less than a million years, you would have to add on the order of 100 times the sun's current mass.
Unless you remove the hydrogen, I wouldn't expect the addition of iron to have more than an order unity consequence for the burning rates. If this were the case, you could refer to my response to the first idea. If you added enough (many times the mass of the sun), then you'd be reduced to my last response to dgoodpasture2005.
Really, I can't think of any remotely feasible methods of destroying the sun. Even if we were to somehow remove its source of pressure, its total mass doesn't exceed the Chandrasekhar limit, so it would form a stable white dwarf. We could hurl it at a supermassive black hole, but the nearest one is 30,000 light years away.
For better or for worse, I think we're stuck with the sun.
10. Nov 14, 2005
Aurora Firestorm
A white dwarf is fine. It doesn't have to be blown apart altogether, just without fusion in some way, shape, or form.
Would draining away some of the mass work? I know stars are incredibly massive, but if there were a way to take a large chunk of mass out of the outer layers, perhaps it would cause the star to expand due to less gravity. After all, the core would remain untouched. I'm not quite sure what that would do to the structure, and if it would eliminate fusion or at least slow it down or decrease it in some way.
Antimatter? Perhaps if some of the core could be neutralized, that would do something.
And just out of curiosity, if rotation could somehow be stopped in any conceivable way, would that do anything?
11. Nov 14, 2005
Danger
Thanks for the analysis of my post, Tiger. As always, I enjoy having new information from you.
As to the original question, I've come up with a different approach that's guaranteed to work. Put the sun on the Maury show. It won't physically destroy it, but it will destroy its credibility.
12. Nov 14, 2005
SpaceTiger
Staff Emeritus
Draining the mass would just cause it to settle into a longer-lived main sequence configuration. Low-mass stars live longer because their equilibrium luminosities are lower and they exhaust their energy at a much slower rate.
If you could get a hold of vast quantities of antimatter, that would do the trick. However, I would expect this to be even more difficult than taking the star apart, bit by bit.
Not sure what you mean. I already explained what happens if fusion is stopped.
The rotation of the sun is very slow (~25 days), so its structure wouldn't be changed much if it stopped.
Keep in mind that all of these responses are to the question of destroying the sun. If all you want to do is wipe out civilization as we know it, only an order unity change in the sun's energy output would be sufficient.
13. Nov 14, 2005
SpaceTiger
Staff Emeritus
I think your best bet is to have your civilization arrange for a precision rerouting of another object's orbit, perhaps a nearby star or white dwarf. Of the methods mentioned so far, I think this has the least demanding energy requirements.
Last edited: Nov 14, 2005
14. Nov 15, 2005
cd27
okay...that was..yea...
cd
Last edited: Nov 16, 2005
15. Nov 16, 2005
WarrenPlatts
OK, what if a giant Dyson sphere was constructed around the Sun with perfect reflecting material on the inside, so that all radiation released by the sun was reflected back onto it? Wouldn't it heat up until it went nova?
Also, what about the possibility of constructing or redirecting an otherwise existing wormhole that would suck the Sun away into another universe, or at least somewhere very far away in this visible universe?
16. Nov 17, 2005
SpaceTiger
Staff Emeritus
If you reflect all of the emitted energy back into the sun, then what you have, basically, is a non-cooling star. If it's not cooling, then the energy generated will go into increasing its pressure and, therefore, causing it to expand. However, when it expands, the energy generation rate goes down (nuclear fusion requires high densities), so this whole process will likely just make the star steadily larger at a rate that decreases with time. Even with a perfectly reflecting sphere, the process would be very slow -- I would guess the Kelvin-Helmholtz timescale, which is around one to ten million years. Also, there would still be some energy losses by neutrino emission and stellar winds, so in practice, you couldn't return all of the energy to the sun.
Wormholes are always an easy out for science fiction, but it may not even be possible to create them, given the apparent violation of the laws of thermodynamics.
17. Nov 17, 2005
Hurkyl
Staff Emeritus
I understand that it takes a very long time for the energy produced by fusion to reach the surface of a star anyways -- no matter what you did to the star, wouldn't you still have to find something to do with the thousands of years worth of photons beneath its surface trying to get out? (or is it millions?)
18. Nov 17, 2005
SpaceTiger
Staff Emeritus
There are a variety of timescales on which changes in the sun can take place. Some of the most important ones are:
Nuclear timescale - The approximate time it takes for the sun to burn the available nuclear fuel. This timescale is obviously relevant for the evolution of the sun, since this is the sun's energy source. In general, you'll see major changes in its position on the Hertzsprung-Russel diagram on the nuclear timescale. For hydrogen burning in the sun, it comes out to about 10 billion years.
Kelvin-Helmholtz timescale - The time it takes for the sun to radiate away its gravitational binding energy. If nuclear burning were to stop or become insufficient for compensating the energy losses, this would be the approximate lifetime of the sun. It comes out to around 10 million years.
Thermal timescale - This is the time on which the temperature profile of the sun changes. The virial theorem makes it so that the gravitational and thermal energies of the sun are about the same, so this timescale turns out to be approximately equivalent to the Kelvin-Helmholtz timescale.
Diffusion timescale - The average time it takes for a photon to undergo a random walk from the core to the surface. I'm guessing this is the one you were referring to. I've seen estimates that range from 50,000 to 10 million years, but to my knowledge, this timescale doesn't play a big role in calculating changes in the sun, so the precise time isn't very important.
Dynamical timescale - The time on which gravitational perturbations are communicated across the sun. It turns out to be comparable to the sound-crossing time, the free-fall time, and the hydrostatic time. For the sun, all of these come out to about 30 minutes.
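As a rough sanity check on the numbers above, three of these timescales can be estimated in a few lines. This is my own sketch using standard solar values; published conventions differ by factors of order unity (e.g. a 1/2 in the Kelvin-Helmholtz formula), so treat these as order-of-magnitude numbers only.

```python
# Order-of-magnitude estimates of three solar timescales.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30        # solar mass, kg
R = 6.957e8         # solar radius, m
L = 3.828e26        # solar luminosity, W
c = 2.998e8         # speed of light, m/s
YEAR = 3.156e7      # seconds per year

# Nuclear: burn ~10% of the hydrogen at ~0.7% mass-to-energy efficiency.
t_nuclear_yr = 0.1 * 0.007 * M * c**2 / L / YEAR     # ~1e10 years

# Kelvin-Helmholtz: time to radiate away ~GM^2/R of binding energy.
t_kh_yr = G * M**2 / (R * L) / YEAR                  # ~3e7 years

# Dynamical: free-fall / sound-crossing time, sqrt(R^3 / (G M)).
t_dyn_min = (R**3 / (G * M)) ** 0.5 / 60             # ~27 minutes
```

These reproduce the figures in the post to within the stated order of magnitude.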
If we wanted the sun to be "destroyed" on any timescale that was relevant for a science fiction novel, the process would pretty much have to be occurring on the dynamical timescale. Most of the things suggested so far would change the sun on one of the first three timescales. The one exception (which I noted as being the most promising) was the one that involved a collision with another object. When the sun and the object collided, the sun would be disrupted on the dynamical timescale.
Another process that's known to occur on the dynamical timescale is stellar pulsation. When a star is unstable to hydrostatic perturbations, it can pulsate. If the amplitude of these pulsations were large enough, presumably the star could be blown apart. Perhaps if there was some contraption that created gravitational perturbations at one of the sun's resonant frequencies, a futuristic society could succeed in causing the sun to pulsate itself to pieces. It's a bit far-fetched, but then science fiction usually is...
Last edited: Nov 17, 2005
19. Nov 18, 2005
EngineeredVision
I was about to suggest resonance. Is it possible to create, as SpaceTiger suggested, some sort of gravitational resonance in the Sun? I recall Nikola Tesla talking about how the Earth could be destroyed in under 2 years if the correct resonant frequency was achieved through explosive devices (I don't recall if they were in the kiloton or megaton range) set off with each recurring resonant wave. I believe that an explosion of this magnitude would have to be set off every 45 minutes for 2 years to destroy the Earth. However, after only a few weeks Tesla claimed that the surface topology of the Earth would be drastically altered as the surface rose and fell several hundred feet. I don't remember the exact setup for this scenario, but hopefully everyone gets the idea. Granted, this is not gravitational resonance, but if a similar effect could be employed on the Sun, would this technique be possible?
20. Nov 18, 2005
JesseM
Ha, that's awesome that Tesla was thinking about ways to destroy the earth. Doesn't sound plausible though--there's no getting around the fact that you need a huge input of energy to get all the mass of the earth up to escape velocity, and the energy generated by some explosions won't come close (the energy needed is estimated at between 2.2 * 10^32 and 3.7 * 10^32 joules on this star wars page written by an engineer). Resonance can't generate more energy than is put in by the external driver, can it? I suppose the stretching and compression of bonds between molecules that make up the object would be converting potential energy to kinetic energy at times, but at other times kinetic energy would be converted back to potential (like an object bouncing up and down on a spring) so there shouldn't be any net increase in energy over time, aside from the energy put in externally by the explosions.
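JesseM's figure is easy to sanity-check: for a uniform-density sphere the gravitational binding energy is U = 3GM²/(5R), which for the Earth lands right at the low end of the quoted range. A back-of-the-envelope sketch, not the engineer's exact model:

```python
# Gravitational binding energy of a uniform-density Earth: U = 3GM^2 / (5R).
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24      # Earth mass, kg
R = 6.371e6       # Earth radius, m

U = 3 * G * M**2 / (5 * R)
print(f"{U:.2e} J")   # 2.24e+32 J -- at the low end of the 2.2-3.7e32 J range cited
```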
https://brilliant.org/discussions/thread/group-theory-n/
# Group Theory
Given G is a group, and H is a subgroup of G; for a,b in G, what does it mean for a to be congruent to b mod H?
Note by Siddharth Sabharwal
3 years, 11 months ago
Sort by:
They belong to the same conjugacy class: $$b$$ is conjugate to $$a$$ iff there exists at least one $$x$$ in G such that $$b = xax^{-1}$$ · 1 year, 9 months ago
Two elements $$x,y \in G$$ are left congruent modulo the subgroup $$H$$ if $$x^{-1}y \in H$$, namely if $$xH = yH$$, so that $$x$$ and $$y$$ define the same left coset. Left congruence is an equivalence relation on $$G$$ - the decomposition of $$G$$ into a disjoint union of equivalence classes (left cosets) gives us Lagrange's Theorem (when $$G$$ is finite).
Two elements $$x,y \in G$$ are right congruent modulo $$H$$ if $$xy^{-1} \in H$$, namely if $$Hx = Hy$$, so that $$x$$ and $$y$$ define the same right coset. Right congruence is another equivalence relation on $$G$$, which is different to left congruence unless the subgroup is normal. · 3 years, 11 months ago
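The left-congruence relation above can be checked mechanically on a small example. The sketch below (my own illustration, not from the thread) takes G = S₃ as permutation tuples and H the subgroup generated by one transposition, then verifies that xH = yH exactly when x⁻¹y ∈ H, and that there are |G|/|H| = 3 left cosets, as Lagrange's Theorem predicts.

```python
from itertools import permutations

def compose(a, b):
    # (a∘b)(i) = a(b(i)); permutations stored as tuples mapping i -> p[i]
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

G = list(permutations(range(3)))     # S3, all 6 permutations of {0,1,2}
H = {(0, 1, 2), (1, 0, 2)}           # subgroup {identity, transposition (0 1)}

def left_coset(x):
    return frozenset(compose(x, h) for h in H)

# x and y are left congruent mod H  <=>  x^{-1} y in H  <=>  xH = yH
for x in G:
    for y in G:
        assert (left_coset(x) == left_coset(y)) == (compose(inverse(x), y) in H)

# The cosets partition G into |G|/|H| = 3 classes (Lagrange).
assert len({left_coset(x) for x in G}) == 3
```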
https://stats.stackexchange.com/questions/399445/calculate-variance-without-calculating-the-mean?noredirect=1

# Calculate variance without calculating the mean
Can we calculate the variance without using the mean as the 'base' point?
• Given $\mathbb{E}(X^2)<\infty$, the variance is given by $\sigma^2 = \mathbb{E}((X-\mathbb{E}(X))^2)$ by definition. The formula simplifies to $\sigma^2 =\mathbb{E}(X^2) - \mathbb{E}(X)^2$. I.e., for the variance you need $\mathbb{E}(X)$. Of course you could define your own dispersion measure using some other statistic...or use one from the answers. – BloXX Mar 26 at 7:56
• Short answer: Lots of other ways to summarize variability (dispersion, spread, scale) but none of the others would be the variance. (In fact, the variance can be defined without reference to the mean.) – Nick Cox Mar 26 at 8:28
• Yes: given data $X,$ compute the covariance of $(X,X)$ as described at stats.stackexchange.com/a/18200/919. This method never computes the mean. – whuber Mar 26 at 13:15
The median absolute deviation is defined as $$\text{MAD}(X) = \text{median} |X-\text{median}(X)|$$ and is considered an alternative to the standard deviation. But this is not the variance. In particular, it always exists, whether or not $$X$$ allows for moments. For instance, the MAD of a standard Cauchy is equal to one since $$\underbrace{\Bbb P(|X-0|<1)}_\text{0 is the median}=\arctan(1)/\pi-\arctan(-1)/\pi=\frac{1}{2}$$
• Newcomers to this idea should watch out also for mean absolute deviation from the mean (mean deviation, often) and median absolute deviation from the mean. I don't recall mean absolute deviation from the median, but am open to examples. The abbreviation MAD, unfortunately, has been applied variously, so trust people's code first, then their algebraic or verbal definition, but use of an abbreviation MAD only not at all. In symmetric distributions, and some others, MAD as defined here is half the interquartile range. (Punning on MAD I resist as a little too obvious.) – Nick Cox Mar 26 at 8:23
• Also, note that software implementations of the median absolute deviation function can scale the MAD value by a constant factor from the form presented in this answer, so that its value coincides with the standard deviation for a normal distribution. – EdM Mar 26 at 8:30
• @EdM Excellent point. Personally I dislike that practice unless people use some different term. It's no longer the MAD! – Nick Cox Mar 26 at 8:35
• @NickCox: the appeal of centring on the median is that the quantity always exists, whether or not the distribution enjoys a mean. This is the definition found in Wikipedia. – Xi'an Mar 26 at 9:20
• – kjetil b halvorsen Mar 26 at 10:05
There is already a solution for this question on Math.stackexchange:
1. You can use that the variance is $$\overline{x^2} - \overline {x}^2$$, which takes only one pass (computing the mean and the mean of the squares simultaneously), but can be more prone to roundoff error if the variance is small compared with the mean.
1. Alternatively, you can compute the variance from pairwise differences, with no reference to the mean at all: $$v_X = \frac{1}{n(n-1)}\sum_{1 \le i < j \le n}(x_i - x_j)^2.$$
1. The sample variance without mean is calculated as: $$v_{X}=\frac{1}{n-1}\left [ \sum_{i=1}^{n}x_{i}^{2}-\frac{1}{n}\left ( \sum_{i=1}^{n}x_{i} \right ) ^{2}\right ]$$
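All of these recipes agree numerically with the usual two-pass definition, which is easy to confirm. An illustrative sketch (the pairwise version is O(n²), so it is a check rather than a practical algorithm):

```python
from itertools import combinations
import random
import statistics

def var_two_pass(xs):
    # Textbook definition: pass 1 for the mean, pass 2 for squared deviations.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def var_one_pass(xs):
    # v = [ Σx² − (Σx)²/n ] / (n−1): one pass, no explicit mean variable.
    n = len(xs)
    s = sq = 0.0
    for x in xs:
        s += x
        sq += x * x
    return (sq - s * s / n) / (n - 1)

def var_pairwise(xs):
    # v = (1/(n(n−1))) Σ_{i<j} (x_i − x_j)²: never touches the mean at all.
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

random.seed(0)
xs = [random.gauss(5, 2) for _ in range(50)]
v = var_two_pass(xs)
assert abs(var_one_pass(xs) - v) < 1e-9
assert abs(var_pairwise(xs) - v) < 1e-9
assert abs(statistics.variance(xs) - v) < 1e-9
```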
http://joshkos.blogspot.com/2007/05/texbook.html
## 2007/05/27
### TeXbook
--
Quote from preface of TeXbook:
This manual is intended for people who have never used TeX before, as well as for experienced TeX hackers. In other words, it's supposed to be a panacea that satisfies everybody, at the risk of satisfying nobody.
Knuth's writing really is humorous and vivid XD. Perhaps the best way to sharpen one's writing is to do what Knuth does: study a topic and then write a whole book about it XD.
skusi 5/27/2007 12:53 pm said:
yen3 5/27/2007 3:44 pm said:
https://documentation.inesonic.com/reference_manual/function_lognormalq.html

# $$\text{LogNormalQuantile}$$
You can use the $$\text{LogNormalQuantile}$$ function to calculate the quantile function of the log-normal distribution. The quantile function is the inverse of the cumulative distribution function.
You can use the \lognormalq backslash command to insert this function.
The following variants of this function are available:
• $$\text{real } \text{LogNormalQuantile} \left ( \text{<p>} \right )$$
• $$\text{real } \text{LogNormalQuantile} \left ( \text{<p>}, \text{<}\mu\text{>} \right )$$
• $$\text{real } \text{LogNormalQuantile} \left ( \text{<p>}, \text{<}\mu\text{>}, \text{<}\sigma\text{>} \right )$$
Where $$p$$, $$\mu$$, and $$\sigma$$ are scalar values representing the probability, the mean value and the standard deviation. If not specified, the mean value will be 0 and the standard deviation will be 1. Note that this function is defined over the range $$0 \leq p \leq 1$$ and $$\sigma > 0$$. The $$\text{LogNormalQuantile}$$ function will generate a runtime error or return NaN for values for which the function is not defined.
The value is calculated directly using the relation:
$\text{LogNormalQuantile} \left ( p, \mu, \sigma \right ) = e ^ { \mu + \sqrt{2 \sigma ^ 2} \text{erf}^{-1} \left ( 2 p - 1 \right ) }$
Where $$\text{erf}^{-1}$$ represents the inverse error function.
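Equivalently, since $$\sqrt{2 \sigma^2} \, \text{erf}^{-1}(2p - 1) = \sigma \Phi^{-1}(p)$$ where $$\Phi^{-1}$$ is the standard normal quantile, the function can be sketched with the Python standard library. This is an illustration of the formula, not Inesonic's implementation; the sketch rejects the endpoints p = 0 and p = 1, where the exact values are 0 and +∞.

```python
from math import exp
from statistics import NormalDist

def log_normal_quantile(p, mu=0.0, sigma=1.0):
    """Quantile (inverse CDF) of the log-normal distribution.

    Uses exp(mu + sigma * Phi^{-1}(p)), equivalent to the erf^{-1}
    form, since Phi^{-1}(p) = sqrt(2) * erfinv(2p - 1).
    """
    if not 0.0 < p < 1.0 or sigma <= 0.0:
        raise ValueError("need 0 < p < 1 and sigma > 0")
    return exp(mu + sigma * NormalDist().inv_cdf(p))
```

For example, the median (p = 0.5) comes out to exp(μ), since Φ⁻¹(0.5) = 0.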
Figure 173 shows the basic use of the $$\text{LogNormalQuantile}$$ function.
Figure 173 Example Use Of the LogNormalQuantile Function | 2023-02-07 12:29:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8960279822349548, "perplexity": 241.0390666375159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500456.61/warc/CC-MAIN-20230207102930-20230207132930-00133.warc.gz"} |
https://www.sarthaks.com/2712833/acceleration-gravity-earth-height-above-surface-earth-distance-location-from-centre-earth

# The acceleration due to gravity of Earth at a height above the surface of Earth is 1 mm/s2. The distance of this location from the centre of Earth is
The acceleration due to gravity of Earth at a height above the surface of Earth is 1 mm/s2. The distance of this location from the centre of Earth is (Assume g = 10 m/s2, radius of earth = 6400 km)
1. 3200 km
2. 7650 km
3. 8640 km
4. 9600 km
Correct Answer - Option 4: 9600 km
CONCEPT:
• Acceleration due to gravity on the surface of Earth of mass M and radius Re is denoted by g.
• It has an approximately uniform value of 9.8 m/s2 on the surface of Earth.
• The acceleration due to gravity at a depth 'd' below the surface of Earth is given by
$⇒ g' = g(1- \frac{d}{R_e})$
• The acceleration due to gravity at a height 'h' above the surface of Earth is given by
$⇒ g'' = g(1+ \frac{h}{R_e})^{-2}$
$⇒ g'' = g(1- \frac{2h}{R_e})$ for h << Re
EXPLANATION:
Given that:
g'' = 1 mm/s2 = 10^-3 m/s2
$⇒ g'' = g(1- \frac{2h}{R_e})$
$⇒ 10^{-3}= 10(1- \frac{2h}{6400})$
⇒ h = 3200 km
Therefore, distance from the centre of earth = R + h = 6400 + 3200 = 9600 km.
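The arithmetic can be replayed in a few lines. One caveat (my observation, not part of the original solution): h = 3200 km is not actually small compared with Re, so the linearized formula is being used outside its stated h << Re regime; the exact inverse-square form gives a far larger height. The intended answer key follows the linearized form.

```python
g, g2 = 10.0, 1e-3        # surface gravity and gravity at height h, m/s^2
Re = 6400.0               # Earth radius, km

# Linearized: g'' = g(1 - 2h/Re)  =>  h = (1 - g''/g) * Re / 2
h_linear = (1.0 - g2 / g) * Re / 2.0
print(round(h_linear), round(Re + h_linear))   # 3200 9600  (km)

# Exact: g'' = g(1 + h/Re)^-2  =>  h = Re * (sqrt(g/g'') - 1), ~633,600 km
h_exact = Re * ((g / g2) ** 0.5 - 1.0)
```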
https://sourceware.org/legacy-ml/cygwin/2003-12/msg00746.html

This is the mail archive of the cygwin@cygwin.com mailing list for the Cygwin project.
Re: Is cygwin_ROOT still there?
On Thu, Dec 18, 2003 at 07:34:11AM -0600, L. D. Marks wrote:
>Thanks. A slightly tortuous solution, but this works (after adding mount
>to a limited set of executables). FYI, I used a ls /cygwin/cygwin.bat
>to test for the existence of cygwin on the system.
Wouldn't checking for cygwin1.dll be an infinitely better test?
>N.B., to include a snippet from startxwin.bat:
>
>REM The path in the CYGWIN_ROOT environment variable assignment assume
>REM that Cygwin is installed in a directory called 'cygwin' in the root
>REM directory of the current drive. You will only need to modify
>REM CYGWIN_ROOT if you have installed Cygwin in another directory. For
>REM example, if you installed Cygwin in \foo\bar\baz\cygwin, you will need
>REM to change \cygwin to \foo\bar\baz\cygwin.
startxwin.bat != cygwin
This is just a convention used by one .bat file. It is not a universally
understood convention throughout cygwin.
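A minimal sketch of the cygwin1.dll test suggested above (the candidate install paths below are just common defaults of my own choosing, not a Cygwin convention):

```python
import os

def find_cygwin(candidate_roots=(r"C:\cygwin", r"C:\cygwin64")):
    """Return the first root that contains bin\\cygwin1.dll, else None.

    Checking for the DLL tests for Cygwin itself, rather than for a
    cygwin.bat file that a particular install may or may not have.
    """
    for root in candidate_roots:
        if os.path.isfile(os.path.join(root, "bin", "cygwin1.dll")):
            return root
    return None
```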
https://brilliant.org/problems/box-and-box-and-box/

Box and box and box!!
Algebra Level pending
[x/3] + [x/5] + [x/7] = [x/10]. Find all possible positive integral solutions of the equation. Here, [m] denotes the greatest integer less than or equal to m.
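A brute-force check (my own sketch, reading [·] as the floor function) shows the only positive solutions are x = 1 and x = 2. Since floor(y) > y − 1, for x ≥ 6 the left side exceeds 71x/105 − 3 > x/10 ≥ [x/10], so a small search range is enough:

```python
# Search for positive integers x with [x/3] + [x/5] + [x/7] = [x/10];
# beyond x = 5 the left side is provably too large, so 1..999 is ample.
solutions = [x for x in range(1, 1000)
             if x // 3 + x // 5 + x // 7 == x // 10]
print(solutions)   # [1, 2]
```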
https://encyclopediaofmath.org/wiki/Matrix_of_transition_probabilities

# Matrix of transition probabilities
The matrix $P _ {t} = \| p _ {ij} ( t) \|$ of transition probabilities in time $t$ for a homogeneous Markov chain $\xi ( t)$ with at most a countable set of states $S$:
$$p _ {ij} ( t) = {\mathsf P} \{ \xi ( t) = j \mid \xi ( 0) = i \} ,\ \ i, j \in S.$$
The matrices $\| p _ {ij} ( t) \|$ of a Markov chain with discrete time or a regular Markov chain with continuous time satisfy the following conditions for any $t > 0$ and $i, j \in S$:
$$p _ {ij} ( t) \geq 0,\ \ \sum _ {j \in S } p _ {ij} ( t) = 1,$$
i.e. they are stochastic matrices (cf. Stochastic matrix), while for irregular chains
$$p _ {ij} ( t) \geq 0,\ \ \sum _ {j \in S } p _ {ij} ( t) \leq 1,$$
such matrices are called sub-stochastic.
By virtue of the basic (Chapman–Kolmogorov) property of a homogeneous Markov chain,
$$p _ {ij} ( s+ t) = \sum _ {k \in S } p _ {ik} ( s) p _ {kj} ( t),$$
the family of matrices $\{ {P _ {t} } : {t > 0 } \}$ forms a multiplicative semi-group; if the time is discrete, this semi-group is uniquely determined by $P _ {1}$.
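For a chain with discrete time, the semigroup (Chapman–Kolmogorov) property reduces to matrix powers: $P _ {s+t} = P _ {s} P _ {t}$ with $P _ {t} = P _ {1} ^ {t}$. A small pure-Python check, using an illustrative two-state stochastic matrix of my own choosing:

```python
def matmul(A, B):
    # Plain matrix product for square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(P, t):
    # P^t by repeated multiplication, starting from the identity.
    n = len(P)
    out = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(t):
        out = matmul(out, P)
    return out

P1 = [[0.9, 0.1],
      [0.2, 0.8]]          # a stochastic matrix: nonnegative, rows sum to 1

# Chapman-Kolmogorov: P_{2+3} = P_2 P_3
lhs, rhs = matpow(P1, 5), matmul(matpow(P1, 2), matpow(P1, 3))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Each P_t is again stochastic: its rows still sum to 1.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in matpow(P1, 5))
```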