# Is span a subset in $\mathbb{R}^{n}$?

**1. yango_17, Oct 9, 2015**

**1. The problem statement, all variables and given/known data**

Consider the vectors $\vec{v_{1}},\vec{v_{2}},...,\vec{v_{m}}$ in $\mathbb{R}^{n}$. Is span$(\vec{v_{1}},...,\vec{v_{m}})$ necessarily a subspace of $\mathbb{R}^{n}$? Justify your answer.

**2. Relevant equations**

**3. The attempt at a solution**

I understand the three conditions required for a subset to be a subspace (includes zero vector, closed under addition, closed under scalar multiplication), but I am not sure how to go about testing these properties with the span. Any help would be appreciated. Thanks.

**2. Staff: Mentor, Oct 9, 2015**

What's another way to write $span(\vec{v_{1}},...,\vec{v_{m}})$? How do you know whether a given vector is a member of this set?

**3. yango_17, Oct 9, 2015**

You can rewrite span as the image of a matrix, since the image of a matrix is the span of its columns. Since image is a subspace, then does it follow that span is a subspace?

**4. Staff: Mentor, Oct 9, 2015**

There's no need at all to use matrices. How does your book define the term "span"?

**5. yango_17, Oct 9, 2015**

Span: Consider the vectors $\vec{v_{1}},...,\vec{v_{m}}$ in $\mathbb{R}^{n}$. The set of all linear combinations $c_{1}\vec{v_{1}}+...+c_{m}\vec{v_{m}}$ of the vectors $\vec{v_{1}},...,\vec{v_{m}}$ is called their span: $span(\vec{v_{1}},...,\vec{v_{m}})=\left \{ c_{1}\vec{v_{1}}+...+c_{m}\vec{v_{m}}:c_{1},...,c_{m} \right \}$

**6. Ray Vickson, Oct 9, 2015**

OK, so, if $\vec{w}_1$ and $\vec{w}_2$ are in the span, is $\vec{w}_1 + \vec{w}_2$ also in the span? If $c$ is a constant, is $c \, \vec{w}_1$ in the span? Is the vector $\vec{0}$ in the span?

**7. Staff: Mentor, Oct 9, 2015**

Presumably, you mean this: $span(\vec{v_{1}},...,\vec{v_{m}})=\left \{ c_{1}\vec{v_{1}}+...+c_{m}\vec{v_{m}}:c_{1},...,c_{m} \in \mathbb{R} \right \}$
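Spelling out the hints in post #6 with the definition from posts #5 and #7 (my own summary of the standard verification, not a quote from the thread): take $\vec{w}_1=\sum_i a_i\vec{v}_i$ and $\vec{w}_2=\sum_i b_i\vec{v}_i$ in the span. Then

$$\vec{0}=0\vec{v}_1+\dots+0\vec{v}_m, \qquad \vec{w}_1+\vec{w}_2=\sum_i (a_i+b_i)\vec{v}_i, \qquad c\,\vec{w}_1=\sum_i (c\,a_i)\vec{v}_i,$$

and each right-hand side is again a linear combination of $\vec{v}_1,\dots,\vec{v}_m$, so all three subspace conditions hold and the span is always a subspace of $\mathbb{R}^n$.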
# nLab pseudomonoid

## Idea

A pseudomonoid in a monoidal 2-category is a categorification of the notion of a monoid object in a monoidal category.

## Definition

A pseudomonoid in the cartesian monoidal 2-category Cat is precisely a monoidal category. The general definition can be extracted from this special case in a straightforward way; the precise definition can be found in Section 3 of the paper of Day and Street referenced below.

Just as a monoid in a monoidal category $C$ can be equivalently defined as a monad in the corresponding one-object 2-category $\mathbf{B}C$ (the delooping of $C$), so a pseudomonoid in a monoidal 2-category $C$ can equivalently be defined as a pseudomonad in the corresponding one-object 3-category $\mathbf{B}C$.

## Variations

A map pseudomonoid is a pseudomonoid whose multiplication and unit are maps, i.e. left adjoints. This is a more appropriate notion for monoidal bicategories whose morphisms are profunctors, since maps therein can be identified (modulo Cauchy completion) with functors.

Other more special kinds of pseudomonoid are generalizations of special kinds of monoidal categories, including:

- braided pseudomonoids
- symmetric pseudomonoids
- balanced pseudomonoids
- closed pseudomonoids
- $\ast$-autonomous, a.k.a. Frobenius pseudomonoids
- compact closed (or autonomous) pseudomonoids

Eventually these should probably have their own pages.

## References

- Brian Day, Ross Street, *Monoidal Bicategories and Hopf Algebroids*.
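For orientation, here is a sketch of the underlying data only (my notation, suppressing the unit constraints of the ambient 2-category and omitting the coherence axioms; see Day and Street for the precise version). A pseudomonoid in a monoidal 2-category $K$ with unit object $I$ consists of

$$A \in K, \qquad m \colon A \otimes A \to A, \qquad u \colon I \to A,$$

together with invertible 2-cells

$$\alpha \colon m \circ (m \otimes 1_A) \Rightarrow m \circ (1_A \otimes m), \qquad \lambda \colon m \circ (u \otimes 1_A) \Rightarrow 1_A, \qquad \rho \colon m \circ (1_A \otimes u) \Rightarrow 1_A,$$

subject to pentagon- and triangle-style coherence axioms. Taking $K = \mathrm{Cat}$, these 2-cells are exactly the associator and unitors of a monoidal category.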
# Reduction (chemistry)

A reduction is a chemical reaction in which one or more electrons are absorbed by a particle (atom, ion or molecule). This lowers the oxidation number of the constituent particle that is reduced by the number of electrons accepted. A reduction always occurs together with the oxidation of the particle that supplied the electrons, which is known as the reducing agent. In the reducing agent, the oxidation number of the constituent particle that has supplied the electrons is increased by the number of electrons supplied. The two coupled reactions together are known as a redox reaction.

## History

In the early days of chemistry, reduction was viewed as the removal of oxygen from an oxide. A reaction in which an oxidation was reversed was called a reduction (from the Latin reductio, "return"). Oxidation was defined as the union of a compound or an element with oxygen, based on the findings of Antoine Laurent de Lavoisier. Oxides of noble metals, such as silver(I) oxide, decompose when they are simply heated; oxygen and elemental silver are formed from silver(I) oxide:

$\mathrm{2\,Ag_2O \ \xrightarrow{\Delta}\ 4\,Ag + O_2}$

If copper(II) oxide is heated in a hydrogen stream, metallic copper and water are produced. Hydrogen acts here as a reducing agent and removes oxygen from the copper(II) oxide:

$\mathrm{CuO + H_2 \ \xrightarrow{\Delta}\ Cu + H_2O}$

Today, a broader perspective applies, which is not limited to reactions of oxygen-containing compounds and has absorbed the classic perspective.

## General definition

Reduction is a reaction in which a mono- or polyatomic particle Ox accepts one or more electrons; the particle Red is formed:

$\mathrm{Ox} + n\,\mathrm{e^-} \longrightarrow \mathrm{Red}$

Ox reacts as an electron acceptor; Ox and Red form a so-called redox couple. The electrons come from a second redox couple that undergoes oxidation. While in the field of electrochemistry, as in electrolysis or a galvanic cell, the electron transfer between the two redox couples is a measurable quantity, in other cases the reduction can only be recognized by the associated lowering of the oxidation number of Ox. If a reduction is viewed as an equilibrium reaction, the reverse reaction is an oxidation:

$\mathrm{Ox} + n\,\mathrm{e^-} \ \underset{\text{oxidation}}{\overset{\text{reduction}}{\rightleftharpoons}}\ \mathrm{Red}$

Such equilibria exist, for example, in an unused accumulator: while the partial reaction runs in one direction during discharging, charging leads to a reversal of the reaction direction.

Although a reduction never occurs without an oxidation, so that a redox reaction always takes place, a reaction is often viewed from the perspective of the desired product. We speak of a reduction of iron ore to elemental iron, or of a cathodic reduction of aluminium oxide to aluminium.

## Absorption of electrons - reduction of the oxidation number

If an iron nail is placed in an aqueous copper(II) sulfate solution, a reddish-brown coating of metallic copper forms on the nail. The copper is reduced and the iron is oxidized to Fe$^{2+}$ ions:

$\mathrm{Cu^{2+} + 2\,e^- \longrightarrow Cu}$ (1st redox couple; a reduction takes place here)

$\mathrm{Fe \longrightarrow Fe^{2+} + 2\,e^-}$ (2nd redox couple; an oxidation takes place here)

$\mathrm{Fe + Cu^{2+} \longrightarrow Cu + Fe^{2+}}$ (redox reaction)

The iron, which is itself oxidized during the redox reaction, is called the reducing agent in this context, because its presence enables the copper to be reduced. Reduction always means a decrease in the oxidation number due to the uptake of electrons; oxidation, on the other hand, means the release of electrons and thus an increase in the oxidation number. In this case, the charges on the particles correspond to their oxidation numbers.

The thermal decomposition of silver(I) oxide, $\mathrm{\overset{+1}{Ag_2}\overset{-2}{O}}$, is also a reaction in which electrons are transferred:

$\mathrm{4\,\overset{+1}{Ag}{}^{+} + 4\,e^- \longrightarrow 4\,\overset{\pm 0}{Ag}}$ (1st redox couple; a reduction takes place here)

$\mathrm{2\,\overset{-2}{O}{}^{2-} \longrightarrow \overset{\pm 0}{O}{}_2 + 4\,e^-}$ (2nd redox couple; an oxidation takes place here)

$\mathrm{2\,\overset{+1}{Ag_2}\overset{-2}{O} \longrightarrow 4\,\overset{\pm 0}{Ag} + \overset{\pm 0}{O}{}_2}$ (redox reaction)

When copper oxide is reacted with hydrogen, copper is reduced and hydrogen acts as the reducing agent. The oxidation state of the oxygen atoms remains unchanged in the reaction, but the atoms change their binding partner: the formally formed H$^+$ ions combine with the formally unchanged O$^{2-}$ ions to form the reaction product water, $\mathrm{\overset{+1}{H_2}\overset{-2}{O}}$:

$\mathrm{\overset{+2}{Cu}\overset{-2}{O} + 2\,e^- \longrightarrow \overset{\pm 0}{Cu} + O^{2-}}$ (1st redox couple; reduction)

$\mathrm{\overset{\pm 0}{H}{}_2 \longrightarrow 2\,\overset{+1}{H}{}^{+} + 2\,e^-}$ (2nd redox couple; oxidation)

$\mathrm{\overset{+2}{Cu}\overset{-2}{O} + \overset{\pm 0}{H}{}_2 \longrightarrow \overset{\pm 0}{Cu} + \overset{+1}{H}{}_2\overset{-2}{O}}$ (redox reaction)

An uptake of hydrogen by organic compounds leads to a reduction in the oxidation number of one or more carbon atoms. The catalytic hydrogenation of 2-butene leads to n-butane: the oxidation numbers of the carbon atoms formerly linked by the double bond change from −1 to −2. Formally, these atoms each gain one electron and one proton, and the second redox couple with the reducing agent H$_2$ can be formulated as in the copper example: hydrogen is formally oxidized and the electrons are formally released. The overall reaction is a redox reaction; from the perspective of the starting material 2-butene, the compound is reduced to n-butane. Often this reaction is viewed simply as an addition reaction, and from that point of view the changes in oxidation states are irrelevant.

Reduction is often spoken of in connection with oxygen-containing organic compounds, such as the conversion of ketones or aldehydes into alcohols. If acetaldehyde absorbs hydrogen, ethanol is produced; the oxidation state of the carbonyl carbon changes from +1 to −1.

Reductions are important in biochemistry: in many metabolic pathways in a cell, a reduction takes place through the transfer of hydrogen. Coenzymes such as NADH, NADPH or FADH$_2$ are capable of formally transferring a hydride ion or hydrogen to another compound.
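The electron bookkeeping in these half-reactions is mechanical enough to script. Below is a minimal sketch (my own illustration, not part of the article) that checks electron balance for the silver(I) oxide decomposition, using the oxidation numbers given above:

```python
# Electron bookkeeping for 2 Ag2O -> 4 Ag + O2, per the oxidation
# numbers in the text: Ag goes +1 -> 0, O goes -2 -> 0.

reduction = {"atoms": 4, "before": +1, "after": 0}   # 4 Ag+ + 4 e- -> 4 Ag
oxidation = {"atoms": 2, "before": -2, "after": 0}   # 2 O2- -> O2 + 4 e-

electrons_gained = reduction["atoms"] * (reduction["before"] - reduction["after"])
electrons_lost = oxidation["atoms"] * (oxidation["after"] - oxidation["before"])

# A redox reaction is balanced only if both counts agree.
assert electrons_gained == electrons_lost == 4
print("electrons transferred:", electrons_gained)
```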
# Any way to make this recursive function better/faster?

Is there anything that can be done differently for this function? Any way to make it faster?

```csharp
public List<Channel> ChildrenOf(Channel startingChannel)
{
    List<Channel> result = new List<Channel>();
    foreach (Channel child in startingChannel.Children)
    {
        // Assumed body: add the child, then copy in all of its descendants
        // (the copying is what the comments below refer to).
        result.Add(child);
        result.AddRange(ChildrenOf(child));
    }
    return result;
}
```

- Looks to me like that's it. But maybe someone else has some other ideas. – Ivan Crojach Karačić Oct 28 '11 at 7:31
- Your approach has a problem with deeply nested lists. Let's say you have N items at a nesting depth D. Then each item will be copied D times -> O(N*D) time. The "yield return" answer has a similar issue: for each item, it has to execute D `yield return` statements. Guffa's answer doesn't have this problem and will run in O(N) time. – Oct 28 '11 at 12:34
- Is there a way to do this with tail recursion? HRMMM. – Oct 28 '11 at 16:18

---

I'd separate iteration and adding items to a list:

```csharp
public IEnumerable<Channel> ChildrenOf(Channel root)
{
    yield return root;
    foreach (var c in root.Children)
        foreach (var cc in ChildrenOf(c))
            yield return cc;
}
```

- Does this make it better/faster? How does the IL compare? – Oct 28 '11 at 7:47
- Almost there, but don't forget to yield back the first-level children, though. – Oct 28 '11 at 7:51
- @luketorjussen: This method is faster since it doesn't require any allocations on the heap and therefore no deallocations. Be aware though that since the data is not cached in a list, manipulation is not possible and multiple iterations will execute the function multiple times. – Oct 28 '11 at 7:53
- @Polity: Actually, each `foreach` loop will create an enumerator object, which is still an allocation on the heap (the inner loop creates an enumerator for each iteration of the outer loop). – Oct 28 '11 at 8:47
- You should be careful to avoid `yield return` in recursive functions; the memory usage scales explosively. See stackoverflow.com/a/30300257/284795. Thus Guffa's and Lippert's solutions are preferable. – May 18 '15 at 10:09

---

Just to round out the other answers: I would be inclined to write your solution like this:

```csharp
// Requires System.Collections.Generic and System.Linq (for Reverse).
static IEnumerable<T> DepthFirstTreeTraversal<T>(T root, Func<T, IEnumerable<T>> children)
{
    var stack = new Stack<T>();
    stack.Push(root);
    while (stack.Count != 0)
    {
        var current = stack.Pop();
        // If you don't care about maintaining child order then remove the Reverse.
        foreach (var child in children(current).Reverse())
            stack.Push(child);
        yield return current;
    }
}
```

And now to achieve your aim, you just say:

```csharp
static List<Channel> AllChildren(Channel start)
{
    return DepthFirstTreeTraversal(start, c => c.Children).ToList();
}
```

Now you have a more general-purpose tool that you can use to get a depth-first traversal of any tree structure, not just your particular structure. Another nice feature of my solution is that it uses a fixed amount of call stack space. Even if your hierarchy is twenty thousand deep, you never run out of stack space because the method is not recursive to begin with. All the information that would be needed for recursion is stored on the "stack" data structure instead of in activation records on the real call stack.

- Damn. And here I was, coming to answer this with just such a generic stack... :) – Nov 3 '11 at 4:15
- +1 for populating a List using this traversal extension. I made a C# extension for the traversal method and use it on a bunch of stuff. I find it much easier to understand than using a recursive method. – May 22 '14 at 13:47
- Micro-optimisation: I would put the yield before the foreach, to save some cycles and memory in case the iteration of the enumerator is stopped midway. – May 27 '17 at 3:09

---

Put the recursive part in a private method, so that you can add the items directly to the list instead of creating intermediate lists:

```csharp
public List<Channel> ChildrenOf(Channel startingChannel)
{
    List<Channel> result = new List<Channel>();
    // Assumed wiring: delegate to the recursive helper, then return the list.
    AddChildren(startingChannel, result);
    return result;
}

private void AddChildren(Channel channel, List<Channel> list)
{
    foreach (Channel child in channel.Children)
    {
        // Assumed body: append directly to the shared list, then recurse.
        list.Add(child);
        AddChildren(child, list);
    }
}
```

(This is basically the same principle as Polity suggested, only it's implemented in two methods so that you don't have to create an empty list to call it.)

---

Sharing your result list would be one way of preventing allocations, and especially garbage collections, from happening:

```csharp
public List<Channel> ChildrenOf(Channel startingChannel, List<Channel> result)
{
    foreach (Channel child in startingChannel.Children)
    {
        // Assumed body, completing the truncated snippet: append to the
        // caller-supplied list, then recurse into the child.
        result.Add(child);
        ChildrenOf(child, result);
    }
    return result;
}
```
# Understand the F-statistic in Linear Regression

An F statistic is a value you get when you run an ANOVA test or a regression analysis to find out if the means between two populations are significantly different; the name of the underlying distribution was coined by George W. Snedecor, in honour of Sir Ronald A. Fisher. Although R-squared can give you an idea of how strongly associated the predictor variables are with the response variable, it doesn't provide a formal statistical test for this relationship. The F-test does: it allows you to test the null hypothesis that all of your model's coefficients are zero. This is also called the overall regression F-statistic, and its null hypothesis is obviously different from testing whether only, say, $\beta_1$ and $\beta_3$ are zero. Software like Stata, after fitting a regression model, also provides the p-value associated with the F-statistic. When writing up results, report the F statistic (rounded off to two decimal places) and the significance level.

The F-test is a way of comparing the model that we have calculated to the overall mean of the data; higher variances occur when the individual data points tend to fall further from the mean. It's possible that each predictor variable is not significant and yet the F-test says that all of the predictor variables combined are jointly significant. Conversely, the more variables we have in our model, the more likely it will be to have a p-value < 0.05 just by chance. The F-statistic can be used to establish the relationship between the response and predictor variables in a multiple regression model when the number of parameters P is relatively small compared to the number of observations N; when the number of parameters (features) is larger than N, it is difficult to fit the regression model at all. For example, let's say you had 3 regression degrees of freedom (df1) and 120 residual degrees of freedom (df2).
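To make the df1 = 3, df2 = 120 example concrete, here is a minimal sketch (my own illustration; the article itself shows no code) that looks up the corresponding critical F value with SciPy:

```python
from scipy.stats import f

# Critical value for a right-tailed F-test at alpha = 0.05
# with df1 = 3 regression and df2 = 120 residual degrees of freedom.
alpha = 0.05
critical_value = f.ppf(1 - alpha, dfn=3, dfd=120)
print(round(critical_value, 4))  # ~2.68; reject H0 if the observed F exceeds this
```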
Here's where the F-statistic comes into play. When running a multiple linear regression model

$Y = \beta_0 + \beta_1X_1 + \beta_2X_2 + \beta_3X_3 + \beta_4X_4 + \dots + \epsilon,$

the F-statistic provides us with a way of globally testing whether ANY of the independent variables $X_1, X_2, X_3, X_4, \dots$ is related to the outcome $Y$. Why not simply look at the p-values associated with each coefficient $\beta_1, \beta_2, \beta_3, \beta_4, \dots$ to determine if any of the predictors is related to $Y$? We will return to that question below. In linear regression, the F-statistic is the test statistic for the analysis of variance (ANOVA) approach to testing the significance of the model or of components in the model; you can use F-statistics and F-tests to test the overall significance of a regression model, to compare the fits of different models, to test specific regression terms, and to test the equality of means.

The F-test of overall significance has the following two hypotheses:

Null hypothesis (H0): The model with no predictor variables (also known as an intercept-only model) fits the data as well as your regression model.

Alternative hypothesis (HA): Your regression model fits the data better than the intercept-only model.

For simple linear regression this amounts to H0: $Y = b_0$ versus H1: $Y = b_0 + b_1X$. If the p-value is less than the significance level you've chosen (common choices are .01, .05, and .10), then you have sufficient evidence to conclude that your regression model fits the data better than the intercept-only model. Technical note: in general, the more predictor variables you have in the model, the higher the likelihood that the F-statistic and corresponding p-value will be statistically significant, so you need to know which variables were entered into the current regression. In R, the overall F-statistic is printed in the model summary (for example, `mod_summary$fstatistic` also returns `numdf`, the numerator degrees of freedom), and one can check that the F-statistic belonging to the p-value listed in the model's summary coincides with the result reported by `linearHypothesis()`; in one such check the F-statistic is 36.92899.
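In symbols (my notation, for a model with $k$ predictors; this is just the pair of hypotheses stated above):

$$H_0:\ \beta_1 = \beta_2 = \dots = \beta_k = 0 \qquad \text{versus} \qquad H_A:\ \beta_j \neq 0 \ \text{for at least one } j.$$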
An F-statistic is the ratio of two variances or, technically, of two mean squares. Mean squares are simply variances that account for the degrees of freedom (DF) used to estimate the variance; variances measure the dispersal of the data points around the mean, and while variances are hard to interpret directly, some statistical tests use them in their equations. The F-statistic is the division of the model mean square by the residual mean square. The right-tailed F test checks whether the entire regression model is statistically significant: the F statistic is used to test the hypothesis that the ratio of a pair of mean squares is at least unity (i.e. that the mean squares are identical). Unlike t-tests, which can assess only one regression coefficient at a time, the F-test can assess multiple coefficients simultaneously. More generally, an F-test in regression compares the fits of different linear models, and we use the general linear F-statistic to decide whether or not to reject a smaller model in favor of a larger one. Some software reports this as "F-statistic vs. constant model": the test statistic for the F-test on the regression model, which tests whether the model fits significantly better than a degenerate model consisting of only a constant term.

Regression output usually annotates the test directly. In Stata: e. Number of obs is the number of observations used in the regression analysis, and f. F and Prob > F gives the F-value as the Mean Square Model (2385.93019) divided by the Mean Square Residual (51.0963039), yielding F = 46.69. In SPSS: c. Model tells you the number of the model being reported (SPSS allows you to specify multiple models in a single regression command), and d. Variables Entered reflects that SPSS allows you to enter variables into a regression in blocks and allows stepwise regression; if you did not block your independent variables or use stepwise regression, this column should list all of the independent variables that you specified. R automatically calculates the p-value for the F-statistic; a Significance F reported as 6.58*10^(-10) (in real numbers, 0.000000000658) is approximately 0. Below we will go through two special-case examples to discuss why we need the F-test and how to interpret it.
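Written out (my notation, consistent with the mean-square description above and the Stata numbers just quoted):

$$F = \frac{MS_\text{model}}{MS_\text{residual}} = \frac{SS_\text{model}/df_\text{model}}{SS_\text{residual}/df_\text{residual}}, \qquad \text{e.g. } F = \frac{2385.93019}{51.0963039} \approx 46.69.$$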
The regression analysis technique is built on a number of statistical concepts, including sampling, probability, correlation, distributions, the central limit theorem, confidence intervals, z-scores, t-scores and hypothesis testing, and the regression models assume that the error deviations are uncorrelated. We use an F-statistic to decide whether or not to reject the smaller reduced model in favor of the larger full model; it is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled [G. James, D. Witten, T. Hastie, and R. Tibshirani, Eds., An Introduction to Statistical Learning: with Applications in R. New York: Springer, 2013]. The "full model", also sometimes referred to as the "unrestricted model", is the model thought to be most appropriate for the data. Thus, the F-test determines whether or not all of the predictor variables are jointly significant. As the wording of the hypotheses suggests, the null hypothesis always pertains to the reduced model, while the alternative hypothesis always pertains to the full model.

Suppose we have a dataset that shows the total number of hours studied, total prep exams taken, and final exam score received for 12 different students. To analyze the relationship between hours studied and prep exams taken and the final exam score that a student receives, we run a multiple linear regression using hours studied and prep exams taken as the predictor variables and final exam score as the response variable, and we choose .05 as our significance level.
From these results, we focus on the F-statistic given in the ANOVA table as well as its p-value, labeled Significance F in the table. On the very last line of the output we can see that the F-statistic for the overall regression model is 5.091; this F-statistic has 2 degrees of freedom for the numerator and 9 degrees of freedom for the denominator, and R automatically calculates that its p-value is 0.0332. Since the p-value is less than the significance level, we conclude that our regression model fits the data better than the intercept-only model. In the context of this specific problem, it means that using our predictor variables Study Hours and Prep Exams in the model allows us to fit the data better than if we left them out and simply used the intercept-only model. A typical software summary of the overall fit (here from Stata) looks like this:

```
Number of obs =    200
F(4, 195)     =  46.69
Prob > F      = 0.0000
R-squared     = 0.4892
Adj R-squared = 0.4788
Root MSE      = 7.1482
```

Ordinarily the F statistic calculation is used to verify the significance of the regression and of the lack of fit. In addition, if the overall F-test is significant, you can conclude that R-squared is not equal to zero and that the correlation between the predictor variable(s) and the response variable is statistically significant. Remember that the mean is also a model that can be used to explain the data; for simple linear regression, the full model is the fitted line itself. (Plots of hypothesized full models for two data sets worked with previously, student heights against grade point averages and state latitudes against skin cancer mortalities, are omitted here; in each plot, the solid line represents the hypothesized full model.) When reporting, correlations are given with the degrees of freedom (which is N − 2) in parentheses and the significance level, for example: There was a significant main effect for treatment, F(1, 145) = 5.43, p = .02, and a significant interaction, F(2, 145) = 3.24, p = .04.

Why do we need a global test at all? Before we answer this question, let's first look at an example. In the output of one linear regression in R with four predictors, the coefficient of X3 has a p-value < 0.05, which suggests that X3 is a statistically significant predictor of Y. However, the last line shows that the F-statistic is 1.381 with a p-value of 0.2464 (> 0.05), which suggests that NONE of the independent variables in the model is significantly related to Y! A plot of the probability of having AT LEAST one variable with p-value < 0.05 when in reality none has a true effect on Y shows why: a model with 4 independent variables has an 18.5% chance of at least one spuriously "significant" coefficient, and a model with more than 80 variables will almost certainly have one p-value < 0.05. One important characteristic of the F-statistic is that it adjusts for the number of independent variables in the model, so it will not be biased when we have more than one variable in the model.
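To see how an overall F-statistic and its p-value are pulled out of a fitted model in practice, here is a minimal sketch (my own code with made-up numbers in the spirit of the 12-student example; it is not the article's actual data):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical stand-in for the 12-student dataset described above.
df = pd.DataFrame({
    "hours": [1, 2, 2, 3, 4, 4, 5, 6, 6, 7, 8, 8],
    "exams": [1, 3, 3, 2, 2, 4, 1, 2, 3, 1, 2, 4],
    "score": [68, 70, 73, 75, 76, 79, 81, 83, 84, 86, 88, 91],
})

X = sm.add_constant(df[["hours", "exams"]])  # intercept + 2 predictors
model = sm.OLS(df["score"], X).fit()

# Overall F-test: H0 is that both slope coefficients are zero.
print(model.fvalue, model.f_pvalue)  # compare f_pvalue against alpha = 0.05
```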
Usually, if none of your predictor variables is statistically significant, the overall F-test will also not be statistically significant. However, it's possible on some occasions that this doesn't hold, because the F-test of overall significance tests whether all of the predictor variables are jointly significant, while the t-test of significance for each individual predictor variable merely tests whether each predictor variable is individually significant. So we cannot decide on the global significance of the linear regression model based on the p-values of the β coefficients alone.

Here's the output of another example of a linear regression model, this time one where none of the independent variables is statistically significant but the overall model is (i.e. at least one of the variables is related to the outcome Y according to the p-value associated with the F-statistic): the model is significant with a p-value of 7.3816e-27. So is there something wrong with our model? Well, in this particular example I deliberately chose to include in the model two correlated variables, X1 and X2 (with a correlation coefficient of 0.5). Because this correlation is present, the effect of each of them was diluted, and therefore their p-values were ≥ 0.05, when in reality both are related to the outcome Y.

Similar to the t-test, if the F-statistic is higher than a critical value, then the model is better at explaining the data than the mean is. The same univariate idea is exposed in scikit-learn as `sklearn.feature_selection.f_regression(X, y, *, center=True)`, a linear model for testing the individual effect of each of many regressors; note that this is a scoring function to be used in a feature selection procedure, not a free-standing feature selection procedure.
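A short usage sketch of that scikit-learn function (my own illustration; the array contents are arbitrary):

```python
import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))               # 100 samples, 4 candidate features
y = 2.0 * X[:, 0] + rng.normal(size=100)    # only the first feature matters

# One univariate F-test per column of X, not a joint test of all features.
F, pvalues = f_regression(X, y)
print(F.round(2), pvalues.round(4))
```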
Underlying all of this is the fact that each coefficient's p-value comes from a separate statistical test that has a 5% chance of being a false positive result (assuming a significance level of 0.05). So when it comes to the overall significance of the linear regression model, always trust the statistical significance of the p-value associated with the F-statistic over that of each individual independent variable.
# Eddy (fluid dynamics)

In fluid dynamics, an eddy is the swirling of a fluid and the reverse current created when the fluid is in a turbulent flow regime.[2] The moving fluid creates a space devoid of downstream-flowing fluid on the downstream side of the object. Fluid behind the obstacle flows into the void, creating a swirl of fluid on each edge of the obstacle, followed by a short reverse flow of fluid behind the obstacle flowing upstream, toward the back of the obstacle. This phenomenon is naturally observed behind large emergent rocks in swift-flowing rivers.

(Figure: a vortex street around a cylinder. This can occur around cylinders and spheres, for any fluid, cylinder size and fluid speed, provided that the flow has a Reynolds number in the range ~40 to ~1000.[1])

## Swirl and eddies in engineering

The propensity of a fluid to swirl is used to promote good fuel/air mixing in internal combustion engines. In fluid mechanics and transport phenomena, an eddy is not a property of the fluid, but a violent swirling motion caused by the position and direction of turbulent flow.[3]

(Figure: the velocity distribution of a fluid moving through a circular pipe, for laminar flow (left), time-averaged turbulent flow (center), and an instantaneous depiction of turbulent flow (right).)

## Reynolds number and turbulence

In 1883, scientist Osborne Reynolds conducted a fluid dynamics experiment involving water and dye, where he adjusted the velocities of the fluids and observed the transition from laminar to turbulent flow, characterized by the formation of eddies and vortices.[4] Turbulent flow is defined as the flow in which the system's inertial forces are dominant over the viscous forces. This phenomenon is described by the Reynolds number, a unit-less number used to determine when turbulent flow will occur; conceptually, the Reynolds number is the ratio between inertial forces and viscous forces.[5]

(Figure: Reynolds' experiment (1883); Osborne Reynolds standing beside his apparatus.)

The general form of the Reynolds number for flow through a tube of radius r (or diameter d) is

${\displaystyle Re={2v\rho r \over \mu }={\rho vd \over \mu }}$

where $v$ is the velocity, $\rho$ the density, $r$ the radius and $\mu$ the viscosity of the fluid.

(Figure: schlieren photograph showing the thermal convection plume rising from an ordinary candle in still air; the plume is initially laminar, but transition to turbulence occurs in the upper third of the image. Made using the 1-meter-diameter schlieren mirror of Floviz Inc. by Dr. Gary Settles.)

The transition from laminar to turbulent flow in a fluid is defined by the critical Reynolds number

${\displaystyle Re_{c}\approx 2000}$

In terms of the critical Reynolds number, the critical velocity is represented as

${\displaystyle v_{c}={Re_{c}\mu \over \rho d}}$

## Research and development

### Hemodynamics

Hemodynamics is the study of blood flow in the circulatory system. Blood flow in straight sections of the arterial tree is typically laminar (high, directed wall stress), but branches and curvatures in the system cause turbulent flow.[2] Turbulent flow in the arterial tree can cause a number of concerning effects, including atherosclerotic lesions, postsurgical neointimal hyperplasia, in-stent restenosis, vein bypass graft failure, transplant vasculopathy, and aortic valve calcification.
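Returning to the Reynolds-number formulas above, here is a minimal sketch (my own illustration, using the article's symbols and its $Re_c \approx 2000$ threshold, with illustrative property values) that computes a pipe-flow Reynolds number and classifies the regime:

```python
def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * v * d / mu for flow through a tube of diameter d."""
    return density * velocity * diameter / viscosity

# Water at roughly 20 C in a 5 cm pipe at 0.1 m/s (illustrative values).
re = reynolds_number(density=998.0, velocity=0.1, diameter=0.05, viscosity=1.0e-3)

RE_CRITICAL = 2000  # transition threshold quoted in the text
print(f"Re = {re:.0f} ->", "turbulent" if re > RE_CRITICAL else "laminar")
```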
(Figure: comparison of air flow around a smooth golf ball versus a dimpled golf ball.)

### Industrial processes

Lift and drag properties of golf balls are customized by the manipulation of dimples along the surface of the ball, allowing the golf ball to travel further and faster in the air.[6][7] Data from turbulent-flow phenomena have been used to model different transitions in fluid flow regimes, which are used to thoroughly mix fluids and increase reaction rates within industrial processes.[8]

### Fluid currents and pollution control

Oceanic and atmospheric currents transfer particles, debris, and organisms all across the globe. While the transport of organisms, such as phytoplankton, is essential for the preservation of ecosystems, oil and other pollutants are also mixed into the current flow and can carry pollution far from its origin.[9][10] Eddy formations circulate trash and other pollutants into concentrated areas which researchers are tracking to improve clean-up and pollution prevention. Mesoscale ocean eddies also play crucial roles in transferring heat poleward, as well as maintaining heat gradients at different depths.[11]

### Computational fluid dynamics

Linear eddy viscosity models are turbulence models in which the Reynolds stresses, as obtained from a Reynolds averaging of the Navier-Stokes equations, are modelled by a linear constitutive relationship with the mean flow straining field, as:

${\displaystyle -\rho \langle u_{i}u_{j}\rangle =2\mu _{t}S_{i,j}-{2 \over 3}\rho \kappa \delta _{i,j}}$

where

- ${\displaystyle \mu _{t}}$ is the coefficient termed turbulence "viscosity" (also called the eddy viscosity)
- ${\displaystyle \kappa ={\tfrac {1}{2}}(\langle u_{1}u_{1}\rangle +\langle u_{2}u_{2}\rangle +\langle u_{3}u_{3}\rangle )}$ is the mean turbulent kinetic energy
- ${\displaystyle S_{i,j}}$ is the mean strain rate

Note that the inclusion of ${\displaystyle {\tfrac {2}{3}}\rho \kappa \delta _{i,j}}$ in the linear constitutive relation is required for tensorial algebra purposes when solving for two-equation turbulence models (or any other turbulence model that solves a transport equation for ${\displaystyle \kappa }$).[12]

## Mesoscale ocean eddies

(Figure: downwind of obstacles, in this case the Madeira and the Canary Islands off the west African coast, eddies create turbulent patterns called vortex streets.)

Eddies are common in the ocean, and range in diameter from centimeters to hundreds of kilometers. The smallest-scale eddies may last for a matter of seconds, while the larger features may persist for months to years. Eddies that are between about 10 and 500 km (6.2 and 310.7 miles) in diameter and persist for periods of days to months are known in oceanography as mesoscale eddies.[13]

Mesoscale eddies can be split into two categories: static eddies, caused by flow around an obstacle, and transient eddies, caused by baroclinic instability. When the ocean contains a sea surface height gradient, this creates a jet or current, such as the Antarctic Circumpolar Current. As part of a baroclinically unstable system, this current meanders and creates eddies (in much the same way as a meandering river forms an ox-bow lake). These types of mesoscale eddies have been observed in many major ocean currents, including the Gulf Stream, the Agulhas Current, the Kuroshio Current, and the Antarctic Circumpolar Current, amongst others.

Mesoscale ocean eddies are characterized by currents that flow in a roughly circular motion around the center of the eddy. The sense of rotation of these currents may be either cyclonic or anticyclonic (such as Haida Eddies).
Oceanic eddies are also usually made of water masses that are different from those outside the eddy. That is, the water within an eddy usually has different temperature and salinity characteristics to the water outside the eddy. There is a direct link between the water mass properties of an eddy and its rotation. Warm eddies rotate anti-cyclonically, while cold eddies rotate cyclonically. Because eddies may have a vigorous circulation associated with them, they are of concern to naval and commercial operations at sea. Further, because eddies transport anomalously warm or cold water as they move, they have an important influence on heat transport in certain parts of the ocean. ## References 1. ^ Tansley, Claire E.; Marshall, David P. (2001). "Flow past a Cylinder on a Plane, with Application to Gulf Stream Separation and the Antarctic Circumpolar Current" (PDF). Journal of Physical Oceanography. 31 (11): 3274–3283. Bibcode:2001JPO....31.3274T. doi:10.1175/1520-0485(2001)031<3274:FPACOA>2.0.CO;2. 2. ^ a b Chiu, Jeng-Jiann; Chien, Shu (2011-01-01). "Effects of Disturbed Flow on Vascular Endothelium: Pathophysiological Basis and Clinical Perspectives". Physiological Reviews. 91 (1): 327–387. doi:10.1152/physrev.00047.2009. ISSN 0031-9333. PMC 3844671. PMID 21248169. 3. ^ Lightfoot, R. Byron Bird ; Warren E. Stewart ; Edwin N. (2002). Transport phenomena (2. ed.). New York, NY [u.a.]: Wiley. ISBN 0-471-41077-2. 4. ^ Kambe, Tsutomu (2007). Elementary Fluid Mechanics. World Scientific Publishing Co. Pte. Ltd. p. 240. ISBN 978-981-256-416-0. 5. ^ "Pressure". hyperphysics.phy-astr.gsu.edu. Retrieved 2017-02-12. 6. ^ Arnold, Douglas. "The Flight of a Golf Ball" (PDF). 7. ^ "Why are Golf Balls Dimpled?". math.ucr.edu. Retrieved 2017-02-12. 8. ^ Dimotakis, Paul. "The Mixing Transition in Turbulent Flows" (PDF). California Institute of Technology Information Tech Services. 9. ^ "Ocean currents push phytoplankton, and pollution, around the globe faster than thought". Science Daily. 16 April 2016. Retrieved 2017-02-12. 10. ^ "Ocean Pollution". National Oceanic and Atmospheric Administration. 11. ^ "Ocean Mesoscale Eddies – Geophysical Fluid Dynamics Laboratory". www.gfdl.noaa.gov. Retrieved 2017-02-12. 12. ^ "Linear eddy viscosity models -- CFD-Wiki, the free CFD reference". www.cfd-online.com. Retrieved 2017-02-12. 13. ^ https://journals.ametsoc.org/doi/pdf/10.1175/1520-0485%282001%29031%3C3274%3AFPACOA%3E2.0.CO%3B2 Beaufort Gyre The Beaufort Gyre is a wind-driven ocean current located in the Arctic Ocean polar region. The gyre contains both ice and water. It accumulates fresh water by the process of melting the ice floating on the surface of the water. Eddy covariance The eddy covariance (also known as eddy correlation and eddy flux) technique is a key atmospheric measurement technique to measure and calculate vertical turbulent fluxes within atmospheric boundary layers. The method analyzes high-frequency wind and scalar atmospheric data series, and yields values of fluxes of these properties. It is a statistical method used in meteorology and other applications (micrometeorology, oceanography, hydrology, agricultural sciences, industrial and regulatory applications, etc.) to determine exchange rates of trace gases over natural ecosystems and agricultural fields, and to quantify gas emissions rates from other land and water areas. 
It is frequently used to estimate momentum, heat, water vapour, carbon dioxide and methane fluxes. The technique is also used extensively for verification and tuning of global climate models, mesoscale and weather models, complex biogeochemical and ecological models, and remote sensing estimates from satellites and aircraft. The technique is mathematically complex, and requires significant care in setting up and processing data. To date, there is no uniform terminology or single methodology for the eddy covariance technique, but much effort is being made by flux measurement networks (e.g., FluxNet, Ameriflux, ICOS, CarboEurope, Fluxnet Canada, OzFlux, NEON, and iLEAPS) to unify the various approaches.

The technique has additionally proven applicable under water, to the benthic zone, for measuring oxygen fluxes between the seafloor and overlying water. In these environments, the technique is generally known as the eddy correlation technique, or just eddy correlation. Oxygen fluxes are extracted from raw measurements largely following the same principles as used in the atmosphere, and they are typically used as a proxy for carbon exchange, which is important for local and global carbon budgets. For most benthic ecosystems, eddy correlation is the most accurate technique for measuring in-situ fluxes. The technique's development and its applications under water remain a fruitful area of research.

Index of physics articles (E)
The index of physics articles is split into multiple pages due to its size.

Index of wave articles
This is a list of wave topics.

Kármán vortex street
In fluid dynamics, a Kármán vortex street (or a von Kármán vortex street) is a repeating pattern of swirling vortices, caused by a process known as vortex shedding, which is responsible for the unsteady separation of flow of a fluid around blunt bodies. It is named after the engineer and fluid dynamicist Theodore von Kármán, and is responsible for such phenomena as the "singing" of suspended telephone or power lines and the vibration of a car antenna at certain speeds.

Lorenz Magaard
Lorenz Magaard (born May 21, 1934 in Wallsbüll, Germany) is a German-American mathematician and oceanographer. He made essential contributions to the theory of ocean waves and earned particular credit for organizing education and research.

Ocean gyre
In oceanography, a gyre is any large system of circulating ocean currents, particularly those involved with large wind movements. Gyres are caused by the Coriolis effect; planetary vorticity, along with horizontal and vertical friction, determines the circulation patterns from the wind stress curl (torque). The term gyre can be used to refer to any type of vortex in the air or the sea, even one that is man-made, but it is most commonly used in oceanography to refer to the major ocean systems.

Ocean surface topography
Ocean surface topography or sea surface topography, also called dynamic topography, refers to the highs and lows on the ocean surface, similar to the hills and valleys of Earth's land surface depicted on a topographic map. These variations are expressed in terms of sea surface height (SSH) relative to the Earth's geoid. The main purpose of measuring ocean surface topography is to understand the large-scale circulation of the ocean.

Wake turbulence
Wake turbulence is a disturbance in the atmosphere that forms behind an aircraft as it passes through the air. It includes various components, the most important of which are wingtip vortices and jetwash.
Jetwash refers simply to the rapidly moving gases expelled from a jet engine; it is extremely turbulent, but of short duration. Wingtip vortices, on the other hand, are much more stable and can remain in the air for up to three minutes after the passage of an aircraft. Wake turbulence is therefore not true turbulence in the aerodynamic sense, as true turbulence would be chaotic; the name instead refers to the similarity to atmospheric turbulence as experienced by an aircraft flying through this region of disturbed air.

Wingtip vortices occur when a wing is generating lift. Air from below the wing is drawn around the wingtip into the region above the wing by the lower pressure there, causing a vortex to trail from each wingtip. The strength of wingtip vortices is determined primarily by the weight and airspeed of the aircraft. Wingtip vortices make up the primary and most dangerous component of wake turbulence.

Wake turbulence is especially hazardous in the region behind an aircraft in the takeoff or landing phases of flight. During take-off and landing, aircraft operate at a high angle of attack. This flight attitude maximizes the formation of strong vortices. In the vicinity of an airport there can be multiple aircraft, all operating at low speed and low altitude, and this provides extra risk of wake turbulence with reduced height from which to recover from any upset.

Whirlpool
A whirlpool is a body of rotating water produced by opposing currents or a current running into an obstacle. Small whirlpools form when a bath or a sink is draining. More powerful ones in seas or oceans may be termed maelstroms. Vortex is the proper term for a whirlpool that has a downdraft. In narrow ocean straits with fast flowing water, whirlpools are often caused by tides. Many stories tell of ships being sucked into a maelstrom, although only smaller craft are actually in danger. Smaller whirlpools appear at river rapids and can be observed downstream of man-made structures such as weirs and dams. Large cataracts, such as Niagara Falls, produce strong whirlpools.
# Building a Zite Replacement (Part 3)

Posted by Graham Wheeler on Sunday, September 20, 2015

Since yesterday's post on term extraction, I've made a few tweaks. In particular I only adjust capitalization on the first words of sentences, I'm keeping numbers and hyphenation, and if there are consecutive capitalized words I turn them into single terms. For example, the terms for the Donald Trump on vaccines article have changed from:

vaccines Donald Trump children doses effective vaccinations diseases Carson debate

to:

vaccines children Donald Trump doses effective vaccinations diseases smaller vaccination debate babies autism cause schedule studies

I'm not sure why 'Carson' was dropped; it's possible that the text of the article changed between the two runs. This also shows that it may be good to deal with plural forms (so 'vaccination' covers 'vaccinations'); on the other hand, having three forms of vaccine appear certainly does strengthen the topic. There is certainly some more tweaking to do, but overall I'm pretty happy.

The next step is classification. I was disappointed to find that the category element is rarely used in RSS (so far I haven't seen it). That is going to make using supervised learning (where I have a set of training documents with known categories) quite tricky. Fortunately, unsupervised learning can help a lot here! In unsupervised learning you just throw a bunch of data at a learning algorithm and it does useful things like clustering for you. Essentially this means I can use an algorithm like k-means to group similar articles together. I can then go through those by hand and tag them much faster with categories, and once they are tagged I can go back to using supervised learning.

k-means is just one (very common) algorithm for unsupervised learning. There are some other interesting algorithms for topic extraction that may help here too. In the document classification field, a particularly promising one is latent Dirichlet allocation, which is a form of latent semantic analysis. It's fairly easy to implement, but there is a great Python library for this (gensim) that I will explore.

But I am getting ahead of myself, because first I need a corpus of documents, and perhaps you can help! In order to get my corpus, I need to gather a large number of URLs for RSS feeds. I have a list of about 40 tech feeds I used to follow back in the days when Google Reader was a thing. I've also pulled all the URLs out of my Pocket bookmarks - over 3000 - but these are article links, not feed URLs. So I am going to write a script that takes a bunch of article links and scrapes the pages to try to find RSS feed URLs. Unfortunately, my interests are very skewed toward tech, with a bit of cooking, math and fitness thrown in. If you have collections of URLs or RSS feeds covering other topics I would be very happy to add them to my collection.

A first cut at such a script is below (the original snippet was truncated after the urlopen call; the regex completion is a guess at the intent):

    #!python
    import re, urllib

    def get_feed_URL(site):
        f = urllib.urlopen(site)
        # guessed completion: look for an RSS <link ... href="..."> tag
        m = re.search(r'rss\+xml[^>]*href="([^"]+)"', f.read())
        if not m:
            return None
        return m.group(1)
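As an aside on the LDA plan mentioned above, here is a minimal gensim sketch of the kind of topic model the post has in mind; the tiny corpus and topic count are placeholders of mine, not anything from the post:

```python
# Fit a toy LDA model over bag-of-words documents with gensim.
from gensim import corpora, models

docs = [["vaccines", "children", "doses", "autism"],
        ["kmeans", "clustering", "unsupervised", "learning"]]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(doc) for doc in docs]
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary)
print(lda.print_topics())   # word distributions for each latent topic
```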
# How to connect end and beginning of a "ring" with a cutout?

I am an absolute beginner in Blender and failing at a very simple thing. I have a line lying in a plane. I want this line to have an extension in that plane, i.e. I want to make it a two-dimensional object. I do so by increasing the Extrude parameter in the Geometry section of the menu to 0.1. What I then get is the following:

I now want to connect the end and the beginning of that object, i.e. to close the cutout. From other questions I know that I can achieve this by ticking the field U in the Active Spline section of the menu. The problem is that the shape of the whole object then changes in a very undesired way...

How can I perform the aforementioned connection in a way that preserves the shape shown in the first picture, i.e. in a way in which the extension of the object stays in the plane defined by the original line, etc.? I attached a blend-file of this minimal example:

• Apply rotation and scale to your curve object – Mr Zak Jun 19 '17 at 13:30
• @MrZak Although I figured out how to do it by pressing F (thanks to your comment below @Lukasz-40sth's post), could you explain for a beginner how to do this? More precisely, what exactly do you mean by "apply rotation and scale"? – DonkeyKong Jun 19 '17 at 14:12
• See the manual article on that. Tl;dr is: press Ctrl+A in Object mode and choose Scale and then Rotation. After that no more edits will be needed for the curve; just pressing Alt+C will be enough to close it. – Mr Zak Jun 19 '17 at 15:01

After pressing ALT + C and selecting Mesh from Curve/Meta/Surf/Text, select all vertices at the "start" edge and all vertices at the "end" edge, then press F.
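For completeness, one scripted reading of Mr Zak's suggestion, as a hedged Blender-Python sketch; it assumes the curve object is active and that closing the spline (the "U" checkbox) is the intended operation:

```python
# Apply scale/rotation to the active curve object, then close its splines.
import bpy

obj = bpy.context.object
bpy.ops.object.transform_apply(rotation=True, scale=True)  # Ctrl+A in Object mode
for spline in obj.data.splines:
    spline.use_cyclic_u = True  # same as ticking "U" under Active Spline
```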
# binomial theorem

Find the term not involving x for the expansion of (x^2 - 2y/x)^8. (Aug 14, 2017)

#1

$$(x^2-\frac{2y}{x})^8\\ \text{The general term is}\\ \binom{8}{n}\left(\frac{-2y}{x}\right)^n(x^2)^{8-n}\\ =\binom{8}{n}(-2)^n y^n\frac{(x^2)^{8-n}}{x^n}\\ =\binom{8}{n}(-2)^n y^n x^{16-2n-n}\\ =\binom{8}{n}(-2)^n y^n x^{16-3n}$$

$16-3n=0$ has no integer solutions, so there is no term that does not involve x. (Aug 14, 2017)

#2

Perhaps the expression is meant to be [(x^2 - 2y)/x]^8, in which case the general term is $\binom{8}{k}(x^2)^{8-k}(-2y)^k x^{-8}=\binom{8}{k}(-2)^k y^k x^{8-2k}$, so the term free of x comes from $k=4$: $\binom{8}{4}(-2)^4 y^4 = 1120y^4$. (Aug 14, 2017)
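A quick sympy check of both readings discussed in the thread (my own helper code, not part of the forum posts):

```python
# Confirm: the literal reading has no x-free term; the bracketed one does.
from sympy import symbols, expand

x, y = symbols('x y')
expr1 = expand((x**2 - 2*y/x)**8)       # (x^2 - 2y/x)^8
expr2 = expand(((x**2 - 2*y)/x)**8)     # [(x^2 - 2y)/x]^8
print(expr1.coeff(x, 0))                # -> 0 (no term free of x)
print(expr2.coeff(x, 0))                # -> 1120*y**4
```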
#### Volume 16, issue 1 (2016)

Singular coefficients in the $K$–theoretic Farrell–Jones conjecture

### Guillermo Cortiñas and Emanuel Rodríguez Cirone

Algebraic & Geometric Topology 16 (2016) 129–147

##### Abstract

Let $G$ be a group and let $k$ be a field of characteristic zero. We prove that if the Farrell–Jones conjecture for the $K$–theory of $R\left[G\right]$ is satisfied for every smooth $k$–algebra $R$, then it is also satisfied for every commutative $k$–algebra $R$.

##### Keywords

K–theory, Farrell–Jones conjecture

##### Mathematical Subject Classification 2010

Primary: 18F25
Secondary: 19D55, 55N91
# Circle A has a radius of 6 and a center of (4, 3). Circle B has a radius of 3 and a center of (1, 8). If circle B is translated by <-2, 4>, does it overlap circle A? If not, what is the minimum distance between points on both circles?

Mar 9, 2017

No overlap; minimum distance ≈ 1.296.

#### Explanation:

What we have to do here is compare the distance (d) between the centres of the circles to the sum of the radii.

• If sum of radii > d, then the circles overlap
• If sum of radii < d, then no overlap

Before calculating d we need to find the 'new' centre of B under the given translation, which does not change the shape of the circle, only its position.

Under a translation $\begin{pmatrix}-2\\4\end{pmatrix}$:

$(1, 8)\to(1-2,\ 8+4)\to(-1, 12)\leftarrow$ new centre of B

To calculate d, use the distance formula

$d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$

where $(x_1,y_1),(x_2,y_2)$ are 2 coordinate points.

The 2 points here are (4, 3) and (-1, 12). Let $(x_1,y_1)=(4,3)$ and $(x_2,y_2)=(-1,12)$:

$d=\sqrt{(-1-4)^2+(12-3)^2}=\sqrt{25+81}\approx 10.296$

Sum of radii = radius of A + radius of B = 6 + 3 = 9.

Since sum of radii < d, the circles do not overlap.

Min. distance between points = d - sum of radii, so min. distance = 10.296 - 9 = 1.296.
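A numeric check of the worked solution above (plain Python, my own addition rather than part of the original answer):

```python
# Verify the centre distance and the minimum gap between the circles.
from math import hypot

centre_a, r_a = (4, 3), 6
centre_b, r_b = (1 - 2, 8 + 4), 3   # centre of B after the <-2, 4> translation
d = hypot(centre_b[0] - centre_a[0], centre_b[1] - centre_a[1])
print(d)                 # ~10.296 > 9, so the circles do not overlap
print(d - (r_a + r_b))   # minimum gap ~1.296
```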
# Functions

SpecialFunctions.erf - Function

erf(x)

Compute the error function of $x$, defined by

$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x \exp(-t^2) \; \mathrm{d}t \quad \text{for} \quad x \in \mathbb{C} \, .$

erf(x, y)

Accurate version of erf(y) - erf(x) (for real arguments only).

Implementation by
• Float32/Float64: C standard math library libm.
• BigFloat: C library for multiple-precision floating-point MPFR

source

SpecialFunctions.erfc - Function

erfc(x)

Compute the complementary error function of $x$, defined by

$\operatorname{erfc}(x) = 1 - \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty \exp(-t^2) \; \mathrm{d}t \quad \text{for} \quad x \in \mathbb{C} \, .$

This is the accurate version of 1-erf(x) for large $x$. See also: erf(x).

Implementation by
• Float32/Float64: C standard math library libm.
• BigFloat: C library for multiple-precision floating-point MPFR

source

SpecialFunctions.erfcx - Function

erfcx(x)

Compute the scaled complementary error function of $x$, defined by

$\operatorname{erfcx}(x) = e^{x^2} \operatorname{erfc}(x) \quad \text{for} \quad x \in \mathbb{C} \, .$

This is the accurate version of $e^{x^2} \operatorname{erfc}(x)$ for large $x$. Note also that $\operatorname{erfcx}(-ix)$ computes the Faddeeva function $w(x)$. See also: erfc(x).

Implementation by
• Float32/Float64: C standard math library libm.
• BigFloat: MPFR has an open TODO item for this function; until then, we use DLMF 7.12.1 for the tail.

source

SpecialFunctions.logerfc - Function

logerfc(x)

Compute the natural logarithm of the complementary error function of $x$, that is

$\operatorname{logerfc}(x) = \operatorname{ln}(\operatorname{erfc}(x)) \quad \text{for} \quad x \in \mathbb{R} \, .$

This is the accurate version of $\operatorname{ln}(\operatorname{erfc}(x))$ for large $x$. See also: erfcx(x).

Implementation

Based on the erfc(x) and erfcx(x) functions. Currently only implemented for Float32, Float64, and BigFloat.

source

SpecialFunctions.logerfcx - Function

logerfcx(x)

Compute the natural logarithm of the scaled complementary error function of $x$, that is

$\operatorname{logerfcx}(x) = \operatorname{ln}(\operatorname{erfcx}(x)) \quad \text{for} \quad x \in \mathbb{R} \, .$

This is the accurate version of $\operatorname{ln}(\operatorname{erfcx}(x))$ for large and negative $x$. See also: erfcx(x).

Implementation

Based on the erfc(x) and erfcx(x) functions. Currently only implemented for Float32, Float64, and BigFloat.

source

SpecialFunctions.erfi - Function

erfi(x)

Compute the imaginary error function of $x$, defined by

$\operatorname{erfi}(x) = -i \operatorname{erf}(ix) \quad \text{for} \quad x \in \mathbb{C} \, .$

See also: erf(x).

Implementation by
• Float32/Float64: C standard math library libm.

source

SpecialFunctions.dawson - Function

dawson(x)

Compute the Dawson function (scaled imaginary error function) of $x$, defined by

$\operatorname{dawson}(x) = \frac{\sqrt{\pi}}{2} e^{-x^2} \operatorname{erfi}(x) \quad \text{for} \quad x \in \mathbb{C} \, .$

This is the accurate version of $\frac{\sqrt{\pi}}{2} e^{-x^2} \operatorname{erfi}(x)$ for large $x$. See also: erfi(x).

Implementation by
• Float32/Float64: C standard math library libm.

source

SpecialFunctions.erfinv - Function

erfinv(x)

Compute the inverse error function of a real $x$, that is

$\operatorname{erfinv}(x) = \operatorname{erf}^{-1}(x) \quad \text{for} \quad x \in \mathbb{R} \, .$

See also: erf(x).

Implementation

Using the rational approximants tabulated in: J. M. Blair, C. A. Edwards, and J. H.
Johnson, "Rational Chebyshev approximations for the inverse of the error function", Math. Comp. 30, pp. 827–830 (1976). https://doi.org/10.1090/S0025-5718-1976-0421040-7, http://www.jstor.org/stable/2005402 combined with Newton iterations for BigFloat. source SpecialFunctions.erfcinvFunction erfcinv(x) Compute the inverse error complementary function of a real $x$, that is $\operatorname{erfcinv}(x) = \operatorname{erfc}^{-1}(x) \quad \text{for} \quad x \in \mathbb{R} \, .$ See also: erfc(x). Implementation Using the rational approximants tabulated in: J. M. Blair, C. A. Edwards, and J. H. Johnson, "Rational Chebyshev approximations for the inverse of the error function", Math. Comp. 30, pp. 827–830 (1976). https://doi.org/10.1090/S0025-5718-1976-0421040-7, http://www.jstor.org/stable/2005402 combined with Newton iterations for BigFloat. source SpecialFunctions.expintFunction expint(z) expint(ν, z) Computes the exponential integral $\operatorname{E}_\nu(z) = \int_0^\infty \frac{e^{-zt}}{t^\nu} dt$. If $\nu$ is not specified, $\nu=1$ is used. Arbitrary complex $\nu$ and $z$ are supported. source SpecialFunctions.expintiFunction expinti(x::Real) Computes the exponential integral function $\operatorname{Ei}(x) = \int_{-\infty}^x \frac{e^t}{t} dt$, which is equivalent to $-\Re[\operatorname{E}_1(-x)]$ where $\operatorname{E}_1$ is the expint function. source SpecialFunctions.expintxFunction expintx(z) expintx(ν, z) Computes the scaled exponential integral $\exp(z) \operatorname{E}_\nu(z) = e^z \int_0^\infty \frac{e^{-zt}}{t^\nu} dt$. If $\nu$ is not specified, $\nu=1$ is used. Arbitrary complex $\nu$ and $z$ are supported. See also: expint(ν, z) source SpecialFunctions.sinintFunction sinint(x) Compute the sine integral function of $x$, defined by $\operatorname{Si}(x) := \int_0^x \frac{\sin t}{t} \, \mathrm{d}t \quad \text{for} \quad x \in \mathbb{R} \,.$ See also: cosint(x). Implementation Using the rational approximants tabulated in: A.J. MacLeod, "Rational approximations, software and test methods for sine and cosine integrals", Numer. Algor. 12, pp. 259–272 (1996). https://doi.org/10.1007/BF02142806, https://link.springer.com/article/10.1007/BF02142806. Note: the second zero of $\text{Ci}(x)$ has a typo that is fixed: $r_1 = 3.38418 0422\mathbf{8} 51186 42639 78511 46402$ in the article, but is in fact: $r_1 = 3.38418 0422\mathbf{5} 51186 42639 78511 46402$. source SpecialFunctions.cosintFunction cosint(x) Compute the cosine integral function of $x$, defined by $\operatorname{Ci}(x) := \gamma + \log x + \int_0^x \frac{\cos (t) - 1}{t} \, \mathrm{d}t \quad \text{for} \quad x > 0 \,,$ where $\gamma$ is the Euler-Mascheroni constant. See also: sinint(x). Implementation Using the rational approximants tabulated in: A.J. MacLeod, "Rational approximations, software and test methods for sine and cosine integrals", Numer. Algor. 12, pp. 259–272 (1996). https://doi.org/10.1007/BF02142806, https://link.springer.com/article/10.1007/BF02142806. Note: the second zero of $\text{Ci}(x)$ has a typo that is fixed: $r_1 = 3.38418 0422\mathbf{8} 51186 42639 78511 46402$ in the article, but is in fact: $r_1 = 3.38418 0422\mathbf{5} 51186 42639 78511 46402$. source SpecialFunctions.airyaixFunction airyaix(x) Scaled Airy function of the first kind $\operatorname{Ai}(x) e^{\frac{2}{3} x \sqrt{x}}$. Throws DomainError for negative Real arguments. source SpecialFunctions.airyaiprimexFunction airyaiprimex(x) Scaled derivative of the Airy function of the first kind $\operatorname{Ai}'(x) e^{\frac{2}{3} x \sqrt{x}}$. 
Throws DomainError for negative Real arguments.

source

SpecialFunctions.sphericalbesselj - Function

sphericalbesselj(nu, x)

Spherical Bessel function of the first kind at order nu, $j_\nu(x)$. This is the non-singular solution to the radial part of the Helmholtz equation in spherical coordinates.

source

SpecialFunctions.sphericalbessely - Function

sphericalbessely(nu, x)

Spherical Bessel function of the second kind at order nu, $y_\nu(x)$. This is the singular solution to the radial part of the Helmholtz equation in spherical coordinates. Sometimes known as a spherical Neumann function.

source

SpecialFunctions.besselhx - Function

besselhx(nu, [k=1,] z)

Compute the scaled Hankel function $\exp(∓iz) H_ν^{(k)}(z)$, where $k$ is 1 or 2, $H_ν^{(k)}(z)$ is besselh(nu, k, z), and $∓$ is $-$ for $k=1$ and $+$ for $k=2$. k defaults to 1 if it is omitted.

The reason for this function is that $H_ν^{(k)}(z)$ is asymptotically proportional to $\exp(∓iz)/\sqrt{z}$ for large $|z|$, and so the besselh function is susceptible to overflow or underflow when z has a large imaginary part. The besselhx function cancels this exponential factor (analytically), so it avoids these problems.

See also: besselh

source

SpecialFunctions.jinc - Function

jinc(x)

Bessel function of the first kind divided by x. Follows the convention $\operatorname{jinc}(x) = \frac{2 J_1(\pi x)}{\pi x}$. Sometimes known as the sombrero or besinc function.

source

SpecialFunctions.ellipk - Function

ellipk(m)

Computes the complete elliptic integral of the 1st kind $K(m)$ for parameter $m$, given by

$\operatorname{ellipk}(m) = K(m) = \int_0^{ \frac{\pi}{2} } \frac{1}{\sqrt{1 - m \sin^2 \theta}} \, \mathrm{d}\theta \quad \text{for} \quad m \in \left( -\infty, 1 \right] \, .$

See also: ellipe(m).

Arguments
• m: parameter $m$, restricted to the domain $(-\infty,1]$, is related to the elliptic modulus $k$ by $k^2=m$ and to the modular angle $\alpha$ by $k=\sin \alpha$.

Implementation

Using the piecewise approximation polynomial as given in 'Fast Computation of Complete Elliptic Integrals and Jacobian Elliptic Functions', Fukushima, Toshio. (2014). F09-FastEI. Celest Mech Dyn Astr, DOI 10.1007/s10569-009-9228-z, https://pdfs.semanticscholar.org/8112/c1f56e833476b61fc54d41e194c962fbe647.pdf

For $m<0$, followed by Fukushima, Toshio. (2014). 'Precise, compact, and fast computation of complete elliptic integrals by piecewise minimax rational function approximation'. Journal of Computational and Applied Mathematics. 282. DOI 10.13140/2.1.1946.6245., https://www.researchgate.net/publication/267330394

As suggested in this paper, the domain is restricted to $(-\infty,1]$.

source

SpecialFunctions.ellipe - Function

ellipe(m)

Computes the complete elliptic integral of the 2nd kind $E(m)$ for parameter $m$, given by

$\operatorname{ellipe}(m) = E(m) = \int_0^{ \frac{\pi}{2} } \sqrt{1 - m \sin^2 \theta} \, \mathrm{d}\theta \quad \text{for} \quad m \in \left( -\infty, 1 \right] \, .$

See also: ellipk(m).

Arguments
• m: parameter $m$, restricted to the domain $(-\infty,1]$, is related to the elliptic modulus $k$ by $k^2=m$ and to the modular angle $\alpha$ by $k=\sin \alpha$.

Implementation

Using the piecewise approximation polynomial as given in 'Fast Computation of Complete Elliptic Integrals and Jacobian Elliptic Functions', Fukushima, Toshio. (2014). F09-FastEI. Celest Mech Dyn Astr, DOI 10.1007/s10569-009-9228-z, https://pdfs.semanticscholar.org/8112/c1f56e833476b61fc54d41e194c962fbe647.pdf

For $m<0$, followed by Fukushima, Toshio. (2014).
'Precise, compact, and fast computation of complete elliptic integrals by piecewise minimax rational function approximation'. Journal of Computational and Applied Mathematics. 282. DOI 10.13140/2.1.1946.6245., https://www.researchgate.net/publication/267330394

As suggested in this paper, the domain is restricted to $(-\infty,1]$.

source

SpecialFunctions.zeta - Function

zeta(s, z)

Generalized zeta function defined by

$\zeta(s, z)=\sum_{k=0}^\infty \frac{1}{((k+z)^2)^{s/2}},$

where any term with $k+z=0$ is excluded. For $\Re z > 0$, this definition is equivalent to the Hurwitz zeta function $\sum_{k=0}^\infty (k+z)^{-s}$. The Riemann zeta function is recovered as $\zeta(s)=\zeta(s,1)$.

External links: Riemann zeta function, Hurwitz zeta function

source

zeta(s)

Riemann zeta function

$\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}\quad\text{for}\quad s\in\mathbb{C}.$

source

SpecialFunctions.gamma - Method

gamma(z)

Compute the gamma function for complex $z$, defined by

$\Gamma(z) := \begin{cases} n! & \text{for} \quad z = n+1 \;, n = 0,1,2,\dots \\ \int_0^\infty t^{z-1} {\mathrm e}^{-t} \, {\mathrm d}t & \text{for} \quad \Re(z) > 0 \end{cases}$

and by analytic continuation in the whole complex plane.

See also: loggamma(z) for $\log \Gamma(z)$ and gamma(a,z) for the upper incomplete gamma function $\Gamma(a,z)$.

Implementation by
• Float: C standard math library libm.
• Complex: by exp(loggamma(z)).
• BigFloat: C library for multiple-precision floating-point MPFR

source

SpecialFunctions.loggamma - Method

loggamma(x)

Computes the logarithm of gamma for the given x. If x is a Real, then it throws a DomainError if gamma(x) is negative. If x is complex, then exp(loggamma(x)) matches gamma(x) (up to floating-point error), but loggamma(x) may differ from log(gamma(x)) by an integer multiple of $2\pi i$ (i.e. it may employ a different branch cut).

See also logabsgamma for real x.

source

SpecialFunctions.gamma - Method

gamma(a, x)

Returns the upper incomplete gamma function

$\Gamma(a,x) = \int_x^\infty t^{a-1} e^{-t} dt \,$

supporting arbitrary real or complex a and x. (The ordinary gamma function gamma(x) corresponds to $\Gamma(a) = \Gamma(a,0)$.)

See also the gamma_inc function to compute both the upper and lower ($\gamma(a,x)$) incomplete gamma functions scaled by $\Gamma(a)$.

source

SpecialFunctions.loggamma - Method

loggamma(a, x)

Returns the log of the upper incomplete gamma function gamma(a,x):

$\log \Gamma(a,x) = \log \int_x^\infty t^{a-1} e^{-t} dt \,$

supporting arbitrary real or complex a and x. If a and/or x is complex, then exp(loggamma(a,x)) matches gamma(a,x) (up to floating-point error), but loggamma(a,x) may differ from log(gamma(a,x)) by an integer multiple of $2\pi i$ (i.e. it may employ a different branch cut).

See also loggamma(x).

source

SpecialFunctions.gamma_inc - Function

gamma_inc(a, x, IND=0)

Returns a tuple $(p, q)$ where $p + q = 1$, $p=P(a,x)$ is the incomplete gamma function ratio given by

$P(a,x)=\frac{1}{\Gamma (a)} \int_{0}^{x} e^{-t}t^{a-1} dt,$

and $q=Q(a,x)$ is the incomplete gamma function ratio given by

$Q(a,x)=\frac{1}{\Gamma (a)} \int_{x}^{\infty} e^{-t}t^{a-1} dt.$

In terms of these, the lower incomplete gamma function is $\gamma(a,x) = P(a,x) \Gamma(a)$ and the upper incomplete gamma function is $\Gamma(a,x) = Q(a,x) \Gamma(a)$. IND ∈ [0,1,2] sets the accuracy: IND=0 means 14 significant digits of accuracy, IND=1 means 6 significant digits, and IND=2 means only 3 digits of accuracy.
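As a cross-check of the $(p, q)$ pair that gamma_inc returns, scipy's regularized incomplete gamma functions compute the same ratios; this is a Python analogue of mine, not the Julia implementation:

```python
# P(a, x) and Q(a, x) via scipy; they sum to 1 up to rounding.
from scipy.special import gammainc, gammaincc

a, x = 2.5, 1.7
p, q = gammainc(a, x), gammaincc(a, x)
print(p, q, p + q)
```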
SpecialFunctions.logabsbinomial - Function

logabsbinomial(n, k)

Accurate natural logarithm of the absolute value of the binomial coefficient binomial(n, k) for large n and k near n/2. Returns a tuple (log(abs(binomial(n,k))), sign(binomial(n,k))).
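For nonnegative integer arguments, the same quantity can be computed with the standard lgamma identity; the following Python analogue is mine and does not cover the negative/extended cases the Julia function handles:

```python
# log C(n, k) = lgamma(n+1) - lgamma(k+1) - lgamma(n-k+1)
from math import lgamma

def logbinomial(n, k):
    if not 0 <= k <= n:
        raise ValueError("only 0 <= k <= n handled in this sketch")
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

print(logbinomial(1000, 500))  # ~689.47, far beyond the range of float binomials
```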
## The block preconditioned steepest descent iteration for elliptic operator eigenvalue problems

Klaus Neymeyr and Ming Zhou

### Abstract

The block preconditioned steepest descent iteration is an iterative eigensolver for subspace eigenvalue and eigenvector computations. An important area of application of the method is the approximate solution of mesh eigenproblems for self-adjoint elliptic partial differential operators. The subspace iteration allows one to compute some of the smallest eigenvalues together with the associated invariant subspaces simultaneously. The building blocks of the iteration are the computation of the preconditioned residual subspace for the current iteration subspace and the application of the Rayleigh-Ritz method in order to extract an improved subspace iterate. The convergence analysis of this iteration provides new sharp estimates for the Ritz values. It is based on the analysis of the vectorial preconditioned steepest descent iteration which appeared in [SIAM J. Numer. Anal., 50 (2012), pp. 3188–3207]. Numerical experiments using a finite element discretization of the Laplacian with up to $5\cdot 10^7$ degrees of freedom and with multigrid preconditioning demonstrate the near-optimal complexity of the method.

Full Text (PDF) [800 KB]

### Key words

subspace iteration, steepest descent/ascent, Rayleigh-Ritz procedure, elliptic eigenvalue problem, multigrid, preconditioning

### AMS subject classifications

65N12, 65N22, 65N25, 65N30

### Links to the cited ETNA articles

[9] Vol. 7 (1998), pp. 104-123, Andrew V. Knyazev: Preconditioned eigensolvers - an oxymoron?

### ETNA articles which cite this article

Vol. 46 (2017), pp. 424-446, Ming Zhou and Klaus Neymeyr: Sharp Ritz value estimates for restarted Krylov subspace iterations
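To make the two building blocks named in the abstract concrete, here is a minimal dense numpy sketch of one iteration (preconditioned block residual, then Rayleigh-Ritz on the enlarged subspace); it is purely illustrative, not the authors' code, and convergence is slow without a good preconditioner:

```python
import numpy as np

def bpsd_step(A, X, P=None):
    """One block preconditioned steepest descent step for symmetric A."""
    # Rayleigh-Ritz on the current subspace span(X)
    Q, _ = np.linalg.qr(X)
    theta, U = np.linalg.eigh(Q.T @ A @ Q)
    X = Q @ U                        # Ritz vectors, theta in ascending order
    W = A @ X - X * theta            # block residual R = A X - X Theta
    if P is not None:
        W = P @ W                    # preconditioned residual subspace
    # Rayleigh-Ritz on the enlarged trial subspace [X, W]
    Z, _ = np.linalg.qr(np.hstack([X, W]))
    mu, V = np.linalg.eigh(Z.T @ A @ Z)
    m = X.shape[1]
    return Z @ V[:, :m], mu[:m]

# Tiny example: the 3 smallest eigenvalues of a 1-D discrete Laplacian.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
X = np.random.rand(n, 3)
for _ in range(200):
    X, ritz = bpsd_step(A, X)
print(ritz)   # approaches the 3 smallest eigenvalues
```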
### Floor sum

Posted: Fri Nov 29, 2019 11:37 am

I've noticed that Graham and Knuth (Concrete Mathematics, 2nd ed., p. 86) show how to do sum(floor(sqrt(k)), 0 <= k < n). Did anybody try to tackle sum(floor(sqrt(k (r / 2 - k))), 0 < k < r / 2)? This would actually solve a particular PE problem in closed form.

### Re: Floor sum

Posted: Wed May 27, 2020 7:55 pm

If there is a solution along similar lines to the one for $\lfloor \sqrt{k} \rfloor$ then it's a polynomial in $n$ and $\lfloor \sqrt{n} \rfloor$. Asymptotically the solution is $\Theta(n^2)$, so there are only 9 coefficients to determine. Pick 9 values of $n$, perform Gaussian elimination to get the coefficients, and test to see whether it works for all $n$ up to some small bound. Hint: it doesn't.

This isn't really surprising when you look at the integrals. $\int_0^n \sqrt{x} \,\textrm{d}x = \tfrac23 n^{3/2}$ has a nice rational coefficient, but $\int_0^n \sqrt{x(n-x)} \,\textrm{d}x = \tfrac\pi 8 n^2$ doesn't.
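For anyone wanting to poke at the sum numerically, a small brute-force sketch (function name mine) that evaluates it for even r and exhibits the $\Theta(n^2)$ growth with the $\pi/8$ constant from the second integral:

```python
# S(r) = sum(floor(sqrt(k*(r/2 - k))), 0 < k < r/2); ratio S(r)/n^2 -> pi/8.
from math import isqrt, pi

def S(r):
    half = r // 2
    return sum(isqrt(k * (half - k)) for k in range(1, half))

for r in (100, 1000, 10000):
    n = r // 2
    print(r, S(r), S(r) / n**2, pi / 8)
```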
# How to calculate shearing force of the bolt?

3 posts / 0 new

How to calculate shearing force of the bolt?

Hello everyone, could you teach me how to calculate the shearing force of the bolt in this figure?

Jhun Vert

Your pipe joint is subjected to combined stress. Consider how the vertical force N, the horizontal force P, and the bending moment Mt affect the bolts.

congestus

An unusual pipe joint (for engineering purposes). Before applying it, try to improve the connection: perhaps use a flange-shaped connection (to avoid shear stress in the bolts from the axial force N and to obtain better bending rigidity). If necessary, check the general stability of the pipe (considering all loadings simultaneously). Best regards! congestus
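As a very rough sketch of the combined-load bookkeeping the replies suggest, the snippet below distributes a direct shear P, an axial force N, and a bending moment Mt over a circular bolt pattern. All names and the uniform-sharing assumptions are mine, not from the thread; an actual design check must follow the applicable code:

```python
# Per-bolt force components for a bolt circle of radius r (rough estimate).
def bolt_loads(P, N, Mt, n_bolts, r):
    shear_direct = P / n_bolts              # direct shear, shared equally
    tension_axial = N / n_bolts             # axial force, shared equally
    # bending: F_i = Mt * r_i / sum(r_j^2); all bolts at radius r here
    tension_moment = Mt * r / (n_bolts * r ** 2)
    return shear_direct, tension_axial + tension_moment

shear, tension = bolt_loads(P=12e3, N=20e3, Mt=5e3, n_bolts=8, r=0.15)
print(shear, tension)   # N per bolt; compare against the bolt capacities
```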
### Rajan_sust's blog

By Rajan_sust, history, 21 months ago

In computer science, the randomized quicksort algorithm has expected runtime O(n log n). How does linearity of expectation allow us to show this?

• +5

By Rajan_sust, history, 21 months ago

Let x1 <= x2 <= ... <= xn and p1 + p2 + ... + pn = 1. We all know that the average of x1, x2, ..., xn is in [x1, xn], and that is easy to understand. In a contest, I assumed that the expected value p1*x1 + p2*x2 + ... + pn*xn is in [x1, xn] regardless of how the probability is distributed; that is, the probabilities can sum to 1 in many different ways. My assumption was right and I got AC. I'm interested to know the proof. TIA

• -3

By Rajan_sust, history, 2 years ago

Question 01: Is there any technique where generating a random number within a range is equiprobable?
Question 02: What is the extra advantage of the following methods 02, 03, 04?

    srand(time(NULL));

    // Method 01: general approach
    int myrand(int mod) {
        return rand() % mod;
    }

    // Method 02: Taken from a red coder's submission.
    int myrand(int mod) {
        int t = rand() % mod;
        t = (1LL * t * RAND_MAX + rand()) % mod;
        return t;
    }

    // Method 03: Taken from a red coder's submission.
    int myrand(int mod) {
        return (int)(((double)rand() / RAND_MAX) * mod);
    }

    // Method 04: Taken from a red coder's submission.
    inline int myrand(int mod) {
        return (((long long)rand() << 15) + rand()) % mod;
    }

Updated: idea from dimas.kovas.

    auto seed = chrono::high_resolution_clock::now().time_since_epoch().count();
    std::mt19937 mt(seed);
    int myrand(int mod) {
        return mt() % mod;
    }

• +10

By Rajan_sust, history, 2 years ago

The problem was set in the ACM ICPC preliminary contest 2017, Dhaka site. Problem Link: E. Anti Hash. The problem is: you will be given a string S of length N consisting of lowercase letters (a-z) only. Also given are a base B and mod-value M for doing polynomial hashing. Note: B and M are both prime. Your task is to find another string T satisfying all of the following constraints: the length of T is exactly N; T consists of only lowercase letters (a-z); T and S have the same hash value, i.e. a collision happens. For hashing, in both cases you have to use B and M.

• +26

By Rajan_sust, history, 3 years ago

If a string is "Topcoder", then all of its suffixes are: r, er, der, oder, ..., pcoder, Topcoder. Now consider this C++ code:

    std::string str = "Topcoder";
    const char* pointer = &str[1];
    cout << pointer << '\n';

Output is: opcoder

What is the complexity of generating the above suffix starting at index 1, of length 7? Linear or constant?

• +3

By Rajan_sust, history, 3 years ago

Are 10^9 computations possible within 2.00 seconds? If not, how does my solution to the following problem work? Problem link: http://codeforces.com/problemset/problem/851/C Submission: http://codeforces.com/contest/851/submission/30089349
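On Question 01: the premise behind all four methods is that rand() % mod is not equiprobable whenever mod does not divide RAND_MAX + 1. A tiny demonstration of that bias (written in Python so the thread's C++ snippets stay untouched; the toy RAND_MAX is mine):

```python
# With RAND_MAX = 7 and mod = 3, the residues are not uniformly distributed.
from collections import Counter

RAND_MAX = 7
print(Counter(r % 3 for r in range(RAND_MAX + 1)))  # Counter({0: 3, 1: 3, 2: 2})
```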
Showing results 1-20 of 139

• #### ATLAS b-jet identification performance and efficiency measurement with tt¯ events in pp collisions at √s = 13 TeV (Peer reviewed; Journal article, 2019) The algorithms used by the ATLAS Collaboration during Run 2 of the Large Hadron Collider to identify jets containing b-hadrons are presented. The performance of the algorithms is evaluated in the simulation and the efficiency ...

• #### Combination of searches for heavy resonances decaying into bosonic and leptonic final states using 36 fb−1 of proton-proton collision data at √s = 13 TeV with the ATLAS detector (Peer reviewed; Journal article, 2018-09-26) Searches for new heavy resonances decaying into different pairings of W, Z, or Higgs bosons, as well as directly into leptons, are presented using a data sample corresponding to 36.1 fb−1 of pp collisions at √s = 13 TeV ...

• #### Combination of Searches for Invisible Higgs Boson Decays with the ATLAS Experiment (Peer reviewed; Journal article, 2019) Dark matter particles, if sufficiently light, may be produced in decays of the Higgs boson. This Letter presents a statistical combination of searches for H → invisible decays where H is produced according to the standard ...

• #### Combinations of single-top-quark production cross-section measurements and |fLVVtb| determinations at √s = 7 and 8 TeV with the ATLAS and CMS experiments (Peer reviewed; Journal article, 2019) This paper presents the combinations of single-top-quark production cross-section measurements by the ATLAS and CMS Collaborations, using data from LHC proton-proton collisions at √s = 7 and 8 TeV corresponding to integrated ...

• #### Combined measurement of differential and total cross sections in the H → γγ and the H → ZZ⁎ → 4ℓ decay channels at √s = 13 TeV with the ATLAS detector (Peer reviewed; Journal article, 2018-09-17) A combined measurement of differential and inclusive total cross sections of Higgs boson production is performed using 36.1 fb−1 of 13 TeV proton–proton collision data produced by the LHC and recorded by the ATLAS detector ...

• #### Comparison of fragmentation functions for jets dominated by light quarks and gluons from pp and Pb+Pb collisions in ATLAS (Peer reviewed; Journal article, 2019) Charged-particle fragmentation functions for jets azimuthally balanced by a high-transverse-momentum, prompt, isolated photon are measured in 25 pb−1 of pp and 0.49 nb−1 of Pb+Pb collision data at 5.02 TeV per nucleon ...

• #### Constraints on mediator-based dark matter and scalar dark energy models using √s = 13 TeV pp collision data collected by the ATLAS detector (Peer reviewed; Journal article, 2019) Constraints on selected mediator-based dark matter models and a scalar dark energy model using up to 37 fb−1 of √s = 13 TeV pp collision data collected by the ATLAS detector at the LHC during 2015-2016 are summarised in this ...

• #### Constraints on off-shell Higgs boson production and the Higgs boson total width in ZZ → 4ℓ and ZZ → 2ℓ2ν final states with the ATLAS detector (Peer reviewed; Journal article, 2018-09-26) A measurement of off-shell Higgs boson production in the ZZ → 4ℓ and ZZ → 2ℓ2ν decay channels, where ℓ stands for either an electron or a muon, is performed using data from proton–proton collisions at a centre-of-mass ...
• #### Correlated long-range mixed-harmonic fluctuations measured in pp, p+Pb and low-multiplicity Pb+Pb collisions with the ATLAS detector (Peer reviewed; Journal article, 2019) Correlations of two flow harmonics vn and vm via three- and four-particle cumulants are measured in 13 TeV pp, 5.02 TeV p+Pb, and 2.76 TeV peripheral Pb+Pb collisions with the ATLAS detector at the LHC. The goal is to ...

• #### Cross-section measurements of the Higgs boson decaying into a pair of τ-leptons in proton-proton collisions at √s = 13 TeV with the ATLAS detector (Peer reviewed; Journal article, 2019) A measurement of production cross sections of the Higgs boson in proton-proton collisions is presented in the H → ττ decay channel. The analysis is performed using 36.1 fb−1 of data recorded by the ATLAS experiment ...

• #### Dijet azimuthal correlations and conditional yields in pp and p+Pb collisions at √sNN = 5.02 TeV with the ATLAS detector (Peer reviewed; Journal article, 2019) This paper presents a measurement of forward-forward and forward-central dijet azimuthal angular correlations and conditional yields in proton-proton (pp) and proton-lead (p+Pb) collisions as a probe of the nuclear gluon ...

• #### Electron and photon energy calibration with the ATLAS detector using 2015–2016 LHC proton-proton collision data (Peer reviewed; Journal article, 2019) This paper presents the electron and photon energy calibration obtained with the ATLAS detector using about 36 fb−1 of LHC proton-proton collision data recorded at √s = 13 TeV in 2015 and 2016. The different calibration ...

• #### Evidence for the production of three massive vector bosons with the ATLAS detector (Peer reviewed; Journal article, 2019)

• #### Identification of boosted Higgs bosons decaying into b-quark pairs with the ATLAS detector at 13 TeV (Peer reviewed; Journal article, 2019) This paper describes a study of techniques for identifying Higgs bosons at high transverse momenta decaying into bottom-quark pairs, H→bb¯, for proton–proton collision data collected by the ATLAS detector at the Large ...

• #### In situ calibration of large-radius jet energy and mass in 13 TeV proton–proton collisions with the ATLAS detector (Peer reviewed; Journal article, 2019-02-13) The response of the ATLAS detector to large-radius jets is measured in situ using 36.2 fb−1 of √s = 13 TeV proton–proton collisions provided by the LHC and recorded by the ATLAS experiment during 2015 and 2016. The jet ...

• #### Measurement of angular and momentum distributions of charged particles within and around jets in Pb+Pb and pp collisions at √sNN = 5.02 TeV with the ATLAS detector (Peer reviewed; Journal article, 2019) Studies of the fragmentation of jets into charged particles in heavy-ion collisions can provide information about the mechanism of jet quenching by the hot and dense QCD matter created in such collisions, the quark-gluon ...

• #### Measurement of distributions sensitive to the underlying event in inclusive Z boson production in pp collisions at √s = 13 TeV with the ATLAS detector (Peer reviewed; Journal article, 2019-08-08) This paper presents measurements of charged-particle distributions sensitive to the properties of the underlying event in events containing a Z boson decaying into a muon pair. The data were obtained using the ATLAS detector ...
• #### Measurement of fiducial and differential W+W− production cross-sections at √s = 13 TeV with the ATLAS detector (Peer reviewed; Journal article, 2019-10-29) A measurement of fiducial and differential cross-sections for W+W− production in proton–proton collisions at √s = 13 TeV with the ATLAS experiment at the Large Hadron Collider using data corresponding to an integrated ...

• #### Measurement of flow harmonics correlations with mean transverse momentum in lead–lead and proton–lead collisions at √sNN = 5.02 TeV with the ATLAS detector (Peer reviewed; Journal article, 2019-12-03) To assess the properties of the quark–gluon plasma formed in ultrarelativistic ion collisions, the ATLAS experiment at the LHC measures a correlation between the mean transverse momentum and the flow harmonics. The analysis ...

• #### Measurement of jet fragmentation in Pb+Pb and pp collisions at √sNN = 5.02 TeV with the ATLAS detector (Peer reviewed; Journal article, 2018-08-16) This paper presents a measurement of jet fragmentation functions in 0.49 nb−1 of Pb+Pb collisions and 25 pb−1 of pp collisions at √sNN = 5.02 TeV collected in 2015 with the ATLAS detector at the LHC. These measurements ...
## anonymous 5 years ago difficult eq

1. anonymous $((3x+6) \div (x^{2}+3x+2)) \times ((x+1) \div (2x+8))$
2. anonymous I'm really bad at this stuff; I need to simplify this
3. anonymous This becomes, if I interpreted your expression correctly:
4. anonymous Okay...
5. anonymous $\frac{3x+6}{x^2+3x+2}\cdot \frac{x+1}{2x+8}=\frac{3(x+2)}{(x+1)(x+2)}\cdot \frac{x+1}{2(x+4)}=\frac{3}{2(x+4)}$
6. anonymous It's just about factorizing and cancelling common factors.
7. anonymous Ouch, my answer was deleted by the browser or the web page. But now you got the answer anyway ;-)
8. anonymous Sorry, mstud, I thought you weren't interested in it anymore. This browser destroys hard work all the time - I feel your pain.
9. anonymous Well, with the old version destruction was even worse: when the whole answer was written, it didn't always save automatically, and I either had to copy the whole answer I had written, go back, open the question once more and paste the answer in again, or write it all over again. It doesn't matter that you answered instead; the most important thing is that "BecomeMyFan=D" got an answer to the question :)
10. anonymous Lol, I know! That's the kind of crap I had to do too!
11. anonymous Now it's usually enough to refresh the browser, but not always, though
12. anonymous Yes. I hate how the window keeps jumping while typing, too. I hope someone mentioned that in the customer feedback thing today.
13. anonymous THANK YOU GUYS! I can see that my problem was with factorising the quadratic part, thank you!
14. anonymous And yes, this site has LOTS of bugs. I can see that the programmers are beginners.
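A sympy confirmation of the simplification worked out in the thread (my own check, not one of the posts):

```python
# The product of rational expressions reduces to 3/(2*(x + 4)).
from sympy import symbols, simplify

x = symbols('x')
expr = (3*x + 6) / (x**2 + 3*x + 2) * (x + 1) / (2*x + 8)
print(simplify(expr))   # -> 3/(2*x + 8), i.e. 3/(2*(x + 4))
```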
Moderate

# Sum of Positive Integers (GMAT-VULZAW)

What is the sum of all two-digit positive integers that do not contain a $5$?

A $3920$
B $4465$
C $4905$
D $4360$
E $545$
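A one-line brute-force check of the question (my addition; the generator condition encodes "no digit 5", under which the sum is 3920, choice A):

```python
# Sum the two-digit integers whose decimal representation has no '5'.
print(sum(n for n in range(10, 100) if '5' not in str(n)))  # 3920
```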
# Tag Info

## Hot answers tagged cold-fusion

38
This was beautifully answered theoretically right away at the 1989 APS session in NY, I think by Koonin. Theoretically, for any sort of fusion one needs to overcome the Coulomb repulsion of the relevant nuclei, on the order of MeV, in order to allow the nuclei to get close enough for their wave functions to overlap and fuse. Because of the phenomenon of ...

10
Okay, so I did some poking around and the 66th-75th editions of the CRC Handbook of Chemistry and Physics all have the incorrect atomic mass of Cu-63 [62.939598], and from the 76th edition on they seem to have figured it out. Those isotope mass tables are put together from a number of sources, so it's hard (time consuming) to tell exactly where the error came ...

9
The Fleischmann and Pons device relied on calorimetry (measuring the energy balance in terms of heat) maintained over multiple-day time spans to ascertain that something unexpected was happening in the cell. This is experimentally tricky, as it requires high-precision temperature measurements to be maintained against a consistent reference, and relies on ...

8
There are a few reasons. There was never any clear reason why electrifying palladium should create pressures sufficient to ignite a fusion reaction. Without a mechanism, this seems to be the most ridiculously radical and sensational conclusion possible, even if the calorimetry says that electrified palladium creates net energy somehow. Why not start ...

8
Since both references give the same percentages and the same overall atomic weight, an easy calculation shows that this only works out for the number in the first link; therefore the second decimal should be 2. (And I think it would be nice to contact the webmasters of the second link.)

8
For some recent information on the running battle between cold fusion researchers and myself over my proposed conventional (non-nuclear) explanation of the Fleischmann-Pons(-Hawkins) effect, you might want to look here: https://docs.google.com/open?id=0B3d7yWtb1doPc3otVGFUNDZKUDQ (referenced in this: http://www.networkworld.com/columnists/2012/102612-...

7
The binding energy curve for nucleons in nuclei shows which atoms can take part in fusion, releasing energy in the process. Fusion happens as one goes from left to right, until reaching Fe, iron. From there to the right it is fission that will release extra energy. This is an example of a fusion reaction, the one that is actually being materialized in ITER, ...

7
Pons and Fleischmann originally reported in 1989 that their chemical cells had produced excess heat, neutrons, and tritium. Their interpretation was that deuterium nuclei were fusing to produce 4He. The branching ratios in this process are known: 50% n+3He, 50% p+3H, and 10^-6 4He+gamma. If the claimed excess heat had been produced by fusion, then the ...

4
I suppose I did not make myself clear, and spent too much time talking about stellar evolution. The main reason why this cannot work is that, when you are working with elements that have atomic numbers higher than iron (26), you cannot get energy by converting an element into another that has an even higher atomic number. In this case we're converting Ni (...

4
This new "cold fusion" reported in what is really a blog is a commercial enterprise to all intents and purposes. Their claims are so large that either their constructs will be successful or they will eat their hat. We do not have long to wait. If they are successful, the theory will be found.
One note about crystals (they are using Ni crystals) and ...

4
Muon catalyzed fusion needs the muons to be at low enough energy to replace an electron and stay in a stable orbit. Since the catalysis happens because the muonic atom is much smaller and two protons can get close together, enhancing the probability of overlap and fusion, one needs a large number of low energy muons so that the probability of two muonic ...

4
A seemingly problematic aspect of the proposed mechanism is that it allegedly requires two hot deuterons. (By contrast, U-235 fission requires just one neutron.) Why is that so problematic? If $n$ is the number of 20 keV particles (i.e. hot deuterons, or K-shell holes, or some superposition of them), then we expect something like: $$dn/dt = An^2 - Bn$$ ...

4
Please note that this test was conducted by exactly the same group that did the previous test, led by Giuseppe Levi, who is closely connected to Andrea Rossi. Also, you can read from the report that Rossi himself was present in the test, pulling the strings. Hardly independent testing, is it? The above facts alone are enough to make the report somewhat ...

3
Cold fusion does not exist, as discussed in the answers to this question. The fundamental reason for this is the mismatch in energy scales between the Coulomb barrier (MeV) and the energy scales of chemistry. Changing one of the chemical reactants from Pd to Ni has no effect on this fundamental issue. The linked true-believer article, dated August 2012, ...

3
There is a claim often made about cold fusion, that it is excluded theoretically. The main theoretical argument is that electronic energies are too low to overcome the Coulomb barrier, since d-d fusion only takes place at keV energies, while chemistry is at eV energies. This is belied by inner shells, which in palladium store 3 or 20 keV of energy per ...

3
I'm an experimental electrochemist. The problem with experiments such as those mentioned above is that they lack the details necessary to reproduce them, so that we can verify or improve upon them. In the first video, a paper linked is here: http://www.lenr-canr.org/acrobat/SzpakSpolarizedd.pdf They vaguely mention a "negatively polarized Pd/D$_2$O system". ...

3
Without knowing anything about the experiment or the camera, I would suggest that what is shown in the video is a combination of shot noise and aliasing due to a poor choice of gradient mapping. Note that the gradient bar at the bottom of the frame jumps from a fairly deep red (actually darker than the preceding tones) to pure white in one increment - this may ...

3
Muon catalyzed fusion requires very specific kinematics, but cosmic muons come in all energies from stopped to tens of GeV at any particular spot. Care to work out the cross-section for having the right kinematics? If you're having trouble, I know a graduate student who is familiar with several of the common cosmic muon Monte Carlo generators.

3
Why was cold fusion considered bogus? Because it was not easily reproduced when initially announced, because the original suggested mechanism was inconsistent with known physics at the time, and because the evidence presented at the time purporting to show it was nuclear fusion (specifically D-D fusion) was flawed. Perhaps the better question is: Should ...

3
Very simple! They can never repeat the results in a scientific way to demonstrate to others that it works. What's the point of science if we simply ignore the scientific method?
If they did truly come up with something then they wouldn't have a need to be secretive and not show exactly what they did. If somehow they got it through accident then that is not ...

3
The 2003 Atomic Mass Evaluation: Cu(63) - 62.929597474
The 1995 Update to Atomic Mass Evaluation: Cu(63) - 62.929601079
The 1993 Update to Atomic Mass Evaluation: Cu(63) - 62.929600748

3
You seem to have invented a version of the Farnsworth Fusor, and/or its successor Polywell. Or, if you want a neutron generator, you can hit something like palladium that has absorbed a load of deuterium with your beam, or a metal deuteride.

3
The difficulty with this sort of device is that the effective cross section of the target nuclei is so tiny. Even with a very dense target, most of your shots will miss. But you still have to consume energy to send them along. The most important thing for efficiency, then, is the nuclei density in the target. Making the target full of negative ions isn't ...

2
I've started a discussion on this at the Wikipedia article. If I had to guess at the origin of this discrepancy, it is a typo in one of the old CRCs where someone typed '3' instead of the correct digit of '2'. If that's the case there may have been merely an internal memo correcting the typo -- if that.

2
Fake. A real thermonuclear reaction would kill everything around due to heavy neutron radiation. Even 1 meter of lead would not help.
# If $K_1$ and $K_2$ are disjoint nonempty compact sets, show that there exist $k_i \in K_i$

If $K_1$ and $K_2$ are disjoint nonempty compact sets, show that there exist $k_i \in K_i$ such that $|k_1 - k_2| = \inf\{|x_1 - x_2| : x_i \in K_i\}$. They are all subsets of $\mathbb R$.

I am able to prove that the set is bounded, but when I try to prove that the set is closed using sequences, I can't. Please give me a hint.

• Hint: Use the Extreme Value Theorem – YuiTo Cheng Jan 31 at 13:52

Hint: $d:\mathbb R\times \mathbb R\to \mathbb R:(x,y)\mapsto |x-y|$ is continuous, and $K_1\times K_2$ is compact.

Here is a proof using sequences; in fact, we can prove a stronger result: assume $K$ is compact and $C$ is closed. The $\inf$ exists because $S=\{|x-y|:x\in K;\ y\in C\}$ is bounded below. Therefore, there is a sequence $(x_k,y_k)\in K\times C$ such that $|x_k-y_k|\to \inf S=s$. Without loss of generality, $|x_k-y_k|<s+1$. Since $K$ is compact, we get a subsequence $x_{k_i}\to p\in K.$ On the other hand, $K$ is bounded, so it lies in some ball of radius $R$, and hence $|y_k|\le |y_k-x_k|+|x_k|\le s+1+R$. Then $y_{k_i}$ is bounded, so it also has a convergent subsequence, $y_{k_{i_j}}\to q\in C$ (because $C$ is closed). But then $s=\lim |x_{k_{i_j}}-y_{k_{i_j}}|=|p-q|.$
Copyright: (c) Edward Kmett 2015
License: BSD3
Maintainer: ekmett@gmail.com
Stability: experimental
Portability: GHC only
Safe Haskell: None
Language: Haskell2010

Contents: Description, Synopsis

# Newton's Method (Forward)

findZero :: (ForwardDouble -> ForwardDouble) -> Double -> [Double] Source #

The findZero function finds a zero of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.

Examples:

>>> take 10 $ findZero (\x->x^2-4) 1
[1.0,2.5,2.05,2.000609756097561,2.0000000929222947,2.000000000000002,2.0]

findZeroNoEq :: (ForwardDouble -> ForwardDouble) -> Double -> [Double] Source #

The findZeroNoEq function behaves the same as findZero except that it doesn't truncate the list once the results become constant.

inverse :: (ForwardDouble -> ForwardDouble) -> Double -> Double -> [Double] Source #

The inverse function inverts a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.

Example:

>>> last $ take 10 $ inverse sqrt 1 (sqrt 10)
10.0

inverseNoEq :: (ForwardDouble -> ForwardDouble) -> Double -> Double -> [Double] Source #

The inverseNoEq function behaves the same as inverse except that it doesn't truncate the list once the results become constant.

fixedPoint :: (ForwardDouble -> ForwardDouble) -> Double -> [Double] Source #

The fixedPoint function finds a fixed point of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.

>>> last $ take 10 $ fixedPoint cos 1
0.7390851332151607

fixedPointNoEq :: (ForwardDouble -> ForwardDouble) -> Double -> [Double] Source #

The fixedPointNoEq function behaves the same as fixedPoint except that it doesn't truncate the list once the results become constant.

extremum :: (On (Forward ForwardDouble) -> On (Forward ForwardDouble)) -> Double -> [Double] Source #

The extremum function finds an extremum of a scalar function using Newton's method; it produces a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.

>>> last $ take 10 $ extremum cos 1
0.0

extremumNoEq :: (On (Forward ForwardDouble) -> On (Forward ForwardDouble)) -> Double -> [Double] Source #

The extremumNoEq function behaves the same as extremum except that it doesn't truncate the list once the results become constant.
## Video Transcript

All right, guys. We're doing problem eighty-six of chapter five in Chemistry: The Central Science. Part (a): the nitrogen atoms in N2 are held together by a triple bond; use enthalpies of formation from Appendix C to estimate the enthalpy of this bond. Remember that the enthalpy of formation of an element in its standard state, like N2 or H2, is zero. By Hess's law, the ΔH of a reaction equals the ΔH of formation of the products minus that of the reactants. So for 2 N(g) → N2(g), ΔH = 0 − 2(472.7) = −945.4 kilojoules per mole. In other words, the N≡N triple bond enthalpy is about 945.4 kJ/mol.

Now for part (b), they say: consider the reaction between hydrazine and hydrogen to produce ammonia, N2H4(g) + H2(g) → 2 NH3(g), and use enthalpies of formation and bond enthalpies to estimate the enthalpy of the nitrogen–nitrogen single bond in N2H4. First, from the Appendix C formation values, the enthalpy of this reaction works out to ΔH = −187.78 kilojoules. Next we estimate the strength of that N–N bond. Instead of enthalpies of formation, we now use bond enthalpies: the reaction enthalpy is roughly the sum of the bond enthalpies of the bonds broken in the reactants minus the sum for the bonds formed in the products, and that should be roughly equivalent to the ΔH we just computed. On the reactant side we break one N–N bond (call it x), four N–H bonds at 391 each, and one H–H bond at 436; on the product side we form two times three N–H bonds at 391. So −187.78 = x + 4(391) + 436 − 2(3)(391) = x − 346. Adding 346 to both sides, x ≈ 158.2 kilojoules per mole for the N–N single bond.

So for part (c): if we look at the answers from (a) and (b), we can see that the nitrogen–nitrogen triple bond, at 945.4 kJ/mol, is roughly six times stronger than the nitrogen–nitrogen single bond, at about 158 kJ/mol. Forming the triple bond releases far more energy, so it is the more favourable outcome, and that is why the triple-bonded N2 molecule is so stable.
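The arithmetic in the worked problem is compact enough to check directly. The sketch below (mine, not part of the solution video) uses the bond enthalpies quoted above (N–H = 391, H–H = 436 kJ/mol) and the reaction enthalpy from the formation data:

```python
# Bond-enthalpy estimate for the N-N single bond in hydrazine, following
# the steps in the transcript above.
D_NH, D_HH = 391.0, 436.0   # kJ/mol, bond enthalpies
dH_rxn = -187.78            # kJ, N2H4 + H2 -> 2 NH3 from formation data

# dH = (bonds broken) - (bonds formed):
# broken: D(N-N) + 4 D(N-H) + D(H-H); formed: 6 D(N-H)
D_NN = dH_rxn - (4 * D_NH + D_HH) + 6 * D_NH
print(f"Estimated N-N single bond: {D_NN:.1f} kJ/mol")  # ~158.2

# Compare with the N≡N triple bond from part (a), 2 N(g) -> N2(g):
D_triple = 2 * 472.7
print(f"N≡N triple bond: {D_triple:.1f} kJ/mol")        # 945.4
```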
# Plasticity size effects in voided crystals

M. I. Hussein, Ulrik Borg, Christian Frithiof Niordson, V. S. Deshpande

Research output: Book/Report › Report › peer-review

## Abstract

The shear and equi-biaxial straining responses of periodic voided single crystals are analysed using discrete dislocation plasticity and a continuum strain gradient crystal plasticity theory. In the discrete dislocation formulation the dislocations are all of edge character and are modelled as line singularities in an elastic material. The lattice resistance to dislocation motion, dislocation nucleation, dislocation interaction with obstacles, and annihilation are incorporated through a set of constitutive rules. Over the range of length scales investigated, both the discrete dislocation and strain gradient plasticity formulations predict a negligible size effect under shear loading. By contrast, under equi-biaxial loading both plasticity formulations predict a strong size dependence, with the flow strength scaling approximately inversely with the void spacing. Excellent agreement is obtained between the predictions of the two formulations for all crystal types and void volume fractions considered when the material length scale in the non-local plasticity model is chosen to be $0.325\,\mu m$ (around ten times the slip plane spacing in the discrete dislocation models).

Original language: English
Publisher: Technical University of Denmark
Publication status: Published - 2006
# Mathematics

shthd

A diet must provide exactly 580 mg of protein and 290 mg of iron. These nutrients will be obtained by eating meat and spinach. Each kg of meat contains 900 mg of protein and 200 mg of iron. Each kg of spinach contains 400 mg of protein and 1700 mg of iron. How many kg of meat and spinach should be eaten in order to provide the proper amounts of nutrients?

Jhun Vert

Let $x$ = amount of meat in kg and $y$ = amount of spinach in kg.

For protein: $900x + 400y = 580$   ←   equation (1)

For iron: $200x + 1700y = 290$   ←   equation (2)

From equations (1) and (2): $x = 0.6 ~ \text{kg}$ of meat and $y = 0.1 ~ \text{kg}$ of spinach.
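The two-equation system above is quick to verify numerically; a small sketch using NumPy (my addition, not part of the original thread):

```python
# Check the diet problem's linear system with NumPy.
import numpy as np

A = np.array([[900.0, 400.0],     # protein per kg of meat, spinach (mg)
              [200.0, 1700.0]])   # iron per kg of meat, spinach (mg)
b = np.array([580.0, 290.0])      # required protein and iron (mg)

meat, spinach = np.linalg.solve(A, b)
print(meat, spinach)  # 0.6 kg of meat, 0.1 kg of spinach
```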
# Cayley table

A Cayley table, after the 19th century British mathematician Arthur Cayley, describes the structure of a finite group by arranging all the possible products of all the group's elements in a square table reminiscent of an addition or multiplication table. Many properties of a group — such as whether or not it is abelian, which elements are inverses of which elements, and the size and contents of the group's center — can be discovered from its Cayley table.

A simple example of a Cayley table is the one for the group {1, −1} under ordinary multiplication:

| × | 1 | −1 |
|---|---|---|
| 1 | 1 | −1 |
| −1 | −1 | 1 |

## History

Cayley tables were first presented in Cayley's 1854 paper, "On The Theory of Groups, as depending on the symbolic equation θ^n = 1". In that paper they were referred to simply as tables, and were merely illustrative — they came to be known as Cayley tables later on, in honour of their creator.

## Structure and layout

Because many Cayley tables describe groups that are not abelian, the product ab with respect to the group's binary operation is not guaranteed to be equal to the product ba for all a and b in the group. In order to avoid confusion, the convention is that the factor that labels the row (termed the nearer factor by Cayley) comes first, and that the factor that labels the column (or further factor) is second. For example, the intersection of row a and column b is ab and not ba, as in the following example:

| * | a | b | c |
|---|---|---|---|
| a | a² | ab | ac |
| b | ba | b² | bc |
| c | ca | cb | c² |

Cayley originally set up his tables so that the identity element was first, obviating the need for the separate row and column headers featured in the example above. For example, they do not appear in the following table:

    a b c
    b c a
    c a b

In this example, which describes the cyclic group Z₃, a is the identity element, and thus appears in the top left corner of the table. It is easy to see, for example, that b² = c and that cb = a. Despite this, most modern texts — and this article — include the row and column headers for added clarity.

## Properties and uses

### Commutativity

The Cayley table tells us whether a group is abelian. Because the group operation of an abelian group is commutative, a group is abelian if and only if its Cayley table is symmetric along its diagonal axis. The cyclic group of order 3, above, and {1, −1} under ordinary multiplication, also above, are both examples of abelian groups, and inspection of the symmetry of their Cayley tables verifies this. In contrast, the smallest non-abelian group, the dihedral group of order 6, does not have a symmetric Cayley table.

### Associativity

Because associativity is taken as an axiom when dealing with groups, it is often taken for granted when dealing with Cayley tables. However, Cayley tables can also be used to characterize the operation of a quasigroup, which does not assume associativity as an axiom (indeed, Cayley tables can be used to characterize the operation of any finite magma). Unfortunately, it is not generally possible to determine whether or not an operation is associative simply by glancing at its Cayley table, as it is with commutativity. This is because associativity depends on a three-term equation, $(ab)c=a(bc)$, while the Cayley table shows 2-term products. However, Light's associativity test can determine associativity with less effort than brute force.

### Permutations

Because the cancellation property holds for groups (and indeed even quasigroups), no row or column of a Cayley table may contain the same element twice.
Thus each row and column of the table is a permutation of all the elements in the group. This greatly restricts which Cayley tables could conceivably define a valid group operation. To see why a row or column cannot contain the same element more than once, let a, x, and y all be elements of a group, with x and y distinct. Then in the row representing the element a, the column corresponding to x contains the product ax, and similarly the column corresponding to y contains the product ay. If these two products were equal — that is to say, row a contained the same element twice, our hypothesis — then ax would equal ay. But because the cancellation law holds, we can conclude that if ax = ay, then x = y, a contradiction. Therefore, our hypothesis is incorrect, and a row cannot contain the same element twice. Exactly the same argument suffices to prove the column case, and so we conclude that each row and column contains no element more than once. Because the group is finite, the pigeonhole principle guarantees that each element of the group will be represented in each row and in each column exactly once. Thus, the Cayley table of a group is an example of a latin square. ## Constructing Cayley tables Because of the structure of groups, one can very often "fill in" Cayley tables that have missing elements, even without having a full characterization of the group operation in question. For example, because each row and column must contain every element in the group, if all elements are accounted for save one, and there is one blank spot, without knowing anything else about the group it is possible to conclude that the element unaccounted for must occupy the remaining blank space. It turns out that this and other observations about groups in general allow us to construct the Cayley tables of groups knowing very little about the group in question. ### The "identity skeleton" of a finite group Because in any group, even a non-abelian group, every element commutes with its own inverse, it follows that the distribution of identity elements on the Cayley table will be symmetric across the table's diagonal. Those that lie on the diagonal are their own inverse; those that do not have another, unique inverse. Because the order of the rows and columns of a Cayley table is in fact arbitrary, it is convenient to order them in the following manner: beginning with the group's identity element, which is always its own inverse, list first all the elements that are their own inverse, followed by pairs of inverses listed adjacent to each other. Then, for a finite group of a particular order, it is easy to characterize its "identity skeleton", so named because the identity elements on the Cayley table are clustered about the main diagonal — either they lie directly on it, or they are one removed from it. It is relatively trivial to prove that groups with different identity skeletons cannot be isomorphic, though the converse is not true (for instance, the cyclic group C8 and the quaternion group Q are non-isomorphic but have the same identity skeleton). Consider a six-element group with elements e, a, b, c, d, and f. By convention, e is the group's identity element. Because the identity element is always its own inverse, and inverses are unique, the fact that there are 6 elements in this group means that at least one element other than e must be its own inverse. 
So we have the following possible skeletons:

• all elements are their own inverses,
• all elements save d and f are their own inverses, each of these latter two being the other's inverse,
• a is its own inverse, b and c are inverses, and d and f are inverses.

In our particular example, there does not exist a group of the first type of order 6; indeed, simply because a particular identity skeleton is conceivable does not in general mean that there exists a group that fits it. It is noteworthy (and trivial to prove) that any group in which every element is its own inverse is abelian.

### Filling in the identity skeleton

Once a particular identity skeleton has been decided on, it is possible to begin filling out the Cayley table. For example, take the identity skeleton of a group of order 6 of the second type outlined above:

| * | e | a | b | c | d | f |
|---|---|---|---|---|---|---|
| e | e |   |   |   |   |   |
| a |   | e |   |   |   |   |
| b |   |   | e |   |   |   |
| c |   |   |   | e |   |   |
| d |   |   |   |   |   | e |
| f |   |   |   |   | e |   |

Obviously, the e row and the e column can be filled out immediately. Once this has been done, it may be necessary (and it is necessary, in our case) to make an assumption, which may later lead to a contradiction — meaning simply that our initial assumption was false. We will assume that ab = c. Then:

| * | e | a | b | c | d | f |
|---|---|---|---|---|---|---|
| e | e | a | b | c | d | f |
| a | a | e | c |   |   |   |
| b | b |   | e |   |   |   |
| c | c |   |   | e |   |   |
| d | d |   |   |   |   | e |
| f | f |   |   |   | e |   |

Multiplying ab = c on the left by a gives b = ac. Multiplying on the right by c gives bc = a. Multiplying ab = c on the right by b gives a = cb. Multiplying bc = a on the left by b gives c = ba, and multiplying that on the right by a gives ca = b. After filling these products into the table, we find that the products ad and af are still unaccounted for in the a row; as we know that each element of the group must appear in each row exactly once, and that only d and f are unaccounted for, we know that ad must equal d or f; but it cannot equal d, because if it did, that would imply that a equaled e, when we know them to be distinct. Thus we have ad = f and af = d. Furthermore, since the inverse of d is f, multiplying ad = f on the right by f gives a = f². Multiplying this on the left by d gives us da = f. Multiplying this on the right by a, we have d = fa. Filling in all of these products, the Cayley table now looks like this:

| * | e | a | b | c | d | f |
|---|---|---|---|---|---|---|
| e | e | a | b | c | d | f |
| a | a | e | c | b | f | d |
| b | b | c | e | a |   |   |
| c | c | b | a | e |   |   |
| d | d | f |   |   |   | e |
| f | f | d |   |   | e | a |

Because each row must have every element of the group represented exactly once, it is easy to see that the two blank spots in the b row must be occupied by d or f. However, if one examines the columns containing these two blank spots — the d and f columns — one finds that d and f have already been filled in on both, which means that regardless of how d and f are placed in row b, they will always violate the permutation rule. Because our algebraic deductions up until this point were sound, we can only conclude that our earlier, baseless assumption that ab = c was, in fact, false. Essentially, we guessed and we guessed incorrectly. We have, however, learned something: ab ≠ c.

The only two remaining possibilities then are that ab = d or that ab = f; we would expect these two guesses to each have the same outcome, up to isomorphism, because d and f are inverses of each other and the letters that represent them are inherently arbitrary anyway. So without loss of generality, take ab = d. If we arrive at another contradiction, we must assume that no group of order 6 has the identity skeleton we started with, as we will have exhausted all possibilities. Here is the new Cayley table:

| * | e | a | b | c | d | f |
|---|---|---|---|---|---|---|
| e | e | a | b | c | d | f |
| a | a | e | d |   |   |   |
| b | b |   | e |   |   |   |
| c | c |   |   | e |   |   |
| d | d |   |   |   |   | e |
| f | f |   |   |   | e |   |

Multiplying ab = d on the left by a, we have b = ad.
Right multiplication by f gives bf = a, and left multiplication by b gives f = ba. Multiplying on the right by a we then have fa = b, and left multiplication by d then yields a = db. Filling in the Cayley table, we now have:

| * | e | a | b | c | d | f |
|---|---|---|---|---|---|---|
| e | e | a | b | c | d | f |
| a | a | e | d |   | b |   |
| b | b | f | e |   |   | a |
| c | c |   |   | e |   |   |
| d | d |   | a |   |   | e |
| f | f | b |   |   | e |   |

Since the a row is missing c and f, and since af cannot equal f (or a would be equal to e, when we know them to be distinct), we can conclude that af = c. Left multiplication by a then yields f = ac, which we may multiply on the right by c to give us fc = a. Multiplying this on the left by d gives us c = da, which we can multiply on the right by a to obtain ca = d. Similarly, multiplying af = c on the right by d gives us a = cd. Updating the table, we have the following:

| * | e | a | b | c | d | f |
|---|---|---|---|---|---|---|
| e | e | a | b | c | d | f |
| a | a | e | d | f | b | c |
| b | b | f | e |   |   | a |
| c | c | d |   | e | a |   |
| d | d | c | a |   |   | e |
| f | f | b |   | a | e |   |

Since the b row is missing c and d, and since bc cannot equal c, it follows that bc = d, and therefore bd must equal c. Multiplying on the right by f this gives us b = cf, which we can further manipulate into cb = f by multiplying by c on the left. By similar logic we can deduce that c = fb and that dc = b. Filling these in, we have:

| * | e | a | b | c | d | f |
|---|---|---|---|---|---|---|
| e | e | a | b | c | d | f |
| a | a | e | d | f | b | c |
| b | b | f | e | d | c | a |
| c | c | d | f | e | a | b |
| d | d | c | a | b |   | e |
| f | f | b | c | a | e |   |

Since the d row is missing only f, we know d² = f, and thus f² = d. As we have managed to fill in the whole table without obtaining a contradiction, we have found a group of order 6: inspection reveals it to be non-abelian. This group is in fact the smallest non-abelian group, the dihedral group D₃:

| * | e | a | b | c | d | f |
|---|---|---|---|---|---|---|
| e | e | a | b | c | d | f |
| a | a | e | d | f | b | c |
| b | b | f | e | d | c | a |
| c | c | d | f | e | a | b |
| d | d | c | a | b | f | e |
| f | f | b | c | a | e | d |

## Permutation matrix generation

The standard form of a Cayley table has the order of the elements in the rows the same as the order in the columns. Another form is to arrange the elements of the columns so that the nth column corresponds to the inverse of the element in the nth row. In our example of D₃, we need only switch the last two columns, since f and d are the only elements that are not their own inverses, but instead inverses of each other.

| * | e | a | b | c | f=d⁻¹ | d=f⁻¹ |
|---|---|---|---|---|---|---|
| e | e | a | b | c | f | d |
| a | a | e | d | f | c | b |
| b | b | f | e | d | a | c |
| c | c | d | f | e | b | a |
| d | d | c | a | b | e | f |
| f | f | b | c | a | d | e |

This particular example lets us create six permutation matrices (all elements 1 or 0, exactly one 1 in each row and column). The 6x6 matrix representing an element will have a 1 in every position that has the letter of the element in the Cayley table and a zero in every other position, the Kronecker delta function for that symbol. (Note that e is in every position down the main diagonal, which gives us the identity matrix for 6x6 matrices in this case, as we would expect.) Here is the matrix that represents our element a, for example.

|   | e | a | b | c | f | d |
|---|---|---|---|---|---|---|
| e | 0 | 1 | 0 | 0 | 0 | 0 |
| a | 1 | 0 | 0 | 0 | 0 | 0 |
| b | 0 | 0 | 0 | 0 | 1 | 0 |
| c | 0 | 0 | 0 | 0 | 0 | 1 |
| d | 0 | 0 | 1 | 0 | 0 | 0 |
| f | 0 | 0 | 0 | 1 | 0 | 0 |

This shows us directly that any group of order n is isomorphic to a subgroup of the permutation group Sₙ, which has order n!.

## Generalizations

The above properties depend on some axioms valid for groups. It is natural to consider Cayley tables for other algebraic structures, such as for semigroups, quasigroups, and magmas, but some of the properties above do not hold.
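The checks discussed above — identity, inverses, the Latin-square property, and associativity — are easy to mechanize. Here is a small Python sketch (my addition, not part of the article) that verifies them by brute force on the D₃ table just derived; for a 6-element table, checking all triples is instant, though Light's test would scale better:

```python
# Verify that a Cayley table defines a group, using the D3 table above.
table = {
    'e': dict(zip('eabcdf', 'eabcdf')),
    'a': dict(zip('eabcdf', 'aedfbc')),
    'b': dict(zip('eabcdf', 'bfedca')),
    'c': dict(zip('eabcdf', 'cdfeab')),
    'd': dict(zip('eabcdf', 'dcabfe')),
    'f': dict(zip('eabcdf', 'fbcaed')),
}
elems = list(table)

def is_group(t, identity='e'):
    # Latin-square property: every row and column is a permutation.
    rows_ok = all(sorted(t[x].values()) == sorted(elems) for x in elems)
    cols_ok = all(sorted(t[x][y] for x in elems) == sorted(elems)
                  for y in elems)
    # Identity element and existence of inverses.
    id_ok = all(t[identity][x] == x == t[x][identity] for x in elems)
    inv_ok = all(any(t[x][y] == identity for y in elems) for x in elems)
    # Brute-force associativity: (xy)z == x(yz) for all triples.
    assoc_ok = all(t[t[x][y]][z] == t[x][t[y][z]]
                   for x in elems for y in elems for z in elems)
    return rows_ok and cols_ok and id_ok and inv_ok and assoc_ok

print(is_group(table))                    # True
print(table['a']['b'], table['b']['a'])   # 'd', 'f' -> non-abelian
```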
# Exponential equation, variable on both sides

I need help with solving the following equation for $t$: $Re^{vht}=v_1t$, where $R,v,h,v_1$ are constants. Any suggestions appreciated.

Rewrite this (divide both sides by $v_1e^{vht}$, then multiply by $-vh$) as $$-\frac{Rvh}{v_1} = -vht\cdot e^{-vht}$$ Applying the Lambert $W$ function we get $$W\left(-\frac{Rvh}{v_1}\right) = -vht\\ t = -\frac{W\left(-\frac{Rvh}{v_1}\right)}{vh}$$ Using the $W$ function isn't much more than a rewriting, but there is nothing else that can be done when we have $t$ both in the exponent and outside it.

$\textbf{Note this is beyond pre-calc}$ Apart from the Lambert function which @Arthur has mentioned, let's play with some cases. Where we can assume $vht \ll 1$, we can approximate our equation as $$R(1+vht) = v_1t \implies t = \frac{R}{v_1-vhR}$$ and we obviously require $v_1 > vhR$. But beyond this you would have to apply a numerical scheme to solve $$R\mathrm{e}^{vht} - v_1t = f(t) = 0$$ which is a classic root-finding problem.
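The closed form is easy to check numerically with SciPy's Lambert W implementation. A small sketch (mine, with illustrative constants $R=1$, $v=0.5$, $h=0.2$, $v_1=3$ chosen purely for demonstration):

```python
# Numerical check of t = -W(-R v h / v1) / (v h).
import numpy as np
from scipy.special import lambertw

R, v, h, v1 = 1.0, 0.5, 0.2, 3.0   # illustrative values only

t = -lambertw(-R * v * h / v1).real / (v * h)
print(t)                            # ~0.345
print(R * np.exp(v * h * t) - v1 * t)  # residual ~ 0
```

Note that the argument $-Rvh/v_1$ must be at least $-1/e$ for a real solution to exist on the principal branch.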
# 100 Independent Linear Work-Precision Diagrams

##### Chris Rackauckas

For these tests we will solve 100 independent linear differential equations (a diagonal system). This will demonstrate the efficiency of the implementation of the methods for handling large systems: the system is large enough that array handling matters, but f is cheap enough that it is not simply a game of calculating f as few times as possible. We will be mostly looking at the efficiency of the work-horse Dormand-Prince Order 4/5 pairs: one from DifferentialEquations.jl (DP5), one from ODE.jl (rk45), and one from ODEInterface (Hairer's famous dopri5). Also included is Tsit5. While all other ODE programs have gone with the traditional choice of using the Dormand-Prince 4/5 pair as the default, DifferentialEquations.jl uses Tsit5 as one of the default algorithms. It's very new (2011) and not widely known, but the theory and the implementation show it's more efficient than DP5. Thus we include it just to show off how re-designing a library from the ground up in a language for rapid code and rapid development has its advantages.

## Setup

using OrdinaryDiffEq, Sundials, DiffEqDevTools, Plots, ODEInterfaceDiffEq, ODE, LSODA
using Random
Random.seed!(123)
gr()

# 100x100 linear ODE system
function f(du,u,p,t)
  @inbounds for i in eachindex(u)
    du[i] = 1.01*u[i]
  end
end
function f_analytic(u₀,p,t)
  u₀*exp(1.01*t)
end
tspan = (0.0,10.0)
prob = ODEProblem(ODEFunction(f,analytic=f_analytic),rand(100,100),tspan)

abstols = 1.0 ./ 10.0 .^ (3:13)
reltols = 1.0 ./ 10.0 .^ (0:10);

### Speed Baseline

First a baseline. These are all testing the same Dormand-Prince order 5/4 algorithm of each package. While they all use the same Runge-Kutta tableau, they exhibit different behavior due to different choices of adaptive timestepping algorithms and tuning. First we will test with all extra saving features turned off to put DifferentialEquations.jl in "speed mode".

setups = [Dict(:alg=>DP5())
          Dict(:alg=>ode45())
          Dict(:alg=>dopri5())
          Dict(:alg=>ARKODE(Sundials.Explicit(),etable=Sundials.DORMAND_PRINCE_7_4_5))
          Dict(:alg=>Tsit5())]
solnames = ["OrdinaryDiffEq";"ODE";"ODEInterface";"Sundials ARKODE";"OrdinaryDiffEq Tsit5"]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;names=solnames,save_everystep=false,numruns=100)
plot(wp)

OrdinaryDiffEq.jl is clearly far in the lead, being more than an order of magnitude faster for the same amount of error.

### Full Saving

setups = [Dict(:alg=>DP5(),:dense=>false)
          Dict(:alg=>ode45(),:dense=>false)
          Dict(:alg=>dopri5()) # dense=false by default: no nonlinear interpolation
          Dict(:alg=>ARKODE(Sundials.Explicit(),etable=Sundials.DORMAND_PRINCE_7_4_5),:dense=>false)
          Dict(:alg=>Tsit5(),:dense=>false)]
solnames = ["OrdinaryDiffEq";"ODE";"ODEInterface";"Sundials ARKODE";"OrdinaryDiffEq Tsit5"]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;names=solnames,numruns=100)
plot(wp)

While not as dramatic as before, DifferentialEquations.jl is still far in the lead. Since the times are log scaled, this comes out to be almost a 5x lead over ODEInterface, and about a 10x lead over ODE.jl at default tolerances.

### Continuous Output

Now we include continuous output. This has a large overhead because at every timepoint the matrix of rates k has to be deep copied.
setups = [Dict(:alg=>DP5())
          Dict(:alg=>ode45())
          Dict(:alg=>dopri5())
          Dict(:alg=>ARKODE(Sundials.Explicit(),etable=Sundials.DORMAND_PRINCE_7_4_5))
          Dict(:alg=>Tsit5())]
solnames = ["OrdinaryDiffEq";"ODE";"ODEInterface";"Sundials ARKODE";"OrdinaryDiffEq Tsit5"]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;names=solnames,numruns=100)
plot(wp)

As you can see, even with this large overhead, DifferentialEquations.jl essentially ties with ODEInterface. This shows that the fully featured DP5 solver holds its own with even the classic "great" methods.

### Other Runge-Kutta Algorithms

Now let's test it against a smattering of other Runge-Kutta algorithms. First we will test it with all overheads off. Let's do the Order 5 (and the 2/3 pair) algorithms:

setups = [Dict(:alg=>DP5())
          Dict(:alg=>BS3())
          Dict(:alg=>BS5())
          Dict(:alg=>Tsit5())]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;save_everystep=false,numruns=100)
plot(wp)

As you can see, the Tsit5 algorithm is the most efficient, beating DP5, which is more efficient than the Bogacki-Shampine algorithms. However, you can see that when the tolerance is high, BS3 could be of use since its slope is so steep.

## Higher Order

Now let's see how OrdinaryDiffEq.jl fares with some higher order algorithms:

setups = [Dict(:alg=>DP5())
          Dict(:alg=>Vern6())
          Dict(:alg=>TanYam7())
          Dict(:alg=>Vern7())
          Dict(:alg=>Vern8())
          Dict(:alg=>DP8())
          Dict(:alg=>Vern9())]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;save_everystep=false,numruns=100)
plot(wp)

Vern7 looks to be the winner here, with DP5 doing well at higher tolerances but trailing off when the tolerances get lower, as one would expect of lower order algorithms. Some of the higher order methods, such as Vern9, would do better at lower tolerances than what's tested (outside of floating point range).

## Higher Order With Many Packages

Now we test OrdinaryDiffEq against the high order methods of the other packages:

setups = [Dict(:alg=>DP5())
          Dict(:alg=>Vern7())
          Dict(:alg=>dop853())
          Dict(:alg=>ode78())
          Dict(:alg=>odex())
          Dict(:alg=>lsoda())
          Dict(:alg=>ddeabm())
          Dict(:alg=>ARKODE(Sundials.Explicit(),order=8))]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;save_everystep=false,numruns=100)
plot(wp)

Here you can once again see the DifferentialEquations algorithms far in the lead. It's well known that for cheap function costs Adams methods are inefficient. ODE.jl once again has a bad showing.

## Interpolation Error

Now we will look at the error using an interpolation measurement instead of at the timestepping points. Since the DifferentialEquations.jl algorithms have higher order interpolants than the ODE.jl algorithms, one would expect this would magnify the difference. First the order 4/5 comparison:

setups = [Dict(:alg=>DP5())
          #Dict(:alg=>ode45())
          Dict(:alg=>Tsit5())]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;error_estimate=:L2,dense_errors=true,numruns=100)
plot(wp)

Note that all of ODE.jl uses a 3rd order Hermite interpolation, while the DifferentialEquations algorithms use interpolations specialized to the algorithm. For example, DP5 and Tsit5 both use "free" order 4 interpolations, which are as fast as the Hermite interpolation while achieving far less error.
At higher order:

setups = [Dict(:alg=>DP5())
          Dict(:alg=>Vern7())
          #Dict(:alg=>ode78())
          ]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;error_estimate=:L2,dense_errors=true,numruns=100)
plot(wp)

## Comparison with Fixed Timestep RK4

Let's run the first benchmark but add some fixed timestep RK4 methods to see the difference:

abstols = 1.0 ./ 10.0 .^ (3:13)
reltols = 1.0 ./ 10.0 .^ (0:10);
dts = [1,1/2,1/4,1/10,1/20,1/40,1/60,1/80,1/100,1/140,1/240]
setups = [Dict(:alg=>DP5())
          Dict(:alg=>ode45())
          Dict(:alg=>dopri5())
          Dict(:alg=>RK4(),:dts=>dts)
          Dict(:alg=>Tsit5())]
solnames = ["DifferentialEquations";"ODE";"ODEInterface";"DifferentialEquations RK4";"DifferentialEquations Tsit5"]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;names=solnames,
                      save_everystep=false,verbose=false,numruns=100)
plot(wp)

## Comparison with Non-RK methods

Now let's test Tsit5 and Vern9 against parallel extrapolation methods and an Adams-Bashforth-Moulton:

setups = [Dict(:alg=>Tsit5())
          Dict(:alg=>Vern9())
          Dict(:alg=>VCABM())
          # the three extrapolation setups named in solnames below were
          # truncated in the source; plausible reconstructions with
          # default options are assumed here:
          Dict(:alg=>AitkenNeville(threading=true))
          Dict(:alg=>ExtrapolationMidpointDeuflhard(threading=true))
          Dict(:alg=>ExtrapolationMidpointHairerWanner(threading=true))]
solnames = ["Tsit5","Vern9","VCABM","AitkenNeville","Midpoint Deuflhard","Midpoint Hairer Wanner"]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;names=solnames,
                      save_everystep=false,verbose=false,numruns=100)
plot(wp)

setups = [Dict(:alg=>ExtrapolationMidpointDeuflhard(min_order=1, max_order=9, init_order=9, threading=false))
          Dict(:alg=>ExtrapolationMidpointHairerWanner(min_order=2, max_order=11, init_order=4, sequence = :romberg, threading=true))
          Dict(:alg=>ExtrapolationMidpointHairerWanner(min_order=2, max_order=11, init_order=4, sequence = :bulirsch, threading=true))]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;names=solnames,
                      save_everystep=false,verbose=false,numruns=100)
plot(wp)

setups = [Dict(:alg=>ExtrapolationMidpointHairerWanner(min_order=2, max_order=11, init_order=10, threading=true))
          # (the remaining setups for this comparison were truncated in
          # the source)
          ]
solnames = ["1","2","3","4","5"]
wp = WorkPrecisionSet(prob,abstols,reltols,setups;names=solnames,
                      save_everystep=false,verbose=false,numruns=100)
plot(wp)

## Conclusion

DifferentialEquations's default choice of Tsit5 does well for quick and easy solving at normal tolerances. However, at low tolerances the higher order algorithms are faster. In every case, the DifferentialEquations algorithms are far in the lead, many times an order of magnitude faster than the competitors. Vern7 with its included 7th order interpolation looks to be a good workhorse for scientific computing in floating point range. These along with many other benchmarks are why these algorithms were chosen as part of the defaults.
using DiffEqBenchmarks DiffEqBenchmarks.bench_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file]) ## Appendix These benchmarks are a part of the DiffEqBenchmarks.jl repository, found at: https://github.com/JuliaDiffEq/DiffEqBenchmarks.jl To locally run this tutorial, do the following commands: using DiffEqBenchmarks DiffEqBenchmarks.weave_file("NonStiffODE","linear_wpd.jmd") Computer Information: Julia Version 1.2.0 Commit c6da87ff4b (2019-08-20 00:03 UTC) Platform Info: OS: Linux (x86_64-pc-linux-gnu) CPU: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz WORD_SIZE: 64 LIBM: libopenlibm LLVM: libLLVM-6.0.1 (ORCJIT, haswell) Environment: Package Information: Status: /home/crackauckas/.julia/dev/DiffEqBenchmarks/Project.toml [a134a8b2-14d6-55f6-9291-3336d3ab0209] BlackBoxOptim 0.5.0 [f3b72e0c-5b89-59e1-b016-84e28bfd966d] DiffEqDevTools 2.15.0 [1130ab10-4a5a-5621-a13d-e4788d82bd4c] DiffEqParamEstim 1.8.0 [a077e3f3-b75c-5d7f-a0c6-6bc4c8ec64a9] DiffEqProblemLibrary 4.5.1 [ef61062a-5684-51dc-bb67-a0fcdec5c97d] DiffEqUncertainty 1.2.0 [7f56f5a3-f504-529b-bc02-0b1fe5e64312] LSODA 0.6.1 [76087f3c-5699-56af-9a33-bf431cd00edd] NLopt 0.5.1 [c030b06c-0b6d-57c2-b091-7029874bd033] ODE 2.5.0 [54ca160b-1b9f-5127-a996-1867f4bc2a2c] ODEInterface 0.4.6 [09606e27-ecf5-54fc-bb29-004bd9f985bf] ODEInterfaceDiffEq 3.4.0 [1dea7af3-3e70-54e6-95c3-0bf5283fa5ed] OrdinaryDiffEq 5.17.1 [65888b18-ceab-5e60-b2b9-181511a3b968] ParameterizedFunctions 4.2.1 [91a5bcdd-55d7-5caf-9e0b-520d859cae80] Plots 0.26.3 [9a3f8284-a2c9-5f02-9a11-845980a1fd5c] Random
# Alan Turing, On computable numbers

I have been reading Alan Turing's paper, On computable numbers, with an application to the Entscheidungsproblem, an amazing classic, written by Turing while he was a student in Cambridge. This is the paper in which Turing introduces and defines his Turing machine concept, deriving it from a philosophical analysis of what it is that a human computer is doing when carrying out a computational task.

The paper is an incredible achievement. He accomplishes so much: he defines and explains the machines; he proves that there is a universal Turing machine; he shows that there can be no computable procedure for determining the validities of a sufficiently powerful formal proof system; he shows that the halting problem is not computably decidable; he argues that his machine concept captures our intuitive notion of computability; and he develops the theory of computable real numbers.

What I was extremely surprised to find, however, and what I want to tell you about today, is that despite the title of the article, Turing adopts an incorrect approach to the theory of computable numbers. His central definition is what is now usually regarded as a mistaken way to proceed with this concept. Let me explain.

Turing defines that a computable real number is one whose decimal (or binary) expansion can be enumerated by a finite procedure, by what we now call a Turing machine. You can see this in the very first sentence of his paper, and he elaborates on and confirms this definition in detail later on in the paper.

He subsequently develops the theory of computable functions of computable real numbers, where one considers computable functions defined on these computable numbers. The computable functions are defined not on the reals themselves, however, but on the programs that enumerate the digits of those reals. Thus, for the role they play in Turing's theory, a computable real number is not actually regarded as a real number as such, but as a program for enumerating the digits of a real number. In other words, to have a computable real number in Turing's theory is to have a program for enumerating the digits of a real number. And it is this aspect of Turing's conception of computable real numbers where his approach becomes problematic.

One specific problem with Turing's approach is that on this account, it turns out that the operations of addition and multiplication for computable real numbers are not computable operations. Of course this is not what we want.

The basic mathematical fact in play is that the digits of a sum of two real numbers $a+b$ are not a continuous function of the digits of $a$ and $b$ separately; in some cases, one cannot say with certainty the initial digits of $a+b$, knowing only finitely many digits, as many as desired, of $a$ and $b$.

To see this, consider the following sum $a+b$
\begin{align*}
&0.343434343434\cdots \\
+\quad &0.656565656565\cdots \\[-7pt]
&\hskip-.5cm\rule{2in}{.4pt}\\
&0.999999999999\cdots
\end{align*}
If you add up the numbers digit-wise, you get $9$ in every place. That much is fine, and of course we should accept either $0.9999\cdots$ or $1.0000\cdots$ as correct answers for $a+b$ in this instance, since those are both legitimate decimal representations of the number $1$. The problem, I claim, is that we cannot assign the digits of $a+b$ in a way that will depend only on finitely many digits of each of $a$ and $b$.
The basic problem is that if we inspect only finitely many digits of $a$ and $b$, then we cannot be sure whether that pattern will continue, whether there will eventually be a carry or not, and depending on how the digits proceed, the initial digits of $a+b$ can be affected.

In detail, suppose that we have committed to the idea that the initial digits of $a+b$ are $0.999$, on the basis of sufficiently many digits of $a$ and $b$. Let $a’$ and $b’$ be numbers that agree with $a$ and $b$ on those finite parts of $a$ and $b$, but afterwards have all $7$s. In this case, the sum $a’+b’$ will involve a carry, which will turn all the nines up to that point to $0$, with a leading $1$, making $a’+b’$ strictly greater than $1$ and having decimal representation $1.000\cdots00005555\cdots$. Thus, the initial-digits answer $0.999$ would be wrong for $a’+b’$, even though $a’$ and $b’$ agreed with $a$ and $b$ on the sufficiently many digits supposedly justifying the $0.999$ answer. On the other hand, if we had committed ourselves to $1.000$ for $a+b$, on the basis of finite parts of $a$ and $b$ separately, then let $a”$ and $b”$ be all $2$s beyond that finite part, in which case $a”+b”$ is definitely less than $1$, making $1.000$ wrong.

Therefore, there is no algorithm to compute the digits of $a+b$ continuously from the digits of $a$ and $b$ separately. It follows that there can be no computable algorithm for computing the digits of $a+b$, given the programs that compute $a$ and $b$ separately, which is how Turing defines computable functions on the computable reals. (This consequence is a subtly different and stronger claim, but one can prove it using the Kleene recursion theorem. Namely, let $a=.343434\cdots$ and then consider the program to enumerate a number $b$, which will begin with $0.656565$ and keep repeating $65$ until it sees that the addition program has given the initial digits for $a+b$, and at this moment our program for $b$ will either switch to all $7$s or all $2$s in such a way so as to refute the result. The Kleene recursion theorem is used in order to know that indeed there is such a self-referential program enumerating $b$.)

One can make similar examples showing that multiplication and many other very simple functions are not computable, if one insists that a computable number is an algorithm enumerating the digits of the number.

So what is the right definition of computable number? Turing was right that in working with computable real numbers, we want to be working with the programs that compute them, rather than the reals themselves somehow. What is needed is a better way of saying that a given program computes a given real. The right definition, widely used today, is that we want an algorithm not to compute exactly the digits of the number, but rather, to compute approximations to the number, as close as desired, with a known degree of accuracy. One can define a computable real number as a computable sequence of rational numbers, such that the $n^{th}$ number is within $1/2^n$ of the target number. This is equivalent to being able to compute rational intervals around the target real, of size less than any specified accuracy. And there are many other equivalent ways to do it. With this concept of computable real number, the operations of addition, multiplication, and so on, all the familiar operations on the real numbers, will be computable. But let me clear up a confusing point.
Although I have claimed that Turing’s original definition of computable real number is incorrect, and I have explained how we usually define this concept today, the mathematical fact is that a real number $x$ has a computable presentation in Turing’s sense (we can compute the digits of $x$) if and only if it has a computable presentation in the contemporary sense (we can compute rational approximations to any specified accuracy). Thus, in terms of which real numbers we are talking about, the two approaches are extensionally the same. Let me quickly prove this. If a real number $x$ is computable in Turing’s sense, so that we can compute the digits of $x$, then we can obviously compute rational approximations to any desired accuracy, simply by taking sufficiently many digits. And conversely, if a real number $x$ is computable in the contemporary sense, so we can compute rational approximations to any specified accuracy, then either it is itself a rational number, in which case we can certainly compute the digits of $x$, or else it is irrational, in which case for any specified digit place, we can wait until we have a rational approximation forcing it to one side or the other, and thereby come to know this digit. (Note: there are issues of intuitionistic logic occurring here, precisely because we cannot tell from the approximation algorithm itself which case we are in.) Note also that this argument works in any desired base. So there is something of a philosophical problem here. The issue isn’t that Turing has misidentified particular reals as being computable or non-computable or has somehow got the computable reals wrong extensionally as a subset of the real numbers, since every particular real number has Turing’s kind of representation if and only if it has the approximation kind of representation. Rather, the problem is that because we want to deal with computable real numbers by working with the programs that represent them, Turing’s approach means that we cannot regard addition as a computable function on the computable reals. There is no computable procedure that when given two programs for enumerating the digits of real numbers $a$ and $b$ returns a program for enumerating the digits of the sum $a+b$. But if you use the contemporary rational-approximation representation of computable real number, then you can computably produce a program for the sum, given programs for the input reals. This is the sense in which Turing’s approach is wrong.
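To make the contemporary definition concrete, here is a minimal Python sketch (mine, not from the post): a computable real is represented as a function that, given $n$, returns a rational within $1/2^n$ of the target, and addition becomes computable on these representations simply by querying each summand one level of accuracy deeper.

```python
# A computable real as a function n -> Fraction within 1/2**n of the target.
from fractions import Fraction

def const(q):
    """A rational number as a computable real."""
    return lambda n: Fraction(q)

def sqrt2(n):
    """Approximate sqrt(2) to within 1/2**n by bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2**n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return lo

def add(x, y):
    """Addition IS computable on this representation: query each summand
    one level deeper, so the two errors sum to at most 1/2**n."""
    return lambda n: x(n + 1) + y(n + 1)

s = add(sqrt2, const(Fraction(1, 3)))
print(float(s(30)))  # ~1.7475469... = sqrt(2) + 1/3
```

Contrast this with the digit representation: here the sum's approximation program is produced uniformly from the input programs, with no need to decide whether a carry will ever occur.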
#### Important Questions

9th Standard

Reg.No. :

Science

Time : 01:00:00 Hrs
Total Marks : 60

Part - A

60 x 1 = 60

1. Rulers, measuring tapes and metre scales are used to measure (a) Mass (b) Weight (c) Time (d) Length
2. The diameters of spherical objects are measured with a _________ scale. (a) pitch (b) meter (c) (d) vernier
3. Which of the following graphs represents uniform motion of a moving particle? (a) (b) (c) (d)
4. The hands of a clock and the spokes of a wheel are examples of _______ (a) linear motion (b) circular motion (c) oscillatory motion (d) revolutionary motion
5. The focal length of a concave mirror is 5 cm. Its radius of curvature is (a) 5 cm (b) 10 cm (c) 2.5 cm
6. A virtual and equal-sized image is formed by ___________ mirrors. (a) convex (b) plane (c) convex (d) spherical
7. To form a real image, a ________ mirror is required. (a) parallel (b) plane (c) convex (d) concave
8. The filtration method is effective in separating a _______ mixture. (a) solid-solid (b) solid-liquid (c) liquid-liquid (d) liquid-gas
9. Light, sound, heat, etc. are not matter. They are different forms of (a) solids (b) liquids (c) gases (d) energy
10. Energy is neither given out nor absorbed in the preparation of _____ (a) element (b) compound (c) mixture (d) solvent
11. To separate two or more miscible liquids which do not differ much in their boiling points, _______ is employed. (a) distillation (b) filtration (c) decantation (d) fractional distillation
12. The law of multiple proportions was proposed by _________ (a) J. Ritcher (b) Rutherford (c) John Dalton (d) J.J. Thomson
13. The circular orbits are numbered 1, 2, 3, 4, .... These numbers are referred to as _________ (a) Principal Quantum Number (b) Azimuthal Quantum Number (c) Magnetic Quantum Number (d) Spin Quantum Number
14. The outermost shell of an atom is called its ________ shell. (a) inner (b) outer (c) valence (d) sub shell
15. Transpiration takes place through _____________. (a) fruit (b) seed (c) flower (d) stomata
16. Rhizophora is an example of _________________. (a) positive geotropism (b) positive phototropism (c) positive hydrotropism (d) negative geotropism
17. Von Helmont conducted his experiment in the year _______________. (a) 1684 (b) 1468 (c) 1864 (d) 1648
18. A maize plant transpires _______________ gallons of water during its life span. (a) 34 (b) 44 (c) 64 (d) 54
19. Dysentery is caused by (a) Entamoeba (b) Euglena (c) Plasmodium (d) Paramecium
20. The smallest bat lives in _________. (a) America (b) Thailand (c) Africa (d)
21. Which one is the mammal-like reptile? (a) Dimetrodon (b) Crocodile (c) Lizard (d) Snake
22. The Philippine goby is found in ___________. (a) marine water (b) brackish water (c) salt water (d) fresh water
23. A disease caused by the presence of excess vitamin is _________. (a) Night blindness (b) Osteoporosis (c) Vitaminosis (d) Hypervitaminosis
24. Vitamin E is otherwise known as _________. (a) Riboflavin (b) Thiamine (c) Tocopherol (d) Calciferol
25. Lipases are enzymes which break down __________. (a) Proteins (b) Fats (c) Carbohydrates (d) Food
26. The specific heat capacity of water is (a) 4200 J kg⁻¹ K⁻¹ (b) 420 J g⁻¹ K⁻¹ (c) 0.42 J g⁻¹ K⁻¹ (d) 4.2 J kg⁻¹ K⁻¹
27. The amount of heat required to raise the temperature through 1°C is called _________ (a) thermal energy (b) calorie (c) heat capacity (d) specific heat capacity
28. Sweating causes cooling because water has a _______ (a) high specific heat (b) low specific heat (c) high latent heat of fusion (d) high latent heat of vaporisation
29.
On a cold day, it is hard to open the lid of a tight container. But when you gently heat the neck, you can easily open the lid. Why? (a) On heating, glass expands and the lid contracts (b) On heating, the lid expands more than the neck and thus slides easily (c) The neck becomes slippery on heating (d) The lid of the bottle cannot bear the heat
30. In an electrolyte the current is due to the flow of (a) electrons (b) positive ions (c) both (a) and (b) (d) neither (a) nor (b)
31. A current of 2 A passing through a conductor produces 80 J of heat in 10 seconds. The resistance of the conductor is __________ (a) 0.5 $\Omega$ (b) 2 $\Omega$ (c) 4 $\Omega$ (d) 20 $\Omega$
32. Two resistances R₁ and R₂ are connected in parallel. Their equivalent resistance is __________ (a) R₁+R₂ (b) $\cfrac { { R }_{ 1 }{ R }_{ 2 } }{ { R }_{ 1 }+{ R }_{ 2 } }$ (c) $\cfrac { { R }_{ 1 }+{ R }_{ 2 } }{ { R }_{ 1 }{ R }_{ 2 } }$ (d) $\sqrt { { R }_{ 1 }+{ R }_{ 2 } }$
33. Assertion (A): A bird perches on a high power line and nothing happens to the bird. Reason (R): The level of the bird is very high from the ground. (a) If both assertion and reason are true and reason is the correct explanation of assertion (b) If both assertion and reason are true but reason is not the correct explanation of assertion (c) If assertion is true but reason is false (d) If assertion is false but reason is true
34. The unit of magnetic flux density is (a) weber (b) weber/metre (c) weber/metre² (d) weber·metre²
35. Noble gases are placed in the _____________ group of the modern periodic table. (a) 13th (b) 18th (c) 17th (d) 2nd
36. Assertion (A): Group 2 elements in the modern periodic table are called alkaline earth metals. Reason (R): The oxides of group 2 elements produce alkaline solutions when they are dissolved in water. (a) A is right, R is wrong (b) R explains A (c) R does not explain A (d) R is right, A is wrong
37. Assertion (A): Noble gases are chemically inert in nature. Reason (R): Noble gases have stable electronic structures. (a) Both A & R are right (b) Both A & R are wrong (c) A is right, R is wrong (d) A is wrong, R is right
38. ___________ compounds are highly brittle. (a) Ionic (b) Covalent (c) Co-ordinate covalent (d) Covalent
39. The bond which is formed by mutual sharing of electrons is called a ________ bond. (a) ionic (b) covalent (c) co-ordinate covalent (d) all the above
40. Statement (A): Covalent compounds are bad conductors of electricity. Reason (B): Covalent compounds contain charged particles (ions). (a) B explains A (b) B does not explain A (c) Both A & B are right (d) Both A & B are wrong
41. The property which is characteristic of an ionic compound is that (a) it often exists as a gas at room temperature (b) it is hard and brittle (c) it undergoes molecular reactions (d) it has a low melting point
42. _____ & _______ metals do not react with HCl or HNO₃. (a) Gold & Magnesium (b) Silver & Magnesium (c) Gold & Silver (d) Zinc & Silver
43. Bases ionise in water to form ______ ions. (a) H⁺ (b) H₃O⁺ (c) OH⁻ (d) O²⁻
44. NaOH & KOH are ____ (a) strong bases (b) metal oxides (c) weak bases (d) diacidic bases
45. White fibres of connective tissue are made up of (a) elastin (b) reticular fibres (c) collagen (d) myosin
46. __________ is the smallest gland. (a) Pancreas (b) Sublingual (c) Parotid (d) Submaxillary
47. The act of bringing swallowed food back to the mouth is called ___________ (a) egestion (b) ingestion (c) micturition (d) regurgitation
48. Gastric glands do not secrete __________ (a) renin (b) pepsin (c) lipase (d) none of the above
49.
Which one of the following is an example of a wireless connection? (a) Wi-Fi (b) Electric wires (c) VGA (d) USB
50. A pen drive is a _______ device. (a) Output (b) Input (c) Storage (d) connecting cable
51. The instrument used to measure relative density is (a) Hydrometer (b) Lactometer (c) Barometer (d) Pycnometer
52. An iron ball is weighed in air and then in water by a spring balance. (a) Its weight in air is more than in water. (b) Its weight in water is more than in air. (c) Its weight is the same both in air and water. (d) Its weight is zero in water.
53. If the speed of a wave is 340 m s⁻¹ and its frequency is 1700 Hz, then the wavelength λ for this wave in cm will be (a) 34 (b) 20 (c) 15 (d) 0.2
54. Which of the following is not a planet of our solar system? (a) Sirius (b) Mercury (c) Saturn (d) Earth
55. The member of our solar system with a highly tilted orbit is _____________. (a) Earth (b) Pluto (c) Mars (d) Saturn
56. Graphene is a one-atom-thick layer of carbon obtained from (a) Diamond (b) Fullerene (c) Graphite (d) Gas Carbon
57. A 1% solution of iodoform is used as an (a) antipyretic (b) antimalarial (c) antiseptic (d) antacid
58. An increased amount of ___________ in the atmosphere results in the greenhouse effect and global warming. (a) carbon monoxide (b) sulphur dioxide (c) nitrogen dioxide (d) carbon dioxide
59. __________ is the method of growing plants without soil. (a) Horticulture (b) Hydroponics (c) Pomology (d) None of these
60. Mosquito-borne viral diseases are (a) malaria and yellow fever (b) dengue and chikungunya (c) filariasis and typhus (d) kala azar and diphtheria
61.

Part - B

30 x 2 = 60

62. Convert 104°F into the Celsius scale.
63. Calculate the correct readings of the vernier caliper. L.C. = 0.01 cm; zero correction: nil.

| S.No. | M.S.R | V.C | Observed Reading = M.S.R + (V.C x L.C) | Correct Reading |
|---|---|---|---|---|
| 1 | 3 | 4 | | |
| 2 | 3 | 7 | | |

64. Complete the table:

| Power of 10 | Prefix | Symbol |
|---|---|---|
| 10¹² | ___(i)___ | T |
| ___(ii)___ | Kilo | K |
| 10¹⁵ | Peta | ___(iii)___ |
| 10⁹ | ___(iv)___ | ___(v)___ |

65. What is acceleration?
66. How does a washing machine wash clothes?
67. If an object is placed at the focus of a concave mirror, where is the image formed?
68. What is a convex mirror?
69. What is the principal axis?
70. Fill in the numbered blanks to make the heating curve meaningful.
71. Is air a pure substance or a mixture? Justify.
72. Arrange the following in increasing order of atomic number: Calcium, Silicon, Boron, Magnesium, Oxygen, Helium, Neon, Sulphur, Fluorine and Sodium.
73. What is a combination reaction?
74. Which flowering plant shows photonasty just opposite to that of Dandelion?
75. Define taxonomy.
76. Are reptilian eggs covered with shells?
77. Differentiate Kwashiorkor from Marasmus.
78. How many types of adulterants are there?
79. How are computer generations categorised?
80. Water is used as a coolant in car radiators. Why?
81. Does a solar cell always maintain the potential across its terminals constant? Discuss.
82. Define the magnetic effect of electric current.
83. State Newlands' Law of Octaves.
84. Complete the equation: Na₂CO₃ + 2HCl ⟶ ? + ? + CO₂
85. What is crossing over?
86. What is parturition?
87. What is a satellite? What are the two types of satellites?
88. Who is called the 'Father of Modern Organic Chemistry'?
89. What is chemotherapy?
90. Define vermiculture.
91. Define plasmid.
92.

Part - C

30 x 3 = 90

93. Differentiate mass and weight.
94. The mass of an object is 5 kg. What is its weight on the earth?
95. What remains constant in uniform circular motion? And what changes continuously in uniform circular motion?
96.
What is the unit of refractive index?
97. Oxygen is very essential for us to live. It forms 21% of air by volume. Is it an element or a compound?
98. Distinguish between dispersed phase and dispersion medium.
99. Methane burns in oxygen to form carbon dioxide and water vapour, as given by the equation CH₄(g) + 2O₂(g) ➝ CO₂(g) + 2H₂O(g). Calculate: (i) the volume of oxygen needed to burn completely 50 cm³ of methane and (ii) the volume of carbon dioxide formed in this case.
100. What are isotones? Give an example.
101. Imagine that student A studied the importance of certain factors in photosynthesis. He took a potted plant and kept it in the dark for over 24 hours. In the early hours of the next morning, he covered one of the leaves with dark paper in the centre only. Then he placed the plant in sunlight for a few hours and tested the leaf which was covered with black paper for starch. (a) What aspect of photosynthesis was being investigated? (b) Why was the plant kept in the dark before the experiment? (c) How will you prove that starch is present in the leaves? (d) What are the other raw materials for photosynthesis?
102. Comment on the aquatic and terrestrial habits of amphibians.
103. Look at the picture and answer the questions that follow: a) Name the process involved in the given picture. b) Which food is preserved by this process? c) What is the temperature required for the above process?
104. What is pasteurization?
105. What is data processing?
106. Some heat energy is given to 120 g of water and its temperature rises by 10 K. When the same amount of heat energy is given to 60 g of oil, its temperature rises by 40 K. The specific heat capacity of water is 4200 J kg⁻¹ K⁻¹. Calculate:
107. The e.m.f. of a cell is 1.5 V. What is the energy provided by the cell to drive 0.5 C of charge around the circuit?
109. A current-carrying conductor of certain length, kept perpendicular to the magnetic field, experiences a force F. What will be the force if the current is increased four times, the length is halved and the magnetic field is tripled?
110. What are the limitations of Mendeleev's periodic table?
111. Explain the octet rule with an example.
112. Ionic compounds are crystalline solids at room temperature.
113. What are organic acids? Give examples.
114. What is complex tissue? Name the various kinds of complex tissues.
115. Reproductive organs are also considered as endocrine glands.
116. What are the types of monitor?
117. What is meant by atmospheric pressure?
118. Write any two practical applications of sound waves.
119. What are biometrics?
120. According to you, which process of the water cycle is adversely affected by human activities?
121. What is the nutritional importance of fish liver oils? Name any two marine fishes which yield these oils.
122. Sanjay had an attack of chicken pox and has just recovered. The health officer of his locality says that the disease would not occur again for him. What would be the reason for this?
123.

Part - D

20 x 5 = 100

124. Explain the method to find the diameter of a sphere.
125. A racing car has a uniform acceleration of 4 m s⁻². What distance does it cover in 10 s after the start?
126. A ray of light enters from air into kerosene. The refractive index of kerosene is 1.41. Calculate the velocity of light in kerosene.
127. Write the properties of mixtures.
128. Lead forms three oxides A, B and C. The quantity of oxygen in each of the oxides A, B and C is 7.143%, 10.345% and 13.133% respectively. Show that the law of multiple proportions is obeyed.
129.
Design an experiment to demonstrate hydrotropism.
130. Draw the life cycle of a jellyfish.
131. Write a brief note on mineral nutrients.
132. List out the generations of computers.
133. Convert the following: (1) 100°F to °C (2) 40°C to Fahrenheit (°F) (3) 35°C to Kelvin (4) 800 K to °C
134. Calculate the effective resistance between A and B as shown in the figure.
135. Explain the principle, construction and working of an AC generator.
136. Write the advantages of the Modern Periodic Table.
137. List down the differences between ionic and covalent compounds.
138. Describe the classification of bases based on their acidity.
139. Explain the components of phloem tissue.
140. Write a note on the functions of the liver in digestion.
141. a) When a golf ball is lowered into a measuring cylinder containing water, the water level rises by 40 cm³ when the ball is completely submerged. If the mass of the ball in air is 44 g, calculate its density. b) A 5 kg sheet of tin sinks in water, but if the same sheet is converted into a boat or a box, it floats. Give reason.
142. Write any five applications of ultrasonic waves.
143. List out the drawbacks of nanomaterials in chemistry.
## Dissertations, Theses, and Capstone Projects

6-2020

Dissertation, Ph.D., Mathematics

#### Advisor

Yunping Jiang

#### Committee Members

Frederick Gardiner
Linda Keen
Sudeb Mitra
Zhe Wang
Yunchun Hu

#### Subject Categories

Analysis | Dynamical Systems

#### Abstract

Let $f$ be a circle endomorphism of degree $d\geq2$ that generates a sequence of Markov partitions that either has bounded nearby geometry and bounded geometry, or else just has bounded geometry, with respect to normalized Lebesgue measure. We define the dual symbolic space $\Sigma^*$ and the dual circle endomorphism $f^*=\tilde{h}\circ f\circ\tilde{h}^{-1}$, which is topologically conjugate to $f$. We describe some properties of the topological conjugacy $\tilde{h}$. We also describe an algorithm for generating arbitrary circle endomorphisms $f$ with bounded geometry that preserve Lebesgue measure, and their corresponding dual circle endomorphisms $f^*$ as well as the conjugacy $\tilde{h}$, and implement it using MATLAB. We use the property of bounded geometry to define a convergent martingale on $\Sigma^*$, and apply the study of such martingales to obtain a rigidity theorem. Suppose $f$ and $g$ are two circle endomorphisms of the same degree $d\geq 2$ such that each has bounded geometry and each preserves the normalized Lebesgue probability measure. Suppose that $f$ and $g$ are symmetrically conjugate; that is, $g=h\circ f\circ h^{-1}$ and $h$ is a symmetric circle homeomorphism. We define a property called locally constant limit of martingale, and show that if $f$ has this property then $f=g$.
Unconditional bases of exponentials and of reproducing kernels. (English) Zbl 0466.46018

Complex analysis and spectral theory, Semin. Leningrad 1979/80, Lect. Notes Math. 864, 214-335 (1981).

##### MSC:

46B15 Summability and bases; functional analytic aspects of frames in Banach and Hilbert spaces

30B50 Dirichlet series, exponential series and other series in one complex variable

46E20 Hilbert spaces of continuous, differentiable or analytic functions

47A60 Functional calculus for linear operators

34L99 Ordinary differential operators

47B35 Toeplitz operators, Hankel operators, Wiener-Hopf operators

46E30 Spaces of measurable functions ($$L^p$$-spaces, Orlicz spaces, Köthe function spaces, Lorentz spaces, rearrangement invariant spaces, ideal spaces, etc.)
3.17 An insurance company's losses of a particular type per year are to a reasonable approximation normally distributed with a mean of $150 million and a standard deviation of $50 million. (Assume that the risks taken on by the insurance company are entirely nonsystematic.) The one-year risk-free rate is 5% per annum with annual compounding. Estimate the cost of the following:

a. A contract that will pay in one year's time 60% of the insurance company's costs on a pro rata basis

b. A contract that pays $100 million in one year's time if losses exceed $200 million.
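A sketch of one way to estimate both costs, assuming (as the "entirely nonsystematic" hint suggests) that expected payoffs can be discounted at the risk-free rate; amounts are in millions:

```python
# Estimating the two contract costs for problem 3.17.
# Losses L ~ N(mu=150, sigma=50); risk-free rate 5% with annual compounding.
from scipy.stats import norm

mu, sigma, r = 150.0, 50.0, 0.05

# (a) Pays 60% of losses in one year: expected payoff 0.6 * E[L],
#     discounted one year at the risk-free rate.
cost_a = 0.6 * mu / (1 + r)
print(f"(a) {cost_a:.2f} million")   # ~85.71

# (b) Pays 100 if losses exceed 200: expected payoff 100 * P(L > 200).
p_exceed = 1 - norm.cdf((200 - mu) / sigma)   # = 1 - Phi(1) ~ 0.1587
cost_b = 100 * p_exceed / (1 + r)
print(f"(b) {cost_b:.2f} million")   # ~15.11
```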
Chapter 1 - Review - Exercises: 69

$x=11$ and $x=3$

Work Step by Step

$|x-7|=4$

By the definition of absolute value, solving this equation is equivalent to solving two separate equations, which are: $x-7=4$ and $x-7=-4$

Solve both equations for $x$:

$x-7=4$

$x=4+7$

$x=11$

$x-7=-4$

$x=-4+7$

$x=3$
## anonymous 4 years ago Number 8 please

2. saifoo.khan: Perimeter = outer boundary.

3. anonymous: yes, I know that, but how would it be including the circular part?

4. saifoo.khan: Circumference of a semicircle = $$\Huge \pi*r$$

5. saifoo.khan: So, 3.142 * 2

6. anonymous: so that plus the perimeter of the square should give me the answer, right?

7. saifoo.khan: Right.

8. anonymous: OK thank you
## Simulating CRT persistence?

### #1 magicstix — Posted 17 November 2012 - 07:36 PM

Hi all,
I want to simulate CRT persistence in a render-to-texture effect. Essentially I'm looking to simulate an old CRT screen, like an analog oscilloscope or a radar screen.
If I were using OpenGL, I figure the best way to do this would be to use an accumulation buffer, but DirectX lacks such a capability. So then, what would be the best way to achieve this effect with hardware acceleration in D3D11?

### #2 Hodgman — Posted 17 November 2012 - 08:24 PM

You can make an "accumulation buffer" just by creating a new render target (texture) and accumulating values into it.
e.g. to keep 10% of the previous frame around (and 1% of the frame before that, and 0.1% of the frame before that...)
Render scene to target #1.
Blend target #1 into target #2 with 90% alpha.
Display target #2 to screen.

### #3 magicstix — Posted 22 November 2012 - 10:30 PM

Is there a way to blend the two targets in a blit-style approach? The only way I know to do it would require me to render two quads, one into the other, and I assume that's not best practice.

### #4 Hodgman — Posted 23 November 2012 - 07:55 AM

You only need one quad -- bind the "bottom" layer as the current render-target, then draw a quad textured with the "top" layer.
Rendering quads is indeed the standard way to do it - it's what the GPUs are designed to be good at. Most specialized 2D operations have been thrown out of the hardware these days.
Actually, it's often done with a single triangle that's large enough to just cover the screen, e.g. if the screen is the box:

    |\
    | \
    |__\
    |  |\
    |__|_\

### #5 magicstix — Posted 23 November 2012 - 02:04 PM

So basically it's just like rendering to the backbuffer without clearing it between frames?

### #6 magicstix — Posted 02 December 2012 - 04:57 PM

I can't quite get my blending to work right on this. The image gives a nice trail, but never quite fades out completely:
I have my blending set up as follows:

    rtbd.BlendEnable = true;
    rtbd.SrcBlend = D3D11_BLEND_SRC_ALPHA;
    rtbd.DestBlend = D3D11_BLEND_SRC_ALPHA;
    rtbd.SrcBlendAlpha = D3D11_BLEND_ONE;
    rtbd.DestBlendAlpha = D3D11_BLEND_ONE;

I've tried other blend settings but this is the only one that gives a trail. Others will remove the trail completely and leave me with just the dot.
I'm not clearing the 2nd render target between frames (which in this case happens to be the back buffer) but I am clearing the first RTV between frames (the texture for the screen-sized quad). The dot itself is rendered as a small quad with exponential alpha fall-off from the center.
Any ideas on what I'm doing wrong?

### #7 Such1 — Posted 02 December 2012 - 05:33 PM

I think you are not clearing the buffers after you used them.

### #8 magicstix — Posted 02 December 2012 - 05:51 PM

> I think you are not clearing the buffers after you used them.

Like I said in the post, I'm not clearing the back buffer. This is intended because it gives the accumulated trail in the first place. The problem is the trail never reaches zero.
### #9 Such1 — Posted 02 December 2012 - 08:33 PM

You have 2 back buffers; you should do something like this:

    clean both buffers
    loop:
        render buffer1 on buffer2 with 90%
        clean buffer1
        render what you want on buffer2
        switch places between buffer1 and buffer2
    your image is now on buffer1

### #10 CryZe — Posted 03 December 2012 - 02:04 AM

Do what Such1 said. Also your blend state description should look like this:

    rtbd.BlendEnable = true;
    rtbd.SrcBlend = D3D11_BLEND_SRC_ALPHA;
    rtbd.DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
    rtbd.SrcBlendAlpha = D3D11_BLEND_ONE;
    rtbd.DestBlendAlpha = D3D11_BLEND_ZERO;

Edited by CryZe, 03 December 2012 - 02:05 AM.

### #11 unbird — Posted 03 December 2012 - 02:59 PM

This might actually be a precision problem. Are you using low-color-resolution rendertargets/backbuffer/textures (8 bit per channel)?

### #12 magicstix — Posted 03 December 2012 - 05:22 PM

> This might actually be a precision problem. Are you using low-color-resolution rendertargets/backbuffer/textures (8 bit per channel)?

I'm using 32-bit color for the backbuffer (R8G8B8A8) but 32-bit float for the texture render target. I didn't know your backbuffer could go higher than 32-bit (8 bit per channel) color... When I try R32G32B32A32_FLOAT for the back buffer I get a failure in trying to set up the swap chain.
Maybe I need to accumulate in a second texture render target instead of the back buffer?
-- Edit --
I forgot to mention I've changed my blending a bit. I'm using a blend factor now instead of a straight alpha blend, but I'm still having the same effect of not getting it to fade completely to zero. Here are my current settings:

    rtbd.BlendEnable = true;
    rtbd.SrcBlend = D3D11_BLEND_SRC_COLOR;
    rtbd.DestBlend = D3D11_BLEND_BLEND_FACTOR;
    rtbd.SrcBlendAlpha = D3D11_BLEND_ONE;
    rtbd.DestBlendAlpha = D3D11_BLEND_ONE;
    /* .... */
    float blendFactors[] = {.99, .97, .9, 0};
    g_pImmediateContext->OMSetBlendState(g_pTexBlendState, blendFactors, 0xFFFFFFFF);

If I understand this correctly, it should eventually fade to completely black, since the blend factor will make it slightly darker every frame, yet I'm still left with the not-quite-black trail.

Edited by magicstix, 03 December 2012 - 05:42 PM.

### #13 Such1 — Posted 03 December 2012 - 07:45 PM

Why do you have this?

    float blendFactors[] = {.99, .97, .9, 0};

Shouldn't it be something like:

    float blendFactors[] = {.9, .9, .9, .9};

And no, it will never fade completely (theoretically), but it should get really close.

### #14 Hodgman — Posted 03 December 2012 - 09:06 PM

Did you try CryZe's blend mode, AKA "alpha blending"?

> it will never fade completely (theoretically), but it should get really close.

You've got to keep the 8-bit quantization in mind with regards to this. If the background is 1/255, then when you multiply by 0.99, you still end up with 1/255 -- e.g. intOutput = round( 255 * ((intInput/255)*0.99) )
Instead of directly blending the previous contents and the current image, there's other approaches you could try. e.g. you could render the previous contents into a new buffer using a shader that subtracts a value from it, and then add the current image into that buffer. This way you'll definitely reach zero, even in theory.

### #15 magicstix — Posted 03 December 2012 - 10:52 PM

> Did you try CryZe's blend mode, AKA "alpha blending"?
> > it will never fade completely (theoretically), but it should get really close.
>
> You've got to keep the 8-bit quantization in mind with regards to this. If the background is 1/255, then when you multiply by 0.99, you still end up with 1/255 -- e.g. intOutput = round( 255 * ((intInput/255)*0.99) )
> Instead of directly blending the previous contents and the current image, there's other approaches you could try. e.g. you could render the previous contents into a new buffer using a shader that subtracts a value from it, and then add the current image into that buffer. This way you'll definitely reach zero, even in theory.

Yes, I tried CryZe's recommendation; however, it didn't look right either. I like how color blending looks over pure alpha better anyway, since I can fade the individual channels separately and get a "warmer" looking fade that looks even more like a CRT.
I see your point about the dynamic range, and I agree that subtracting would be best, except when you subtract 1 from 0 you still clamp at zero, so the accumulation buffer's dark bits would block out where the "new" accumulated yellow bits should go.
I think I'll try and get around the dynamic range issue by rendering into a second texture, one that's 32-bit float, instead of using the backbuffer. This is how it'd be used in practice anyway, so using the backbuffer for this test is probably not a real representation of the technique. Hopefully the greater dynamic range will let the accumulation eventually settle on zero.
Here's what I mean by the "warmer" look of using color blending instead of alpha; it looks a lot more phosphor-like:
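Since the thread never shows the subtract-based fade that Hodgman describes, here is a minimal HLSL-style sketch of that pass; the resource and constant names (prevAccum, linearSamp, gDecay) are illustrative, not from the thread:

```hlsl
// Fade pass: sample last frame's accumulation texture and subtract a fixed
// amount, so the trail reaches exactly zero despite 8-bit quantization.
// The current frame's "dot" is then rendered additively on top of this.
Texture2D    prevAccum  : register(t0);
SamplerState linearSamp : register(s0);

cbuffer FadeParams : register(b0)
{
    float3 gDecay; // per-channel subtraction; unequal channels give the
                   // "warmer", phosphor-like tail discussed above
};

float4 FadePS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 prev = prevAccum.Sample(linearSamp, uv).rgb;
    // Linear subtraction (not multiplication) guarantees the value hits 0.
    return float4(max(prev - gDecay, 0.0), 1.0);
}
```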
# Tag Info

12

In addition to the performance problems poncho already mentioned when using RSA signatures without hashing, I just want to add on the security warning of poncho: Reordering. If you have a message $m>N$ with $N$ being the RSA modulus, then you have to perform at least 2 RSA signatures as $m$ no longer fits into $Z_N$. Let us assume that it requires ...

9

The general scheme is called Three-pass protocol and works for all commutative ciphers. It is secure for some of them, but xor (and modular addition) are insecure choices. Your scheme:

A->B: $c_1 = m \oplus a$
B->A: $c_2 = c_1 \oplus b$
A->B: $c_3 = c_2 \oplus a$
B computes $m = c_3 \oplus b$

an attacker sees all of $c_1$, $c_2$ and $c_3$. So they can ...

8

There are a couple of options for protocol analysis tools. (I don't know any established tool for their design - as said by someone else, designing your own protocols is not really recommended.) If you are looking for formal-methods-based, symbolic tools, some well-known tools that have been applied to many protocols are ProVerif and Scyther. Given that you ...

7

Well, one reason to hash the data before signing it is because RSA can handle only so much data; we might want to sign messages longer than that. For example, suppose we are using a 2k RSA key; that means that the RSA operation can handle messages up to 2047 bits, or 255 bytes. We often want to sign messages longer than 255 bytes. By hashing the message ...

6

The fact that a given cipher has a key length of 296 bits doesn't mean at all that it provides 296 bits of security or even that a brute force attack would take $2^{296}$ steps. The problem of a mono-alphabetic substitution cipher is the ridiculously small block size (in this case, barely $\log 64 = 6$ bits). If absolutely nothing about the plaintext is ...

5

For the moment assume $g$ is a secret (uniformly random) generator, but that $p$ may be known to the adversary. Then given only $g^a, g^b$, the Diffie-Hellman key $g^{ab}$ is information-theoretically uniform (up to small statistical error), i.e., it cannot even be found by brute force because the adversary does not have enough information to determine it. ...

5

I think it is still possible to use UC in this case. Recall the setup for the UC framework. We have an ideal world and a real world. There are parties $P_1,\dots,P_n$ in each world and an environment $\mathcal{Z}$ in each. In the real world we have the adversary $\mathcal{A}$ while in the ideal world, we have an ideal functionality $\mathcal{F}$ and a ...

5

Yes, there are several ways in which Mallory could pretend to be Amy. One obvious way, which doesn't even involve Amy herself in any way, would be for Mallory to perform steps 1 and 2 of the protocol normally, as if he were Amy. Then, given Betty's nonce $n_b$, Mallory can start a second, parallel instance of the protocol, again pretending to be Amy, and ...

5

One observation is that if we modify the problem so that $M, A, B$ are random invertible matrices, then it is easy to prove the security of the system. In fact, we can prove that the system is informationally secure; that is, for any observed $C_1, C_2$ pair, for any possible value of $K$, there is a unique set of values of $A, B, M$ that yield that $K$ ...

5

We can attack the MAC defined by: MAC(k,m)=MD5(m||k), in a chosen-messages setup, basically because MD5's collision-resistance is broken.
The adversary chooses m and m' of the same length $b\ge64$ bytes, differing only in their first $\lfloor b/64\rfloor$ 64-byte blocks, such that there is a collision after hashing these blocks of m and m'. It follows that ...

4

To answer this question, we must have a look at how TLS/SSL works. I guess you know that the aim of TLS/SSL is to authenticate communicating parties before setting up an encrypted connection through which application data will flow. And as you may already know, an SSL handshake/session will use asymmetric crypto for authentication and session setup and ...

4

If you use public key crypto in the correct way, then every user has its own private key and corresponding public key (included in the certificate) and the keys of users are not related. Consequently, compromising the private key of one user does not affect any of the other users. So in the case of compromise of the private key of one user the remaining ...

4

Am I going to regret posting this? There seems to be enough non-classified information available about GPS to answer this question. I see 3 reasons why P(Y) encryption is different and less likely to be hacked than game console encryption: Hardware containing the GPS decryption key is more difficult to obtain than hardware containing the game console ...

4

Look up the words sound/complete from logic. Complete roughly means that a method can solve every instance. Sound roughly means that the answer it gives is correct. For example, assume that we have a program that's supposed to tell when an element belongs to a set. A sound program will only answer "yes" when the element actually belongs to the set. An ...

3

I assume that Alice is capable of accepting a connection while negotiating another, and let $A_2$ and $A_1$ denote her two roles.

$A_1 \to M$ : Alice, $nonce_1$
$M \to A_2$ : Bob, $nonce_1$
$A_2 \to M$ : $nonce_2$, $E_{k_{AB}}(nonce_1||k_2)$
$M \to A_1$ : $nonce_2$, ...

3

If we assume that $E$ is just semantically secure, without providing authenticity and integrity of the encrypted message, then this scheme has a huge drawback. It would be possible for an attacker to pose himself as either A or B, or to alter any message sent from A to B. So without authenticated encryption, this scheme may protect against eavesdropping, ...

3

An implementation should generate the IV from any cryptographically secure PRNG. TLS 1.1 further details the possible ways to do that: The IV can be obtained from a PRNG. A random string $r$ can be generated from a PRNG, and added to the plaintext to encrypt where the IV should go; then the whole lot is encrypted with either a fixed IV, or even the last ...

3

Well, it has the obvious problem that if the UA has both $d_1H(r)^{k_1}$ (from the party) and $H(r)^{S-k_1}$ (from the UA), it can compute $d_1$ directly.

3

Yes, because Mallory can use Amy and Betty to get any encrypted nonce; Amy and Betty are oracles for Mallory. She just has to send the nonce she has to encrypt to either one of them and they perform the task for her (in another "authentication attempt", using steps 1 & 2). Usually you protect against this kind of situation by performing an encryption ...

3

Encrypting the AES key does not actually make a brute force search any harder: an attacker doesn't need to know the encrypted key to decode messages, they only need to know the actual AES key. Thus, the attacker only(!)
needs to search the 256 bit AES keyspace, not the roughly 296+256 = 552 bit encrypted keyspace. Besides, even if the attacker did try an ...

3

In general (without talking about MD5): Suppose our hash function $H$ is a Merkle–Damgård construction using a Davies-Meyer compression function $h(H_{i-1},m_i)=E_{m_i}(H_{i-1})\oplus H_{i-1}$. Since the compression function is public, everybody is able to compute the input to the final round of the MD-hash. In addition, if you know the input to the final round ...

2

I personally recommend the CryptoVerif in http://cryptoverif.inria.fr/, and the Scyther in http://people.inf.ethz.ch/cremersc/tools/index.html.

2

I don't think the approach you sketched helps very much. If the server is compromised, the attacker can pretty easily modify the server-side software to log and record all the cryptographic keys, and then you haven't gained anything. Therefore, I don't think the approach you sketch is likely to be a great way to spend your limited software development ...

2

In my experience the persons doing the standardization may not know about formal methods in the first place. And even if a formal method was used, they would not know how to assess it. Note that whatever mathematical method is applied, the security of a protocol is still dependent on how the domain was modelled. If the model is even slightly incorrect, a ...

2

I'll assume the obvious: Alice checks $nonce_A$ deciphered from data received at step 2 before proceeding to step 3, and Bob checks $nonce_B$ deciphered from data received at step 3 before proceeding to step 4. Including when $E$ is authenticated encryption (as stated in a comment to the question), and we suppose the origin and step number is inserted in ...

2

If the attacker M is impersonating both A and S, then he obviously doesn't have to bother sending the third message in the protocol to himself. Thus, the protocol reduces to:

M(A) → B: A, Na
B → M(S): B, {A, Na, Tb}Kbs, Nb
M(A) → B: {A, Kab, Tb}Kbs, {Nb}Kab

where M(A) and M(S) denote M impersonating A and S respectively. By itself, this is not a ...

2

From my understanding, this protocol makes use of a trusted third party in order for A and B to exchange a symmetric key, $K_{AB}$. For the protocol to work, it is assumed that both A and B must share a master key, $K_{AS}$ and $K_{BS}$ respectively, with the trusted S, and A wants to communicate with B but they have no shared secret. Since there is no ...

2

Here is an attack that I think will, with excellent odds, allow certain determination of which value is on the ticket of colluding players; and consequently give an advantage on guessing the tickets of these colluding players, and a (typically lesser) advantage on guessing the tickets of honest players. I'm assuming: The adversary knows the value on every ...

2

Fairness is, loosely speaking, the property of secure protocols that guarantees that either all honest parties will receive their output or no party will receive output. We know that this property can not be achieved for all functionalities unless a majority of parties are honest.* As I recall, your question is a classic example of fairness not being ...

2

Message sequencing AND hash-tabling for a trail of backward messages. The loss of a single message is not a disaster, actually. To be not over-paranoid, implement "resend request" in your protocol. If it works and hashes are matched - it can be just a communication error. But if it fails - a line should be dropped immediately. Try to use Tor by the way, and ...
# A clock takes 8 minutes for 1 round; how many rounds does it take in 18 minutes?

Feb 8, 2018

$2\frac{1}{4}$ rounds or $2.25$ rounds

#### Explanation:

In 8 minutes -> 1 round
In 18 minutes -> (18/8) * 1 rounds

$\implies \frac{18}{8} = \frac{3 \cdot 3 \cdot \cancel{2}}{2 \cdot 2 \cdot \cancel{2}} = \frac{9}{4} = 2 \left(\frac{1}{4}\right) \text{ rounds}$

$\implies 2 \left(\frac{1 \cdot {\cancel{100}}^{25}}{\cancel{4} \cdot 100}\right) = 2 \left(\frac{25}{100}\right) = 2.25 \text{ rounds}$

Kashish · Feb 8, 2018

2.25 rounds

#### Explanation:

This question can be easily solved by applying the unitary method.

No. of rounds made by the clock in 8 minutes = $1$
No. of rounds made by the clock in 1 minute = $\frac{1}{8}$
No. of rounds made by the clock in 18 minutes = $\frac{1}{8}$ x $18$ = 2.25
## 502 – Propositional logic (4) September 14, 2009 11. Completeness We now want to show that whenever ${S\models A,}$ then also ${S\vdash A.}$ Combined with the soundness Theorem 22, this shows that the notions of provable and true coincide for propositional logic, just as they did for the tree system. The examples above should hint at how flexible and useful this result actually is. This will be even more evident for first order (predicate) logic. Theorem 26 (Completeness) For any theory ${S}$ and any formula ${A,}$ if ${S\models A,}$ then ${S\vdash A.}$
# How do I escape a backtick in Markdown?

How do I escape a backtick within a code block? This is probably a duplicate, since I'm sure it's a common concern, but I can't find a question that addresses this specifically.

How do I write `` List`1 `` with the "1" character still in the code-text format?

- with a `\`. Example: `` \` `` –  Johannes Kuhn Oct 7 '13 at 9:25
- Is there one chasing you? –  Rosinante Oct 7 '13 at 15:36

Use four spaces before your code?

    List`1

Or use double backticks, as in `` List`1 ``.

    It looks like this: ``List`1``

See http://daringfireball.net/projects/markdown/syntax, linked to from the formatting question box.

Note: Extra spacing will be necessary if you want to have a backtick at the end of your code, e.g., `` foo` ``. This will keep it from consuming the first 2 closing backticks instead of the final two closing backticks.

- I had to click edit on your post to see what you meant, because I think there is a typo. It appears that you need to delete the backtick after the 1, because as written the formatting is not correct. If you do that I will accept you. –  smartcaveman Mar 11 '11 at 14:24
- @smartcaveman. Sorry and fixed. I misunderstood you as wanting List'1' for some reason. –  Brian Mar 11 '11 at 14:30
- The double backtick technique doesn't seem to work on GitHub. –  Max Howell Jul 30 '12 at 12:43
- @Max: It does now. –  Allon Guralnek Aug 19 '12 at 13:28
- How do you have two consecutive backticks in inline code? –  asmeurer Jun 16 '13 at 16:44
- @asmeurer: Wrap your inline code with triple backticks. For triple or higher backticks, you can wrap your inline code with double backticks, rather than quadruple backticks (unless you need both triple and double backticks at different places). –  Brian Jun 16 '13 at 19:09
- As mentioned in the other answers to the question, you can simply escape backticks with a backslash `\` for inline formatting. –  Cupcake Jun 5 '14 at 14:55

For GitHub, like for displaying a MySQL `table_name` in regular text, use `` \` `` (backslash backtick).

For showing backticks inside inline code blocks, as in `` `table_name` ``, use double backticks with extra spaces around the inner single backticks.

To show the previous example explanation itself in an inline code block, i.e. ``` `` `table_name` `` ```, surround the whole thing in three backticks with extra spaces.

- Thanks for pointing out that extra spaces are required! –  CoDEmanX Aug 28 '14 at 20:40
- And that continues for any number of backticks. If you are writing about how to use Prism syntax highlighting and want to display three backticks, you need to "escape" them with four backticks and a space. (i.e. ```` ```language_javascript ````) –  Jedidja Mar 3 at 18:53

You can use: `` List`1 ``

When inline, it will display as: List`1

### Markdown provides backslash escapes for the following characters:

    \   backslash
    `   backtick
    *   asterisk
    _   underscore
    {}  curly braces
    []  square brackets
    ()  parentheses
    #   hash mark
    +   plus sign
    -   minus sign (hyphen)
    .   dot
    !   exclamation mark

For example, this:

    ## \\ \` \* \_ \{ \} \[ \] \( \) \# \+ \- \. \!

returns:

## \ ` * _ { } [ ] ( ) # + - . !
# What is the proper test to use for this study?

It's been too long since I took a stats course, and I'm struggling to figure out what the proper test is for this experiment I'm designing.

• There are two conditions: A and B
• There is one continuous dependent variable that is measured as the outcome of each trial (each trial is of condition A or condition B)
• There are approx. 30 subjects
• Each subject will do 5 trials with condition A, and 5 trials with condition B
• The order in which the subject does the trials is randomized
• I'm trying to show that the value of the dependent variable is significantly higher with condition A than it is with condition B

The main issue for me is trying to figure out how to handle each subject doing each condition multiple times. Would this be some sort of one-way ANOVA with blocking by subject? Or maybe repeated measures within subjects? Any tips would be great!

Secondary question: for whatever the correct answer to the above is, what would be the closest equivalent non-parametric test?

• Could you please add some information about the number of participants. Is "many" 20, 50, 100, or even more? This will impact the methods available to you. – Marcus Morrisey Oct 8 '15 at 21:41
• @MarcusMorrisey Sorry about that, OP is updated (roughly 30 subjects) – Jordan Oct 8 '15 at 21:51

lmer(dependentVariable ~ FactorAB + (1|participant), data=yourData)
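A minimal R sketch expanding on the one-line mixed-model answer above. The data frame and column names are taken from that answer and are assumptions about how the data is laid out; the paired Wilcoxon test on per-subject means is one common (not the only) nonparametric stand-in for the secondary question:

```r
# Assumes a data frame 'yourData' with columns:
#   dependentVariable (numeric), FactorAB (factor: "A"/"B"), participant (factor)
library(lme4)

# Mixed-effects model: fixed effect of condition, random intercept per subject,
# which handles the 5 repeated trials per condition within each subject.
fit <- lmer(dependentVariable ~ FactorAB + (1 | participant), data = yourData)
summary(fit)

# Rough nonparametric analogue: collapse the 5 trials per condition to a
# per-subject mean, then run a paired Wilcoxon signed-rank test.
means <- aggregate(dependentVariable ~ participant + FactorAB, yourData, mean)
wide  <- reshape(means, idvar = "participant", timevar = "FactorAB",
                 direction = "wide")
wilcox.test(wide$dependentVariable.A, wide$dependentVariable.B, paired = TRUE)
```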
## Thursday, June 26, 2014 ... /////

### Should BICEP2, Higgs have crushed the Universe?

Only if you believe that there can't be any saviors

Yo Yo and other readers were intrigued by the following cool yet slightly misleading article in the Daily Mail:

Big Bang controversy grows: Study claims universe would have collapsed 'a second after it formed' if Bicep2 results were true

Lots of science media are combining the July 2012 Higgs discovery and the March 2014 BICEP2 discovery in this apocalyptic way, although some of them chose a more sensible – more correct and less catastrophic – title (I mean and praise the titles referring to "new physics").

The articles were sparked by the following paper

Electroweak Vacuum Stability in light of BICEP2 (arXiv)

by Malcolm Fairbairn and Robert Hogan from King's College London that was published in PRL one month ago (which is why the explosion of hype right now seems to be a bit late from any point of view).

To make the story short, if both the discovery of the $125\GeV$ Higgs boson and the BICEP2 discovery of the primordial gravitational waves are valid, the Universe should have decayed – fled into an increasingly unlivable state incompatible with the particles as we know them – just a moment after the Big Bang. The following 13.8 billion years should have been impossible.

Some unusually attentive TRF readers may have noticed that the Universe hasn't collapsed yet, however. So there must be a catch, right?

Indeed, there is a catch. A more accurate version of the sentence above reads as follows:

To make the story short, if both the discovery of the $125\GeV$ Higgs boson and the BICEP2 discovery of the primordial gravitational waves are valid, and if the Standard Model is everything one needs to describe all particle physics up to the huge inflation scale, the Universe should have decayed – fled into an increasingly unlivable state incompatible with the particles as we know them – just a moment after the Big Bang. The following 13.8 billion years should have been impossible.

For your convenience, let me point out that the words "and if the Standard Model is everything one needs to describe all particle physics up to the huge inflation scale" were added. Well, they make a difference, indeed! ;-)

The reason why they make a difference is that particle physics is almost certainly not described by the Standard Model that we know up to the Planck scale, so one of the assumptions (the most carefully hidden one but the least likely one!) of the catastrophic prediction almost certainly doesn't hold, which makes the prediction somewhat irrelevant and unrealistic. After all, the very inflaton field that is needed for the cosmic inflation (and that is needed for an explanation of the BICEP2 discovery) is a field outside the Standard Model of particle physics. (The "Higgs as inflaton" theories are interesting but let me neglect them here.)

The minimum you have to assume to incorporate both discoveries is the Standard Model plus inflaton package, but even this "slightly enriched Standard Model" is very unnatural. It's much more sensible to think that the inflaton comes in a package with other fields and particles that justify the inflaton's existence, e.g. in the Grand Unified Theory. The particle accelerators have only seen the Standard Model particles so far, but the colliding particles' energies would have to be increased 1,000,000,000,000,000 (quadrillion, although it is "just" a "biliarda" in Czech) times to reach the inflation scale.
That's a lot and the assumption that they would discover nothing new in this broad range is a bit contrived.

The basic logic why the Universe should have collapsed is based on the usual ideas about the Higgs potential. Note that the Higgs boson is a single "brick" of a Higgs wave, much like the photon is a single "brick" of an electromagnetic wave. The Higgs wave is a wave on the Higgs field, much like the electromagnetic wave is a wave that makes the values of the electromagnetic field fluctuate. And the Higgs field is affected by the Higgs potential, a potential energy density that makes Nature work hard to return the Higgs field at each point of space near the "ring" where the Higgs potential is minimized:

In other words, the marble simply wants to roll down to the valley, in a random direction. It has to randomly choose the direction because they're the same a priori, and this choice is what "spontaneously breaks the symmetry". This spontaneous symmetry breaking is also what gives mass to all the massive particles, and so on. This marble-in-a-hat explanation is a more accurate version of the explanation "why the God particle gives masses to the souls" than the prevailing "explanations" involving swimming in the honey and Margaret Thatcher surrounded by journalists.

This graph of the potential energy is known as the champagne bottle bottom potential in the (capitalist) first world, as the Landau haunches in the (socialist) second world, and as the Mexican hat potential in the (poor) third world.

A problem with this simple picture of the $V(h)=ah^4-bh^2$ function is that it is approximate – an effective potential for low enough energies – and the shape of this Mexican hat is changing if you focus on processes with higher characteristic energies. This additional, slow energy-dependence is known as the "renormalization group running" and I will avoid explanations what it means in this mostly non-technical blog post. Those who know quantum field theory have surely heard about it.

The big insight is that all field theories are just "effective" and they work well for some "approximate energy scale" only. To derive the effective theory relevant for a lower energy scale, one has to "run" the effective theory that was designed for a higher energy scale (or to "integrate out" the high-energy degrees of freedom in this high-scale theory) and to adjust the values of parameters in the action according to a deducible algorithm. If done properly, both effective theories will agree concerning predictions of all processes at even lower energies.

As the result of the running, the picture relevant in the inflationary era – when the typical energies and energy densities (and temperatures) were much higher than today – may look different than the picture above. In fact, it's calculable that the renormalization group running implies that the nice circle (bottom of the Mexican hat on the picture above) near the center (but away from the center) is no longer the global minimum of the exact function. Instead, there is a new minimum that is very far from the axis of the Mexican hat – either finitely far or infinitely far. Nature loves to save energy so it commands the Higgs marble to get to this new, better minimum, and the Universe over there is completely different than the Universe we know. It doesn't allow the existence of the light elementary particles we need for life, among other "details".
This prediction of a crippled world is clearly wrong so there must be something wrong with the assumptions that lead to this pessimistic prophecy. I have discussed this Higgs instability many times on this blog, e.g. in

- Why a $125\GeV$ Higgs boson isn't quite compatible with the Standard Model
- Higgs: living near the cliff of instability
- Implications of a $125\GeV$ Higgs for SUSY
- Why the Standard Model isn't the whole story

My view is that the instability is always an inconsistency and even the "not so obvious" version of the instability called "metastability" is an inconsistency that cannot be tolerated in a consistent theory because some very early cosmological processes would have probed this instability, anyway. They would turn it into the full-fledged instability.

Note that so far, we have only talked about the Higgs field. Why does it have anything to do with the BICEP2 discovery? Well, some people, especially phenomenologists and those who are not formal theorists – people including Fairbairn and Hogan – apparently tend to treat "metastability" as a tolerable disease because it seems that we may survive in our vacuum for quite a long time even if there is a much better minimum elsewhere.

However, BICEP2 – assuming it is right – shows that during inflation, the energy density was huge. It had to be huge because this huge energy density was needed to produce the gravitational waves, roughly speaking. Because the energy density was huge, the Higgs field would be "kicked" from the convenient local minimum and "forced" to probe the apocalyptic better minima that are much further from the axis of the Mexican hat. The huge energy density that existed during the cosmic inflation, according to BICEP2, would translate any potential (meta)stability of the Higgs field to a full-fledged apocalypse.

From my viewpoint, inflation is just a very likely epoch in the childhood of our Cosmos. But even if you were imagining no inflation, there has been a very early epoch in which the energy densities simply had to be huge, so the existence of much lower-energy states always seems like a problem to me. At any rate, with the high-energy-density cosmic inflation indicated by BICEP2, it's a problem even according to people like Hogan et al.

I have actually discussed the Higgs+BICEP2 combination right after the BICEP2 announcement in March, e.g. in

- BICEP2: some winners and losers

The Standard Model was identified as the #2 loser of the BICEP2 discovery – and that portion of my article is exactly the same thing that's being hyped as a cataclysm by the new Daily Mail article. I have referred to another article, in PLB, by Archil Kobakhidze and Alexander Spencer-Smith from January 2013, which is pretty much equivalent to the article by Fairbairn and Hogan in the more prestigious PRL journal that ignited the recent "the universe should have collapsed" wave of hype. Needless to say, there is nothing really new going on here.

OK, assume that both the Higgs discovery (sure!) and the BICEP2 discovery are real, and these arguments about the instability are right. Given the fact that the Universe is still around, what does it mean?

It means that the Standard Model isn't the whole story. I mentioned the "renormalization group running" that, among other things, causes additional changes to the Mexican hat potential for the Higgs field as a function of energy.
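For readers who want at least a schematic formula: in a common convention where $V \supset \lambda |H|^4$, the dominant one-loop terms of this running are (this is a sketch of the standard textbook expression, not the full NNLO analysis of Degrassi et al. 2012)

$$16\pi^2\,\frac{d\lambda}{d\ln\mu}\;\approx\; 24\lambda^2 \;+\; 12\lambda y_t^2 \;-\; 6y_t^4 \;-\; 3\lambda\left(3g^2+g'^2\right)\;+\;\frac{3}{8}\left[2g^4+\left(g^2+g'^2\right)^2\right].$$

The $-6y_t^4$ piece from top-quark loops is the villain: for the measured Higgs and top masses it drags $\lambda$ negative somewhere around $10^{10}$–$10^{11}\,\text{GeV}$, which is exactly the instability discussed above. Superpartner loops enter with the opposite sign, which is the quantitative origin of the "saviors" discussed below.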
An important fact is that this Mexican hat is being changed in a way that depends on the composition and properties of all particles in your theory of particle physics, especially those that strongly interact with the Higgs – because we're looking at their effects on the Higgs potential, and some interactions of those other particles with the Higgs (some "Feynman vertices") are needed for them to cause such a "renormalization group running" change of the Mexican hat.

Some hypothetical particles apparently have the capacity to change the character of the change of the Mexican hat potential for the Higgs field as the function of energy. If you want to have a good enough mathematical model, imagine that the exact potential also has the term $ch^6$ and we need $c$ to stay positive for the potential to be bounded from below. However, the Standard Model is trying to send $c(E)$ to $c(E)\lt 0$ as we are increasing $E$ from the LHC scale to the inflation scale; additional particles and effects – saviors – are needed to slow down this decrease of $c$ and to protect the positivity of $c$ – well, the positivity of the quartic coefficient itself is threatened, too.

Which particles are the Messiahs that will save us? It may be seen that the promising enough saviors may be either bosons or fermions. If they're fermions, they must be extremely similar to the "higgsinos", spin-1/2 superpartners of the Higgs boson in supersymmetric theories. If they are bosons, they must be extremely similar to "top squarks" or "stops", the supersymmetric partners of the top quark.

In all discussions about the Higgs, the top quark is the most important quark because it's the heaviest one – and because the mass is proportional to the strength of the quark's (or lepton's) interactions with the Higgs field. The effect of other, much lighter quarks and squarks on the Higgs field is negligible in comparison. The Higgs field has strong interactions with itself (self-interactions) which, via supersymmetry, also implies strong interactions with its superpartner, the higgsino, and those interactions with the (so far unobserved) higgsinos are also important enough to modify the running of the Mexican hat potential and to avoid the catastrophe. (Numerically, the stops are probably more important for the salvation, like Jesus Christ. The higgsinos are only as important as the Holy Spirit.)

So if you want the Universe to be saved, you should pray for the coming of particular saviors which are either higgsinos and stops from a supersymmetric theory, or someone who looks almost just like them. You see that I describe the saviors in such a way that they make supersymmetry "almost inevitable". Supersymmetry isn't quite inevitable and you may imagine that the relevant new particles that save our Universe from the "collapse" are particles unrelated to supersymmetry, in a world that perhaps isn't supersymmetric. The (not quite lethal but still noticeable) problem with these non-supersymmetric explanations and non-supersymmetric particles is that they apparently need to "imitate" the stops and higgsinos from the supersymmetric theories, and there exists no other justification – a justification different from supersymmetry as a principle of Nature – that would justify the existence of particles that seem to pretend that they are supersymmetric partners even though they fundamentally aren't.
(If you are a hardcore anthropic principle believer, you may treat the ability of a new particle to "save the life" to be its only feature that matters, and from this viewpoint, you probably don't care a single bit whether the new particle follows from some sensible, justified, and pretty principles such as supersymmetry or it is a random piece of stinky trash. Clearly, I am no hardcore anthropic believer.)

Using the Jesus Christ metaphor, the simultaneous validity of the Higgs and BICEP2 discoveries along with the self-evident long survival of our Universe would imply that our Universe was saved either by Jesus Christ – by the supersymmetry and especially the two new particles it predicts (higgsinos and stops) – or by a false prophet pretending to be Jesus Christ even though he is fundamentally someone else. You are free to pick your preferred explanation of the salvation among the two. (Note that it was hard to choose the right capitalization of "He"/"he" in the case of Jesus Christ and the false prophets. There are isomorphic subtleties involving the "H"/"h" in the case of the Higgs boson and higgsinos.)

It is a question where we don't have any rock-solid proof in one way or another but I choose the real Messiah, someone who is a savior because of His or Her intrinsic properties and not because of some random adjustments – namely supersymmetry. Lots of other new physics may stabilize the Higgs potential but much of new physics is likely to be useless for the stabilization and the number of new particles in the LHC-to-inflation range that are helpful is probably rather small and SUSY is probably the main if not only thing that matters.

#### snail feedback (20) :

Hi Lubos,

A question for you regarding these vacuum instability discussions - it looks like people have calculated the effective potential (i.e. the part of the 1PI effective action that doesn't depend on field gradients) using RG methods and found that $\phi=0$ is not the global minimum for certain values of parameters. But does this really mean an instability? Aren't the bubble nucleation calculations for decay of the false vacuum done using the bare action (the one that appears in the path integral)? The 1PI effective action is what you get after you've already done the path integral completely, so it seems nonsensical to use it to talk about vacuum decay.

Dear Guest, just for potential other readers, metastability means that there is a lower global minimum, but one may only get there by tunneling - which may be potentially very unlikely, slow, and therefore "safe and tolerable".

Concerning your question, you may always do your calculation using the bare action but you must correctly incorporate all the quantum corrections - loop processes, instantons, and loop corrections on top of instantons, whatever matters. The usage of the effective actions in the calculation of the ultimate fate of the Universe - which you may interpret as an extremely low-energy question - is supposed to simplify this calculation so that some/all loop corrections are added from the beginning and the resulting instanton calculation may be done pretty much classically. One must still be careful whether a given calculation incorporates all the quantum corrections that it should.

I think that what you're imagining as the decaying instanton calculation clearly doesn't incorporate everything it should. You didn't use formulae but your words suggest that you think that such questions may be decided by a purely classical calculation. They never can.
Quantum mechanics always matters and whenever it may be approximated by classical physics for some class of questions, you must use the appropriate effective actions. I think that it's useless to spend too much time with these details because at these high energy densities and large values of fields, field theory is almost certainly at least marginally inapplicable and one needs a full calculation in quantum gravity or string theory.

"...published in PRL one month ago (which is why the explosion of hype right now seems to be a bit late from any point of view)."

I conjecture that this observed delay is explainable by the typical speed of thinking of an average science journalist or editor ... ;-P

Hi Lubos,

Indeed, the subject of vacuum destabilization by inflation is relatively old: http://arxiv.org/abs/arXiv:0710.2484 There is not much more one can say. One thing I should mention is that people usually forget about the coupling between the Higgs and inflaton. It should be there because it's renormalizable, gauge and Lorentz invariant, so by QFT rules one should include it. And it is this coupling (even if it's very small) that can solve the problems with stability in the absence of any new physics up to the string scale (search for 'metastable vacuum and inflation').

Privět! The 2007 paper indeed said enough. Hard to see what the supposed progress afterwards has been. Could you please give me specific papers discussing the Higgs-inflaton coupling?

E.g. arXiv:1210.6987 The result is that if the coupling is positive (which could be as small as $10^{-6}$ and less), the problems are solved.

Go, Lubos, go :). For me your analysis is clear: taking both as true, Higgs and BICEP2, is a new experimental observation that implies new physics over the standard model. Almost as good as finding experimentally a magnetic monopole :)

Nice review. A question for clarification. If I understand, a and b are well (?) determined at current energies. By RG can they also run with energy?
Also is there any experimental indication of non-zero c at current energies?

Hi Kashyap, one function of a, b - the vacuum expectation value 246 GeV - has been known from the W, Z masses for decades. The other independent function needed to determine both a, b has been known since the Higgs mass was measured to be 125 GeV.

The other coefficients like "c" are classically zero because the theory would be nonrenormalizable, but the exact quartic profile is just a classical approximation and quantum mechanically, there are lots of corrections to the shape. I summarized them as the sixth order term but this is not the most accurate description of the quantum corrections to the shape, e.g. some functions with logarithms in them.

Yes, all constants like a, b as well as "c" (this literal one as well as its more relevant generalizations) are RG running i.e. energy-scale-dependent.

Thanks Lubos. Then, if the change is also logarithmic, it will take a huge change in energy to make a substantial change in a, b, c. Of course in principle you are talking about 10^19 GeV or more! Is this right?

The Higgs-Inflaton coupling is discussed in the paper (and the below mentioned paper is cited)
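For readers wondering what the Higgs-inflaton coupling mentioned in these comments looks like: schematically (in my notation, not copied from the cited paper), it is the renormalizable term

$$V \;\supset\; \frac{\lambda_{h\phi}}{2}\,\phi^2 h^2 \qquad\Longrightarrow\qquad m_{h,\mathrm{eff}}^2 \,\sim\, \lambda_{h\phi}\,\langle\phi\rangle^2 .$$

Because $\langle\phi\rangle$ during inflation is huge (roughly of the Planck magnitude in large-field models), even $\lambda_{h\phi}\sim 10^{-6}$ can give the Higgs an effective mass above the inflationary Hubble rate, which pins the Higgs field near the origin and suppresses the dangerous fluctuations - the mechanism invoked in the comment above (arXiv:1210.6987).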
CSE 559A: Computer Vision Fall 2018: T-R: 11:30-1pm @ Lopata 101 Instructor: Ayan Chakrabarti (ayan@wustl.edu). Course Staff: Zhihao Xia, Charlie Wu, Han Liu November 20, 2018 # General • Problem Set 5: Deadline Extended to Dec 4th. • Recitation on Nov 30th (Friday after Thanksgiving) # Batch Normalization He et al., "Identity Mappings in Deep Residual Networks". 2016. # Regularization • Given a limited amount of training data, deep architectures will begin to overfit. • Important: Keep track of training and dev-set errors Training errors will keep going down, but dev will saturate. Make sure you don't train to a point when dev errors start going up. • So how do we prevent, or delay, overfitting so that our dev performance increases ? Solution 1: Get more data. # Regularization Data Augmentation • Think of transforms to the images that you have that would still keep them in the distribution of real images. • Typical Transforms • Scaling the image • Taking random crops • Applying Color-transformations (change brightness, hue, saturation randomly) • Horizontal Flips (but not vertical) • Rotations upto +- 5 degrees. • Are a good way of getting more training data for 'free'. • Teaches your network to be invariant to these transformations .... • Unless your output isn't. If your output is a bounding box, segmentation map, or other quantities that would change with these augmentation operations, you need to apply them to the outputs too. # Regularization Weight Decay • Add a squared or absolute value penalty on all weight values (for example, on each element of every convolutional kernel, matmul matrix) except biases. $$\sum_i w_i^2$$ or $$\sum_i |w_i|$$ • So now your effective loss is $$L' = L + \lambda \sum_i w_i^2$$ • How would you train for this ? • Let's say you use backprop to compute $$\nabla_{w_i} L$$. • What gradient would you apply to your weights ? What is $$\nabla_{w_i} L'$$ ? $\nabla L' = \nabla L + 2\lambda w_i$ • So in addition to the standard update, you will also be subtracting a scaled version of the weight itself. • What about for $$L' = L + \lambda \sum_i |w_i|$$ ? $\nabla L' = \nabla L + \lambda Sign(w_i)$ # Regularization Regularization: Dropout • Key Idea: Prevent a network from "depending" too much on the presence of a specific activation. • So, randomly drop these values during training. $$g=$$Dropout($$f$$,p): $$f$$ and $$g$$ will have the same shape. Different behavior during training and testing. • Training • For each element $$f_i$$ of $$f$$, • Set $$g_i=0$$ with probability $$p$$, and $$\frac{f_i}{(1-p)}$$ with probability $$(1-p)$$ • Testing: $$g_i = f_i$$ • Why does this make sense ? Because in expectation, our value during training and test will be the same. • Dropout is a layer. You will backpropagate through it ! How ? # Regularization Regularization: Dropout • Write the function as $$g = f \cdot \epsilon$$ • Here $$\epsilon$$ is a random array same size as $$f$$, with values 0 and $$1/(1-p)$$ with probability $$p$$ and $$(1-p)$$. • $$\cdot$$ denotes element-wise multiplication. • $$\nabla_f = \nabla_g \cdot \epsilon$$ • Even though $$\epsilon$$ is random, you must use the same $$\epsilon$$ in the backward pass that you generated for the forward pass. • Don't backpropagate to $$\epsilon$$ because it is not a function of the input. • Like RELU, but kills gradients based on an external random source---whether you dropped that activation or not in the forward pass. If you didn't, remember to multiply by the $$1/(1-p)$$. 
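A minimal NumPy sketch of the inverted dropout just specified — the forward pass scales kept activations by $1/(1-p)$, and the backward pass reuses the very same mask (the function and variable names are illustrative):

```python
import numpy as np

def dropout_forward(f, p, train=True):
    """Inverted dropout. f: activations, p: drop probability."""
    if not train:
        return f, None                    # test time: identity
    # eps is 0 with probability p, and 1/(1-p) with probability (1-p)
    eps = (np.random.rand(*f.shape) >= p) / (1.0 - p)
    return f * eps, eps                   # cache eps for the backward pass

def dropout_backward(dg, eps):
    """Backprop: multiply upstream gradient by the same mask from forward."""
    return dg * eps
```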
# Regularization

Regularization: Early Stopping

• Keep track of dev set error. Stop optimization when it starts going up.
• This is a legitimate regularization technique!
• Essentially, you are restricting your hypothesis space to functions that are reachable within $$N$$ iterations of a random initialization.

# Different Optimization Methods

• Standard SGD $w_i \leftarrow w_i - \lambda \nabla_{w_i}$
• Momentum $g_i \leftarrow \nabla_{w_i} + \gamma g_i$ $w_i \leftarrow w_i - \lambda g_i$
• But we are still applying the same learning rate for all parameters / weights.

# Different Optimization Methods

Adaptive Learning Rate Methods

Key idea: Set the learning rate for each parameter based on the magnitude of its gradients.

• AdaGrad $g^2_i \leftarrow g^2_i + (\nabla_{w_i})^2$ $w_i \leftarrow w_i - \lambda \frac{\nabla_{w_i}}{\sqrt{g^2_i+\epsilon}}$ Global learning rate divided by the square root of the accumulated sum of squared past gradients. Problem: Will always keep dropping the effective learning rate.
• RMSProp $g^2_i \leftarrow \gamma g^2_i + (1-\gamma)(\nabla_{w_i})^2$ $w_i \leftarrow w_i - \lambda \frac{\nabla_{w_i}}{\sqrt{g^2_i+\epsilon}}$

# Different Optimization Methods

Adaptive Learning Rate Methods

• Adam: RMSProp + Momentum $m_i \leftarrow \beta_1 m_i + (1-\beta_1) \nabla_{w_i}$ $v_i \leftarrow \beta_2 v_i + (1-\beta_2) (\nabla_{w_i})^2$ $w_i \leftarrow w_i - \frac{\lambda}{\sqrt{v_i}+\epsilon}m_i$
• How do you initialize $$m_i$$ and $$v_i$$? Typically as 0 and 1.
• This won't matter once the values of $$m_i, v_i$$ stabilize. But in initial iterations, they will be biased towards their initial values.

# Different Optimization Methods

Adaptive Learning Rate Methods

• Adam: RMSProp + Momentum + Bias Correction (a NumPy sketch of this update appears at the end of these notes) $m_i \leftarrow \beta_1 m_i + (1-\beta_1) \nabla_{w_i}$ $v_i \leftarrow \beta_2 v_i + (1-\beta_2) (\nabla_{w_i})^2$ $\hat{m}_i = \frac{m_i}{1-\beta_1^t}$ $\hat{v}_i = \frac{v_i}{1-\beta_2^t}$ $w_i \leftarrow w_i - \frac{\lambda}{\sqrt{\hat{v}_i}+\epsilon}\hat{m}_i$ Here, $$t$$ is the iteration number. As $$t\rightarrow \infty$$, $$1-\beta^t \rightarrow 1$$.

# Distributed Training

• Neural Network Training is Slow.
• But many operations are parallelizable. In particular, operations for different batches are independent.
• That's why GPUs are great for deep learning! But even so, you will begin to saturate the computation (or worse, memory) on a GPU.
• Solution: Break up computation across multiple GPUs.
• Two possibilities: Model Parallelism, Data Parallelism

# Distributed Training

Model Parallelism

• Less popular, doesn't help for many networks.
• Essentially, if you have two independent "paths" in your network, you can place them on different devices. And sync, when they join. Was used in the Krizhevsky et al., 2012 ImageNet paper.

# Distributed Training

Data Parallelism

• Begin with all devices having the same model weights.
• On each device, load a separate batch of data.
• Do forward-backward to compute weight gradients on each GPU with its own batch.
• Have a single device (one of the GPUs, or a CPU) recover gradients from all devices.
• Average these gradients and apply the update to the weights.
• Distribute new weights to all devices.
• Works well in practice, especially for multiple GPUs in the same machine.
• Communication overhead of transferring gradients and weights back and forth. Can be large if distributing across multiple machines.
• Approximate Distributed Training
• Let each worker keep updating its own weights independently for multiple iterations.
Then, transmit the weights back to a single device, average the weights, and sync to all devices. • Other options: quantize gradients when sending back and forth (while making sure all workers have the same models).
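A schematic sketch of one synchronous data-parallel step as described above (our illustration; `grad_fn` and the list of per-device batches are hypothetical stand-ins, and real systems use collective communication rather than a Python loop):

```python
def data_parallel_step(w, batches, grad_fn, lr):
    """One synchronous data-parallel step: each 'device' computes gradients
    on its own batch; one device averages them and applies the update."""
    grads = [grad_fn(w, b) for b in batches]   # forward-backward per device
    avg = sum(grads) / len(grads)              # gather + average on one device
    return w - lr * avg                        # new weights broadcast to all
```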
In a recent question of mine I asked whether every infinite group is (isomorphic to) the automorphism group of a graph. The finite case was done by Frucht in 1939. The first answer to this question pointed out two papers answering my original question, one by Sabidussi and one by de Groot. Reading the 3-page paper by Sabidussi I thought "Wow, these graphs are huge": Sabidussi realizes a group of size $\kappa$ as the automorphism group of a graph of size $\aleph_\kappa$. Indeed, de Groot in his paper notes that every countable group is the automorphism group of a countable graph, every group of size $\leq 2^{\aleph_0}$ is the automorphism group of a graph of size $\leq 2^{\aleph_0}$, and every group of size $\kappa$ is the automorphism group of a graph of size $\leq 2^{\kappa}$. But in general, he doesn't know how large a graph is needed to realize a given group. Has this issue been resolved? Is there a reason why for a given infinite group $G$ there shouldn't be a graph of size $|G|$ whose automorphism group is isomorphic to $G$?
# Math Help - Area of a region 1. ## Area of a region Find the exact area of the region bounded by , 2. Originally Posted by ZosoPage Find the exact area of the region bounded by , 3. Sorry again. Here's the problem without any red x's. Find the exact area of the region bounded by $y=\frac{x^3}{\sqrt{4-x^2}}, y=0, x=0, x=\sqrt{2}$ 4. Originally Posted by ZosoPage Sorry again. Here's the problem without any red x's. Find the exact area of the region bounded by $y=\frac{x^3}{\sqrt{4-x^2}}, y=0, x=0, x=\sqrt{2}$ try the substitution ... $u^2 = 4 - x^2$ you'll end up with an easier definite integral ... $\int_{\sqrt{2}}^2 4 - u^2 \, du $ 5. Originally Posted by skeeter try the substitution ... $u^2 = 4 - x^2$ you'll end up with an easier definite integral ... $\int_{\sqrt{2}}^2 4 - u^2 \, du $ I'm confused. How did you use that substitution to come up with that integral? I need to take this integral but I don't know how: $ \int_{0}^{\sqrt{2}}\frac{x^3}{\sqrt{4-x^2}}dx $ 6. Originally Posted by ZosoPage I'm confused. How did you use that substitution to come up with that integral? I need to take this integral but I don't know how: $ \int_{0}^{\sqrt{2}}\frac{x^3}{\sqrt{4-x^2}}dx $ $u^2 = 4-x^2$ $x^2 = 4-u^2$ $2x \, dx = -2u \, du$ $x \, dx = -u \, du$ $\int \frac{x^3}{\sqrt{4-x^2}} \, dx = \int \frac{x^2}{\sqrt{4-x^2}} \cdot x \, dx$ substitute ... $\int \frac{4-u^2}{u} \cdot (-u) \, du$ $-\int 4 - u^2 \, du$ lower limit ... $x = 0$ , $u = 2$ upper limit ... $x = \sqrt{2}$ , $u = \sqrt{2}$ $-\int_2^{\sqrt{2}} 4 - u^2 \, du = \int_{\sqrt{2}}^2 4 - u^2 \, du$
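(Editorial completion, not posted in the original thread: carrying out skeeter's substituted integral gives the exact area.)

$\int_{\sqrt{2}}^{2} (4 - u^2) \, du = \left[4u - \frac{u^3}{3}\right]_{\sqrt{2}}^{2} = \left(8 - \frac{8}{3}\right) - \left(4\sqrt{2} - \frac{2\sqrt{2}}{3}\right) = \frac{16 - 10\sqrt{2}}{3} \approx 0.62$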
# Gravity ~ Acceleration ~ Centrifuge & GR Ich How would you define the path lengths in this scenario? The length of the respective world lines, as calculated here: http://en.wikipedia.org/wiki/Proper_time#In_general_relativity. Their path in spacetime, not in space. One twin descends at a controlled rate deep into a gravitational well. The second twin waits one hundred years and then descends at the same controlled rate to meet his sibling, who is biologically twenty years old. The relative time difference is very physically manifested here. Just as a visualization, have a look at the drawing. It symbolizes how the same synchronous coordinate time (the angle) still means different proper time (path length), depending on the potential. ... Just as a visualization, have a look at the drawing. It symbolizes how the same synchronous coordinate time (the angle) still means different proper time (path length), depending on the potential. Nice demonstration. However we agree that when the twins get back together again in this gravitational version, they have aged differently, and for most people that would suggest gravitational time dilation has a physical "reality". No? Ich However we agree that when the twins get back together again in this gravitational version, they have aged differently and for most people that would suggest gravitational time dilation has a physical "reality". Did I dispute that? I don't know exactly what part of my answers you interpreted as saying otherwise. I'm just saying that, in a dynamic spacetime, the very notion of "gravitational time dilation" gets a bit fuzzy, so you can't use it to create 'instantaneous' relativity at a distance. That's all. Could it be that escape velocity is the essential ingredient for determining the time dilation at a given point? All we have to do now is figure out what the escape velocity on the rim of a rotating wheel is. It cost me a warning to ask that question in a subforum, but it turns out the "escape velocity" $v_e$ on the rim is equal to the tangential velocity of the rim of the wheel, and that time dilation in the gravitational context and in the rotational context can equally and equivalently be expressed by the single equation: $$\frac{\tau}{t} = \sqrt{1-\frac{v_{e}^2}{c^2}}$$ In the context of a wheel the "escape velocity" is the minimum velocity that a particle would require when launched "upward" (inward) orthogonal to the rim, up a spoke, in order to arrive at the centre of the wheel, where the "potential" in the artificial gravity field is zero. Using an analogy of how Newtonian potential is calculated in a gravitational field, the kinetic energy acquired by an object falling from zero potential (infinity in the gravity case and the centre of the wheel in the rotational case) to the point under consideration is equal to the potential energy at that point. The terminal kinetic energy of a particle falling outward on a wheel is: $$KE = \int_0^r \, {mr \omega^2} \, dr = \frac{mr^2 \omega^2}{2}= \frac{mv^2}{2}$$ Since this is the same as the kinetic energy of a particle that remains on the rim, it can be concluded that the radial "escape velocity" ($v_e$) of a location on the wheel is equal in magnitude to the tangential velocity ($v_t$) of that point relative to the non-rotating frame, and so in the case of a wheel these two expressions for the time dilation are equal: $$\sqrt{1-\frac{v_{e}^2}{c^2}} = \sqrt{1-\frac{v_{t}^2}{c^2}}$$ turin Homework Helper ... up a spoke ...
... ... the kinetic energy acquired by an object falling from zero potential ... to the point under consideration is equal to the potential energy at that point. ... $$KE = \int_0^r \, {mr \omega^2} \, dr$$ ... Thanks, kev. That seems straightforward and correct. Basically, you are using dT=dW=F.dr and F=-dV/dr, so that dT=-dV. (Note the inclusion of the minus sign, a minor detail.) However, something still bothers me: What is the gravitational analogy of the constraint force due to the spoke? Apparently, the mass on the wheel spoke only allows a 1-D gravitational equivalent. It doesn't seem to work if you allow, for instance, a tangential degree of freedom. So, this doesn't apply to, for instance, space stations that generate their artificial gravity from rotation, does it? To put it another way, there is no concept of artificial gravitational (scalar) potential energy if you allow for a tangential degree of freedom, because there would be a velocity-dependent (tangential) force (in the rotating frame). I'm not saying that any of this is wrong, I just wonder if anyone else is bothered by this. Anyway, the end result must be true, since it agrees with the more fundamental calculation based on proper time. Let's take two centrifuges. They have different length arms, and are spun such that a clock at the end of each feels 1g. The clock on the centrifuge on the longer arm will run slower, even though it feels the same g-force as the other clock. Or you could arrange it so that the speed of the ends of the arms is the same for each centrifuge. In this case both clocks will run at the same rate, even though one will feel a greater g-force than the other. Thanks Janus. Just reiterating what I think you've explained. Correct me if wrong. Are you saying that the time dilation for an object in a centrifuge is based purely on the speed it is travelling (SR) and that there is no gravitational (GR) time dilation to be counted? And so the time dilation is independent of the radius of the centrifuge? Jonathan Scott Gold Member Thanks Janus. Just reiterating what I think you've explained. Correct me if wrong. Are you saying that the time dilation for an object in a centrifuge is based purely on the speed it is travelling (SR) and that there is no gravitational (GR) time dilation to be counted? And so the time dilation is independent of the radius of the centrifuge? That's one correct way of looking at it. An alternative way of reaching the same mathematical result is to integrate the centripetal acceleration as if it were a gravitational field, from the center out to the relevant radius, giving the equivalent gravitational potential difference between those points, and hence derive the corresponding time dilation. Regardless of whether you're using the SR velocity or the equivalent gravitational potential, the time dilation does not depend on the acceleration. As Janus pointed out, you can have the same time dilation for different accelerations, or different time dilations for the same acceleration.
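As a small numerical illustration of the rim-clock formula discussed above (our sketch; the 100 m arm and 10 rad/s spin rate are made-up example values):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def rim_time_dilation(radius_m, omega_rad_s):
    """tau/t for a clock on the rim: sqrt(1 - (r*omega)^2 / c^2).
    By the thread's argument, v_e = v_t = r*omega for a rotating wheel,
    so the SR and 'potential' forms of the factor coincide."""
    v = radius_m * omega_rad_s
    return np.sqrt(1.0 - (v / C)**2)

# hypothetical example: a 100 m centrifuge arm spun at 10 rad/s
print(rim_time_dilation(100.0, 10.0))   # extremely close to 1
```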
# Precision control of double integral with $i\epsilon$ prescription I have the following double integral: $$F[\epsilon,\Omega,\sigma] = \iint_{\mathbb{R}^2}\frac{e^{-t^2/2\sigma^2}e^{-T^2/2\sigma^2}}{(t-T-i\epsilon)^2}e^{-i\Omega(t-T)}\,dt\,dT\,.$$ Now, there are tricks to solve this analytically (in terms of error functions and stuff). However, when I tried to numerically integrate this using NIntegrate, it gives very bad results, especially as $$\epsilon\to 0$$. This is true regardless of the method I tried (GlobalAdaptive, LocalAdaptive, DoubleExponential) with various control over MaxRecursion/MinRecursion and PrecisionGoals or AccuracyGoals. I have also tried to confine my analysis to the strong support of the envelope function $$\sim [-5\sigma,5\sigma]\times [-5\sigma,5\sigma]$$ and indeed the problem mainly originated from this region. I do observe, however, that MinRecursion tends to improve results, but this has very costly computational time. For example, for the simple choice $$\Omega=\sigma=1$$ and for $$\epsilon\sim 10^{-3}$$ a reasonable result can be obtained for MinRecursion$$\,=5$$, but it behaves badly for $$\epsilon=10^{-4}$$. On the other hand, once I increase MinRecursion to $$10$$, it seems to work for $$\epsilon=10^{-4}$$ but I cannot do any better. I have also made sure to give input parameters as integers, so that they are treated with infinite precision. I find it hard to believe that an innocuous-looking integral with an exact analytic solution can be so ill-behaved under double numerical integration. My questions are: (1) Is there a natural setting for which this sort of numerical integral can be dealt with consistently? I suppose the problem is due to the pole on the real line (before the $$i\epsilon$$ prescription), but being a non-expert in numerical analysis, I would like a transparent picture of what happens to this blow-up. (2) What is the origin of the blow-up? Naively the coincident limit $$t=T$$ should cause problems, but I would have thought that the pole prescription makes the coincident limit disappear. There must be something about complex analysis I am not exactly getting, but this looks quite harmless as it is written. EDIT: I have tried the "Partition" option under LocalAdaptive and it seems to provide some improvements but not very much. I would like to get a coherent larger picture of the integral, however, so the questions still stand from numerical analysis, complex analysis and other perspectives. The following is my latest code (where I kept various entries variable to allow testing various controls), and here $$a\equiv \epsilon$$, $$s\equiv \sigma$$ and $$gap \equiv \Omega$$: NIntegrate[ Exp[-t^2/(2 s^2)] Exp[-T^2/(2 s^2)] Exp[-I*gap*(t - T)]/(-I a + t-T)^2, {t, tmin, tmax}, {T, tmin, tmax}, MinRecursion -> minR, MaxRecursion -> maxR, PrecisionGoal -> prec, AccuracyGoal -> acc, Method -> {"LocalAdaptive", "Partitioning" -> {par1, par2}}] Update: I have tried one recent setting involving PrincipalValue, which essentially tries to do a principal value integral at t = T. This improves the result quite a bit even with the GlobalAdaptive scheme with minimal settings (i.e. Automatic for most other things), but it is still not working for small enough $$\epsilon$$. Also, I wonder if PrincipalValue can be used effectively when the singular "line" is not as clear as in this scenario. • For reference, please include the NIntegrate[] code you were using to evaluate your integral. – J. M.
is in limbo Mar 13 '19 at 23:04 The integrand has a near-singularity along the line T == t. This can be specified by the iterator {T, -Infinity, t, Infinity} (see second example). However, NIntegrate works more efficiently when the singularity is parallel to a coordinate axis (see first example). In the first example, we rotate the coordinates so that the near-singularity is along x = 0. In the second example, we help NIntegrate by dividing the T domain further near T == t. Block[{a = 10^-5, gap = 1, s = 1}, NIntegrate[ integrand /. {t -> (x + y)/Sqrt[2], T -> (y - x)/Sqrt[2]} // Simplify, {x, -Infinity, 0, Infinity}, {y, -Infinity, Infinity}] ] // AbsoluteTiming (* {6.4958, -0.279832 + 2.19131*10^-11 I} *) Block[{a = 10^-5, gap = 1, s = 1}, NIntegrate[ integrand, {t, -Infinity, 0, Infinity}, {T, -Infinity, t - 2 a, t, t + 2 a, Infinity}] ] // AbsoluteTiming (* {12.8288, -0.279832 + 4.12048*10^-10 I} *) Even better in this case is the fact that the variables in the integrand can be separated and the double integral factored into two single integrals: inty = E^(-(y^2/(2 s^2))) intx = integrand/inty /. {t -> (x + y)/Sqrt[2], T -> (y - x)/Sqrt[2]} // Simplify (* E^(-(y^2/(2 s^2))) E^(-I Sqrt[2] gap x - x^2/(2 s^2))/(-I a + Sqrt[2] x)^2 *) Block[{a = 10^-5, gap = 1, s = 1}, NIntegrate[intx, {x, -Infinity, 0, Infinity}] * NIntegrate[inty, {y, -Infinity, Infinity}] ] // AbsoluteTiming (* {0.039515, -0.279833 + 0. I} *) Block[{a = 10^-6, gap = 1, s = 1}, NIntegrate[intx, {x, -Infinity, 0, Infinity}] * NIntegrate[inty, {y, -Infinity, Infinity}] ] // AbsoluteTiming (* {0.038009, -0.279824 + 0. I} *) It is unlikely that the single integrals will be hard to manage. (The y integral can be done exactly.) • does that mean that if I have a more complicated denominator where I do not have the freedom to rotate it as cleanly as you did, there is no other way out? Or perhaps is there a way to deal with this via contour integral without the $i\epsilon$? – Everiana Mar 14 '19 at 12:33 • @Everiana It's hard to say in general. I think one can always make up an example that will defeat a particular numerical approach. OTOH, one may be able to discover a successful approach by further analysis of the integrand at hand. I think I've only seen contour integration applied to single-variable integrals, so I'm not sure how to answer your last question. – Michael E2 Mar 14 '19 at 15:34 • Actually I found out that for this particular case, we can actually do contour integral! I will update my answer at some point. The idea is to do contour integral of $t$ with poles $T$, and then basically integrate over $T$. – Everiana Mar 20 '19 at 2:18
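A rough cross-check of the factored form in Python/SciPy (our own sketch, not from the thread; it uses the same rotated variables and the `quad` routine, splitting real and imaginary parts — for much smaller ε the x-integral gets stiff, so tolerances or `limit` may need increasing):

```python
import numpy as np
from scipy.integrate import quad

# Test case from the thread: sigma = Omega = 1, eps = 1e-5
s, gap, a = 1.0, 1.0, 1e-5

def fx(x):
    # x-integrand after the rotation t=(x+y)/sqrt(2), T=(y-x)/sqrt(2)
    return np.exp(-1j*np.sqrt(2)*gap*x - x**2/(2*s**2)) / (np.sqrt(2)*x - 1j*a)**2

# points=[0] flags the near-singularity; finite bounds cover the Gaussian support
re = quad(lambda x: fx(x).real, -30, 30, points=[0], limit=500)[0]
im = quad(lambda x: fx(x).imag, -30, 30, points=[0], limit=500)[0]
y_int = np.sqrt(2*np.pi) * s   # exact Gaussian integral over y

F = (re + 1j*im) * y_int
print(F)   # should land near the -0.279832 reported in the answer above
```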
Question

# Prove that if $$n$$ and $$r$$ are positive integers

$${ n }^{ r }-n{ \left( n-1 \right) }^{ r }+\dfrac { n\left( n-1 \right) }{ 2! } { \left( n-2 \right) }^{ r }-\dfrac { n\left( n-1 \right) \left( n-2 \right) }{ 3! } { \left( n-3 \right) }^{ r }+\cdots$$

is equal to $$0$$ if $$r$$ is less than $$n$$, and to $$n!$$ if $$r=n$$.

Solution

## We have

$${ \left( { e }^{ x }-1 \right) }^{ n }={ \left( x+\dfrac { { x }^{ 2 } }{ 2! } +\dfrac { { x }^{ 3 } }{ 3! } +\dfrac { { x }^{ 4 } }{ 4! } +\cdots \right) }^{ n }={ x }^{ n }+$$ terms containing higher powers of $$x$$ ..... (1)

Again, by the Binomial Theorem,

$${ \left( { e }^{ x }-1 \right) }^{ n }={ e }^{ nx }-n{ e }^{ \left( n-1 \right) x }+\dfrac { n\left( n-1 \right) }{ 1\cdot 2 } { e }^{ \left( n-2 \right) x }-\cdots$$ ..... (2)

By expanding each of the terms $${ e }^{ nx },{ e }^{ \left( n-1 \right) x },\dots$$ we find that the coefficient of $${ x }^{ r }$$ in (2) is

$$\dfrac { { n }^{ r } }{ r! } -n\cdot \dfrac { { \left( n-1 \right) }^{ r } }{ r! } +\dfrac { n\left( n-1 \right) }{ 2! } \cdot \dfrac { { \left( n-2 \right) }^{ r } }{ r! } -\dfrac { n\left( n-1 \right) \left( n-2 \right) }{ 3! } \cdot \dfrac { { \left( n-3 \right) }^{ r } }{ r! } +\cdots$$

and by equating the coefficients of $${ x }^{ r }$$ in (1) and (2) the result follows.
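(A quick sanity check of the identity, added here for illustration: take $$n=2$$, where the sum is $$2^{r}-2\cdot 1^{r}+0^{r}$$.)

$$n=2:\quad 2^{r}-2\cdot 1^{r}+0^{r}=\begin{cases}2-2+0=0, & r=1<n,\\ 4-2+0=2=2!, & r=n=2.\end{cases}$$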
# Intuition for density comonad in relation to lifting problems In Emily Riehl's Categorical Homotopy Theory, there is a section on Garner's Small Object Argument which I'm trying and failing to understand. Originally I followed most of Garner's paper, using the book to try and explicitly understand the stages in the transfinite construction. I guess I'll try to sketch some details first. Garner's small object argument is a refinement of Quillen's in that it is universal and converges. The point is to construct a 'free natural weak factorization system', i.e in reflecting along a semantics functor $\mathcal G:\mathsf{nwfs}(\mathsf{C})\longrightarrow \mathsf{Cat}/\mathsf{C^2}$. This problem is simplified by factoring $\mathcal G$ in a reasonably canonical way, because abstract nonsense gives two of the three reflections needed. The final reflection is then shown to be equivalent to constructing a free monoid, which is then shown to be equivalent to the convergence of a certain transfinite construction called the 'free monoid sequence'. (Now we come to proposition 4.22 in Garner's paper.) A convergence criterion in the context of our problem is the condition the functor $R=d_0\circ F:\mathsf{C^2}\longrightarrow \mathsf{C^2}$ preserves $\lambda$-filtered colimits, where $d_0$ is the image of the simplicial $\delta_0$, sending a composite of arrows to the first one. By abstract nonsense again it suffices to prove $L=d_2\circ F$ preserves such colimits. By abstract nonsense, $L$ admits the following explicit description. First, given the object $I:\mathsf{J}\rightarrow \mathsf{C^2}$ we want to reflect, form its left Kan extension along itself $M=\operatorname{Lan}_II:\mathsf{C^2}\longrightarrow \mathsf{C^2}$. Remark 1. Remark 12.5.2 in the book says that left Kan extending a functor along itself yields (by abstract nonsense) a comonad, called the density comonad. With the usual coend formula, the component at $f$ of the counit $\epsilon :M\Rightarrow 1_{\mathsf{C^2}}$ is the arrow $$\int^{j\in \mathsf J}\mathsf{C^2}(Ij,f)\circ Ij\longmapsto f$$ which is adjoint to the identity natural transformation on $\mathsf{C^2}(I(-),f)$. This $\epsilon$ can then be factored as the composite $M\overset{\xi}{\Longrightarrow}L\overset{\Phi}{\Longrightarrow}1_{\mathsf{C^2}}$ where the components of $\xi$ are pushouts and the components of $\Phi$ are identities on its domain. By abstract nonsense, $L$ preserves any colimit $M$ does, so we need to find an ordinal $\lambda$ for $M$. The proof ends by using the smallness condition to interchange some colimits. $\blacksquare$ It looks like section 12.4 of the book explains the process a little, but not enough for me. Suppose we want to factor an arrow $f$. The author says the square (12.2.3) should be thought of as a "generic lifting problem" that tests whether or not $f$ is a fibration, which makes sense. The fill-in $\phi_f$ in (12.4.1) is referred to as a lifting function, which also makes sense. In step one we factor the above square through the pushout of its cospan. This yields an equivalent lifting problem, proving that $f\in \mathcal J^\pitchfork$ iff it has the right lifting property against $Lf$ in the canonical square (triangle) $Rf\circ Lf=f\circ 1$. We iterate this process, modding out redundancies, to ensure (using smallness) that it converges. Remark 2. If I understand correctly, Garner says $\epsilon$ is component-wise exactly the transition to the "lifting function" point of view.
I am confused about the second sentence regarding the equivalent lifting problem... I would like to understand what the density comonad is (preferably at a level of generality where I can make out the relevance to this context), why, conceptually, does it encode the passage to an equivalent (but much more canonical) lifting problem, and whether the quotienting described in section 6.5 of Garner's paper, which gives convergence, is also encoded in it. Finally, let me emphasize I did not learn Quillen's small object argument before, so answers like "this is just a modification of the standard argument" will not help me very much. I am trying to learn this "corrected" version from scratch. I have also never met density comonads or codensity monads before. • I'm having a hard time understanding exactly what you are asking. Is part of the question what density comonads have to do with lifting problems? If so, maybe it helps to say that when $\mathsf{J}$ is discrete, the component at $f$ of the counit of $M$ is exactly the generic lifting problem (12.2.3). (But I get the feeling you understood that and wanted something else?) – Omar Antolín-Camarena May 1 '16 at 18:57 • In your Remark 2, what "second sentence" are you referring to? – Omar Antolín-Camarena May 1 '16 at 18:58 • @OmarAntolín-Camarena sorry for the late reply. I would like some help in understanding what the density comonad does. Conceptually, why does its counit describe generic lifting problems, and what does it do for more interesting $\mathsf J$? What does it measure about the functor $I$? – Arrow May 3 '16 at 8:19 • If you'd like a general introduction to codensity monads, there's this: golem.ph.utexas.edu/category/2012/09/… . If you'd like an introduction to density comonads, I'm afraid I can't help :-) – Tom Leinster May 3 '16 at 19:31 • Or maybe I can. Here's how to think about the density comonad of a functor $F$: it's what the comonad induced by $F$ and its right adjoint would be if $F$ had a right adjoint - but it's defined in many situations where $F$ doesn't have a right adjoint. – Tom Leinster May 3 '16 at 19:33 Maybe it would help to have: 1. A definition of "$f \in \mathsf{J}^\pitchfork$" when $\mathsf{J}$ is a category equipped with a functor $I : \mathsf{J} \to \mathsf{C}^2$; a definition that does not mention the density comonad of $I$. 2. An argument showing that $f \in \mathsf{J}^\pitchfork$ is equivalent to having a diagonal filler for the counit $\mathrm{Lan}_II(f) \to f$. [In the case that $\mathsf{J}$ is discrete the definition in step 1 would be simply that for each $j \in \mathsf{J}$ and each square $j \to f$ there is a diagonal filler. And in step 2 you'd argue that the counit $\mathrm{Lan}_II(f) \to f$ is exactly the square (12.2.3), and that having a diagonal filler for it provides all at once the required diagonals for all the individual squares $j \to f$.] 
OK, the definition of $f \in \mathsf{J}^\pitchfork$ is that you can choose for each object $j \in \mathsf{J}$ and each square $\alpha : I(j) \to f$ a diagonal filler $\phi_\alpha$ in such a way that the fillers are compatible with composition in $\mathsf{J}$, namely, given a morphism $g : j \to j'$ and a square $\alpha' : I(j') \to f$ the fillers for $\alpha'$ and $\alpha = \alpha' \circ I(g)$ satisfy $\phi_\alpha = \phi_{\alpha'} \circ I_{\mathsf{cod}}(g)$ (where (1) $I_\mathsf{cod}(g)$ means the bottom arrow of the square $I(g) : I(j) \to I(j')$ --bottom if you draw $I(j)$ going down-- and (2) I apologize for not knowing how to draw diagrams with diagonal arrows on MO). Now, how do we see that giving these compatible fillers $\phi$ is the same as filling the counit $\mathrm{Lan}_II(f) \to f$? Well, first notice the compatibility with composition of the $\phi$'s can be rephrased as saying that $\phi$ gives a functor from the comma category $I \downarrow f$ to $\mathsf{C}^2$. In fact, the collection of lifting problems $I(j) \to f$ forms a cocone with vertex $f$ over the diagram $I \downarrow f \xrightarrow{\pi} \mathsf{J} \xrightarrow{I} \mathsf{C}^2$. And now just recall the colimit formula for the left Kan extension: $\mathrm{Lan}_II(f) = \mathrm{colim}(I \downarrow f \xrightarrow{\pi} \mathsf{J} \xrightarrow{I} \mathsf{C}^2)$. The counit of the density comonad corresponds to the cocone mentioned above. • Super clear! I will read carefully later, but I think your answer resolves my issue! – Arrow May 4 '16 at 8:44 • Just a question - why do you take the codomain of the functor $\phi$ to be $\mathsf{C^2}$ instead of $I\downarrow \operatorname{dom}f$? Is it just to make all the compositions defined? – Arrow May 4 '16 at 12:43 • Well, I just wrote $\mathsf{C}^2$ out of laziness (plus what I wanted to emphasize is the domain: $\mathsf{I} \downarrow f$), you can certainly be more precise. I didn't list the full set of requirements for $\phi$ in that last paragraph, for example, you still have to ask that each individual $\phi_\alpha$ actually solve its lifting problem! – Omar Antolín-Camarena May 4 '16 at 16:55 • Oh, and I don't think $I \downarrow \mathrm{dom} f$ is quite right, @Arrow: the codomain of $I$ is $\mathsf{C}^2$ but $\mathrm{dom} f \in \mathsf{C}$. You could say $\phi : I \downarrow f \to I_\mathsf{cod} \downarrow \mathrm{dom} f$, where $I_\mathsf{cod}$ is again the composite $\mathsf{J} \xrightarrow{I} \mathsf{C}^2 \xrightarrow{\mathrm{cod}} \mathsf{C}$. If you add that $\phi$ commutes with the projections to $\mathsf{J}$, that captures all the proper domain and codomain information. (You still have to ask that the triangles in the individual lifting problems commute!) – Omar Antolín-Camarena May 4 '16 at 17:01
I think the decisive point is continuity with respect to different topologies. Let $C$ be the space of continuous functions of compact support and $D$ the space of smooth functions of compact support. The inclusion $D\hookrightarrow C$ is a continuous map when you give both spaces the corresponding inductive limit topology. That means that every continuous linear functional on $C$, i.e., each Radon measure, defines a continuous linear functional on $D$, i.e., a distribution. But not every distribution extends to a continuous linear map on $C$. Examples are the derivatives of the Dirac distribution. The line in Wikipedia relates to an important property of linear functionals on $C$: if such a functional is positive, i.e., if it maps functions $f\ge 0$ to numbers $\ge 0$, then it is AUTOMATICALLY CONTINUOUS! This is an important and highly non-trivial fact, though it is not hard to prove.
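For concreteness (a standard illustration, not part of the original answer): the derivative of the Dirac distribution acts by

$\delta'(\varphi) = -\varphi'(0),$

and the map $\varphi \mapsto \varphi'(0)$ is unbounded with respect to the sup-norm — one can keep $\|\varphi\|_\infty \le 1$ while making $\varphi'(0)$ arbitrarily large — so $\delta'$ admits no continuous extension to $C$; note also that $\delta'$ is not positive, so the automatic-continuity result does not apply to it.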
# Presented below is a list of possible transactions.

Presented below is a list of possible transactions.

1. Purchased inventory for $80,000 on account (assume perpetual system is used).
2. Issued an $80,000 note payable in payment on account (see item 1 above).
3. Recorded accrued interest on the note from item 2 above at 10%. Assume the note is a one-year note and 3 months have passed.
4. Borrowed $100,000 from the bank by signing a 6-month, zero-interest-bearing note. Prevailing annual interest rate is 10%.
5. Recognized 4 months' interest expense on the note from item 4 above.
6. Recorded sales revenue of $75,260 on account, which includes 5% sales tax.
7. Incurred a contingency loss of $45,000 on a lawsuit. The company's lawyer believes there is a reasonable possibility that the company could lose.
8. Accrued warranty expense of $15,000 on sales.
9. Paid warranty costs that were accrued in item 8 above.
10. Purchased goods for $85,000 subject to a cash discount, terms of 2/10, n/30. Purchases and accounts payable are recorded at net amounts after cash discounts (assume perpetual system is used).
11. Paid the invoice from item 10 above, thirty days later.

Required: Record the journal entries (if needed) for the above transactions.
# What is the radius of the helix a charged particle makes when entering a magnetic field at an angle? Here is the equation I found in my textbook but it doesn't make sense: $$R=\frac{mv\sin α}{QB}$$ . Looking at this formula we can say that particle A, moving through the magnetic field at an angle $$α<β$$, would follow a helical path with radius $$r_1$$, and particle B, moving through the same field at an angle $$β$$, would follow a helical path with radius $$r_2$$, with $$r_1<r_2$$ since $$\sin α<\sin β$$. But we know that the force on a particle with a smaller angle is smaller than on a particle with a bigger angle, so this formula doesn't seem to make sense? • Thank you for the correction. English is not my native language so I didn't notice the difference. – ToTheSpace 2 May 23 at 16:01 The centripetal acceleration required to hold an object in a circular path is $$a_c=\frac{v^2}{r}$$. What matters here is the projection of the spiral path (which is a circle) and the component of $$v$$ perpendicular to $$B$$, which is $$v_\perp=v\sin\theta$$. So the required acceleration for the circular path is $$a_c=\frac{v^2 \sin^2\theta}{r}$$. Substituting $$r$$: $$a_c=\frac{v^2 \sin^2\theta\, q B}{m v \sin\theta} = \frac{v \sin\theta\, q B}{m}$$, which is consistent. Yes, force and acceleration decrease with angle, but it's because the particle isn't going as fast around the circle part of the spiral. It has more velocity along the linear component of the spiral path, which takes no force to maintain.
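A small numerical illustration of the radius and pitch formulas (our sketch; the field strength, speed, and angle are made-up example values):

```python
import numpy as np

# Hypothetical numbers: an electron entering a 0.01 T field at 30 degrees.
m, q = 9.109e-31, 1.602e-19          # electron mass (kg) and charge (C)
B, v, alpha = 0.01, 1.0e6, np.deg2rad(30.0)

r = m * v * np.sin(alpha) / (q * B)  # helix radius, R = m v sin(alpha) / (q B)
period = 2 * np.pi * m / (q * B)     # cyclotron period (independent of speed)
pitch = v * np.cos(alpha) * period   # advance along B per turn

print(f"radius = {r:.4e} m, pitch = {pitch:.4e} m")
```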
# Homework Help: Proving a function is a bijection and isomorphic 1. Mar 28, 2016 ### RJLiberator 1. The problem statement, all variables and given/known data If G is a group and a ∈ G, let π: G--> G be the function defined as π(g) = ag, for all g ∈G. a) Show that π is a bijection b) Show that if π is an isomorphism, then a is the identity element of G. 2. Relevant equations I think to show that pi is a bijection we have to show that it is surjective and injective. To show that it is an isomorphism we have to show that π(xy) = π(x)π(y). 3. The attempt at a solution First, I do part a and show that it is one to one: Suppose π(g) = π(g'), then ag = ag'. Since a ∈ G we know a^(-1) exists in G. a^(-1)ag = a^(-1)ag', so g = g', and so π is one to one. I am not sure how to prove that this is onto... any hints here would help. For onto, can I just say: Let x, y ∈ G and x = a^(-1)y; since a is a part of the group we know that a^(-1) exists. And thus it follows that π(x) = y. To show part b) if x, y ∈ G, then we observe π(xy) = axy and π(x)π(y) = axay. Here, a must equal the identity element of G for this function to be an isomorphism. 2. Mar 28, 2016 ### Samy_A Mainly correct. Only the last point needs some explanation. You have that, for π to be an isomorphism, axy = axay, for all x,y ∈ G. Why does that imply that a is the identity element of G? 3. Mar 28, 2016 ### RJLiberator Hm, well, it implies that a is the identity element of G as it would mean that a = aa; the only thing that works here is the identity element. 4. Mar 28, 2016 ### RJLiberator Maybe you want it more explicit? axy = axay. y^(-1) exists as y is a part of G, so multiply both sides by it: axy*y^(-1) = axay*y^(-1), giving ax = axa. a^(-1) exists as a is a part of G: a^(-1)ax = a^(-1)axa, so x = xa. So now a has to be the identity element. 5. Mar 28, 2016 Voila!
# A spaceship approaches the moon (mass = Mm , radius = Rm ) along a parabolic path which is almost tangential to its surface …. Q: A spaceship approaches the moon (mass = Mm , radius = Rm ) along a parabolic path which is almost tangential to its surface (very close to it). At the moment of maximum approach the brake rocket is fired to convert the spaceship into a satellite of the moon. The change in speed of the spaceship is (a) $\frac{1}{\sqrt{2} – 1} \sqrt{\frac{G M_m}{R_m}}$ (b) $( \sqrt{3} – 1) \sqrt{\frac{G M_m}{R_m}}$ (c) $( \sqrt{2} – 1) \sqrt{\frac{G M_m}{R_m}}$ (d) $( \sqrt{5} – 1) \sqrt{\frac{G M_m}{R_m}}$
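A sketch of the standard argument (assuming, as reconstructed above, that the burn puts the ship into a circular orbit grazing the surface): on a parabolic (escape-energy) trajectory the speed at the surface is $v_p=\sqrt{2GM_m/R_m}$, while a circular orbit at the surface requires $v_c=\sqrt{GM_m/R_m}$, so

$\Delta v = v_p - v_c = (\sqrt{2}-1)\sqrt{\frac{GM_m}{R_m}},$

which matches option (c).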
# Draw an Equilateral ΔABC with Side 6.2 cm and Construct Its Circumcircle - Geometry Draw an equilateral ΔABC with side 6.2 cm and construct its circumcircle #### Solution Steps of construction: 1. Construct the equilateral ΔABC with side equal to 6.2 cm. 2. Draw the perpendicular bisectors PR and QS of sides bar(AB) and bar(BC) respectively. 3. Mark the point of intersection as O. 4. Draw a circle with centre O and radius OA (= OB = OC). This is the required circumcircle.
10) A triangular prism of glass is shown in the figure. A ray incident normal to one face is totally reflected. If $\theta$ is $45{}^\circ$, then the index of refraction of the glass is: A) less than 1.41    B) equal to 1.41    C) greater than 1.41    D) None of these Since the ray is totally internally reflected, $45{}^\circ >{{\theta }_{c}}$, so $n=\frac{1}{\sin\theta_c}>\frac{1}{\sin 45^\circ}=\sqrt{2}\approx 1.41$; hence (C).
# If xy > 0 and both x and y are even numbers, is x > y?

Math Expert
Joined: 02 Aug 2009
Posts: 6956

10 Sep 2018, 20:29

Difficulty: 95% (hard) | Question Stats: 44% (02:03) correct, 56% (01:44) wrong, based on 108 sessions

If xy > 0 and both x and y are even numbers, is x > y?

(1) x > y - 2
(2) |x - y| > 4

New question!!!..

Math Expert
Joined: 02 Aug 2009
Posts: 6956

10 Sep 2018, 21:26

chetan2u wrote:
If xy>0 and both x and y are even numbers, is x>y?
(I) x>y-2
(II) |x-y|>4
New question!!!..

OE to follow..

xy>0..... x and y are even...

1) x>y-2, i.e. x+2>y. So y<x in all cases except when x=y.. Example: say x=6, then y<x+2, that is y<8, but y cannot be 7 as y is even, so (I) y will be 6, i.e. x=y, OR (II) y can be 4, 2 etc, i.e. x>y. Insufficient

2) |x-y|>4. We cannot say if y>x or x>y, but surely $$x\neq{y}$$. Insufficient

Combined..

Intern
Joined: 18 Nov 2013
Posts: 37

10 Sep 2018, 21:51

The answer is C, since xy>0. Here is my list of values from statement 1: x=-2, y=-8; x=4, y=4. Statement 1 is insufficient. From statement 2: x=-2, y=-8; x=-8, y=-2; x=2, y=8; x=8, y=2. Statement 2 is insufficient. Combining both makes it sufficient. C

Intern
Joined: 23 Feb 2012
Posts: 19

10 Sep 2018, 22:31

xy>0 means either x and y are both -ve or both are positive.
1) Not sufficient, as x can be bigger/smaller than y, or equal.
2) Sufficient: x-y>4 or y-x<-4; for both +ve and -ve, x>y.

Math Expert
Joined: 02 Aug 2009
Posts: 6956

10 Sep 2018, 23:01

deepverma wrote:
xy>0 means either x and y are both -ve or both are positive. 1) Not sufficient, as x can be bigger/smaller than y, or equal. 2) Sufficient: x-y>4 or y-x<-4; for both +ve and -ve, x>y.

You will have to recheck the highlighted portion.
|x-y|>4, two cases:
1) $$x-y\geq{0}$$: x-y>4
2) $$x-y<{0}$$: -(x-y)>4.......y-x>4
Both "x-y>4" and "y-x<-4" are the SAME.

ISB, NUS, NTU Moderator
Joined: 11 Aug 2016
Posts: 264

10 Sep 2018, 23:38

Given: both x and y are even numbers. xy>0: this means both x & y have the same sign.
Statement 1: x>y-2. Case 1, both x,y>0: e.g. 8>6-2, x>y, Yes. Case 2, both x,y<0: e.g. -8>-8-2, x>y, No. Hence Insufficient.
Statement 2: |x-y|>4. That means the distance between x & y is more than 4, but we don't know their relative positions. Hence Insufficient.
Considering both Statements 1 & 2: 10>6-2. Sufficient.

Intern
Joined: 23 Feb 2012
Posts: 19

10 Sep 2018, 23:42

Ya, got it, thanks. Should be x-y>4 or x-y<-4, and not sufficient. Combining 1) and 2) gives ultimately x-y>4, which means x>y. So the answer is C.

Manager
Joined: 01 Nov 2017
GMAT 1: 700 Q50 V35 GMAT 2: 640 Q49 V28 GMAT 3: 680 Q47 V36 GMAT 4: 700 Q50 V35

17 Sep 2018, 03:33

How does the "even" information come into play? Once I consider the 2 conditions together, it was hard to see if it satisfies x>y: x-y>-2 AND |x-y|>4. I had to guess it is C because I picked a pair of different-sign x and y (x=-5, y=1) and it could not satisfy both conditions -> probably they must have the same sign. Kinda not sure about this one.

Intern
Joined: 17 May 2018
Posts: 3

17 Sep 2018, 04:37

In the posted solutions above where A is determined as insufficient, there was the assumption that x can equal y. My question is: when you are told x and y are even number(s), doesn't that mean that these numbers are different numbers?

Intern
Joined: 21 Jul 2018
Posts: 38

17 Sep 2018, 13:03

GmatDaddy wrote:
Given: both x and y are even numbers. xy>0: this means both x & y have the same sign. Statement 1: x>y-2. Case 1, both x,y>0: e.g. 8>6-2, x>y, Yes. Case 2, both x,y<0: e.g. -8>-8-2, x>y, No. Hence Insufficient. Statement 2: |x-y|>4. That means the distance between x & y is more than 4, but we don't know their relative positions. Hence Insufficient. Considering both Statements 1 & 2: 10>6-2. Sufficient.

Understood why A and B is not an answer, but still not able to understand why C is an answer. GmatDaddy, could you please elaborate how both equations together are sufficient?

ISB, NUS, NTU Moderator
Joined: 11 Aug 2016
Posts: 264

17 Sep 2018, 13:37

We have to judge if x>y such that both the conditions mentioned in Statements 1 and 2 are met. If you try to choose values accordingly, you will see that x>y. E.g.: we know from 2 that the difference between x and y is more than 4. Now select values of x and y such that x>y-2, i.e. y<x+2. Let y be 6; now x can take either 2 or 10. Let's try both cases: 6<10+2, and the answer to our ultimate question "x>y" is Yes; 6<2+2, and this inequality is absurd. Similarly for the other case, when both x and y are -ve.

Intern
Joined: 16 Jun 2018
Posts: 23

17 Sep 2018, 18:45

Given xy>0 (same signs), we have to check: is x>y, i.e. x-y>0?
St 1: x>y-2, i.e. x-y>-2. Insufficient, as x-y could be negative, zero, or greater than 0.
St 2: x-y>4 or x-y<-4. Insufficient, as Inequality 1 gives us a Yes answer while Inequality 2 gives us a No answer.
Combining St 1 & 2, we have three inequalities: x-y>-2; x-y>4; x-y<-4. From here on, if anyone can explain how both statements together are sufficient, it would help me in error correction. Please explain the algebraic approach.

Math Expert
Joined: 02 Aug 2009
Posts: 6956

18 Sep 2018, 04:29

xy>0..... x and y are even...
1) x>y-2, i.e. x+2>y. So y<x in all cases except when x=y.. Example: say x=6, then y<8, but y cannot be 7 as y is even, so y will be 6 or <6.. Insufficient, as x can be y AND x can be >y.
2) |x-y|>4. We cannot say if y>x or x>y, but $$x\neq{y}$$. Insufficient
Combined: we know from statement I that either x=y or x>y. But statement II tells us that $$x\neq{y}$$. So the only possibility is x>y. Sufficient
C
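To answer the algebraic question in the thread: the branch x − y < −4 contradicts x − y > −2, so combining the statements leaves only x − y > 4, i.e. x > y. A quick brute-force confirmation (our own sketch; the search range is an arbitrary choice):

```python
# Check over even pairs that statements (1) and (2) together force x > y.
sols = []
for x in range(-40, 41, 2):
    for y in range(-40, 41, 2):
        if x * y > 0 and x > y - 2 and abs(x - y) > 4:
            sols.append((x, y))

print(all(x > y for x, y in sols))  # expect True: answer C
```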
A note on Gornik’s perturbation of Khovanov-Rozansky homology Andrew Lobb Mathematics Department Stony Brook University Stony Brook NY 11794 USA Abstract. We show that the information contained in the associated graded vector space to Gornik’s version of Khovanov-Rozansky knot homology is equivalent to a single even integer s_{n}(K). Furthermore we show that s_{n} is a homomorphism from the smooth knot concordance group to the integers. This is in analogy with Rasmussen’s invariant coming from a perturbation of Khovanov homology. 1. Introduction and statement of results In the last few years there have been associated to a knot K\subset S^{3} various multiply-graded modules, each one exhibiting a classical knot polynomial as its graded Euler characteristic. It now seems likely that such knot homologies exist for each polynomial arising from the Reshetikhin-Turaev construction. It has been observed that there sometimes exist spectral sequences converging from one knot homology to another (see [11] for a slew of these). One of the first examples was due to Lee [8]. 1.1. Khovanov homology and Lee’s spectral sequence From here on we shall work over the complex numbers \mathbb{C}. The E_{2} page of Lee’s spectral sequence is standard Khovanov homology [3]. With a one-component knot K as input, the E_{\infty} page is a 2-dimensional complex vector space supported in homological degree 0. The E_{\infty} page also has another integer grading (the quantum grading); let us write \widetilde{H}^{i,j}(K) for this E_{\infty} page, where i is the homological grading and j is the quantum grading. Another way to think of \widetilde{H}^{i,j}(K) is as the associated graded vector space to the homology of a filtered chain complex defined by Lee. Rasmussen [10] showed that \widetilde{H}^{i,j}(K) is supported in bidegrees i=0, j=s-1 and i=0, j=s+1 where s(K)\in 2\mathbb{Z}. Hence the information contained in \widetilde{H}^{i,j}(K) is equivalent to a single even integer. Rasmussen further showed that Theorem 1.1 (Rasmussen [10]). Let g_{*}(K) be the smooth slice genus of the knot K, then g_{*}(K)\geq\frac{|s(K)|}{2}. This bound is sufficient to recover the Milnor conjecture on the slice genus of torus knots, a result previously only accessible through gauge theory. Furthermore Rasmussen showed Theorem 1.2 (Rasmussen [10]). The map s:K\mapsto s(K)\in 2\mathbb{Z} is a homomorphism from the smooth concordance group of knots to the integers. 1.2. Khovanov-Rozansky homology and Gornik’s spectral sequence In the case of Khovanov-Rozansky homology H^{i,j}_{n}(K) (n\geq 2) (which has the quantum sl(n) knot polynomial as its Euler characteristic), a spectral sequence with E_{2} page H^{i,j}_{n}(K) was defined by Gornik [1]. He showed that the E_{\infty} page of this spectral sequence is a complex vector space of dimension n supported in homological degree i=0. The invariance of this spectral sequence under the Reidemeister moves was first shown by Wu [13]. Again there is also a quantum grading on this vector space, and the vector space can be thought of as the associated graded vector space to the homology of a filtered chain complex \mathscr{F}^{j}\widetilde{C}_{n}^{i}(D) defined by Gornik for any diagram D of a knot K.
We shall write \mathscr{F}^{j}\widetilde{H}_{n}^{i}(K) for the filtered homology groups \ldots\subseteq\mathscr{F}^{j-1}\widetilde{H}_{n}^{i}(K)\subseteq\mathscr{F}^{% j}\widetilde{H}_{n}^{i}(K)\subseteq\ldots of this chain complex and \widetilde{H}^{i,j}_{n}(K)=\mathscr{F}^{j}\widetilde{H}_{n}^{i}(K)/\mathscr{F}% ^{j-1}\widetilde{H}_{n}^{i}(K) for the associated graded vector space. It was shown by the author [6] and independently by Wu [13] that one can extract a lower-bound on the slice genus from the quantum j-grading of each non-zero vector space \widetilde{H}^{0,j}_{n}(K) (in fact in these cited papers this was done also for more general perturbations of Khovanov-Rozansky homology than Gornik’s). Again, these lower-bounds are enough to imply the Milnor conjecture on the slice genus of torus knots. The highest non-zero quantum grading in this set-up has been called g_{n}^{{\rm max}} and the lowest g_{n}^{{\rm min}} by Wu. In [14] Wu asks for a relation between g_{n}^{{\rm max}} and g_{n}^{{\rm min}}, we provide an answer with our Theorem 1.3. 1.3. New results In the current paper we first show that the information contained in \widetilde{H}^{i,j}_{n}(K) is equivalent to a single even integer s_{n}(K). Theorem 1.3. For K a knot define the polynomial \widetilde{P}_{n}(q)=\sum_{j=-\infty}^{j=\infty}\dim_{\mathbb{C}}(\widetilde{H% }^{0,j}_{n}(K))q^{j}{\rm.} Then there exists s_{n}(K)\in 2\mathbb{Z} such that \widetilde{P}_{n}(q)=q^{s_{n}(K)}\frac{(q^{n}-q^{-n})}{(q-q^{-1})}{\rm.} In other words, this theorem says that the Gornik homology of any knot K is isomorphic to that of the unknot, but shifted by quantum degree s_{n}(K). The results of the author and of Wu’s on the slice genus are then immediately stated as the following: Corollary 1.4 (Lobb [6], Wu [13]). Writing g_{*}(K) for the smooth slice genus of a knot, we have g_{*}(K)\geq\frac{|s_{n}(K)|}{2(n-1)}{\rm.} Furthermore, if K admits a diagram D with only positive crossings then \displaystyle g_{*}(K) \displaystyle= \displaystyle\frac{-s_{n}(K)}{2(n-1)} \displaystyle= \displaystyle\frac{1}{2}(1-\#O(D)+w(D)){\rm,} where \#O(D) is the number of circles in the oriented resolution of D and w(D) is the writhe of D. It is a question of much interest whether the s_{n}(K) are in fact all equivalent to each other. We hope that this is not true, and do not know whether to expect it to be true. Nevertheless, let us formulate this as a conjecture. Conjecture 1.5. For any knot K and m,n\geq 2 we have \frac{s_{m}(K)}{s_{n}(K)}=\frac{m-1}{n-1}{\rm.} We note that s_{2}(K)=-s(K) so that every s_{n} is equivalent to Rasmussen’s original s(K). The falsity of this conjecture would have consequences for the non-degeneracy of the spectral sequences defined by Rasmussen [11] on the triply-graded Khovanov-Rozansky homology [5]. We are involved in a program with Daniel Krasner to try to find a counterexample to this conjecture. One can also make a weaker conjecture: Conjecture 1.6. For any knot K and n\geq 2 we have s_{n}(K)\in 2(n-1)\mathbb{Z}{\rm.} This has the appeal that it would rule out the possibility of fractional bounds on the slice genus coming from Corollary 1.4, but again we have no expectations either way on the truth of this conjecture. By analogy with Rasmussen’s Theorem 1.2 we might anticipate that each s_{n} is a concordance homomorphism. We show that this is in fact the case: Theorem 1.7. For each n\geq 2, the map s_{n}:K\mapsto s_{n}(K)\in 2\mathbb{Z} is a homomorphism from the smooth concordance group of knots to the integers. 
This theorem tells us that we have a concordance homomorphism for each integer \geq 2. It is a fascinating problem to try and understand if and how these homomorphisms are related to each other; we hope that this paper will stimulate some activity towards this goal. We conclude by noting that there are many properties of Rasmussen’s concordance homomorphism s from Khovanov homology and of the homomorphism \tau coming from Heegaard-Floer knot homology [12] [9] which follow formally from the properties of s and \tau analogous to Corollary 1.4 and Theorem 1.7. Rescaled versions of these results can now be seen to hold for s_{n}. We restrict ourselves to mentioning one of these which is not well-known as following from these formal properties. Corollary 1.8. If K is an alternating knot then s_{n}(K)=\frac{1}{1-n}\sigma(K){\rm,} where \sigma(K) is the classical knot signature of K. We sketch the proof of this at the end of the next section. 2. Proofs of results We assume in this section some familiarity with [4] by Khovanov and Rozansky. We fix an integer n\geq 2 and let K be a 1-component knot. In [4] the polynomial w=x^{n+1} is called the potential. Gornik’s key insight [1] was that it made sense to take a perturbation \widetilde{w}=x^{n+1}-(n+1)x of this potential and much of [4] goes through as before. Gornik showed that for his choice of potential \widetilde{w}, a knot diagram D determines a chain complex that no longer has a quantum grading but instead a quantum filtration respected by the differential. \ldots\subseteq\mathscr{F}^{j-1}\widetilde{C}_{n}^{i}(D)\subseteq\mathscr{F}^{% j}\widetilde{C}_{n}^{i}(D)\subseteq\ldots{\rm,} d:\mathscr{F}^{j}\widetilde{C}_{n}^{i}(D)\rightarrow\mathscr{F}^{j}\widetilde{% C}_{n}^{i+1}(D)\rm{.} It was immediate from his definitions that there exists a spectral sequence with E_{2} page the original Khovanov-Rozansky homology H_{n}^{i,j}(K) converging to the associated graded vector space E_{\infty}^{i,j}(K)=\widetilde{H}_{n}^{i,j}(K)=\mathscr{F}^{j}\widetilde{H}_{n% }^{i}(K)/\mathscr{F}^{j-1}\widetilde{H}_{n}^{i}(K) to the filtered homology groups \mathscr{F}^{j}\widetilde{H}_{n}^{i}(K). Given a knot diagram D for K, Gornik gave a basis at the chain level generating the homology; we now describe this basis. We write O(D) for the oriented resolution of D, and write r for the number of components of O(D). The oriented resolution O(D) gives rise to a summand of the chain group \widetilde{C}_{n}^{0}(D)=\cup_{j}\mathscr{F}^{j}\widetilde{C}_{n}^{0}(D), isomorphic in a natural way to \mathbb{C}[x_{1},x_{2},\ldots,x_{r}]/(x_{1}^{n}-1,x_{2}^{n}-1,\ldots,x_{r}^{n}% -1)\,[(1-n)({w}(D)+r)]{\rm,} where we have indicated a shift in the quantum filtration depending on r and on the writhe w(D) of the diagram. Definition 2.1. Let \xi=e^{2\pi i/n}. For each p=0,1,\ldots,n-1 we define an element g_{p}\in\widetilde{C}_{n}^{0}(D) that lies in this summand by g_{p}=\prod_{k=1}^{r}\frac{(x_{k}^{n}-1)}{(x_{k}-\xi^{p})}{\rm.} Then we know that: Theorem 2.2 (Gornik [1]). Each g_{p} is a cycle and \{[g_{0}],[g_{1}],\ldots,[g_{n-1}]\} is a basis for the homology \widetilde{H}^{i}_{n}(K)=\cup_{j}\mathscr{F}^{j}\widetilde{H}_{n}^{i}(K). Consequently \widetilde{H}^{i}_{n}(K) is a vector space of dimension n supported in homological degree i=0. Our first observation is that we can find a good basis for the subspace of \widetilde{C}_{n}^{0}(D) spanned by g_{0},g_{1},\ldots,g_{n-1}. What we mean here by ‘good’ requires another definition. Definition 2.3. 
A monomial \prod_{i=1}^{s}x_{i}^{a_{i}}\in\mathbb{C}[x_{1},x_{2},\ldots,x_{s}] is said to be of n-degree d iff \sum_{i=1}^{s}a_{i}=d\pmod{n}. A polynomial is said to be n-homogeneous of n-degree d iff it is a linear combination of monomials of n-degree d. We note that projection extends the notion of n-degree unambiguously to elements lying in the ring \mathbb{C}[x_{1},x_{2},\ldots,x_{s}]/(x_{1}^{n}-1,x_{2}^{n}-1,\ldots,x_{s}^{n}-1) since the quotient ideal is generated by n-homogeneous polynomials. Next we give a basis consisting of n-homogeneous elements for the vector space spanned by the elements g_{0},g_{1},\ldots,g_{n-1}\in\widetilde{C}_{n}^{0}(D). Lemma 2.4. Let g_{0},g_{1},\ldots,g_{n-1} be given as in Definition 2.1, and consider the n-dimensional complex vector space V=\langle g_{0},g_{1},\ldots,g_{n-1}\rangle\subseteq\mathbb{C}[x_{1},x_{2},\ldots,x_{r}]/(x_{1}^{n}-1,x_{2}^{n}-1,\ldots,x_{r}^{n}-1). For p=0,1,\ldots,n-1 let h_{p}\in\mathbb{C}[x_{1},x_{2},\ldots,x_{r}]/(x_{1}^{n}-1,x_{2}^{n}-1,\ldots,x_{r}^{n}-1) be the unique n-homogeneous element of n-degree p such that g_{0}=h_{0}+h_{1}+\cdots+h_{n-1}. Then we have V=\langle h_{0},h_{1},\ldots,h_{n-1}\rangle. Proof. For dimensional reasons it is enough to show that for each t=0,1,\ldots,n-1 we have g_{t}\in\langle h_{0},h_{1},\ldots,h_{n-1}\rangle. So let us fix such a t and let \overline{h}_{p} be the unique n-homogeneous element of n-degree p such that g_{t}=\overline{h}_{0}+\overline{h}_{1}+\cdots+\overline{h}_{n-1}. We will show that \overline{h}_{p} is a multiple of h_{p} and then we will be done. Consider a monomial of n-degree p \prod_{i=1}^{r}x_{i}^{a_{i}}\,\,{\rm where}\,\,\sum_{i=1}^{r}a_{i}=p\pmod{n}\,\,{\rm and}\,\,0\leq a_{i}\leq n-1\,\,\forall i. The coefficient of this monomial in h_{p} (or, equivalently, in g_{0}) is clearly 1. The coefficient c of this monomial in g_{t} is expressible as a product c=c_{1}c_{2}\cdots c_{r} where c_{i} is the coefficient of x^{a_{i}} in the expansion of \frac{x^{n}-1}{x-\xi^{t}}=\frac{x^{n}-(\xi^{t})^{n}}{x-\xi^{t}}. We leave it to the reader to check that c_{i}=\xi^{-(a_{i}+1)t}, so that c=\xi^{-t(\sum_{i=1}^{r}(a_{i}+1))}=\xi^{-t(p+r)}. Hence we see that \overline{h}_{p}=\xi^{-t(p+r)}h_{p}\,\,\,{\rm so}\,\,\,g_{t}\in\langle h_{0},h_{1},\ldots,h_{n-1}\rangle. \hfill\square To put our new n-homogeneous basis to use, we require a proposition telling us how we might expect it to behave with respect to the filtration. In what follows, since we are assuming some familiarity with [4], we allow ourselves to refer to a matrix factorization as just a letter, M. We begin with a definition. Definition 2.5. If V is some filtered vector space \cdots\subseteq\mathscr{F}^{i}V\subseteq\mathscr{F}^{i+1}V\subseteq\cdots, and we have a non-zero x\in V, we shall define the quantum grading {\rm qgr}(x)\in\mathbb{Z} by the requirement that x is non-zero in \mathscr{F}^{{\rm qgr}(x)}V/\mathscr{F}^{{\rm qgr}(x)-1}V. The reason for the word ‘quantum’ in the definition is that in this paper the only vector spaces we shall worry about are those coming from chain groups or homology groups carrying a ‘quantum’ filtration. Proposition 2.6.
If M is a matrix factorization whose homology H(M) appears as a summand of the chain group \widetilde{C}^{i}(D), then there is a natural (\mathbb{Z}/2n\mathbb{Z})-grading on H(M) which we write as

{\rm Gr}^{\alpha}H(M)\,\,{\rm for}\,\,\alpha\in\mathbb{Z}/2n\mathbb{Z}.

This grading extends to a grading on the chain groups \widetilde{C}_{n}^{i}(D), which is respected by the differential

d:{\rm Gr}^{\alpha}\widetilde{C}_{n}^{i}(D)\longrightarrow{\rm Gr}^{\alpha}\widetilde{C}_{n}^{i+1}(D)\,\,{\rm for}\,\,\alpha\in\mathbb{Z}/2n\mathbb{Z},

thus giving a (\mathbb{Z}/2n\mathbb{Z})-grading on the homology groups {\rm Gr}^{\alpha}\widetilde{H}_{n}^{i}(K) for \alpha\in\mathbb{Z}/2n\mathbb{Z}. Furthermore, if a\in{\rm Gr}^{\alpha}\widetilde{C}_{n}^{0}(D) and b\in{\rm Gr}^{\beta}\widetilde{C}_{n}^{0}(D) represent non-zero classes [a], [b] in homology \widetilde{H}_{n}^{0}(K) then we have

\alpha-\beta = {\rm qgr}(a)-{\rm qgr}(b)\pmod{2n} = {\rm qgr}([a])-{\rm qgr}([b])\pmod{2n}.

Proof. The matrix factorization M consists of two 'internal' graded vector spaces V_{0}, V_{1} and a pair of 'internal' differentials

d_{0}:V_{0}\rightarrow V_{1}\,\,{\rm and}\,\,d_{1}:V_{1}\rightarrow V_{0},\quad d_{1}d_{0}=d_{0}d_{1}=0.

If we were working with Khovanov and Rozansky's potential w=x^{n+1} then we would know that these internal differentials d_{0}, d_{1} were both graded of degree n+1. But with Gornik's potential \widetilde{w}=x^{n+1}-(n+1)x the internal differentials cease to respect the grading. So instead we take the filtration associated to the grading of the internal vector spaces and we observe that the internal differentials are then filtered of degree n+1. This gives rise to a filtered homology H(M) and so to filtered chain groups.

The crux of this proposition is recognizing that the polynomials appearing as matrix entries in Gornik's internal differentials are all n-homogeneous. Since the various x_{i} appearing in the definition of M are assigned grading 2, this means that the homology H(M) inherits a (\mathbb{Z}/2n\mathbb{Z})-grading from the (\mathbb{Z}/2n\mathbb{Z})-grading on the internal vector spaces of M coming from collapsing their \mathbb{Z}-grading.

Similarly the differentials on the chain complex \widetilde{C}_{n}^{i}(D) have n-homogeneous matrix entries. It needs to be checked that these entries are graded of degree 0\in\mathbb{Z}/2n\mathbb{Z}; we leave this to the reader. Hence we inherit a (\mathbb{Z}/2n\mathbb{Z})-grading on homology

{\rm Gr}^{\alpha}\widetilde{H}_{n}^{i}(K)\,\,{\rm where}\,\,\alpha\in\mathbb{Z}/2n\mathbb{Z}.

The first equality of the final part of the proposition follows from the observation that both the filtration and the (\mathbb{Z}/2n\mathbb{Z})-grading on \widetilde{C}_{n}^{i} are induced from the same \mathbb{Z}-grading on the matrix factorizations. The second equality follows from the fact that the differential on \widetilde{C}_{n}^{i} respects the (\mathbb{Z}/2n\mathbb{Z})-grading. \square

In Proposition 2.6 we restricted ourselves to relative quantum gradings, but we did this simply as a matter of convenience, so that we did not have to worry about the various grading shifts happening in the definition of the chain complex. It is of course possible to be more precise.
The content of the next proposition is that we have figured out the grading shifts to give a precise statement of Proposition 2.6 applied to the case of our n-homogeneous generators h_{0},h_{1},\ldots,h_{n-1}.

Proposition 2.7. For p=0,1,\ldots,n-1, each h_{p} of Lemma 2.4 can be considered as a cycle of the chain group \widetilde{C}_{n}^{0}(D), each lying in the summand of this chain group corresponding to the oriented resolution O(D). Then each [h_{p}] is a non-zero class in homology lying in the graded part \widetilde{H}_{n}^{0,j_{p}}(K) for some j_{p} satisfying

j_{p}=2p+(1-n)(w(D)+r)\pmod{2n}.

Proof. Certainly each h_{p} lies in a unique (\mathbb{Z}/2n\mathbb{Z})-grading. We note that the writhe of the diagram w(D) and the number r of components of O(D) appear in Proposition 2.7 because of the grading shift of the chain group summand. The factors of 2 appear since the various x_{i} appearing in the definition of the homology are assigned grading 2. We note also that w(D)+r is always an odd number. \square

Definition 2.8. For K a knot let

s_{n}^{\rm max}(K)={\rm max}\{j:\widetilde{H}_{n}^{0,j}(K)=\mathbb{C}\},

and

s_{n}^{\rm min}(K)={\rm min}\{j:\widetilde{H}_{n}^{0,j}(K)=\mathbb{C}\}.

It is now clear that Theorem 1.3 follows immediately from Proposition 2.7 and the following:

Proposition 2.9. For any knot K we have

s_{n}^{\rm max}(K)-s_{n}^{\rm min}(K)\leq 2(n-1).

To verify Proposition 2.9 we need to appeal to the results of [6], specifically those of Subsection 3.3, which explains how, given a link L, \widetilde{H}_{n}^{i,j}(L) may change under elementary 1-handle addition to L. We do not need these results in full generality; the relevant picture for this paper is that of Figure 1. We state the next proposition without proof and refer interested readers to Subsection 3.3 of [6] for more details.

Proposition 2.10. Consider the set-up of Figure 1 where K=K_{1}\#K_{2}. Associated to the straight arrow is a map

F:\mathscr{F}^{j_{1}}\widetilde{H}_{n}^{i}(K_{1})\otimes\mathscr{F}^{j_{2}}\widetilde{H}_{n}^{i}(K_{2})\longrightarrow\mathscr{F}^{j_{1}+j_{2}+n-1}\widetilde{H}_{n}^{i}(K),

and associated to the bendy arrow is a map

G:\mathscr{F}^{j}\widetilde{H}_{n}^{i}(K)\longrightarrow\bigcup_{j_{1}+j_{2}=j+n-1}\mathscr{F}^{j_{1}}\widetilde{H}_{n}^{i}(K_{1})\otimes\mathscr{F}^{j_{2}}\widetilde{H}_{n}^{i}(K_{2}).

For p=0,1,\ldots,n-1 we write [g_{p}],[g^{1}_{p}],[g^{2}_{p}] for Gornik's basis elements of \widetilde{H}_{n}^{0}(K),\widetilde{H}_{n}^{0}(K_{1}),\widetilde{H}_{n}^{0}(K_{2}) respectively. We have F([g^{1}_{p_{1}}]\otimes[g^{2}_{p_{2}}])=\alpha[g_{p_{1}}] where \alpha\not=0 iff p_{1}=p_{2}. And G([g_{p}])=\beta([g^{1}_{p}]\otimes[g^{2}_{p}]) where \beta\not=0. \square

With this proposition in hand we are almost ready to prove Proposition 2.9 and hence Theorem 1.3. We just need one more easy lemma.

Lemma 2.11. If g\in\widetilde{C}_{n}^{0}(D) is one of Gornik's basis elements of \widetilde{H}_{n}^{0}(K) then

{\rm qgr}([g])=s_{n}^{\rm max}(K).

Proof. This follows from the observation that the quantum grading of exactly one of the [h_{p}] must be s_{n}^{\rm max}(K), and g is a linear combination of the h_{p} with all coefficients non-zero. \square

Proof of Proposition 2.9. In Figure 1, let K=K_{1} and let K_{2}=U, the unknot. Choose p\in\{0,1,\ldots,n-1\} so that [h^{1}_{p}] is non-zero in \widetilde{H}_{n}^{0,s_{n}^{\rm min}}(K_{1}).
Now h^{1}_{p} is expressible as a linear combination of Gornik's generators g^{1}_{0},g^{1}_{1},\ldots,g^{1}_{n-1}. Assume without loss of generality that the coefficient of g^{1}_{0} in this linear combination is non-zero. Then we have

s_{n}^{\rm max}(K) = {\rm qgr}([g_{0}])
 = {\rm qgr}(F([h^{1}_{p}]\otimes[g^{2}_{0}]))
 \leq {\rm qgr}([h^{1}_{p}])+{\rm qgr}([g^{2}_{0}])+n-1
 = s_{n}^{\rm min}(K)+n-1+n-1
 = s_{n}^{\rm min}(K)+2n-2. \square

Now Theorem 1.3 follows easily.

Proof of Theorem 1.3. Propositions 2.7 and 2.9 combine to imply Theorem 1.3. \square

We can use the same technique used in the proof of Proposition 2.9 to give a proof of Theorem 1.7.

Proof of Theorem 1.7. To check we have a homomorphism, it is enough to show that s_{n} respects the group operations. In other words if K=K_{1}\#K_{2} we wish to see

s_{n}(K)=s_{n}(K_{1})+s_{n}(K_{2}).

Again we refer to Figure 1 and choose p\in\{0,1,\ldots,n-1\} so that [h^{1}_{p}] is non-zero in \widetilde{H}_{n}^{0,s_{n}^{\rm min}}(K_{1}), and assume without loss of generality that the coefficient of g^{1}_{0} in the expansion of h^{1}_{p} is non-zero. We observe

s_{n}(K_{1})+s_{n}(K_{2}) = s_{n}^{\rm min}(K_{1})+s_{n}^{\rm max}(K_{2})
 = {\rm qgr}([h^{1}_{p}]\otimes[g^{2}_{0}])
 \geq {\rm qgr}(F([h^{1}_{p}]\otimes[g^{2}_{0}]))-n+1
 = {\rm qgr}([g_{0}])-n+1
 = s_{n}^{\rm max}(K)-n+1
 = s_{n}(K),

and

s_{n}(K_{1})+s_{n}(K_{2}) = s_{n}^{\rm max}(K_{1})+s_{n}^{\rm max}(K_{2})-2n+2
 = {\rm qgr}([g^{1}_{0}]\otimes[g^{2}_{0}])-2n+2
 = {\rm qgr}(G([g_{0}]))-2n+2
 \leq {\rm qgr}([g_{0}])+n-1-2n+2
 = s_{n}^{\rm max}(K)-n+1
 = s_{n}(K). \square

Finally we indicate the proof of Corollary 1.8.

Proof of Corollary 1.8. The main tool is due to Kawamura [2], in which she gives an explicit estimate of s(K) and \tau(K) depending on a diagram D of K. In deriving this estimate she only makes use of the formal properties of s and \tau analogous to Corollary 1.4 and Theorem 1.7, hence her arguments also apply to s_{n}. In [7], the author independently derives this estimate for s(K), using an algebraic argument rather than the formal properties of s. Proposition 1.5 of [7] shows that the estimates are tight given an alternating diagram D of K, but the proof of this Proposition does not use the definition of s and hence also shows that the bounds on s_{n}(K) are tight for alternating knots. Therefore, since we know appropriately rescaled versions of this Corollary hold for s and for \tau, it also holds for s_{n}. \square

References

• [1] B. Gornik, Note on Khovanov link cohomology, 2004, arXiv:math.QA/0402266.
• [2] T. Kawamura, An estimate of the Rasmussen invariant for links, forthcoming paper.
• [3] M. Khovanov, A categorification of the Jones polynomial, Duke Math. J. 101 (2000), no. 3, 359-426.
• [4] M. Khovanov and L. Rozansky, Matrix factorizations and link homology I, Fundamenta Mathematicae 199 (2008), 1-91.
• [5] M. Khovanov and L. Rozansky, Matrix factorizations and link homology II, Geometry and Topology 12 (2008), 1387-1425.
• [6] A.
Lobb, A slice genus lower bound from sl(n) Khovanov-Rozansky homology, Adv. Math. 222 (2009), 1220-1276.
• [7] A. Lobb, Computable bounds for Rasmussen's concordance invariant, to appear in Compositio Mathematica.
• [8] E. Lee, An endomorphism of the Khovanov invariant, Adv. Math. 197 (2005), 554-586.
• [9] P. Ozsváth and Z. Szabó, Knot Floer homology and the four-ball genus, Geom. Topol. 7 (2003), 615-639.
• [10] J. Rasmussen, Khovanov homology and the slice genus, Invent. Math. 182 (2010), 419-447.
• [11] J. Rasmussen, Some differentials on Khovanov-Rozansky homology, arXiv:math/0607544v2.
• [12] J. Rasmussen, Floer homology and knot complements, arXiv:math.GT/0306378.
• [13] H. Wu, On the quantum filtration of the Khovanov-Rozansky cohomology, Adv. Math. 221 (2009), 54-139.
• [14] H. Wu, The Khovanov-Rozansky Cohomology and Bennequin Inequalities, 2007, arXiv:math/0703210.
# Mag Repacker

## Mag Repacker
« posted: Dec 27, 2014, 07:15 AM »
I could have sworn Rev put a server-side Mag Repack script on the forums, but I can't find it... If anyone knows, let me know where it is.

## Re: Mag Repacker
« Reply #1 posted: Dec 28, 2014, 08:41 PM »
I didn't, though I am considering it, as I use it all the time when playing on TOP servers.

## Re: Mag Repacker
« Reply #2 posted: Dec 28, 2014, 11:56 PM »
You should be able to borrow this one or this; I do however think they are exactly the same. There's a slightly updated version here: http://www.armaholic.com/page.php?id=19692

As long as you put it in your mission and don't load it as an external mod, you shouldn't need CBA. Remember these two lines. In init.sqf:

Code: [Select]
`[] execVM "outlw_magRepack\MagRepack_init.sqf";`

and in description.ext:

Code: [Select]
`#include "outlw_magRepack\MagRepack_config.cpp"`

## Re: Mag Repacker
« Reply #3 posted: Dec 29, 2014, 12:30 AM »
Thank you

## Re: Mag Repacker
« Reply #4 posted: Dec 29, 2014, 08:22 AM »
Works like a CHARM!

## Re: Mag Repacker
« Reply #5 posted: Dec 30, 2014, 11:31 AM »
On the note of including it as part of the wasteland PBO, how do you do that? I'm having problems with BattlEye banning or kicking myself and my other players who run CBA... Thanks Snakey.

## Re: Mag Repacker
« Reply #6 posted: Dec 30, 2014, 08:11 PM »
You don't need CBA if you start the scripts manually in client\init.sqf

## Re: Mag Repacker
« Reply #7 posted: Dec 31, 2014, 11:43 AM »
Sorry for taking so much time, but how exactly do I do that?

## Re: Mag Repacker
« Reply #8 posted: Dec 31, 2014, 12:50 PM »
You want to go to the description file in your PBO's root, labelled description.ext, and add this:

Code: [Select]
`#include "addons\outlw_magRepack\MagRepack_Config.hpp"`

Code: [Select]
`[] execVM "addons\outlw_magRepack\MagRepack_init_sv.sqf";`

After that, throw it up on the server and you should be good.

## Re: Mag Repacker
« Reply #9 posted: Dec 31, 2014, 02:00 PM »
Thanks a bunch, when I get a chance I'll add it. Also, Happy New Year!

## Re: Mag Repacker
« Reply #10 posted: Jan 02, 2015, 12:20 AM »
You don't need CBA if you start the scripts manually in client\init.sqf
Oh, so it shouldn't be in the "main" init? I have it there now, but is client\init.sqf the better way to do it?

## Re: Mag Repacker
« Reply #11 posted: Apr 20, 2015, 03:33 AM »
The updated addon now has 2 separate directories, so the path no longer points to the proper directory. How should these be dropped in so that the paths are correct for the files called?

## Re: Mag Repacker
« Reply #12 posted: Apr 20, 2015, 04:58 AM »
OK... I looked at the commits on Git and it looks like only one folder goes in, which I did. I edited the files as per above and now I get this.
And yes, I checked: the file is there.

## Re: Mag Repacker
« Reply #13 posted: Apr 20, 2015, 05:09 AM »
...and drop the contents of the addon there and exec it on the main init
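To recap the working setup from this thread (paths as in Reply #8; they may differ for the newer two-directory release discussed in Replies #11-#13, so adjust to wherever the addon folder actually sits in your mission PBO). In description.ext:

Code: [Select]
`// pulls in the repack config; path assumes the addon sits in addons\`
`#include "addons\outlw_magRepack\MagRepack_Config.hpp"`

and in init.sqf (or client\init.sqf, per Reply #6, so CBA is not needed):

Code: [Select]
`// starts the mag repack script manually instead of via CBA`
`[] execVM "addons\outlw_magRepack\MagRepack_init_sv.sqf";`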
# Eigensystems

This chapter describes functions for computing eigenvalues and eigenvectors of matrices. There are routines for real symmetric, real nonsymmetric, complex hermitian, real generalized symmetric-definite, complex generalized hermitian-definite, and real generalized nonsymmetric eigensystems. Eigenvalues can be computed with or without eigenvectors. The hermitian and real symmetric matrix algorithms are symmetric bidiagonalization followed by QR reduction. The nonsymmetric algorithm is the Francis QR double-shift. The generalized nonsymmetric algorithm is the QZ method due to Moler and Stewart. The functions described in this chapter are declared in the header file 'gsl_eigen.h'.

## Real Symmetric Matrices

For real symmetric matrices, the library uses the symmetric bidiagonalization and QR reduction method. This is described in Golub & van Loan, section 8.3. The computed eigenvalues are accurate to an absolute accuracy of \epsilon ||A||_2, where \epsilon is the machine precision.

Function: gsl_eigen_symm_workspace * gsl_eigen_symm_alloc (const size_t n)
This function allocates a workspace for computing eigenvalues of n-by-n real symmetric matrices. The size of the workspace is O(2n).

Function: void gsl_eigen_symm_free (gsl_eigen_symm_workspace * w)
This function frees the memory associated with the workspace w.

Function: int gsl_eigen_symm (gsl_matrix * A, gsl_vector * eval, gsl_eigen_symm_workspace * w)
This function computes the eigenvalues of the real symmetric matrix A. Additional workspace of the appropriate size must be provided in w. The diagonal and lower triangular part of A are destroyed during the computation, but the strict upper triangular part is not referenced. The eigenvalues are stored in the vector eval and are unordered.

Function: gsl_eigen_symmv_workspace * gsl_eigen_symmv_alloc (const size_t n)
This function allocates a workspace for computing eigenvalues and eigenvectors of n-by-n real symmetric matrices. The size of the workspace is O(4n).

Function: void gsl_eigen_symmv_free (gsl_eigen_symmv_workspace * w)
This function frees the memory associated with the workspace w.

Function: int gsl_eigen_symmv (gsl_matrix * A, gsl_vector * eval, gsl_matrix * evec, gsl_eigen_symmv_workspace * w)
This function computes the eigenvalues and eigenvectors of the real symmetric matrix A. Additional workspace of the appropriate size must be provided in w. The diagonal and lower triangular part of A are destroyed during the computation, but the strict upper triangular part is not referenced. The eigenvalues are stored in the vector eval and are unordered. The corresponding eigenvectors are stored in the columns of the matrix evec. For example, the eigenvector in the first column corresponds to the first eigenvalue. The eigenvectors are guaranteed to be mutually orthogonal and normalised to unit magnitude.
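As a minimal illustration of the eigenvalue-only interface (a sketch only; the 2-by-2 matrix, with eigenvalues 1 and 3, is an arbitrary choice for brevity and not taken from this manual):

#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_eigen.h>

int
main (void)
{
  /* symmetric matrix with known eigenvalues 1 and 3 */
  double data[] = { 2.0, 1.0,
                    1.0, 2.0 };

  gsl_matrix_view m = gsl_matrix_view_array (data, 2, 2);
  gsl_vector *eval = gsl_vector_alloc (2);
  gsl_eigen_symm_workspace *w = gsl_eigen_symm_alloc (2);

  /* destroys the diagonal and lower triangle of m.matrix */
  gsl_eigen_symm (&m.matrix, eval, w);

  {
    int i;
    for (i = 0; i < 2; i++)   /* eigenvalues are unordered */
      printf ("eigenvalue = %g\n", gsl_vector_get (eval, i));
  }

  gsl_eigen_symm_free (w);
  gsl_vector_free (eval);

  return 0;
}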
## Complex Hermitian Matrices

For hermitian matrices, the library uses the complex form of the symmetric bidiagonalization and QR reduction method.

Function: gsl_eigen_herm_workspace * gsl_eigen_herm_alloc (const size_t n)
This function allocates a workspace for computing eigenvalues of n-by-n complex hermitian matrices. The size of the workspace is O(3n).

Function: void gsl_eigen_herm_free (gsl_eigen_herm_workspace * w)
This function frees the memory associated with the workspace w.

Function: int gsl_eigen_herm (gsl_matrix_complex * A, gsl_vector * eval, gsl_eigen_herm_workspace * w)
This function computes the eigenvalues of the complex hermitian matrix A. Additional workspace of the appropriate size must be provided in w. The diagonal and lower triangular part of A are destroyed during the computation, but the strict upper triangular part is not referenced. The imaginary parts of the diagonal are assumed to be zero and are not referenced. The eigenvalues are stored in the vector eval and are unordered.

Function: gsl_eigen_hermv_workspace * gsl_eigen_hermv_alloc (const size_t n)
This function allocates a workspace for computing eigenvalues and eigenvectors of n-by-n complex hermitian matrices. The size of the workspace is O(5n).

Function: void gsl_eigen_hermv_free (gsl_eigen_hermv_workspace * w)
This function frees the memory associated with the workspace w.

Function: int gsl_eigen_hermv (gsl_matrix_complex * A, gsl_vector * eval, gsl_matrix_complex * evec, gsl_eigen_hermv_workspace * w)
This function computes the eigenvalues and eigenvectors of the complex hermitian matrix A. Additional workspace of the appropriate size must be provided in w. The diagonal and lower triangular part of A are destroyed during the computation, but the strict upper triangular part is not referenced. The imaginary parts of the diagonal are assumed to be zero and are not referenced. The eigenvalues are stored in the vector eval and are unordered. The corresponding complex eigenvectors are stored in the columns of the matrix evec. For example, the eigenvector in the first column corresponds to the first eigenvalue. The eigenvectors are guaranteed to be mutually orthogonal and normalised to unit magnitude.
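A complex hermitian example follows the same pattern; here is a sketch (the 2-by-2 matrix, with eigenvalues 1 and 3, is an illustrative assumption; note that gsl_matrix_complex_view_array expects interleaved real and imaginary parts):

#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_eigen.h>

int
main (void)
{
  /* hermitian matrix [[2, i], [-i, 2]] stored as interleaved
     (re, im) pairs in row-major order */
  double data[] = { 2.0, 0.0,    0.0, 1.0,
                    0.0, -1.0,   2.0, 0.0 };

  gsl_matrix_complex_view m
    = gsl_matrix_complex_view_array (data, 2, 2);

  gsl_vector *eval = gsl_vector_alloc (2);
  gsl_eigen_herm_workspace *w = gsl_eigen_herm_alloc (2);

  /* eigenvalues of a hermitian matrix are real */
  gsl_eigen_herm (&m.matrix, eval, w);

  {
    int i;
    for (i = 0; i < 2; i++)
      printf ("eigenvalue = %g\n", gsl_vector_get (eval, i));
  }

  gsl_eigen_herm_free (w);
  gsl_vector_free (eval);

  return 0;
}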
## Real Nonsymmetric Matrices

The solution of the real nonsymmetric eigensystem problem for a matrix A involves computing the Schur decomposition

A = Z T Z^T

where Z is an orthogonal matrix of Schur vectors and T, the Schur form, is quasi upper triangular with diagonal 1-by-1 blocks which are real eigenvalues of A, and diagonal 2-by-2 blocks whose eigenvalues are complex conjugate eigenvalues of A. The algorithm used is the double-shift Francis method.

Function: gsl_eigen_nonsymm_workspace * gsl_eigen_nonsymm_alloc (const size_t n)
This function allocates a workspace for computing eigenvalues of n-by-n real nonsymmetric matrices. The size of the workspace is O(2n).

Function: void gsl_eigen_nonsymm_free (gsl_eigen_nonsymm_workspace * w)
This function frees the memory associated with the workspace w.

Function: void gsl_eigen_nonsymm_params (const int compute_t, const int balance, gsl_eigen_nonsymm_workspace * w)
This function sets some parameters which determine how the eigenvalue problem is solved in subsequent calls to gsl_eigen_nonsymm.

If compute_t is set to 1, the full Schur form T will be computed by gsl_eigen_nonsymm. If it is set to 0, T will not be computed (this is the default setting). Computing the full Schur form T requires approximately 1.5-2 times the number of flops.

If balance is set to 1, a balancing transformation is applied to the matrix prior to computing eigenvalues. This transformation is designed to make the rows and columns of the matrix have comparable norms, and can result in more accurate eigenvalues for matrices whose entries vary widely in magnitude. See section Balancing for more information.

Note that the balancing transformation does not preserve the orthogonality of the Schur vectors, so if you wish to compute the Schur vectors with gsl_eigen_nonsymm_Z you will obtain the Schur vectors of the balanced matrix instead of the original matrix. The relationship will be

T = Q^t D^(-1) A D Q

where Q is the matrix of Schur vectors for the balanced matrix, and D is the balancing transformation. Then gsl_eigen_nonsymm_Z will compute a matrix Z which satisfies

T = Z^(-1) A Z

with Z = D Q. Note that Z will not be orthogonal. For this reason, balancing is not performed by default.

Function: int gsl_eigen_nonsymm (gsl_matrix * A, gsl_vector_complex * eval, gsl_eigen_nonsymm_workspace * w)
This function computes the eigenvalues of the real nonsymmetric matrix A and stores them in the vector eval. If T is desired, it is stored in the upper portion of A on output. Otherwise, on output, the diagonal of A will contain the 1-by-1 real eigenvalues and 2-by-2 complex conjugate eigenvalue systems, and the rest of A is destroyed. In rare cases, this function may fail to find all eigenvalues. If this happens, an error code is returned and the number of converged eigenvalues is stored in w->n_evals. The converged eigenvalues are stored in the beginning of eval.

Function: int gsl_eigen_nonsymm_Z (gsl_matrix * A, gsl_vector_complex * eval, gsl_matrix * Z, gsl_eigen_nonsymm_workspace * w)
This function is identical to gsl_eigen_nonsymm except that it also computes the Schur vectors and stores them into Z.

Function: gsl_eigen_nonsymmv_workspace * gsl_eigen_nonsymmv_alloc (const size_t n)
This function allocates a workspace for computing eigenvalues and eigenvectors of n-by-n real nonsymmetric matrices. The size of the workspace is O(5n).

Function: void gsl_eigen_nonsymmv_free (gsl_eigen_nonsymmv_workspace * w)
This function frees the memory associated with the workspace w.

Function: void gsl_eigen_nonsymmv_params (const int balance, gsl_eigen_nonsymmv_workspace * w)
This function sets parameters which determine how the eigenvalue problem is solved in subsequent calls to gsl_eigen_nonsymmv. If balance is set to 1, a balancing transformation is applied to the matrix. See gsl_eigen_nonsymm_params for more information. Balancing is turned off by default since it does not preserve the orthogonality of the Schur vectors.

Function: int gsl_eigen_nonsymmv (gsl_matrix * A, gsl_vector_complex * eval, gsl_matrix_complex * evec, gsl_eigen_nonsymmv_workspace * w)
This function computes eigenvalues and right eigenvectors of the n-by-n real nonsymmetric matrix A. It first calls gsl_eigen_nonsymm to compute the eigenvalues, Schur form T, and Schur vectors. Then it finds eigenvectors of T and backtransforms them using the Schur vectors. The Schur vectors are destroyed in the process, but can be saved by using gsl_eigen_nonsymmv_Z. The computed eigenvectors are normalized to have unit magnitude. On output, the upper portion of A contains the Schur form T. If gsl_eigen_nonsymm fails, no eigenvectors are computed, and an error code is returned.

Function: int gsl_eigen_nonsymmv_Z (gsl_matrix * A, gsl_vector_complex * eval, gsl_matrix_complex * evec, gsl_matrix * Z, gsl_eigen_nonsymmv_workspace * w)
This function is identical to gsl_eigen_nonsymmv except that it also saves the Schur vectors into Z.
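The parameter and error-handling conventions above can be exercised with a small sketch (the rotation-like matrix, with eigenvalues +i and -i, is an illustrative choice):

#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_eigen.h>

int
main (void)
{
  double data[] = { 0.0, 1.0,
                   -1.0, 0.0 };   /* eigenvalues +i and -i */

  gsl_matrix_view m = gsl_matrix_view_array (data, 2, 2);
  gsl_vector_complex *eval = gsl_vector_complex_alloc (2);
  gsl_eigen_nonsymm_workspace *w = gsl_eigen_nonsymm_alloc (2);

  /* request the full Schur form T, no balancing */
  gsl_eigen_nonsymm_params (1, 0, w);

  {
    int status = gsl_eigen_nonsymm (&m.matrix, eval, w);

    if (status)
      {
        /* convergence failure: only w->n_evals eigenvalues were found */
        fprintf (stderr, "only %u eigenvalues converged\n",
                 (unsigned int) w->n_evals);
      }
  }

  {
    int i;
    for (i = 0; i < 2; i++)
      {
        gsl_complex z = gsl_vector_complex_get (eval, i);
        printf ("eigenvalue = %g + %gi\n", GSL_REAL (z), GSL_IMAG (z));
      }
  }

  gsl_eigen_nonsymm_free (w);
  gsl_vector_complex_free (eval);

  return 0;
}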
## Real Generalized Symmetric-Definite Eigensystems

The real generalized symmetric-definite eigenvalue problem is to find eigenvalues \lambda and eigenvectors x such that

A x = \lambda B x

where A and B are symmetric matrices, and B is positive-definite. This problem reduces to the standard symmetric eigenvalue problem by applying the Cholesky decomposition to B:

A x = \lambda B x
A x = \lambda L L^t x
( L^{-1} A L^{-t} ) L^t x = \lambda L^t x

Therefore, the problem becomes C y = \lambda y where C = L^{-1} A L^{-t} is symmetric, and y = L^t x. The standard symmetric eigensolver can be applied to the matrix C. The resulting eigenvectors are backtransformed to find the vectors of the original problem. The eigenvalues and eigenvectors of the generalized symmetric-definite eigenproblem are always real.

Function: gsl_eigen_gensymm_workspace * gsl_eigen_gensymm_alloc (const size_t n)
This function allocates a workspace for computing eigenvalues of n-by-n real generalized symmetric-definite eigensystems. The size of the workspace is O(2n).

Function: void gsl_eigen_gensymm_free (gsl_eigen_gensymm_workspace * w)
This function frees the memory associated with the workspace w.

Function: int gsl_eigen_gensymm (gsl_matrix * A, gsl_matrix * B, gsl_vector * eval, gsl_eigen_gensymm_workspace * w)
This function computes the eigenvalues of the real generalized symmetric-definite matrix pair (A, B), and stores them in eval, using the method outlined above. On output, B contains its Cholesky decomposition and A is destroyed.

Function: gsl_eigen_gensymmv_workspace * gsl_eigen_gensymmv_alloc (const size_t n)
This function allocates a workspace for computing eigenvalues and eigenvectors of n-by-n real generalized symmetric-definite eigensystems. The size of the workspace is O(4n).

Function: void gsl_eigen_gensymmv_free (gsl_eigen_gensymmv_workspace * w)
This function frees the memory associated with the workspace w.

Function: int gsl_eigen_gensymmv (gsl_matrix * A, gsl_matrix * B, gsl_vector * eval, gsl_matrix * evec, gsl_eigen_gensymmv_workspace * w)
This function computes the eigenvalues and eigenvectors of the real generalized symmetric-definite matrix pair (A, B), and stores them in eval and evec respectively. The computed eigenvectors are normalized to have unit magnitude. On output, B contains its Cholesky decomposition and A is destroyed.
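A short sketch of the generalized symmetric-definite solver (both matrices are illustrative assumptions; B must be positive-definite):

#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_eigen.h>

int
main (void)
{
  double a_data[] = { 2.0, 1.0,
                      1.0, 2.0 };
  double b_data[] = { 1.0, 0.0,
                      0.0, 2.0 };   /* positive-definite */

  gsl_matrix_view a = gsl_matrix_view_array (a_data, 2, 2);
  gsl_matrix_view b = gsl_matrix_view_array (b_data, 2, 2);

  gsl_vector *eval = gsl_vector_alloc (2);
  gsl_matrix *evec = gsl_matrix_alloc (2, 2);
  gsl_eigen_gensymmv_workspace *w = gsl_eigen_gensymmv_alloc (2);

  /* solves A x = lambda B x; destroys a, leaves the Cholesky factor in b */
  gsl_eigen_gensymmv (&a.matrix, &b.matrix, eval, evec, w);
  gsl_eigen_gensymmv_sort (eval, evec, GSL_EIGEN_SORT_VAL_ASC);

  {
    int i;
    for (i = 0; i < 2; i++)
      printf ("lambda = %g\n", gsl_vector_get (eval, i));
  }

  gsl_eigen_gensymmv_free (w);
  gsl_vector_free (eval);
  gsl_matrix_free (evec);

  return 0;
}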
## Complex Generalized Hermitian-Definite Eigensystems

The complex generalized hermitian-definite eigenvalue problem is to find eigenvalues \lambda and eigenvectors x such that

A x = \lambda B x

where A and B are hermitian matrices, and B is positive-definite. Similarly to the real case, this can be reduced to C y = \lambda y where C = L^{-1} A L^{-H} is hermitian, and y = L^H x. The standard hermitian eigensolver can be applied to the matrix C. The resulting eigenvectors are backtransformed to find the vectors of the original problem. The eigenvalues of the generalized hermitian-definite eigenproblem are always real.

Function: gsl_eigen_genherm_workspace * gsl_eigen_genherm_alloc (const size_t n)
This function allocates a workspace for computing eigenvalues of n-by-n complex generalized hermitian-definite eigensystems. The size of the workspace is O(3n).

Function: void gsl_eigen_genherm_free (gsl_eigen_genherm_workspace * w)
This function frees the memory associated with the workspace w.

Function: int gsl_eigen_genherm (gsl_matrix_complex * A, gsl_matrix_complex * B, gsl_vector * eval, gsl_eigen_genherm_workspace * w)
This function computes the eigenvalues of the complex generalized hermitian-definite matrix pair (A, B), and stores them in eval, using the method outlined above. On output, B contains its Cholesky decomposition and A is destroyed.

Function: gsl_eigen_genhermv_workspace * gsl_eigen_genhermv_alloc (const size_t n)
This function allocates a workspace for computing eigenvalues and eigenvectors of n-by-n complex generalized hermitian-definite eigensystems. The size of the workspace is O(5n).

Function: void gsl_eigen_genhermv_free (gsl_eigen_genhermv_workspace * w)
This function frees the memory associated with the workspace w.

Function: int gsl_eigen_genhermv (gsl_matrix_complex * A, gsl_matrix_complex * B, gsl_vector * eval, gsl_matrix_complex * evec, gsl_eigen_genhermv_workspace * w)
This function computes the eigenvalues and eigenvectors of the complex generalized hermitian-definite matrix pair (A, B), and stores them in eval and evec respectively. The computed eigenvectors are normalized to have unit magnitude. On output, B contains its Cholesky decomposition and A is destroyed.
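The complex hermitian-definite case is called in the same way; here is a sketch with illustrative matrices (stored, as before, as interleaved real/imaginary pairs):

#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_eigen.h>

int
main (void)
{
  /* A = [[2, i], [-i, 2]], B = diag(2, 1); B is positive-definite */
  double a_data[] = { 2.0, 0.0,    0.0, 1.0,
                      0.0, -1.0,   2.0, 0.0 };
  double b_data[] = { 2.0, 0.0,    0.0, 0.0,
                      0.0, 0.0,    1.0, 0.0 };

  gsl_matrix_complex_view a = gsl_matrix_complex_view_array (a_data, 2, 2);
  gsl_matrix_complex_view b = gsl_matrix_complex_view_array (b_data, 2, 2);

  gsl_vector *eval = gsl_vector_alloc (2);
  gsl_eigen_genherm_workspace *w = gsl_eigen_genherm_alloc (2);

  /* eigenvalues are real; on output b holds its Cholesky decomposition */
  gsl_eigen_genherm (&a.matrix, &b.matrix, eval, w);

  {
    int i;
    for (i = 0; i < 2; i++)
      printf ("lambda = %g\n", gsl_vector_get (eval, i));
  }

  gsl_eigen_genherm_free (w);
  gsl_vector_free (eval);

  return 0;
}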
## Real Generalized Nonsymmetric Eigensystems

Given two square matrices (A, B), the generalized nonsymmetric eigenvalue problem is to find eigenvalues \lambda and eigenvectors x such that

A x = \lambda B x

We may also define the problem as finding eigenvalues \mu and eigenvectors y such that

\mu A y = B y

Note that these two problems are equivalent (with \lambda = 1/\mu) if neither \lambda nor \mu is zero. If say, \lambda is zero, then it is still a well defined eigenproblem, but its alternate problem involving \mu is not. Therefore, to allow for zero (and infinite) eigenvalues, the problem which is actually solved is

\beta A x = \alpha B x

The eigensolver routines below will return two values \alpha and \beta and leave it to the user to perform the divisions \lambda = \alpha / \beta and \mu = \beta / \alpha.

If the determinant of the matrix pencil A - \lambda B is zero for all \lambda, the problem is said to be singular; otherwise it is called regular. Singularity normally leads to some \alpha = \beta = 0 which means the eigenproblem is ill-conditioned and generally does not have well defined eigenvalue solutions. The routines below are intended for regular matrix pencils and could yield unpredictable results when applied to singular pencils.

The solution of the real generalized nonsymmetric eigensystem problem for a matrix pair (A, B) involves computing the generalized Schur decomposition

A = Q S Z^T
B = Q T Z^T

where Q and Z are orthogonal matrices of left and right Schur vectors respectively, and (S, T) is the generalized Schur form whose diagonal elements give the \alpha and \beta values. The algorithm used is the QZ method due to Moler and Stewart (see references).

Function: gsl_eigen_gen_workspace * gsl_eigen_gen_alloc (const size_t n)
This function allocates a workspace for computing eigenvalues of n-by-n real generalized nonsymmetric eigensystems. The size of the workspace is O(n).

Function: void gsl_eigen_gen_free (gsl_eigen_gen_workspace * w)
This function frees the memory associated with the workspace w.

Function: void gsl_eigen_gen_params (const int compute_s, const int compute_t, const int balance, gsl_eigen_gen_workspace * w)
This function sets some parameters which determine how the eigenvalue problem is solved in subsequent calls to gsl_eigen_gen.

If compute_s is set to 1, the full Schur form S will be computed by gsl_eigen_gen. If it is set to 0, S will not be computed (this is the default setting). S is a quasi upper triangular matrix with 1-by-1 and 2-by-2 blocks on its diagonal. 1-by-1 blocks correspond to real eigenvalues, and 2-by-2 blocks correspond to complex eigenvalues.

If compute_t is set to 1, the full Schur form T will be computed by gsl_eigen_gen. If it is set to 0, T will not be computed (this is the default setting). T is an upper triangular matrix with non-negative elements on its diagonal. Any 2-by-2 blocks in S will correspond to a 2-by-2 diagonal block in T.

The balance parameter is currently ignored, since generalized balancing is not yet implemented.

Function: int gsl_eigen_gen (gsl_matrix * A, gsl_matrix * B, gsl_vector_complex * alpha, gsl_vector * beta, gsl_eigen_gen_workspace * w)
This function computes the eigenvalues of the real generalized nonsymmetric matrix pair (A, B), and stores them as pairs in (alpha, beta), where alpha is complex and beta is real. If \beta_i is non-zero, then \lambda = \alpha_i / \beta_i is an eigenvalue. Likewise, if \alpha_i is non-zero, then \mu = \beta_i / \alpha_i is an eigenvalue of the alternate problem \mu A y = B y. The elements of beta are normalized to be non-negative. If S is desired, it is stored in A on output. If T is desired, it is stored in B on output. The ordering of eigenvalues in (alpha, beta) follows the ordering of the diagonal blocks in the Schur forms S and T. In rare cases, this function may fail to find all eigenvalues. If this occurs, an error code is returned.

Function: int gsl_eigen_gen_QZ (gsl_matrix * A, gsl_matrix * B, gsl_vector_complex * alpha, gsl_vector * beta, gsl_matrix * Q, gsl_matrix * Z, gsl_eigen_gen_workspace * w)
This function is identical to gsl_eigen_gen except that it also computes the left and right Schur vectors and stores them into Q and Z respectively.

Function: gsl_eigen_genv_workspace * gsl_eigen_genv_alloc (const size_t n)
This function allocates a workspace for computing eigenvalues and eigenvectors of n-by-n real generalized nonsymmetric eigensystems. The size of the workspace is O(7n).

Function: void gsl_eigen_genv_free (gsl_eigen_genv_workspace * w)
This function frees the memory associated with the workspace w.

Function: int gsl_eigen_genv (gsl_matrix * A, gsl_matrix * B, gsl_vector_complex * alpha, gsl_vector * beta, gsl_matrix_complex * evec, gsl_eigen_genv_workspace * w)
This function computes eigenvalues and right eigenvectors of the n-by-n real generalized nonsymmetric matrix pair (A, B). The eigenvalues are stored in (alpha, beta) and the eigenvectors are stored in evec. It first calls gsl_eigen_gen to compute the eigenvalues, Schur forms, and Schur vectors. Then it finds eigenvectors of the Schur forms and backtransforms them using the Schur vectors. The Schur vectors are destroyed in the process, but can be saved by using gsl_eigen_genv_QZ. The computed eigenvectors are normalized to have unit magnitude. On output, (A, B) contains the generalized Schur form (S, T). If gsl_eigen_gen fails, no eigenvectors are computed, and an error code is returned.

Function: int gsl_eigen_genv_QZ (gsl_matrix * A, gsl_matrix * B, gsl_vector_complex * alpha, gsl_vector * beta, gsl_matrix_complex * evec, gsl_matrix * Q, gsl_matrix * Z, gsl_eigen_genv_workspace * w)
This function is identical to gsl_eigen_genv except that it also computes the left and right Schur vectors and stores them into Q and Z respectively.
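The alpha/beta convention is easiest to see in a sketch. Here B is deliberately singular (an illustrative choice, not from this manual) so that the regular pencil (A, B) has one finite and one infinite eigenvalue:

#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_eigen.h>

int
main (void)
{
  double a_data[] = { 1.0, 2.0,
                      3.0, 4.0 };
  double b_data[] = { 1.0, 0.0,
                      0.0, 0.0 };   /* singular B: one infinite eigenvalue */

  gsl_matrix_view a = gsl_matrix_view_array (a_data, 2, 2);
  gsl_matrix_view b = gsl_matrix_view_array (b_data, 2, 2);

  gsl_vector_complex *alpha = gsl_vector_complex_alloc (2);
  gsl_vector *beta = gsl_vector_alloc (2);
  gsl_eigen_gen_workspace *w = gsl_eigen_gen_alloc (2);

  gsl_eigen_gen (&a.matrix, &b.matrix, alpha, beta, w);

  {
    int i;
    for (i = 0; i < 2; i++)
      {
        gsl_complex ai = gsl_vector_complex_get (alpha, i);
        double bi = gsl_vector_get (beta, i);

        if (bi != 0.0)   /* finite eigenvalue lambda = alpha / beta */
          printf ("lambda = %g + %gi\n",
                  GSL_REAL (ai) / bi, GSL_IMAG (ai) / bi);
        else
          printf ("infinite eigenvalue (beta = 0)\n");
      }
  }

  gsl_eigen_gen_free (w);
  gsl_vector_complex_free (alpha);
  gsl_vector_free (beta);

  return 0;
}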
## Sorting Eigenvalues and Eigenvectors

Function: int gsl_eigen_symmv_sort (gsl_vector * eval, gsl_matrix * evec, gsl_eigen_sort_t sort_type)
This function simultaneously sorts the eigenvalues stored in the vector eval and the corresponding real eigenvectors stored in the columns of the matrix evec into ascending or descending order according to the value of the parameter sort_type,

GSL_EIGEN_SORT_VAL_ASC
ascending order in numerical value
GSL_EIGEN_SORT_VAL_DESC
descending order in numerical value
GSL_EIGEN_SORT_ABS_ASC
ascending order in magnitude
GSL_EIGEN_SORT_ABS_DESC
descending order in magnitude

Function: int gsl_eigen_hermv_sort (gsl_vector * eval, gsl_matrix_complex * evec, gsl_eigen_sort_t sort_type)
This function simultaneously sorts the eigenvalues stored in the vector eval and the corresponding complex eigenvectors stored in the columns of the matrix evec into ascending or descending order according to the value of the parameter sort_type as shown above.

Function: int gsl_eigen_nonsymmv_sort (gsl_vector_complex * eval, gsl_matrix_complex * evec, gsl_eigen_sort_t sort_type)
This function simultaneously sorts the eigenvalues stored in the vector eval and the corresponding complex eigenvectors stored in the columns of the matrix evec into ascending or descending order according to the value of the parameter sort_type as shown above. Only GSL_EIGEN_SORT_ABS_ASC and GSL_EIGEN_SORT_ABS_DESC are supported due to the eigenvalues being complex.

Function: int gsl_eigen_gensymmv_sort (gsl_vector * eval, gsl_matrix * evec, gsl_eigen_sort_t sort_type)
This function simultaneously sorts the eigenvalues stored in the vector eval and the corresponding real eigenvectors stored in the columns of the matrix evec into ascending or descending order according to the value of the parameter sort_type as shown above.

Function: int gsl_eigen_genhermv_sort (gsl_vector * eval, gsl_matrix_complex * evec, gsl_eigen_sort_t sort_type)
This function simultaneously sorts the eigenvalues stored in the vector eval and the corresponding complex eigenvectors stored in the columns of the matrix evec into ascending or descending order according to the value of the parameter sort_type as shown above.

Function: int gsl_eigen_genv_sort (gsl_vector_complex * alpha, gsl_vector * beta, gsl_matrix_complex * evec, gsl_eigen_sort_t sort_type)
This function simultaneously sorts the eigenvalues stored in the vectors (alpha, beta) and the corresponding complex eigenvectors stored in the columns of the matrix evec into ascending or descending order according to the value of the parameter sort_type as shown above. Only GSL_EIGEN_SORT_ABS_ASC and GSL_EIGEN_SORT_ABS_DESC are supported due to the eigenvalues being complex.

## Examples

The following program computes the eigenvalues and eigenvectors of the 4-th order Hilbert matrix, H(i,j) = 1/(i + j + 1).
#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_eigen.h>

int
main (void)
{
  double data[] = { 1.0  , 1/2.0, 1/3.0, 1/4.0,
                    1/2.0, 1/3.0, 1/4.0, 1/5.0,
                    1/3.0, 1/4.0, 1/5.0, 1/6.0,
                    1/4.0, 1/5.0, 1/6.0, 1/7.0 };

  gsl_matrix_view m = gsl_matrix_view_array (data, 4, 4);

  gsl_vector *eval = gsl_vector_alloc (4);
  gsl_matrix *evec = gsl_matrix_alloc (4, 4);

  gsl_eigen_symmv_workspace * w = gsl_eigen_symmv_alloc (4);

  gsl_eigen_symmv (&m.matrix, eval, evec, w);

  gsl_eigen_symmv_free (w);

  gsl_eigen_symmv_sort (eval, evec, GSL_EIGEN_SORT_ABS_ASC);

  {
    int i;

    for (i = 0; i < 4; i++)
      {
        double eval_i = gsl_vector_get (eval, i);
        gsl_vector_view evec_i = gsl_matrix_column (evec, i);

        printf ("eigenvalue = %g\n", eval_i);
        printf ("eigenvector = \n");
        gsl_vector_fprintf (stdout, &evec_i.vector, "%g");
      }
  }

  gsl_vector_free (eval);
  gsl_matrix_free (evec);

  return 0;
}

Here is the beginning of the output from the program,

$ ./a.out
eigenvalue = 9.67023e-05
eigenvector =
-0.0291933
0.328712
-0.791411
0.514553
...

This can be compared with the corresponding output from GNU OCTAVE,

octave> [v,d] = eig(hilb(4));
octave> diag(d)
ans =

  9.6702e-05
  6.7383e-03
  1.6914e-01
  1.5002e+00

octave> v
v =

   0.029193   0.179186  -0.582076   0.792608
  -0.328712  -0.741918   0.370502   0.451923
   0.791411   0.100228   0.509579   0.322416
  -0.514553   0.638283   0.514048   0.252161

Note that the eigenvectors can differ by a change of sign, since the sign of an eigenvector is arbitrary.

The following program illustrates the use of the nonsymmetric eigensolver, by computing the eigenvalues and eigenvectors of the Vandermonde matrix V(x;i,j) = x_i^{n - j} with x = (-1,-2,3,4).

#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_eigen.h>

int
main (void)
{
  double data[] = { -1.0, 1.0, -1.0, 1.0,
                    -8.0, 4.0, -2.0, 1.0,
                    27.0, 9.0, 3.0, 1.0,
                    64.0, 16.0, 4.0, 1.0 };

  gsl_matrix_view m = gsl_matrix_view_array (data, 4, 4);

  gsl_vector_complex *eval = gsl_vector_complex_alloc (4);
  gsl_matrix_complex *evec = gsl_matrix_complex_alloc (4, 4);

  gsl_eigen_nonsymmv_workspace * w = gsl_eigen_nonsymmv_alloc (4);

  gsl_eigen_nonsymmv (&m.matrix, eval, evec, w);

  gsl_eigen_nonsymmv_free (w);

  gsl_eigen_nonsymmv_sort (eval, evec, GSL_EIGEN_SORT_ABS_DESC);

  {
    int i, j;

    for (i = 0; i < 4; i++)
      {
        gsl_complex eval_i = gsl_vector_complex_get (eval, i);
        gsl_vector_complex_view evec_i = gsl_matrix_complex_column (evec, i);

        printf ("eigenvalue = %g + %gi\n",
                GSL_REAL(eval_i), GSL_IMAG(eval_i));
        printf ("eigenvector = \n");
        for (j = 0; j < 4; ++j)
          {
            gsl_complex z = gsl_vector_complex_get(&evec_i.vector, j);
            printf("%g + %gi\n", GSL_REAL(z), GSL_IMAG(z));
          }
      }
  }

  gsl_vector_complex_free(eval);
  gsl_matrix_complex_free(evec);

  return 0;
}

Here is the beginning of the output from the program,

$ ./a.out
eigenvalue = -6.41391 + 0i
eigenvector =
-0.0998822 + 0i
-0.111251 + 0i
0.292501 + 0i
0.944505 + 0i
eigenvalue = 5.54555 + 3.08545i
eigenvector =
-0.043487 + -0.0076308i
0.0642377 + -0.142127i
-0.515253 + 0.0405118i
-0.840592 + -0.00148565i
...
This can be compared with the corresponding output from GNU OCTAVE,

octave> [v,d] = eig(vander([-1 -2 3 4]));
octave> diag(d)
ans =

  -6.4139 + 0.0000i
   5.5456 + 3.0854i
   5.5456 - 3.0854i
   2.3228 + 0.0000i

octave> v
v =

Columns 1 through 3:

  -0.09988 + 0.00000i  -0.04350 - 0.00755i  -0.04350 + 0.00755i
  -0.11125 + 0.00000i   0.06399 - 0.14224i   0.06399 + 0.14224i
   0.29250 + 0.00000i  -0.51518 + 0.04142i  -0.51518 - 0.04142i
   0.94451 + 0.00000i  -0.84059 + 0.00000i  -0.84059 - 0.00000i

Column 4:

  -0.14493 + 0.00000i
   0.35660 + 0.00000i
   0.91937 + 0.00000i
   0.08118 + 0.00000i

Note that the eigenvectors corresponding to the eigenvalue 5.54555 + 3.08545i differ by the multiplicative constant 0.9999984 + 0.0017674i which is an arbitrary phase factor of magnitude 1.

## References and Further Reading

Further information on the algorithms described in this section can be found in the following book,

• G. H. Golub, C. F. Van Loan, Matrix Computations (3rd Ed, 1996), Johns Hopkins University Press, ISBN 0-8018-5414-8.

Further information on the generalized eigensystems QZ algorithm can be found in this paper,

• C. Moler, G. Stewart, "An Algorithm for Generalized Matrix Eigenvalue Problems", SIAM J. Numer. Anal., Vol 10, No 2, 1973.

Eigensystem routines for very large matrices can be found in the Fortran library LAPACK. The LAPACK source code can be found at the LAPACK web site, along with an online copy of the users guide.
# What is the domain of y=4^x?

The domain is the set of values which $x$ can take. Since $4^x$ is defined for every real number $x$, the domain is clearly $(-\infty, +\infty)$.
# Easier Way to Find Probability

I know how to compute this with a concept similar to truth tables. First I listed all of the combinations of the angles in trios: $ABC$, $ABD$, $ABE$, $ACD$, $ACE$, $ADE$, $BCD$, $BCE$, $BDE$, $CDE$. Then I let $A$ represent the angles that are acute, and $N$ represent the angles that were not, and plugged such values into the combinations above. The result was: $AAN$, $AAN$, $AAA$, $ANN$, $ANA$, $ANA$, $ANN$, $ANA$, $ANA$, $NNA$. From this I could easily pinpoint the result $\frac{6}{10}$, which can be reduced to $\frac{3}{5}$.

My question is simple: is there an easier way to compute the same answer? If so, what is the corresponding formula? I have previously asked a question dealing with probability such as this, except replacement was involved. The response involved mapping out the answers like I did above, so this is where the confusion over easy computation arrives. Thank you!

• combinatorics is where you want to go it seems. – user451844 Oct 3 '17 at 2:05

The number of ways of selecting a subset of size $k$ from a set of $n$ objects is given by the formula $$\binom{n}{k} = \frac{n!}{k!(n - k)!}$$ where $n!$, read "$n$ factorial," is the product of the first $n$ positive integers if $n$ is a positive integer and $0! = 1$. The notation $\binom{n}{k}$ is read "$n$ choose $k$."

There are $\binom{5}{3}$ ways to select a subset of three of the five angles. Of the five angles, three are acute and two are not. If exactly two of the three selected angles are acute, one of the two other angles must be selected. Therefore, the number of favorable selections is $$\binom{3}{2}\binom{2}{1}$$ Hence, the probability that exactly two acute angles will be selected when three of the five angles are selected is $$\frac{\dbinom{3}{2}\dbinom{2}{1}}{\dbinom{5}{3}} = \frac{3 \cdot 2}{10} = \frac{3}{5}$$ as you found.

• Thank you for the short but sweet explanation on the notation! Helps a lot! Oct 3 '17 at 2:10
• not sure in this small case it's any easier than a pure list and count method. in larger examples it will save a lot though. – user451844 Oct 3 '17 at 2:13
• @RoddyMacPhee Agreed. Oct 3 '17 at 2:13

We want the probability for selecting $2$ from the $3$ acute and $1$ from the $2$ non-acute angles, when selecting any $3$ from the $5$ angles without bias or replacement. Recall that $\binom nk$ is the count for selections of $k$ items from a set of $n$ (with no replacement), and: $$\binom nk = \dfrac{n!}{k!~(n-k)!}$$ Put it together. $$\dfrac{\dbinom 32\dbinom 21}{\dbinom 53}=\dfrac{3}{5}$$
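Both answers are instances of the hypergeometric distribution. Stated generally (the labels $N$, $K$, $n$, $k$ below are generic, not tied to the angle names in the question): drawing $n$ objects without replacement from $N$ objects of which $K$ are of a special type,
$$P(\text{exactly } k \text{ special}) = \frac{\dbinom{K}{k}\dbinom{N-K}{n-k}}{\dbinom{N}{n}}.$$
With $N=5$ angles, $K=3$ acute, $n=3$ drawn, and $k=2$, this recovers $\frac{3}{5}$ as above.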
# Gaussian martingale independent increment

Let $M$ be a Gaussian martingale with continuous sample paths, such that $M_0=0$. I want to show that, for every $t \geq 0$ and every $s >0$, the random variable $M_{t+s}-M_t$ is independent of $\sigma(M_r, 0\leq r \leq t)$.

I guess I need to show $E[M_r (M_{t+s}-M_t)]=E[M_r] E[(M_{t+s}-M_t)]$. I appreciate any hints...

• $E[(M_{t+s}-M_t)M_r]=E[M_rE[M_{t+s}-M_t|\sigma(M_u,0\le u\le t)]]=0$. Hence $M_{t+s}-M_t$ and $(M_r,0\le r\le t)$ are uncorrelated and independent. – JGWang May 6 '17 at 3:32
• @JGWang I forgot $E[M_r]=0$. Thanks! – Siskaa May 7 '17 at 13:43

For $0=r_0<r_1<r_2<\cdots<r_n\le t$, the random vector $(M_{r_1}-M_{r_0},M_{r_2}-M_{r_1},\ldots,M_{r_n}-M_{r_{n-1}},M_{t+s}-M_t)$ has the multivariate normal distribution with zero mean vector and diagonal covariance matrix. It follows that $(M_{r_1}-M_{r_0},M_{r_2}-M_{r_1},\ldots,M_{r_n}-M_{r_{n-1}})$ is independent of $M_{t+s}-M_t$. Because $\sigma(M_r,0\le r\le t)$ is generated by events of the form $\cap_{k=1}^n\{M_{r_k}-M_{r_{k-1}}\in B_k\}$, for various choices of the $r_k$ and Borel sets $B_k$, the stated independence follows from the monotone class theorem.
# 23 UMa

### Related articles

Can Life Develop in the Expanded Habitable Zones around Red Giant Stars?
We present some new ideas about the possibility of life developing around subgiant and red giant stars. Our study concerns the temporal evolution of the habitable zone. The distance between the star and the habitable zone, as well as its width, increases with time as a consequence of stellar evolution. The habitable zone moves outward after the star leaves the main sequence, sweeping a wider range of distances from the star until the star reaches the tip of the asymptotic giant branch. Currently there is no clear evidence as to when life actually formed on the Earth, but recent isotopic data suggest life existed at least as early as 7×10^8 yr after the Earth was formed. Thus, if life could form and evolve over time intervals from 5×10^8 to 10^9 yr, then there could be habitable planets with life around red giant stars. For a 1 Msun star at the first stages of its post-main-sequence evolution, the temporal transit of the habitable zone is estimated to be several times 10^9 yr at 2 AU and around 10^8 yr at 9 AU. Under these circumstances life could develop at distances in the range 2-9 AU in the environment of subgiant or giant stars, and in the far distant future in the environment of our own solar system. After a star completes its first ascent along the red giant branch and the He flash takes place, there is an additional stable period of quiescent He core burning during which there is another opportunity for life to develop. For a 1 Msun star there is an additional 10^9 yr with a stable habitable zone in the region from 7 to 22 AU. Space astronomy missions, such as proposed for the Terrestrial Planet Finder (TPF) and Darwin, that focus on searches for signatures of life on extrasolar planets, should also consider the environments of subgiants and red giant stars as potentially interesting sites for understanding the development of life. We performed a preliminary evaluation of the difficulty of interferometric observations of planets around red giant stars compared to a main-sequence star environment. We show that pathfinder missions for TPF and Darwin, such as Eclipse and FKSI, have sufficient angular resolution and sensitivity to search for habitable planets around some of the closest evolved stars of the subgiant and red giant class.

The Indo-US Library of Coudé Feed Stellar Spectra
We have obtained spectra for 1273 stars using the 0.9 m coudé feed telescope at Kitt Peak National Observatory. This telescope feeds the coudé spectrograph of the 2.1 m telescope. The spectra have been obtained with the no. 5 camera of the coudé spectrograph and a Loral 3K×1K CCD. Two gratings have been used to provide spectral coverage from 3460 to 9464 Å, at a resolution of ~1 Å FWHM and at an original dispersion of 0.44 Å pixel^-1. For 885 stars we have complete spectra over the entire 3460 to 9464 Å wavelength region (neglecting small gaps of less than 50 Å), and partial spectral coverage for the remaining stars. The 1273 stars have been selected to provide broad coverage of the atmospheric parameters Teff, log g, and [Fe/H], as well as spectral type. The goal of the project is to provide a comprehensive library of stellar spectra for use in the automated classification of stellar and galaxy spectra and in galaxy population synthesis. In this paper we discuss the
characteristics of the spectral library, viz., details of the observations, data reduction procedures, and selection of stars. We also present a few illustrations of the quality and information available in the spectra. The first version of the complete spectral library is now publicly available from the National Optical Astronomy Observatory (NOAO) via ftp and http.

Nearby stars of the Galactic disk and halo. III.
High-resolution spectroscopic observations of about 150 nearby stars or star systems are presented and discussed. The study of these and another 100 objects of the previous papers of this series implies that the Galaxy became reality 13 or 14 Gyr ago with the implementation of a massive, rotationally-supported population of thick-disk stars. The very high star formation rate in that phase gave rise to a rapid metal enrichment and an expulsion of gas in supernovae-driven Galactic winds, but was followed by a star formation gap for no less than three billion years at the Sun's galactocentric distance. In a second phase, then, the thin disk, our familiar "Milky Way", came on stage. Nowadays it traces the bright side of the Galaxy, but it is also embedded in a huge coffin of dead thick-disk stars that account for a large amount of baryonic dark matter. As opposed to this, cold-dark-matter-dominated cosmologies that suggest a more gradual hierarchical buildup through mergers of minor structures, though popular, are a poor description for the Milky Way Galaxy, and by inference many other spirals as well, if, as the sample implies, the fossil records of its long-lived stars do not stick to this paradigm. Apart from this general picture that emerges with reference to the entire sample of stars, a good deal of the present work is however also concerned with detailed discussions of many individual objects. Among the most interesting we mention the blue straggler or merger candidates HD 165401 and HD 137763/HD 137778, the likely accretion of a giant planet or brown dwarf on 59 Vir in its recent history, and HD 63433 that proves to be a young solar analog at τ ≈ 200 Myr. Likewise, the secondary to HR 4867, formerly suspected non-single from the Hipparcos astrometry, is directly detectable in the high-resolution spectroscopic tracings, whereas the visual binary χ Cet is instead at least triple, and presumably even quadruple. With respect to the nearby young stars a complete account of the Ursa Major Association is presented, and we provide as well plain evidence for another, the "Hercules-Lyra Association", the likely existence of which was only realized in recent years. On account of its rotation, chemistry, and age we do confirm that the Sun is very typical among its G-type neighbors; as to its kinematics, it appears however not unlikely that the Sun's known low peculiar space velocity could indeed be the cause for the weak paleontological record of mass extinctions and major impact events on our parent planet during the most recent Galactic plane passage of the solar system. Although the significance of this correlation certainly remains a matter of debate for years to come, we point in this context to the principal importance of the thick disk for a complete census with respect to the local surface and volume densities.
Other important effects that can be ascribed to this dark stellar population comprise (i) the observed plateau in the shape of the luminosity function of the local FGK stars, (ii) a small though systematic effect on the basic solar motion, (iii) a reassessment of the term "asymmetrical drift velocity" for the remainder (i.e. the thin disk) of the stellar objects, (iv) its ability to account for the bulk of the recently discovered high-velocity blue white dwarfs, (v) its major contribution to the Sun's ~220 km s^-1 rotational velocity around the Galactic center, and (vi) the significant flattening that it imposes on the Milky Way's rotation curve. Finally we note a high multiplicity fraction in the small but volume-complete local sample of stars of this ancient population. This in turn is highly suggestive for a star formation scenario wherein the few existing single stellar objects might only arise from either late mergers or the dynamical ejection of former triple or higher level star systems.

The Geneva-Copenhagen survey of the Solar neighbourhood. Ages, metallicities, and kinematic properties of ~14 000 F and G dwarfs
We present and discuss new determinations of metallicity, rotation, age, kinematics, and Galactic orbits for a complete, magnitude-limited, and kinematically unbiased sample of 16 682 nearby F and G dwarf stars. Our ~63 000 new, accurate radial-velocity observations for nearly 13 500 stars allow identification of most of the binary stars in the sample and, together with published uvbyβ photometry, Hipparcos parallaxes, Tycho-2 proper motions, and a few earlier radial velocities, complete the kinematic information for 14 139 stars. These high-quality velocity data are supplemented by effective temperatures and metallicities newly derived from recent and/or revised calibrations. The remaining stars either lack Hipparcos data or have fast rotation. A major effort has been devoted to the determination of new isochrone ages for all stars for which this is possible. Particular attention has been given to a realistic treatment of statistical biases and error estimates, as standard techniques tend to underestimate these effects and introduce spurious features in the age distributions. Our ages agree well with those by Edvardsson et al. (1993), despite several astrophysical and computational improvements since then. We demonstrate, however, how strong observational and theoretical biases cause the distribution of the observed ages to be very different from that of the true age distribution of the sample. Among the many basic relations of the Galactic disk that can be reinvestigated from the data presented here, we revisit the metallicity distribution of the G dwarfs and the age-metallicity, age-velocity, and metallicity-velocity relations of the Solar neighbourhood. Our first results confirm the lack of metal-poor G dwarfs relative to closed-box model predictions (the "G dwarf problem"), the existence of radial metallicity gradients in the disk, the small change in mean metallicity of the thin disk since its formation and the substantial scatter in metallicity at all ages, and the continuing kinematic heating of the thin disk with an efficiency consistent with that expected for a combination of spiral arms and giant molecular clouds. Distinct features in the distribution of the V component of the space motion are extended in age and metallicity, corresponding to the effects of stochastic spiral waves rather than classical moving groups, and may complicate the identification of thick-disk stars from kinematic criteria.
More advanced analyses of this rich material will require careful simulations of the selection criteria for the sample and the distribution of observational errors. Based on observations made with the Danish 1.5-m telescope at ESO, La Silla, Chile, and with the Swiss 1-m telescope at Observatoire de Haute-Provence, France. Complete Tables 1 and 2 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/418/989 Differential rotation in rapidly rotating F-stars. We obtained high quality spectra of 135 stars of spectral types F and later and derived "overall" broadening functions in selected wavelength regions utilizing a Least Squares Deconvolution (LSD) procedure. Precision values of the projected rotational velocity v sin i were derived from the first zero of the Fourier transformed profiles, and the shapes of the profiles were analyzed for effects of differential rotation. The broadening profiles of 70 stars rotating faster than v sin i = 45 km s⁻¹ show no indications of multiplicity nor of spottedness. In those profiles we used the ratio of the first two zeros of the Fourier transform, q_2/q_1, to search for deviations from rigid rotation. In the vast majority the profiles were found to be consistent with rigid rotation. Five stars were found to have flat profiles probably due to cool polar caps; in three stars cuspy profiles were found. Two out of those three cases may be due to extremely rapid rotation seen pole on; only in one case (v sin i = 52 km s⁻¹) is solar-like differential rotation the most plausible explanation for the observed profile. These results indicate that the strength of differential rotation diminishes in stars rotating as rapidly as v sin i ≳ 50 km s⁻¹. Table A.1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/412/813 Based on observations collected at the European Southern Observatory, La Silla, 69.D-0015(B). Lithium and rotation in F and G dwarfs and subgiants. Lithium abundances have been determined in 127 F and G Pop I stars based on new measurements of the equivalent width of the λ6707 Å Li I line from their high resolution CCD spectra. Distances and absolute magnitudes of these stars have been obtained from the Hipparcos Catalogue and their masses and ages derived, enabling us to investigate the behaviour of lithium as a function of these parameters. Based on their location on the HR diagram superposed on theoretical evolutionary tracks, the sample of the stars has been chosen to ensure that they have more or less completed their Li depletion on the main sequence. A large spread in the Li abundances is found at any given effective temperature, especially in the already spun down late F and early G stars. This spread persists even if the "Li-dip" stars that have evolved from the main sequence temperature interval 6500-6800 K are excluded. Stars in the mass range up to 2 Msun, when divided into three metallicity groups, show a linear correlation between Li abundance and mass, albeit with a large dispersion around it which is not fully accounted for by age either. The large depletions and the observed spread in Li are in contrast to the predictions of the standard stellar model calculations and suggest that they are aided by non-standard processes depending upon variables besides mass, age and metallicity. The present study was undertaken to examine, in particular, the effects of rotation on the depletion of Li.
No one-to-one correlation is found between the Li abundance and the present projected rotational velocity. Instead the observed abundances seem to be dictated by the rotational history of the star. However, it is noted that even this interpretation is subject to the inherent limitation in the measurement of the observed Li EQW for large rotational velocities. Table 1 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/409/251 On the link between rotation, chromospheric activity and Li abundance in subgiant stars. The connection rotation-CaII emission flux-lithium abundance is analyzed for a sample of bona fide subgiant stars, with evolutionary status determined from HIPPARCOS trigonometric parallax measurements and from the Toulouse-Geneva code. The distribution of rotation and CaII emission flux as a function of effective temperature shows a discontinuity located around the same spectral type, F8IV. Blueward of this spectral type, subgiants have a large spread of values of rotation and CaII flux, whereas stars redward of F8IV show essentially low rotation and low CaII flux. The strength of these declines depends on stellar mass. The abundance of lithium also shows a sudden decrease. For subgiants with mass lower than about 1.2 Msun the decrease is located later than that in rotation and CaII flux, whereas for masses higher than 1.2 Msun the decrease in lithium abundance is located around the spectral type F8IV. The discrepancy between the location of the discontinuities of rotation and CaII emission flux and log n(Li) for stars with masses lower than 1.2 Msun seems to reflect the sensitivity of these phenomena to the mass of the convective envelope. The drop in rotation, which results mostly from a magnetic braking, requires an increase in the mass of the convective envelope less than that required for the decrease in log n(Li). The location of the discontinuity in log n(Li) for stars with masses higher than 1.2 Msun, in the same region of the discontinuities in rotation and CaII emission flux, may also be explained by the behavior of the deepening of the convective envelope. The more massive the star is, the earlier is the increase of the convective envelope. In contrast to the relationship between rotation and CaII flux, which is fairly linear, the relationship between lithium abundance and rotation shows no clear tendency toward linear behavior. Similarly, no clear linear trend is observed in the relationship between lithium abundance and CaII flux. In spite of these facts, subgiants with high lithium content also have high rotation and high CaII emission flux. Rotational velocities of A-type stars in the northern hemisphere. II. Measurement of v sin i. This work is the second part of the set of measurements of v sin i for A-type stars, begun by Royer et al. (2002a). Spectra of 249 B8 to F2-type stars brighter than V=7 have been collected at Observatoire de Haute-Provence (OHP). Fourier transforms of several line profiles in the range 4200-4600 Å are used to derive v sin i from the frequency of the first zero. Statistical analysis of the sample indicates that measurement error mainly depends on v sin i, and this relative error of the rotational velocity is found to be about 5% on average. The systematic shift with respect to standard values from Slettebak et al. (1975), previously found in the first paper, is here confirmed.
Comparisons with data from the literature agree with our findings: v sin i values from Slettebak et al. are underestimated, and the relation between both scales follows a linear law: v sin i_new = 1.03 v sin i_old + 7.7. Finally, these data are combined with those from the previous paper (Royer et al. 2002a), together with the catalogue of Abt & Morrell (1995). The resulting sample includes some 2150 stars with homogenized rotational velocities. Based on observations made at Observatoire de Haute Provence (CNRS), France. Tables \ref{results} and \ref{merging} are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/897 Catalogue of Apparent Diameters and Absolute Radii of Stars (CADARS) - Third edition - Comments and statistics. The Catalogue, available at the Centre de Données Stellaires de Strasbourg, consists of 13 573 records concerning the results obtained from different methods for 7778 stars, reported in the literature. The following data are listed for each star: identifications, apparent magnitude, spectral type, apparent diameter in arcsec, absolute radius in solar units, method of determination, reference, remarks. Comments and statistics obtained from CADARS are given. The Catalogue is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcar?J/A+A/367/521 Research Note: Hipparcos photometry: The least variable stars. The data known as the Hipparcos Photometry obtained with the Hipparcos satellite have been investigated to find those stars which are least variable. Such stars are excellent candidates to serve as standards for photometric systems. Their spectral types suggest in which parts of the HR diagrams stars are most constant. In some cases these values strongly indicate that previous ground based studies claiming photometric variability are incorrect or that the level of stellar activity has changed. Table 2 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/367/297 The proper motions of fundamental stars. I. 1535 stars from the Basic FK5. A direct combination of the positions given in the HIPPARCOS catalogue with astrometric ground-based catalogues having epochs later than 1939 allows us to obtain new proper motions for the 1535 stars of the Basic FK5. The results are presented as the catalogue Proper Motions of Fundamental Stars (PMFS), Part I. The median precision of the proper motions is 0.5 mas/year for μ_α cos δ and 0.7 mas/year for μ_δ. The non-linear motions of the photocentres of a few hundred astrometric binaries are separated into their linear and elliptic motions. Since the PMFS proper motions do not include the information given by the proper motions from other catalogues (HIPPARCOS, FK5, FK6, etc.),
this catalogue can be used as an independent source of the proper motions of the fundamental stars. The catalogue (Table 3) is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strastg.fr/cgi-bin/qcat?J/A+A/365/222 Photometric Measurements of the Fields of More than 700 Nearby Stars. In preparation for optical/IR interferometric searches for substellar companions of nearby stars, we undertook to characterize the fields of all nearby stars visible from the Northern Hemisphere to determine suitable companions for interferometric phase referencing. Because the Keck Interferometer in particular will be able to phase-reference on companions within the isoplanatic patch (30") to about 17th magnitude at K, we took images at V, r, and i that were deep enough to determine if field stars were present to this magnitude around nearby stars using a spot-coated CCD. We report on 733 fields containing 10,629 measurements in up to three filters (Gunn i, r and Johnson V) of nearby stars down to about 13th magnitude at V. A Second Catalog of Orbiting Astronomical Observatory 2 Filter Photometry: Ultraviolet Photometry of 614 Stars. Ultraviolet photometry from the Wisconsin Experiment Package on the Orbiting Astronomical Observatory 2 (OAO 2) is presented for 614 stars. Previously unpublished magnitudes from 12 filter bandpasses with wavelengths ranging from 1330 to 4250 Å have been placed on the white dwarf model atmosphere absolute flux scale. The fluxes were converted to magnitudes using V=0 for F(V)=3.46x10^-9 ergs cm^-2 s^-1 Å^-1, or m_λ = -2.5 log F_λ - 21.15. This second catalog effectively doubles the amount of OAO 2 photometry available in the literature and includes many objects too bright to be observed with modern space observatories. The ROSAT all-sky survey catalogue of the nearby stars. We present X-ray data for all entries of the Third Catalogue of Nearby Stars (Gliese & Jahreiss 1991) that have been detected as X-ray sources in the ROSAT all-sky survey. The catalogue contains 1252 entries yielding an average detection rate of 32.9 percent. In addition to count rates, source detection parameters, hardness ratios, and X-ray fluxes we also list X-ray luminosities derived from Hipparcos parallaxes. Catalogue also available at CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html The ROSAT all-sky survey catalogue of optically bright main-sequence stars and subgiant stars. We present X-ray data for all main-sequence and subgiant stars of spectral types A, F, G, and K and luminosity classes IV and V listed in the Bright Star Catalogue that have been detected as X-ray sources in the ROSAT all-sky survey; several stars without luminosity class are also included. The catalogue contains 980 entries yielding an average detection rate of 32 percent. In addition to count rates, source detection parameters, hardness ratios, and X-ray fluxes we also list X-ray luminosities derived from Hipparcos parallaxes. The catalogue is also available in electronic form via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html The Tokyo PMC catalog 90-93: Catalog of positions of 6649 stars observed in 1990 through 1993 with the Tokyo photoelectric meridian circle. The sixth annual catalog of the Tokyo Photoelectric Meridian Circle (PMC) is presented for 6649 stars which were observed at least two times in January 1990 through March 1993.
The mean positions of the stars observed are given in the catalog at the corresponding mean epochs of observations of individual stars. The coordinates of the catalog are based on the FK5 system, and referred to the equinox and equator of J2000.0. The mean local deviations of the observed positions from the FK5 catalog positions are constructed for the basic FK5 stars to compare with those of the Tokyo PMC Catalog 89 and preliminary Hipparcos results of H30. The Angular Momentum of Main Sequence Stars and Its Relation to Stellar Activity. Rotational velocities are reported for intermediate-mass main sequence stars in the field. The measurements are based on new, high S/N CCD spectra from the Coudé Feed Telescope of the Kitt Peak National Observatory. We analyze these rotation rates for a dependence on both mass and age. We compare the average rotation speeds of the field stars with mean velocities for young stars in Orion, the Alpha Persei cluster, the Pleiades, and the Hyades. The average rotation speeds of stars more massive than ~1.6 Msun experience little or no change during the evolutionary lifetimes of these stars on the zero age main sequence or within the main sequence band. Less massive stars in the range between 1.6 Msun and 1.3 Msun also show little decline in mean rotation rate while they are on the main sequence, and at most a factor of 2 decrease in velocity as they evolve off the main sequence. The e-folding time for the loss of angular momentum by the latter group of stars is at least 1-2 billion years. This inferred characteristic time scale for spindown is far longer than the established rotational braking time for solar-type stars with masses below ~1.3 Msun. We conclude from a comparison of the trends in rotation with trends in chromospheric and coronal activity that the overall decline in mean rotation speed along the main sequence, from ~2 Msun down to ~1.3 Msun, is imposed during the pre-main sequence phase of evolution, and that this pattern changes little thereafter while the star resides on the main sequence. The magnetic activity implicated in the rotational spindown of the Sun and of similar stars during their main sequence lifetimes must therefore play only a minor role in determining the rotation rates of the intermediate mass stars, either because a solar-like dynamo is weak or absent, or else the geometry of the magnetic field is appreciably less effective in removing angular momentum from these stars. Convection, Thermal Bifurcation, and the Colors of A Stars. Broadband ultraviolet photometry from the TD-1 satellite and low-dispersion spectra from the short wavelength camera of IUE have been used to investigate a long-standing proposal of Bohm-Vitense that the normal main-sequence A and early-F stars may divide into two different temperature sequences: (1) a high-temperature branch (and plateau) comprised of slowly rotating convective stars, and (2) a low-temperature branch populated by rapidly rotating radiative stars. We find no evidence from either data set to support such a claim, or to confirm the existence of an "A-star gap" in the B-V color range 0.22 ≤ B-V ≤ 0.28 due to the sudden onset of convection. We do observe, nonetheless, a large scatter in the 1800-2000 Å colors of the A-F stars, which amounts to ~0.65 mag at a given B-V color index. The scatter is not caused by interstellar or circumstellar reddening. A convincing case can also be made against binarity and intrinsic variability due to pulsations of delta Sct origin.
We find no correlation with established chromospheric and coronal proxies of convection, and thus no demonstrable link to the possible onset of convection among the A-F stars. The scatter is not instrumental. Approximately 0.4 mag of the scatter is shown to arise from individual differences in surface gravity as well as a moderate spread (factor of ~3) in heavy metal abundance and UV line blanketing. A dispersion of ~0.25 mag remains, which has no clear and obvious explanation. The most likely cause, we believe, is a residual imprecision in our correction for the spread in metal abundances. However, the existing data do not rule out possible contributions from intrinsic stellar variability or from differential UV line blanketing effects owing to a dispersion in microturbulent velocity. Systematic Errors in the FK5 Catalog as Derived from CCD Observations in the Extragalactic Reference Frame. Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1997AJ....114..850S&db_key=AST The Relation between Rotational Velocities and Spectral Peculiarities among A-Type Stars. Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1995ApJS...99..135A&db_key=AST Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtitle: Radial velocities: The Wilson-Evans-Batten catalogue. We give a common version of the two catalogues of Mean Radial Velocities by Wilson (1963) and Evans (1978), to which we have added the catalogue of spectroscopic binary systems (Batten et al. 1989). For each star, when possible, we give: 1) an acronym to enter SIMBAD (Set of Identifications, Measurements and Bibliography for Astronomical Data) of the CDS (Centre de Données Astronomiques de Strasbourg); 2) the HIC number of the HIPPARCOS catalogue (Turon 1992); 3) the CCDM number (Catalogue des Composantes des étoiles Doubles et Multiples) by Dommanget & Nys (1994). For the cluster stars, a precise study has been done on the identifier numbers. Numerous remarks point out the problems we have had to deal with. A catalog of stellar Lyman-alpha fluxes. We present a catalog of stellar Ly-alpha emission fluxes, based on new and archival images obtained with the IUE spacecraft. The catalog includes 227 stars with detectable Ly-alpha emission fluxes, and upper limits on the Ly-alpha emission flux for another 48 stars. Multiple flux measurements are given for 52 stars. We present a model for correcting the observed Ly-alpha flux for attenuation by the local interstellar medium, and we apply this model to derive intrinsic Ly-alpha fluxes for 149 catalog stars which are located in low H I column density directions of the local interstellar medium. In our catalog, there are 14 late-A and early-F stars at B-V = 0.29 or less that show detectable emission at Ly-alpha. We find a linear correlation between the intrinsic Ly-alpha flux and the C II 1335 Å flux for stars with B-V greater than 0.60, but the A and F stars deviate from this relation in the sense that their Ly-alpha flux is too low. We also find a good correlation between Ly-alpha strength and coronal X-ray emission. This correlation holds over most of the H-R diagram, even for the F stars, where an X-ray deficit has previously been found relative to the transition region lines of C II and C IV.
Corrections to FK4 Positions of Stars Observed at Paris Astrolabe. Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1993A&AS..102..389N&db_key=AST Optical Polarization of 1000 Stars Within 50 Parsecs from the Sun. Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1993A&AS..101..551L&db_key=AST Corrections to the right ascension to be applied to the apparent places of 1217 stars given in "The Chinese Astronomical Almanach" for the years 1984 to 1992. Not Available. Do all Three Vary: omicron UMa, 23 UMa and HR 3245? Not Available. Catalog of stars observed with the photoelectric CERGA astrolabe (March 1988 - July 1991). From March 1988 to July 1991 the photoelectric CERGA astrolabe ASPHO was used to observe 11 star groups. During each annual cycle, each star was observed sufficiently to allow an early determination of FK5 catalog corrections with a precision of 0.03 and 0.04 arcsec in right ascension and declination respectively. The results are given here in the form of a combined catalog for the three years of observations; corrections to the FK5 positions computed for the epoch 1990.0 and corrections to the FK5 proper motions are also given. Errors are estimated as a combination of the internal yearly error and of the nonlinearity of each star's apparent motion during the three years. The results are discussed, showing that the catalog is well linked to the FK5 system without shift in alpha or delta. It is concluded that the corrections in positions and proper motions given here are significant within estimated errors. Secondary spectrophotometric standards. Energy distribution data on 238 secondary standard stars are presented in the range 3200-7600 Å with a 50 Å step. These stars are common to the Catalog of the Sternberg State Astronomical Institute and the Fessenkov Astrophysical Institute. For these stars, the differences between the spectral energy distribution data of the two catalogs do not exceed 5 percent, while the mean internal accuracy of both catalogs' data in this range is about 3.5 percent. For 99 stars, energy distribution data in the near infrared (6000-10,800 Å) obtained at the Sternberg State Astronomical Institute are also presented. Spectral energy distribution of stars in the near infrared. Not Available. Preliminary Version of the Third Catalogue of Nearby Stars. Not Available.
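Two of the calibrations quoted in the abstracts above are easy to sanity-check numerically: the OAO 2 zero point m_λ = -2.5 log10(F_λ) - 21.15 and the Slettebak-scale conversion v sin i_new = 1.03 v sin i_old + 7.7. Here is a minimal Python sketch (my own illustration; only the two constants come from the abstracts, everything else is assumed):

```python
import math

def oao2_mag(flux):
    # OAO 2 calibration quoted above: m_lambda = -2.5 log10(F_lambda) - 21.15
    return -2.5 * math.log10(flux) - 21.15

def vsini_homogenized(vsini_slettebak):
    # Scale relation quoted above: v sin i_new = 1.03 * v sin i_old + 7.7 (km/s)
    return 1.03 * vsini_slettebak + 7.7

print(oao2_mag(3.46e-9))         # ~0.0, recovering V = 0 for the reference flux
print(vsini_homogenized(100.0))  # 110.7 km/s on the homogenized scale
```

The first call reproduces the stated normalization (V = 0 for F(V) = 3.46x10^-9 ergs cm^-2 s^-1 Å^-1), which is a useful consistency check on the quoted constants.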
# The Bode plot of the open-loop transfer function of a system is described as follows:

Slope −40 dB/decade for ω < 0.1 rad/s
Slope −20 dB/decade for 0.1 < ω < 10 rad/s
Slope 0 for ω > 10 rad/s

The system described will have

This question was previously asked in ESE Electronics 2016 Paper 2.

1. 1 pole and 2 zeros
2. 2 poles and 2 zeros
3. 2 poles and 1 zero
4. 1 pole and 1 zero

Option 2: 2 poles and 2 zeros

## Detailed Solution

Concept:

A Bode plot transfer function is represented in standard time-constant form as

$$T\left( s \right) = \frac{{k\left( {\frac{s}{{{\omega _{{c_1}}}}} + 1} \right) \ldots }}{{\left( {\frac{s}{{{\omega _{{c_2}}}}} + 1} \right)\left( {\frac{s}{{{\omega _{{c_3}}}}} + 1} \right) \ldots }}$$

where ωc1, ωc2, … are the corner frequencies.

In a Bode magnitude plot:

- For a pole at the origin, the initial slope is −20 dB/decade
- For a zero at the origin, the initial slope is +20 dB/decade
- The slope of the magnitude plot changes at each corner frequency
- A corner frequency associated with a pole changes the slope by −20 dB/decade
- A corner frequency associated with a zero changes the slope by +20 dB/decade
- The final slope of the Bode magnitude plot is (Z − P) × 20 dB/decade, where Z is the number of zeros and P is the number of poles

Calculation:

The initial slope of −40 dB/decade indicates 2 poles at the origin. The slope then increases by +20 dB/decade at ω = 0.1 rad/s (one zero) and by another +20 dB/decade at ω = 10 rad/s (a second zero), giving a final slope of 0, consistent with Z = P = 2. Hence the system has 2 poles and 2 zeros.
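As a numerical cross-check of the counting argument, the sketch below (my own illustration, not part of the original solution) builds a transfer function with two poles at the origin and zeros at the corner frequencies 0.1 and 10 rad/s, then estimates the magnitude slope in each region with scipy; the gain k = 1 is an arbitrary assumption.

```python
import numpy as np
from scipy import signal

# T(s) = (s/0.1 + 1)(s/10 + 1) / s^2 : two poles at the origin, zeros at 0.1 and 10
num = np.polymul([10.0, 1.0], [0.1, 1.0])   # (10s + 1)(0.1s + 1)
den = [1.0, 0.0, 0.0]                        # s^2
w = np.logspace(-3, 3, 601)
w, mag, _ = signal.bode(signal.TransferFunction(num, den), w=w)

# Estimate the slope (dB per decade) well inside each region
for lo, hi in [(1e-3, 1e-2), (0.5, 2.0), (1e2, 1e3)]:
    i, j = np.searchsorted(w, [lo, hi])
    slope = (mag[j] - mag[i]) / np.log10(w[j] / w[i])
    print(f"{lo:g}-{hi:g} rad/s: {slope:+.0f} dB/decade")
# Prints approximately -40, -20 and 0, matching option 2 (P = Z = 2)
```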
# Why is there a substantial delivery delay when sending from my Gmail account to an Earthlink account?

Recently I've been having significant trouble with delivery delays when sending Gmail to a person who has an Earthlink account. Sending the email works fine, yet the mail does not arrive in the recipient's mailbox for several hours, and sometimes days. Sometimes, I receive an email back from the mail daemon that says:

Mail Delivery Subsystem to me, 2:17 PM (5 mins ago)

This is an automatically generated Delivery Status Notification

THIS IS A WARNING MESSAGE ONLY.

YOU DO NOT NEED TO RESEND YOUR MESSAGE.

Delivery to the following recipient has been delayed:

[email protected]

Message will be retried for 2 more day(s)

Technical details of temporary failure: The recipient server did not accept our requests to connect. Learn more at http://mail.google.com/support/bin/answer.py?answer=7720

[mx00-dom.earthlink.net. (10): Connection break]

[mx01-dom.earthlink.net. (10): Connection break]

I'm not sure why this is happening, especially since I've been emailing this person for years without any delays, yet now it happens almost every time I email her.
# Identity in number theory Is the following statement correct? If gcd$$\displaystyle (a, m)=m$$, then $$\displaystyle a^m\equiv 0$$ (mod $$\displaystyle m$$) $$\displaystyle a^m = km$$, which implies $$\displaystyle k = a^{m-1} * \frac{a}{m}$$
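For what it's worth, here is a short way to complete the argument (my own sketch of the standard divisibility step, which is what the attempt above is missing). Since gcd$$\displaystyle (a, m)=m$$ forces $$\displaystyle m\mid a$$, write $$\displaystyle a = cm$$ for some integer $$\displaystyle c$$. Then

$$\displaystyle a^m = (cm)^m = \left ( c^m m^{m-1} \right ) m,$$

so $$\displaystyle m\mid a^m$$ and hence $$\displaystyle a^m\equiv 0$$ (mod $$\displaystyle m$$) for every $$\displaystyle m\geq 1$$. This also shows that the $$\displaystyle k$$ above is an integer, since $$\displaystyle k = a^{m-1}\cdot \frac{a}{m} = a^{m-1}c$$.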
Help with X-ray Scattering: Reconciling Bragg Scattering with Fraunhofer Diffraction

Summary: Most (if not all) X-ray scattering discussions center around Bragg's law of reflection. However, most experiments seem to be better described by Fraunhofer diffraction. Is there a way of connecting these phenomena? Or could someone with more X-ray experience help me reconcile these approaches?

Main Question or Discussion Point

1. Quick derivation of Bragg scattering
2. Discussion of modern X-ray experiments as they relate to Bragg/Fraunhofer
3. Summary of points

Bragg/von Laue Scattering: (I will be following Ashcroft if you want to sing along, pg 98-99)

Imagine you have light incident on some crystal structure with wavevector ##k=2\pi\hat{n}/\lambda##. You make the following assumption: the light is scattered elastically (its wavelength doesn't change). For constructive interference, the path difference between any two scattered rays must be an integer number of wavelengths, which gives the von Laue condition:
$$R \cdot{} (k-k') = 2 \pi m$$
with R being a lattice vector, k the incident wavevector, and k' the scattered one. This is equivalent to saying that the difference between the incident and scattered wavevectors must be a reciprocal lattice vector. Because the scattering is elastic, |k|=|k′|, and we can use this to derive the following:
$$\vec{k} \cdot{} \hat{K} = \frac{1}{2} K$$
(Graphically, you can see this in the following geometric construction: you have two vectors of the same length. Subtracting them gives a third vector. Because the two original vectors are the same length, this makes an isosceles triangle. You can verify that each of the two equal sides of an isosceles triangle, when projected onto the third side, composes 1/2 of that side. See Ashcroft pg 99 for a picture of this, or my badly drawn Figure 1.)

Problem

Imagine you have a beam of light incident on a 1D lattice (see Figure 2), where the incident wavevector points along ##\hat{x}## and the crystal Bravais lattice is along ##\hat{z}##. In this geometry k⋅K = 0, so Bragg's law predicts that no scattering will occur (as far as I can see; see Figure 3). The issue then is that this is the geometry that is used for a lot of X-ray experiments! This is the geometry, for instance, of a synchrotron, where a beam of light is incident on a sample and the detector measures transmission. If you look at a lot of X-ray literature, the plots will often be in terms of the Bragg angle (2θ).

To sum up, most theoretical descriptions of X-ray scattering use Bragg scattering (all the ones I've seen), when it appears that Bragg scattering gives nonsensical results in a very common experimental geometry. I can think of two solutions.
1. I'm an idiot and completely misunderstood Bragg scattering or modern X-ray science (in which case, could you point me in the direction of some resources that tackle this issue?)
2. It doesn't matter / it is an experimental approximation. Most crystal lattices are on the order of tens of angstroms, so you'd only need a deviation of the incident beam from pure ##\hat{x}## by arctan(0.5 K/k) to meet the Bragg condition, which would be small.
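To put a number on point 2, here is a minimal sketch (my own; the wavelength and spacing are assumed values, Cu Kα at 1.54 Å and a 10 Å lattice constant) of how small the required tilt away from perpendicular incidence actually is:

```python
import numpy as np

lam = 1.54   # X-ray wavelength in Angstroms (Cu K-alpha; assumed)
d = 10.0     # lattice spacing in Angstroms ("tens of Angstroms"; assumed)

k = 2 * np.pi / lam   # magnitude of the incident wavevector
K = 2 * np.pi / d     # magnitude of the smallest reciprocal lattice vector

# Tilt needed so the Bragg/von Laue condition k . K_hat = K/2 can be met
theta = np.degrees(np.arctan(0.5 * K / k))
print(f"required tilt: {theta:.1f} degrees")   # ~4.4 degrees for these numbers
```

So for these numbers a tilt of only a few degrees satisfies the condition, consistent with the "small deviation" reading of option 2.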
x = 35. The angles $\angle WTS$ and $\angle YUV$ are a pair of consecutive exterior angles sharing a sum of $\boldsymbol{180^{\circ}}$. What property can you use to justify your answer? Let us recall the definition of parallel lines, meaning they are a pair of lines that never intersect and are always the same distance apart. The diagram given below illustrates this. In the diagram given below, find the value of x that makes j||k. If $\angle STX$ and $\angle TUZ$ are equal, show that $\overline{WX}$ and $\overline{YZ}$ are parallel lines. The image shown to the right shows how a transversal line cuts a pair of parallel lines. If you have congruent alternate exterior angles, the two lines are parallel. When lines and planes are perpendicular and parallel, they have some interesting properties. Alternate Interior Angles. Day 4: SWBAT: Apply theorems about Perpendicular Lines (Pages 28-34; HW: pages 35-36). Day 5: SWBAT: Prove angles congruent using Complementary and Supplementary Angles (Pages 37-42; HW: pages 43-44). Day 6: SWBAT: Use theorems about angles formed by Parallel Lines and a … Two lines are parallel if they never meet and are always the same distance apart. Go back to the definition of parallel lines: they are coplanar lines sharing the same distance but never meeting. By the congruence supplements theorem, it follows that … Using the Corresponding Angles Converse: Theorem 3.5 below is the converse of the Corresponding Angles Theorem (Theorem 3.1). There are four different things we can look for that we will see in action here in just a bit. THEOREMS/POSTULATES: If two parallel lines are cut by a transversal, then … Since parallel lines are used in different branches of math, we need to master them as early as now. Using the same figure and angle measures from Question 7, what is the sum of $\angle 1 ^{\circ}$ and $\angle 8 ^{\circ}$? The angles $\angle 1 ^{\circ}$ and $\angle 8 ^{\circ}$ are a pair of alternate exterior angles and are equal. The angles $\angle 4 ^{\circ}$ and $\angle 5 ^{\circ}$ are alternate interior angles inside a pair of parallel lines, so they are both equal. If $\angle 1 ^{\circ}$ and $\angle 8 ^{\circ}$ are equal, show that $\angle 4 ^{\circ}$ and $\angle 5 ^{\circ}$ are equal as well. Consecutive exterior angles are consecutive angles sharing the same outer side along the line; the transversal crosses both of these parallel lines. Two lines cut by a transversal line are parallel when the alternate interior angles are equal. Provide examples that demonstrate solving for unknown variables and angle measures to determine if lines are parallel or not. Three parallel planes: if two planes are parallel to the same plane, […] Use the image shown below to answer Questions 4-6. Therefore: 4x − 19 = 3x + 16, so 4x − 3x = 16 + 19 = 35. Parallel Lines Cut By A Transversal – Lesson & Examples (Video), 1 hr 10 min.
Proving Lines Are Parallel: When you were given Postulate 10.1, you were able to prove several angle relationships that developed when two parallel lines were cut by a transversal. Compare the railroad tracks to the parallel lines and the road to the transversal. Let's go ahead and begin with its definition. In coordinate geometry, when the graphs of two linear equations are parallel, the slopes are equal. Prove theorems about parallel lines. Holt McDougal Geometry 3-3, Proving Lines Parallel: Recall that the converse of a theorem is found by exchanging the hypothesis and conclusion. This means that the actual measure of $\angle EFA$ is $\boldsymbol{69 ^{\circ}}$. Two lines, l and m, are parallel, and are cut by a transversal t. In addition, suppose that l ⊥ t. Two lines cut by a transversal line are parallel when the sum of the consecutive exterior angles is $\boldsymbol{180^{\circ}}$. Therefore, by the alternate interior angles converse, g and h are parallel. If the two lines are parallel and cut by a transversal line, what is the value of $x$? Just the same distance apart. If two lines and a transversal form alternate interior angles (notice I abbreviated it), then if these alternate interior angles are congruent, that is enough to say that these two lines must be parallel. In the next section, you'll learn what the following angles are and their properties. When two lines are cut by a transversal line, the properties below will help us determine whether the lines are parallel. Apply the Same-Side Interior Angles Theorem in finding out if line A is parallel to line B. We'll learn more about this in coordinate geometry, but for now, let's focus on the parallel lines' properties and using them to solve problems. Lines are parallel if they are always the same distance apart (called "equidistant"), and will never meet. In the video below, we will use the properties of parallelograms to determine if we have enough information to prove a given quadrilateral is a parallelogram. Since $a$ and $c$ share the same values, $a = c$. Specifically, we want to look for pairs of angles. Parallel Lines – Definition, Properties, and Examples. The angles $\angle EFA$ and $\angle EFB$ are adjacent to each other and form a line, so they add up to $\boldsymbol{180^{\circ}}$. Example: $\angle a^{\circ} + \angle g^{\circ}=180^{\circ}$, $\angle b^{\circ} + \angle h^{\circ}=180^{\circ}$. Substitute this value of $x$ into the expression for $\angle EFA$ to find its actual measure. Since it was shown that $\overline{WX}$ and $\overline{YZ}$ are parallel lines, what is the value of $\angle YUT$ if $\angle WTU = 140 ^{\circ}$? Understanding what parallel lines are can help us find missing angles, solve for unknown values, and even learn what they represent in coordinate geometry. Since the lines are parallel and $\boldsymbol{\angle B}$ and $\boldsymbol{\angle C}$ are corresponding angles, $\boldsymbol{\angle B = \angle C}$. Pedestrian crossings: all painted lines are lying along the same direction of the road, but these lines will never meet. True or False?
Theorem: If two lines are perpendicular to the same line, then they are parallel. Before we begin, let's review the definition of transversal lines. Picture a railroad track and a road crossing the tracks. By the linear pair postulate, ∠5 and ∠6 are also supplementary, because they form a linear pair. Consider the angles that are formed at the intersection between this transversal line and the two parallel lines. Which of the following terms do not describe a pair of parallel lines? Example 1: If you are given a figure (see below) with congruent corresponding angles, then the two lines cut by the transversal are parallel. The theorem states that the same-side interior angles must be supplementary given that the lines intersected by the transversal line are parallel. This packet should help a learner seeking to understand how to prove that lines are parallel using converse postulates and theorems. Explain. In the diagram given below, decide which rays are parallel. Substitute x in the expressions: 3x − 120 = 3(63) − 120 = 69. Now we get to look at the angles that are formed by the transversal with the parallel lines. d. Vertical strings of a tennis racket's net. Then you think about the importance of the transversal, the line that cuts across the two other lines. If two boats sail at a 45° angle to the wind as shown, and the wind is constant, will their paths ever cross? You can use the following theorems to prove that lines are parallel. Proving Lines Parallel. Are the two lines cut by the transversal line parallel? This is a transversal line. The converse of a theorem is not automatically true. Transversal lines are lines that cross two or more lines. In the diagram given below, if ∠4 and ∠5 are supplementary, then prove g||h. In the diagram given below, if ∠1 ≅ ∠2, then prove m||n. Proving Lines are Parallel: Students learn the converse of the parallel line postulate. Proving that lines are parallel: all these theorems work in reverse. And lastly, you'll write two-column proofs given parallel lines. When a pair of parallel lines are cut by a transversal line, different pairs of angles are formed. If two lines are cut by a transversal so that alternate interior angles are (congruent, supplementary, complementary), then the lines are parallel. Let's summarize what we've learned so far about parallel lines: the properties below will help us determine and show that two lines are parallel. Parallel lines are equidistant lines (lines having equal distance from each other) that will never meet. Parallel lines do not intersect. This means that $\boldsymbol{\angle 1 ^{\circ}}$ is also equal to $\boldsymbol{108 ^{\circ}}$. In the standard equation of a linear equation (y = mx + b), the coefficient "m" represents the slope of the line. Theorem 2.3.1: If two lines are cut by a transversal so that the corresponding angles are congruent, then these lines are parallel. Two lines cut by a transversal line are parallel when the alternate exterior angles are equal. Divide both sides of the equation by $4$ to find $x$. If $\angle WTU$ and $\angle YUT$ are supplementary, show that $\overline{WX}$ and $\overline{YZ}$ are parallel lines. Lines 1 and 2 are parallel if the alternate exterior angles (4x − 19)° and (3x + 16)° are congruent. Therefore 4x − 19 = 3x + 16, so x = 35. Hence, x = 35°.
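As a quick check of the alternate-exterior-angle computation just above, here is a minimal sympy sketch (my own illustration, not part of the original lesson) that solves 4x − 19 = 3x + 16:

```python
from sympy import symbols, Eq, solve

x = symbols('x')
# Lines 1 and 2 are parallel when the alternate exterior angles are congruent:
print(solve(Eq(4*x - 19, 3*x + 16), x))  # [35], so x = 35
```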
A tip from Math Bits says: if we can show that one set of opposite sides are both parallel and congruent, which in turn indicates that the polygon is a parallelogram, this will save time when working a proof. Use the alternate exterior angle theorem to prove that lines 1 and 2 are parallel. Consecutive interior angles add up to $180^{\circ}$. If two lines are cut by a transversal so that consecutive interior angles are supplementary, then the lines are parallel. Then we think about the importance of the transversal. So the paths of the boats will never cross. So AE and CH are parallel. Are the two lines cut by the transversal line parallel? The two pairs of angles shown above are examples of corresponding angles. This shows that the two lines are parallel. Example: In the above figure, \(L_1\) and \(L_2\) are parallel and \(L\) is the transversal. If $\overline{AB}$ and $\overline{CD}$ are parallel lines, what is the actual measure of $\angle EFA$? And what I want to think about is the angles that are formed, and how they relate to each other. Use the Transitive Property of Parallel Lines. If the lines $\overline{AB}$ and $\overline{CD}$ are parallel, identify the values of all the remaining seven angles. SWBAT: use angle pairs to prove that lines are parallel, and construct a line parallel to a given line. We are given that ∠4 and ∠5 are supplementary. Recall that two lines are parallel if their pair of alternate exterior angles are equal. Fill in the blank: If the two lines are parallel, $\angle b ^{\circ}$ and $\angle h^{\circ}$ are ___________ angles. Now what? Let's try to answer the examples shown below using the definitions and properties we've just learned. Consecutive exterior angles add up to $180^{\circ}$. Parallel Lines, and Pairs of Angles. If two lines are cut by a transversal and alternate interior angles are congruent, then the lines are parallel. The following diagram shows several vectors that are parallel. If two lines are cut by a transversal so that same-side interior angles are (congruent, supplementary, complementary), then the lines are parallel. First, you recall the definition of parallel lines, meaning they are a pair of lines that never intersect and are always the same distance apart. Which of the following real-world examples do not represent a pair of parallel lines? If $\overline{WX}$ and $\overline{YZ}$ are parallel lines, what is the value of $x$ when $\angle WTU = (5x - 36)^{\circ}$ and $\angle TUZ = (3x - 12)^{\circ}$? When working with parallel lines, it is important to be familiar with their definition and properties. Parallel lines are two or more lines that are the same distance apart, never merging and never diverging.
What are parallel, intersecting, and skew lines? If the lines $\overline{AB}$ and $\overline{CD}$ are parallel and $\angle 8 ^{\circ} = 108 ^{\circ}$, what must be the value of $\angle 1 ^{\circ}$? Notes: PROOFS OF PARALLEL LINES, Geometry Unit 3 - Reasoning & Proofs w/Congruent Triangles, Page 163. EXAMPLE 1: Use the diagram on the right to complete the following theorems/postulates. Add $72$ to both sides of the equation to isolate $4x$. Big Idea: With an introduction to logic, students will prove the converse of their parallel line theorems, and apply that knowledge to the construction of parallel lines. Proving Lines Are Parallel: Suppose you have the situation shown in Figure 10.7. Free parallel line calculator - find the equation of a parallel line step-by-step. In general, they are angles that are in relative positions and lying along the same side. To use geometric shorthand, we write the symbol for parallel lines as two tiny parallel lines, like this: ∥. Equate their two expressions to solve for $x$. The converse of a theorem is not automatically true. ∠BEH and ∠DHG are corresponding angles, but they are not congruent. Remember that when it comes to proving two lines are parallel, all we have to look at are the angles. Because corresponding angles are congruent, the paths of the boats are parallel. Students learn the converse of the parallel line postulate and the converse of each of the theorems covered in the previous lesson, which are as follows. The two angles are alternate interior angles as well. Welcome back to Educator.com. This next lesson is on proving lines parallel. We are actually going to take the theorems that we learned from the past few lessons, and we are going to use them to prove that two lines are parallel. We learned, from the Corresponding Angles Postulate, that if the lines are parallel, then the corresponding angles are congruent. This means that $\angle EFB = (x + 48)^{\circ}$. Two lines cut by a transversal line are parallel when the corresponding angles are equal. Construct parallel lines. Recall that two lines are parallel if their pair of consecutive exterior angles add up to $\boldsymbol{180^{\circ}}$. How To Determine If The Given 3-Dimensional Vectors Are Parallel? Two vectors are parallel if they are scalar multiples of one another. If u and v are two non-zero vectors and u = c v, then u and v are parallel.
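A small numeric version of the scalar-multiple test (my own sketch; the tolerance choice is arbitrary): two non-zero 3-D vectors satisfy u = c v for some scalar c exactly when their cross product vanishes, which avoids having to solve for c directly.

```python
import numpy as np

def are_parallel(u, v, tol=1e-9):
    # Non-zero vectors satisfy u = c*v for some scalar c iff u x v = 0
    return np.linalg.norm(np.cross(u, v)) < tol

print(are_parallel([2, 4, 6], [1, 2, 3]))  # True: u = 2*v
print(are_parallel([1, 0, 0], [0, 1, 0]))  # False
```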
Alternate exterior angles are a pair of angles found on the outer side but lying opposite each other. Isolate $2x$ on the left-hand side of the equation. If two lines are cut by a transversal and corresponding angles are congruent, then the lines are parallel. 10. Two boats sail at a 45° angle to the wind as shown; if the wind is constant, will their paths ever cross? ∠AEH and ∠CHG are congruent corresponding angles. In geometry, parallel lines can be identified and drawn by using the concept of slope, or the lines' inclination with respect to the x and y axes. Graphing Parallel Lines; Real-Life Examples of Parallel Lines; Parallel Lines Definition. Roadways and tracks: the opposite tracks and roads will share the same direction, but they will never meet at one point.
The angles that lie in the area enclosed between two parallel lines that are intersected by a transversal are also called interior angles. Use this information to set up an equation, and we can then solve for $x$. Parallel lines are lines that are lying on the same plane but will never meet. By the linear pair postulate, ∠5 and ∠6 are also supplementary, because they form a linear pair. Two lines with the same slope do not intersect and are considered parallel. Therefore, by the alternate interior angles converse, g and h are parallel. Both lines must be coplanar (in the same plane). Does the diagram give enough information to conclude that a ∥ b? Add the two expressions to simplify the left-hand side of the equation. By the congruence supplements theorem, it follows that ∠4 ≅ ∠6. Fill in the blank: If the two lines are parallel, $\angle c ^{\circ}$ and $\angle f ^{\circ}$ are ___________ angles. Several geometric relationships can be used to prove that two lines are parallel. The hands of a clock, however, meet at the center of the clock, so they will never be represented by a pair of parallel lines. These different types of angles are used to prove whether two lines are parallel to each other. Parallel Lines – Definition, Properties, and Examples. Hence, $\overline{WX}$ and $\overline{YZ}$ are parallel lines. You can use some of these properties in 3-D proofs that involve 2-D concepts, such as proving that you have a particular quadrilateral or proving that two triangles are similar. 3.3: Proving Lines Parallel. Theorems and Postulates: Converse of the Corresponding Angles Postulate - If two coplanar lines are cut by a transversal so that a pair of corresponding angles are congruent, then the two lines are parallel.
# Unemployment 9.1% for May 2011 - Only 54,000 Jobs!

The May 2011 monthly unemployment figures show the official unemployment rate increased to 9.1% and the total jobs gained were 54,000. Total private jobs came in at 83,000, with government jobs dropping -29,000. The count of those not in the labor force dropped by -105,000. The labor force participation rate was unchanged at 64.2%, the same as the previous four months. This is the lowest labor participation rate since March 1984. Those added to the civilian labor force were +272,000. The civilian non-institutional population increased by +167,000. What happened here is that more people were counted in the unemployment statistics than last month.

U-6, the broader unemployment measurement, decreased 0.1 percentage point to 15.8%; it moves together with U-3, the official unemployment rate. But since there are few jobs, the decline must be due to people falling off of the count.

Below is the nonfarm payroll, the total number of jobs, seasonally adjusted. Since the start of the great recession, declared by the NBER to be December 2007, the United States has officially lost 6.94 million jobs. That does not take into account the additional jobs needed to employ the United States' increased population, but it does include the jobs added over the 3.42-year, or 41-month, time period.

Below is a running tally of how many official jobs have been permanently lost since the official start of this past recession (recall the private NBER has declared the recession over!). This is a horrific tally, and notice it isn't taking into account increased population growth, which implies the United States needs to create at least 10.27 million jobs or self-employment positions. This estimate assumes a 62.7% civilian non-institutional population to employment ratio, as it was in December 2007, which implies an additional 3.33 million jobs needed over the 41-month time period.

We get a new graph of the alternative unemployment measurement, U-6, posted below. Here you can see the incredible increase in comparison to the beginning of this broader unemployment measurement.

How can the unemployment rate increase? The official unemployed increased by 167,000; alternatively, the employed increased +105,000. The actual labor force grew by 272,000. The employment to population ratio did not change, at 58.4%. So, we had more unemployed people than employed people entering the labor force, which increases the unemployment rate. You may notice these numbers exceed the actual number of jobs created, 54,000, reported by the BLS. The BLS has two different surveys and two different methods, and additionally it counts other types of work beyond payroll. These numbers are from the household survey, whereas the actual job count is from the establishment survey.

Below is an annualized graph of the civilian non-institutional population. It's from this superset of people that potential workers come. The civilian labor force increased by +272,000, while the civilian population increased by +167,000. Yet those not in the labor force decreased by -105,000. This means more people entered the labor force looking for work. People re-entering the labor force increased by only 58,000, yet new entrants declined by -115,000. So, these numbers are a little confusing, considering the dramatic uptick in initial unemployment claims.

The civilian non-institutional population are those 16 years or older who are not locked up somewhere, not in the military, and not so sick and disabled that they are in a nursing home and so on. The increasingly low labor participation rate is now at 64.2%.
If we go back to December 2007, the labor participation rate was 66%. The highest civilian labor participation rate was in January 2000, at 67.3%. What this means is that there are over 4.3 million people not accounted for in the official unemployment rate who probably need a job and can't find one, and no, they are not all baby boomers retiring:

$\text{(05/11 civilian non-institutional population)} \times (\text{12/07 participation rate} - \text{05/11 participation rate}) \approx 4.3 \text{ million}$

The employment to population ratio is now 58.4%, which is at record lows. This isn't a structural change, such as all families deciding to have a stay-at-home caretaker or a host of people magically retiring early; this is people dropping out of the count. They need a job, but they stopped looking, fell off of the rolls, and stopped being counted. These numbers are important because unemployment is a ratio: the percentage, during a limited time period, of people actively looking for a job and counted. Many people are not counted in the official unemployment statistics due to definitions, but obviously when one has more potential workers and fewer jobs, that metric doesn't bode well for America.

Below is the graph of the civilian non-institutional population, which is the largest superset of the potential labor force, larger than the civilian workforce, since it also counts those who are not looking for work, are retired, and so on. This is why one must create jobs at a rate greater than the constant rate of jobs lost. There are more people to employ. Unemployment is a percentage, a ratio.

The BLS unemployment report counts foreign temporary guest workers as well as illegal immigrants in its U.S. labor force statistics. One needs at least 98,000 (some estimate up to 375,000) permanent full-time jobs added each month just to keep pace with U.S. civilian workforce population growth. That's not the general population; that's the group needing a job. This unemployment report doesn't even give enough jobs to keep up with population growth. It's so dismal that maybe now politicians will realize we have had a jobs crisis going on for over 41 months!

## Forum Categories:

### AP gets it wrong....again

Last month fewer people were looking for work; this month the number increased. That does not negate the fact that more people are not counted, as evidenced by the lame labor participation rate. AP needs to pay some people who know what they are talking about to analyze government economic reports. They completely blew it on the January one and are doing so again. They don't seem to grasp that all these things are derived from the civilian non-institutional population either.

### Robert I have a question

> Yet those not in the labor force decreased by -105,000. This means more people entered the labor force looking for work. We see this reflected in the uptick in initial unemployment claims.

I don't get how initial unemployment claims go up because of people entering the labor force? I can't seem to grasp it. An initial claim is someone laid off or who lost a job, right?

### it is, initial claims are those filing for UI, initially

So, it's kind of a poorly worded paragraph, sorry. It's more that the rate can go up because there are simply more people to choose from for jobs. Hires and fires happen every day, but the rate goes up when there are more people being counted and participating than there are jobs.
So, more people were participating, and additionally more people were being fired, because there was an uptick in filings. I don't think a bunch of people who were fired magically waited a while and then filed for UI or anything; it's more indirect, increased supply. I should move that paragraph to separate out the two.

### corrected

I added some info about new entrants and re-entrants vs. increases in population too. I went through more in the new post, but to me, it didn't quite add up yet. I hate the fact that we have, in so many words, three different employment reports, because they often "do not jive": three different metrics.
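A quick back-of-the-envelope check of the participation-gap formula and the jobs tally above. The May 2011 civilian non-institutional population figure used here (about 239.3 million) is an assumption taken from BLS tables of that era, not a number stated in the post itself:

```python
# Rough check of the "missing workers" arithmetic in the post above.
pop_may_2011 = 239.3e6    # civilian non-institutional population (assumed, ~BLS May 2011)
lfpr_dec_2007 = 0.660     # labor force participation rate, Dec 2007 (from the post)
lfpr_may_2011 = 0.642     # labor force participation rate, May 2011 (from the post)

missing_workers = pop_may_2011 * (lfpr_dec_2007 - lfpr_may_2011)
print(f"uncounted potential workers: {missing_workers / 1e6:.1f} million")   # ~4.3 million

# Total jobs deficit cited in the post: payroll losses plus population growth.
jobs_lost = 6.94e6        # payroll jobs lost since Dec 2007 (from the post)
pop_growth_jobs = 3.33e6  # jobs needed to cover population growth (from the post)
print(f"total jobs needed: {(jobs_lost + pop_growth_jobs) / 1e6:.2f} million")  # 10.27 million
```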
# statsmodels.genmod.families.links.CDFLink.deriv2

CDFLink.deriv2(p)

Second derivative of the link function g''(p), implemented through numerical differentiation.
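A minimal usage sketch. With its default distribution (the standard normal), CDFLink is the probit link g(p) = Φ⁻¹(p), whose second derivative has the closed form z/φ(z)² with z = Φ⁻¹(p). The analytic value and the central-difference check below are added here for illustration and are not part of the statsmodels docs:

```python
import numpy as np
from scipy.stats import norm
from statsmodels.genmod.families import links

link = links.CDFLink()   # default dbn=scipy.stats.norm, i.e. the probit link
p = 0.7

# statsmodels' numerical second derivative
print(link.deriv2(p))

# manual central-difference check: g''(p) ~ (g(p+h) - 2 g(p) + g(p-h)) / h^2
h = 1e-5
print((link(p + h) - 2 * link(p) + link(p - h)) / h**2)

# analytic value for the probit link: g''(p) = z / phi(z)^2 with z = norm.ppf(p)
z = norm.ppf(p)
print(z / norm.pdf(z)**2)   # all three should agree to several digits (~4.34)
```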
Enter your data into the calculator and click Submit. You can also change the type of the calculator in the second row of the menu. The calculators are divided into several groups; a description is available if you move your mouse over the name of each group (the first row of the menu).

Domain of a function in two variables

Function f(x,y) =

Limits for the picture with the domain: draw the domain on the set [-5,5]x[-5,5], on [-10,10]x[-10,10], or on [-20,20]x[-20,20]. (Click only once and wait a few seconds for the answer!)

An offline version is also available, and translators are welcome. The project Mathematical Assistant on Web is hosted on sourceforge.net and supported by Grant 99/2008 of FRVŠ.

Warning. Did you try the problem by yourself first? Blindly typing the problem into the computer and putting down the answer may have a serious negative influence on your education. The optimal way to use this application is to solve the problem yourself and then check your result against the computer-generated answer.

This is a temporary URL for Mathematical Assistant on Web. Please do not spread this URL. Always use http://user.mendelu.cz/marik/maw. Thanks.
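The site's backend isn't shown, but the idea it implements, shading the region of the plane where f(x, y) is defined, is easy to sketch. Here is a minimal matplotlib version for a made-up example function f(x, y) = sqrt(9 - x^2 - y^2) on the default [-5,5] x [-5,5] window:

```python
import numpy as np
import matplotlib.pyplot as plt

# f(x, y) = sqrt(9 - x^2 - y^2) is defined where 9 - x^2 - y^2 >= 0
x = np.linspace(-5, 5, 400)
y = np.linspace(-5, 5, 400)
X, Y = np.meshgrid(x, y)
inside = (9 - X**2 - Y**2 >= 0)   # boolean mask of the domain

plt.contourf(X, Y, inside.astype(int), levels=[0.5, 1.5], colors=["lightblue"])
plt.gca().set_aspect("equal")
plt.title(r"Domain of $f(x,y)=\sqrt{9-x^2-y^2}$")
plt.show()
```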
# Find values of k if area of triangle is 4 square units and vertices are

Question:

Find values of k if the area of the triangle is 4 square units and the vertices are (i) $(k, 0),(4,0),(0,2)$ (ii) $(-2,0),(0,4),(0, k)$

Solution:

We know that the area of a triangle whose vertices are $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$ is the absolute value of the determinant $\Delta$, where

$\Delta=\frac{1}{2}\left|\begin{array}{lll}x_{1} & y_{1} & 1 \\ x_{2} & y_{2} & 1 \\ x_{3} & y_{3} & 1\end{array}\right|$

It is given that the area of the triangle is 4 square units.

$\therefore \Delta=\pm 4$

(i) The area of the triangle with vertices (k, 0), (4, 0), (0, 2) is given by the relation

$\Delta=\frac{1}{2}\left|\begin{array}{lll}k & 0 & 1 \\ 4 & 0 & 1 \\ 0 & 2 & 1\end{array}\right|$

$=\frac{1}{2}[k(0-2)-0(4-0)+1(8-0)]$

$=\frac{1}{2}[-2 k+8]=-k+4$

$\therefore -k+4=\pm 4$

When $-k+4=-4, k=8$. When $-k+4=4, k=0$.

Hence, $k=0,8$.

(ii) The area of the triangle with vertices (−2, 0), (0, 4), (0, k) is given by the relation

$\Delta=\frac{1}{2}\left|\begin{array}{ccc}-2 & 0 & 1 \\ 0 & 4 & 1 \\ 0 & k & 1\end{array}\right|$

$=\frac{1}{2}[-2(4-k)]$

$=k-4$

$\therefore k-4=\pm 4$

When $k-4=-4, k=0$. When $k-4=4, k=8$.

Hence, $k=0,8$.
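A quick numerical check of part (i) with numpy: plugging k = 0 and k = 8 into the determinant formula above should give an area of 4 in both cases.

```python
import numpy as np

def triangle_area(p1, p2, p3):
    """Area = |det| / 2 with rows (x_i, y_i, 1), as in the formula above."""
    m = np.array([[*p1, 1], [*p2, 1], [*p3, 1]], dtype=float)
    return abs(np.linalg.det(m)) / 2

for k in (0, 8):
    print(k, triangle_area((k, 0), (4, 0), (0, 2)))   # both print 4.0
```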
# What Is Radioactivity In Chemistry

## What Is Blackbody Radiation In Chemistry

Blackbody radiation is a theoretical concept in quantum mechanics in which a material or substance completely absorbs all frequencies of light. … As the temperature increases, the total radiation emitted also increases due to an increase in the area under the curve.

#### Alpha Particle

Rutherford's experiments demonstrated that there are three main forms of radioactive emissions. The first is called an alpha particle, which is symbolized by the Greek letter $\alpha$. An alpha particle is composed of two protons and two neutrons and is the same as a helium nucleus. It has a 2+ charge. When a radioactive atom emits an alpha particle, the original atom's atomic number decreases by two, and its mass number decreases by four. We can represent the emission of an alpha particle with a chemical equation; for example, the alpha-particle emission of uranium-235 is as follows:

$$^{235}_{92}\mathrm{U} \longrightarrow \; ^{231}_{90}\mathrm{Th} + \; ^{4}_{2}\mathrm{He}$$

Rather than calling this equation a chemical equation, we call it a nuclear equation to emphasize that the change occurs in an atomic nucleus. How do we know that a product of this reaction is $^{231}_{90}$Th? We use the law of conservation of matter, which says that matter cannot be created or destroyed. This means we must have the same number of protons and neutrons on both sides of the nuclear equation. If our uranium nucleus loses 2 protons, there are 90 protons remaining, identifying the element as thorium. Moreover, if we lose four nuclear particles of the original 235, there are 231 remaining. Thus we use subtraction to identify the isotope of the Th atom: in this case, $^{231}_{90}$Th.

#### Beta Particle

Again, the sum of the atomic numbers is the same on both sides of the equation, as is the sum of the mass numbers.

Table 3.1 The Three Main Forms of Radioactive Emissions

Radioactivity is the term used to describe the natural process by which some atoms spontaneously disintegrate, emitting both particles and energy as they transform into different, more stable atoms. This process, also called radioactive decay, occurs because unstable isotopes tend to transform into a more stable state. Radioactivity is measured in terms of disintegrations, or decays, per unit time. Common units of radioactivity are the becquerel, equal to 1 decay per second, and the curie, equal to 37 billion decays per second. Radiation refers to the particles or energy released during radioactive decay. The radiation emitted may be in the form of particles, such as neutrons, alpha particles, and beta particles, or waves of pure energy, such as gamma rays and X-rays. Each radioactive element, or radionuclide, has a characteristic half-life. Half-life is a measure of the time it takes for one half of the atoms of a particular radionuclide to disintegrate into another nuclear form. Half-lives vary from millionths of a second to billions of years. Because radioactivity is a measure of the rate at which a radionuclide decays, the longer the half-life of a radionuclide, the less radioactive it is for a given mass.

## The Nature Of Radioactive Emissions

The emissions of the most common forms of spontaneous radioactive decay are the alpha particle, the beta particle, the gamma ray, and the neutrino. The alpha particle is actually the nucleus of a helium-4 atom, with two positive charges: $^{4}_{2}$He$^{2+}$. Such charged atoms are called ions.
The neutral helium atom has two electrons outside its nucleus balancing these two charges. Beta particles may be negatively charged (beta minus, $\beta^-$) or positively charged ($\beta^+$). The beta minus particle is actually an electron created in the nucleus during beta decay, without any relationship to the orbital electron cloud of the atom. The beta plus particle, also called the positron, is the antiparticle of the electron; when brought together, two such particles will mutually annihilate each other. Gamma rays are electromagnetic radiations, like radio waves, light, and X-rays. Beta radioactivity also produces the neutrino and antineutrino, particles that have no charge and very little mass, symbolized by $\nu$ and $\bar{\nu}$, respectively.

It is important for the healthcare professional to predict the activity of the radioactive material at any point in time before or after the assay being undertaken, as it is crucial to know the exact activity at administration to the patient. Radioactive decay can be described as the average number of radioactive isotopes disintegrating per unit time. The disintegration rate is defined as $-dN/dt$. The disintegration rate is proportional to the number of undecayed radioisotopes and can also be expressed as the activity $A = \lambda N$. Upon integration, the radioactive decay of any radioactive sample can be calculated by applying the so-called radionuclide decay equation. In order to calculate the radioactivity at a specific time point $t$, it is important to know the initial activity $A_0$, the elapsed time $t$, and the decay constant $\lambda$. Half-life is the time that passes until the activity has halved.

Example: A radioactive sample has a half-life of 8.05 days and contains 150 mCi of radioactivity. Calculate the radioactivity left after 20 days. (A worked numeric solution appears as a code sketch at the end of this section.)

## The Structure Of An Atom And Radioactivity

It's actually due to radioactivity that we even understand the underlying structure of the atom at all. After the discovery of the electron in 1897 by J. J. Thomson, the most popular theory of how an atom was structured was the plum pudding model, or the Thomson model. Thomson proposed that negatively charged "plums" were surrounded by a positively charged "pudding".

Scattering of alpha particles if the plum pudding model were correct, compared to the real results (image: Wikimedia Commons)

Ernest Rutherford tested the plum pudding model by directing a beam of alpha particles at a strip of gold foil. Alpha particles are a form of radiation with a large positive charge. He expected the alpha particles to pass through the gold with no deflection, as the positively charged "pudding" should be evenly spread out. However, a very small number of the alpha particles were deflected, sometimes being reflected completely. He proposed that the atom actually consisted of a small, compact, and positively charged nucleus surrounded by a cloud of electrons, called the Rutherford model. The vast majority of the alpha particles passed through the atom without any deflection, proving how small the nucleus was compared to the atom as a whole.
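A minimal numeric solution to the half-life example above (150 mCi, half-life 8.05 days, 20 days elapsed), using the standard radionuclide decay equation $A = A_0 e^{-\lambda t}$ with $\lambda = \ln 2 / t_{1/2}$:

```python
import math

A0 = 150.0       # initial activity, mCi
t_half = 8.05    # half-life, days
t = 20.0         # elapsed time, days

lam = math.log(2) / t_half     # decay constant, 1/day
A = A0 * math.exp(-lam * t)    # radionuclide decay equation
print(f"activity after {t:g} days: {A:.1f} mCi")   # ~26.8 mCi
```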
## What Is Radioactivity In Chemistry For Kids

Radioactivity is simply when very small particles in objects emit energy or smaller particles. The energy that is produced can result in cancer, serious environmental damage, or helpful technologies. There are different degrees of radioactivity, and different exposures increase the harm it can cause.

The first three types of radioactive decay to be discovered were alpha, beta, and gamma decay. These modes of decay were named by their ability to penetrate matter. Alpha decay penetrates the shortest distance, while gamma decay penetrates the greatest distance. Eventually, the processes involved in alpha, beta, and gamma decay were better understood, and additional types of decay were discovered.

• Alpha decay: An alpha particle is emitted from the nucleus, resulting in a daughter nucleus with mass number A − 4 and atomic number Z − 2.
• Proton emission: The parent nucleus emits a proton, resulting in a daughter nucleus with mass number A − 1 and atomic number Z − 1.
• Neutron emission: The parent nucleus ejects a neutron, resulting in a daughter nucleus with mass number A − 1 and the same atomic number Z.
• Spontaneous fission: An unstable nucleus disintegrates into two or more smaller nuclei.
• Beta minus decay: A nucleus emits an electron and an electron antineutrino to yield a daughter with A, Z + 1.
• Beta plus decay: A nucleus emits a positron and an electron neutrino to yield a daughter with A, Z − 1.
• Electron capture: A nucleus captures an electron and emits a neutrino, resulting in a daughter that is unstable and excited.
• Isomeric transition: An excited nucleus releases a gamma ray, resulting in a daughter with the same atomic mass and atomic number.

Gamma decay typically occurs following another form of decay, such as alpha or beta decay. When a nucleus is left in an excited state, it may release a gamma ray photon in order for the atom to return to a lower and more stable energy state.

## Occurrence Of Alpha Decay

Alpha decay occurs only in the heaviest of the elements. The element's nucleus should be large or unstable enough to undergo spontaneous fission-type changes. It is the most common form of decay in such elements. The alpha particles emitted out of the nucleus usually have an energy level of around 5 MeV and a speed of around 5% of that of light. It is important to note that alpha particles possess a charge of +2 due to the absence of electrons. Due to this charge, and owing to its heavy mass, an alpha particle reacts with the surroundings vigorously and loses all of its energy almost immediately. Their forward motion can be stopped by a few centimeters of air. Owing to their heaviness and their charge, this kind of radioactive decay reacts most violently with the human body. They have a high ionizing power, due to which they can wreak havoc in a tissue. An overdose of alpha radiation results in the formation of blisters and burns on the victims' bodies.

## Key Takeaways: Definition Of Radioactivity

• Radioactivity is the process by which an unstable atomic nucleus loses energy by emitting radiation.
• The SI unit of radioactivity is the becquerel. Other units include the curie, gray, and sievert.
• Alpha, beta, and gamma decay are three common processes through which radioactive materials lose energy.

## Nature, Notation, and Units

Radioactivity is the phenomenon of the spontaneous disintegration of unstable atomic nuclei to form more energetically stable atomic nuclei.
Radioactive decay is a highly exoergic, statistically random, first-order process that occurs with a small amount of mass being converted to energy. Since it is a first-order process, each radioactive species is characterized by its own half-life, the length of time in which an initially very large number of such nuclei will have decayed to only half the original number. In radioactive decay, a relatively large amount of energy is liberated in each disintegration: typically about 1 million times more than the amount of energy liberated in an exothermic chemical reaction, that is, a few million electron volts of energy per nucleus, compared to only a few electron volts of energy per atom or molecule. Since radioactive decay is a nuclear rather than an electronic phenomenon, its rate for a given radioactive species is not altered measurably by changes in temperature or pressure; the only exception to this is the production of very slight changes in half-life by the use of great pressures on a few radionuclides that decay by the process of orbital electron capture.

Radioactivity is a property exhibited by certain types of matter of emitting energy and subatomic particles spontaneously. It is, in essence, an attribute of individual atomic nuclei. An unstable nucleus will decompose spontaneously, or decay, into a more stable configuration, but will do so only in a few specific ways: by emitting certain particles or certain forms of electromagnetic energy. Radioactivity is a property of several naturally occurring elements as well as of artificially produced isotopes of the elements. The rate at which a radioactive element decays is expressed in terms of its half-life, i.e., the time required for one-half of any given quantity of the isotope to decay. Half-lives range from more than $10^{24}$ years for some nuclei to less than $10^{-23}$ second. The product of a radioactive process, called the daughter of the parent isotope, may itself be unstable, in which case it, too, will decay. The process continues until a stable nuclide has been formed.

## Atoms And Radioactivity: Beta Particle

In contrast to alpha decay, if an unstable nucleus has too many neutrons compared to protons, it will emit a beta ($\beta$) particle. A neutron within the nucleus will spontaneously turn into a proton, ejecting a high-velocity electron in the process. The beta particle is literally just one electron.

A Caesium-137 nucleus decays into Barium-137 and emits a beta particle (image: Wikimedia Commons)

Beta decay will cause an atom to change to a different element. Remember that a neutron has been converted into a proton. This will increase the proton number of the nucleus by one but keep the mass number unchanged, as an electron has virtually no mass. A beta particle can be written as $^{\;0}_{-1}\beta$ (or $^{\;0}_{-1}e$) in the context of nuclear equations. The nuclear equation for the beta decay of Caesium-137 into Barium-137 shown in the example above is

$$^{137}_{55}\mathrm{Cs} \longrightarrow \; ^{137}_{56}\mathrm{Ba} + \; ^{\;0}_{-1}\beta$$

## Atoms And Radioactivity: Neutron Emission

Some radioactive isotopes are capable of decay by emitting neutrons ($n$) at high velocities. This is most commonly seen during nuclear fission of high-mass radioactive isotopes with a high neutron-to-proton ratio. Depending on the isotope that is undergoing decay, one or multiple neutrons can be emitted at once.
Neutron emission during the fission of an atomic nucleus (image: Flickr)

When a nucleus emits a neutron, its mass number decreases by 1, but its proton number remains the same. The emitted neutron is generally written as $^{1}_{0}n$. An atom's designated element depends only on the proton number and not the mass number. This means that neutron emission alone will never change the element of an atom, although it will change it to a different isotope.

• Gamma rays are used to kill cancerous cells and hence are used in radiotherapy.
• Cobalt-60 is used to destroy carcinogenic cells.
• Gamma rays are used in scanning the internal parts of the body.
• Gamma rays kill microbes present in food and prevent it from decaying, increasing its shelf life.
• The age of rocks can be studied using radioactive radiation by measuring the argon content present in the rock.

Carbon-12 and carbon-13 are both considered stable isotopes of carbon. However, there are some isotopes of carbon that are considered unstable, and therefore radioactive. Carbon-14 is a radioactive isotope of carbon. It has 6 protons and 8 neutrons and will most likely undergo beta decay to decay into a stable isotope, nitrogen-14:

$$^{14}_{6}\mathrm{C} \longrightarrow \; ^{14}_{7}\mathrm{N} + \; ^{\;0}_{-1}\beta$$

## The Effects Of Radioactivity On An Atom

A radioactive atom will be changed after undergoing radioactive decay, which can happen in several different ways. Radioactive decay can occur due to an unstable nucleus emitting radiation. The most common forms of decay are alpha particles, beta particles, gamma rays, or neutron emissions. Each type of radiation has different properties and characteristics.

## What Is A Radioactive Decay Chain

In the uranium-238 decay chain: U-238 emits an alpha; Thorium-234 emits a beta; Protactinium-234 emits a beta; Uranium-234 emits an alpha; Thorium-230 emits an alpha; Radium-226 emits an alpha; Radon-222 emits an alpha; Polonium-218 emits an alpha; Lead-214 emits a beta; Bismuth-214 emits a beta; Polonium-214 emits an alpha; Lead-210 emits a beta; Bismuth-210 emits a beta; Polonium-210 emits an alpha, ending in stable Lead-206.

When the nucleus of an atom has too few neutrons compared to protons, it will emit an alpha ($\alpha$) particle, which is made from two protons and two neutrons. This helps to restore the balance within the nucleus and reduce the ratio of protons to neutrons.

An Americium-241 nucleus decays into Neptunium-237 and emits an alpha particle (image: Wikimedia Commons)

An alpha particle is exactly the same as a helium nucleus. Therefore, alpha decay will cause the nucleus of an atom to lose 4 from its mass number and 2 from its proton number. This is helpful when using nuclear equations, as we are able to determine what element the nucleus will decay into.

A radium nucleus emits an alpha particle. What element has the radium nucleus decayed into? Refer to the periodic table.
Radium has a proton number of 88 and a mass number of 226. One helium nucleus is emitted in alpha decay, so subtract 4 from the mass number and 2 from the proton number of radium, giving a mass number of 222 and a proton number of 86. The element with a proton number of 86 on the periodic table is radon, so the answer is $^{222}_{86}$Rn.

## What Is Radioactivity And Its Unit

What is the SI unit of radioactivity? The SI unit of radioactivity is the becquerel, named after Henri Becquerel. The unit of radioactivity is defined as the activity of a quantity of radioactive material in which one decay takes place per second. 1 becquerel = 1 radioactive decay per second = $2.703 \times 10^{-11}$ Ci.
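The subtraction bookkeeping used in the radium example (and in the uranium-235 and caesium-137 equations earlier) is easy to mechanize. A small sketch, with (mass number, atomic number) pairs written by hand:

```python
def balanced(parent, products):
    """Check conservation of mass number A and atomic number Z in a decay."""
    A, Z = parent
    return A == sum(a for a, _ in products) and Z == sum(z for _, z in products)

alpha, beta = (4, 2), (0, -1)

print(balanced((226, 88), [(222, 86), alpha]))   # Ra-226 -> Rn-222 + alpha: True
print(balanced((235, 92), [(231, 90), alpha]))   # U-235  -> Th-231 + alpha: True
print(balanced((137, 55), [(137, 56), beta]))    # Cs-137 -> Ba-137 + beta:  True
```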
#### APRG Seminar

##### Venue: Microsoft Teams (online)

In his seminal paper (Acta Math. 1960), Hörmander established the $L^p$-$L^q$ boundedness of Fourier multipliers on $\mathbb{R}^n$ for the range $1<p \leq 2 \leq q<\infty$. Recently, Ruzhansky and Akylzhanov (JFA, 2020) extended Hörmander's theorem to general locally compact separable unimodular groups using group von Neumann algebra techniques, and as a consequence they obtained the $L^p$-$L^q$ boundedness of spectral multipliers for general positive unbounded invariant operators on locally compact separable unimodular groups. In this talk, we will discuss the $L^p$-$L^q$ boundedness of global pseudo-differential operators and Fourier multipliers on smooth manifolds for the range $1<p\leq 2 \leq q<\infty$, using the nonharmonic Fourier analysis developed by Ruzhansky, Tokmagambetov, and Delgado. As an application, we obtain the boundedness of spectral multipliers, embedding theorems, and time asymptotics of the heat kernels for the anharmonic oscillator. This talk is based on my joint works with Duván Cardona (UGent), Marianna Chatzakou (Imperial College London), Michael Ruzhansky (UGent), and Niyaz Tokmagambetov (UGent).

Contact: +91 (80) 2293 2711, +91 (80) 2293 2265; E-mail: chair.math[at]iisc[dot]ac[dot]in

Last updated: 01 Dec 2020
# American Institute of Mathematical Sciences

ISSN: 1531-3492, eISSN: 1553-524X

## Discrete & Continuous Dynamical Systems - B

May 2003, Volume 3, Issue 2

2003, 3(2): 145-162, doi: 10.3934/dcdsb.2003.3.145

Abstract: Due to their mathematical tractability, two-dimensional (2D) fluid equations are often used by mathematicians as a model for quasi-geostrophic (QG) turbulence in the atmosphere, using Charney's 1971 paper as justification. Superficially, 2D and QG turbulence both satisfy the twin conservation of energy and enstrophy and thus are unlike 3D flows, which do not conserve enstrophy. Yet QG turbulence differs from 2D turbulence in fundamental ways, which are not generally known. Here we discuss ingredients missing in 2D turbulence formulations of large-scale atmospheric turbulence. We argue that there is no proof that energy cannot cascade downscale in QG turbulence. Indeed, observational evidence supports a downscale flux of both energy and enstrophy in the mesoscales. It is suggested that the observed atmospheric energy spectrum is explainable if there is a downscale energy cascade of QG turbulence, but is inconsistent with 2D turbulence theories, which require an upscale energy flux. A simple solved example is used to illustrate some of the ideas discussed.

2003, 3(2): 163-177, doi: 10.3934/dcdsb.2003.3.163

Abstract: This work revisits a couple of well-known piecewise linear oscillators, pointing out several unnoticed properties. In particular, for one of these oscillators we study under what conditions bounded motions are possible and investigate the effect of viscous damping on its trajectories. The article complements a relatively recent paper by Capecchi [10] and presents a non-trivial counterexample to the widespread belief according to which chaos is ubiquitous in piecewise linear systems.

2003, 3(2): 179-192, doi: 10.3934/dcdsb.2003.3.179

Abstract: We find the explicit solution of the so-called two-mode model for multicomponent Bose-Einstein condensates. We prove that all the solutions are constants or periodic functions and give explicit formulae for them.

2003, 3(2): 193-200, doi: 10.3934/dcdsb.2003.3.193

Abstract: Countably many weakly coupled reversible oscillators are investigated. Homoclinic structures are assumed for the anti-integrable limit equations. The existence of infinitely many homoclinic solutions is shown for the chains of perturbed oscillators, and each of the homoclinic solutions is accumulated by continuum many breathers with periods tending to infinity. A similar result is shown for the case when heteroclinic loop structures are assumed for the anti-integrable limit equations. Applications are given to several models.

2003, 3(2): 201-228, doi: 10.3934/dcdsb.2003.3.201

Abstract: A class of upwind flux splitting methods in the Euler equations of compressible flow is considered in this paper. Using the property that the Euler flux $F(U)$ is a homogeneous function of degree one in $U$, we reformulate the splitting fluxes with $F^{+}=A^{+} U$, $F^{-}=A^{-} U$, and the corresponding matrices are either symmetric or symmetrizable and keep only non-negative and non-positive eigenvalues, respectively. That leads to the conclusion that the first order schemes are positive in the sense of Lax-Liu [18], which implies that they are $L^2$-stable in some suitable sense.
Moreover, the second order scheme is a stable perturbation of the first order scheme, so that the positivity of the second order schemes is also established, under a CFL-like condition. In addition, these splitting methods preserve the positivity of density and energy.

2003, 3(2): 229-253, doi: 10.3934/dcdsb.2003.3.229

Abstract: In this article, we describe the basic properties of the Wang Chang-Uhlenbeck equations. Then, we obtain the classical H-theorem, the Gibbs theorem, and the convergence toward a unique maxwellian equilibrium in the spatially homogeneous case. And, by choosing a particular cross sections model, we formally deduce the fluid limit, which is the hyperbolic multispecies Euler system closed with a non-classical state equation.

2003, 3(2): 255-262, doi: 10.3934/dcdsb.2003.3.255

Abstract: We prove the existence of recurrent or Poisson stable motions in the Navier-Stokes fluid system under recurrent or Poisson stable forcing, respectively. We use an approach based on nonautonomous dynamical systems ideas.

2003, 3(2): 263-284, doi: 10.3934/dcdsb.2003.3.263

Abstract: We present an application of the transport theory developed for area preserving dynamical systems to the problem of pollution, and in particular patchiness in clouds of pollution in partially stratified estuaries. We model the flow in such estuaries using a $3+1$ dimensional uncoupled cartoon of the dominant underlying global circulation mechanisms present within the estuarine flow. We separate the cross section up into different regions, bounded by partial and complete barriers. Using these barriers we then provide predictions for the lower bound on the vertical local flux. We also present work on the relationship between the time taken for a particle to leave the estuary (i.e., the exit time) and the mixing within the estuary. This link is important, as we show that to optimally discharge pollution into an estuary both concepts have to be considered. We finish by suggesting coordinates in space-time for an optimal discharge site and a discharge policy to ensure the continually optimal discharge from such a site (or even a non-optimal site).

2003, 3(2): 285-298, doi: 10.3934/dcdsb.2003.3.285

Abstract: In this paper we consider the coupled PDE system which describes a composite (sandwich) beam, as recently proposed in [H.1], [H-S.1]: it couples the transverse displacement $w$ and the effective rotation angle $\xi$ of the beam. We show that by introducing a suitable new variable $\theta$, the original model in the original variables $\{w,\xi\}$ of the sandwich beam is transformed into a canonical thermoelastic system in the new variables $\{w,\theta\}$, modulo lower-order terms. This reduction then allows us to re-obtain recently established results on the sandwich beam--which had been proved by a direct, ad hoc technical analysis [H-L.1]--simply as corollaries of previously established corresponding results [A-L.1], [A-L.2], [L-T.1]--[L-T.5] on thermoelastic systems. These include the following known results [H-L.1] for sandwich beams: (i) well-posedness in the semigroup sense; (ii) analyticity of the semigroup when rotational forces are not accounted for; (iii) structural decomposition of the semigroup when rotational forces are accounted for; and (iv) uniform stability.
In addition, however, through the aforementioned reduction to thermoelastic problems, we here establish new results for sandwich beams, when rotational forces are accounted for. They include: (i) a backward uniqueness property (Section 4), and (ii) a suitable singular estimate, critical in control theory (Section 5). Finally, we obtain a new backward uniqueness property, this time for a structural acoustic chamber having a composite (sandwich) beam as its flexible wall (Section 6).

2003, 3(2): 299-309, doi: 10.3934/dcdsb.2003.3.299

Abstract: Periodic oscillations are proved for an SIRS disease transmission model in which the size of the population varies and the incidence rate is a nonlinear function. For this particular incidence function, analytical techniques are used to show that, for some parameter values, periodic solutions can arise through a Hopf bifurcation and disappear through a homoclinic loop bifurcation. The existence of periodicity is important as it may indicate different strategies for controlling disease.
# American Institute of Mathematical Sciences

February 2012, 5(1): 115-126. doi: 10.3934/dcdss.2012.5.115

## Reaction diffusion equation with non-local term arises as a mean field limit of the master equation

1 The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
2 Division of Mathematical Science, Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyamacho, Toyonakashi, 560-8531, Japan
3 Japan Science and Technology Agency, CREST, 5 Sanbancho, Chiyoda-ku, Tokyo, 102-0075, Japan

Received March 2009; Revised December 2009; Published February 2011

We formulate a reaction diffusion equation with non-local term as a mean field equation of the master equation, where the particle density is defined continuously in space and time. In the case of constant mean waiting time, this limit equation is associated with the diffusion coefficient of A. Einstein, the reaction rate in phenomenology, and the Debye term under the presence of potential.

Citation: Kazuhisa Ichikawa, Mahemauti Rouzimaimaiti, Takashi Suzuki. Reaction diffusion equation with non-local term arises as a mean field limit of the master equation. Discrete & Continuous Dynamical Systems - S, 2012, 5 (1): 115-126. doi: 10.3934/dcdss.2012.5.115
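A toy illustration of the abstract's theme (not the paper's actual derivation): on a 1-D lattice with spacing ε and mean waiting time τ, the hopping master equation dn_i/dt = (n_{i-1} - 2 n_i + n_{i+1}) / (2τ) converges, as ε → 0, to the diffusion equation with Einstein's coefficient D = ε² / (2τ). The sketch integrates the lattice equation and compares the variance of the resulting profile with the diffusion prediction 2Dt; all parameter values are illustrative.

```python
import numpy as np

eps, tau = 0.1, 0.05          # lattice spacing and mean waiting time (illustrative)
D = eps**2 / (2 * tau)        # Einstein's diffusion coefficient
n_sites, dt, steps = 401, 1e-3, 2000

x = eps * (np.arange(n_sites) - n_sites // 2)
n = np.zeros(n_sites)
n[n_sites // 2] = 1.0         # all mass starts at the origin

for _ in range(steps):        # explicit Euler step for the master equation
    n += dt * (np.roll(n, 1) - 2 * n + np.roll(n, -1)) / (2 * tau)

t = steps * dt
var_lattice = (n * x**2).sum() / n.sum()
print(var_lattice, 2 * D * t)   # variance of lattice profile vs. diffusion's 2 D t
```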
## disko-san

disko-san is a simple command line tool to check the sanity of new hard drives. It first writes chunks of data to the disk, each consisting of a checksum plus random data. Each chunk is 4 MiB in size. After writing the disk full, it re-reads all of those chunks and verifies via their checksums whether any corruption occurred.

The program keeps track of its progress on a disk via an optional state file. This file allows the program to be terminated and resumed later on.

The program is written in plain Go and should run on any non-ancient Linux system. Probably also on *BSD.

## State file

The magic of the tool is to keep a state file. Here, disko-san writes its current progress, so that the program can be terminated and continued later on. This was the whole reason why I wrote this tool: I had to verify a bunch of hard disks that were too big to run at once without a system reboot.

The tool is on GitHub and binaries are built in OBS.

## Performance log

disko-san can additionally store its write metrics to a performance log file. There it stores the position, the size (always 4 MiB), and the number of milliseconds it took to write that chunk, as a CSV file. You can use this file to search for potentially bad sectors, where the write speed deviates significantly and consistently from the disk median.

# Installation

Installation for openSUSE is done via OBS.

For Tumbleweed

zypper addrepo https://download.opensuse.org/repositories/home:ph03nix:tools/openSUSE_Tumbleweed/home:ph03nix:tools.repo
zypper refresh
zypper install disko-san

And for Leap 15.2

zypper addrepo https://download.opensuse.org/repositories/home:ph03nix:tools/openSUSE_Leap_15.2/home:ph03nix:tools.repo
zypper refresh
zypper install disko-san

For other distributions you can try the pre-built binaries from the GitHub releases, or you can build it yourself.

### Building

Build instructions are on GitHub, but it's as easy as

git clone https://github.com/grisu48/disko-san.git
cd disko-san
go build -o disko-san disko-san.go

# Usage

disko-san DISK [STATE] [PERFLOG]

DISK defines the disk under test
STATE progress file, required for resume operations
PERFLOG write performance (write metrics) to this file

Assuming your disk under test is /dev/sdh and you'd like to store the state file to your home directory, simply run

disko-san /dev/sdh /home/phoenix/sdh_state
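The README doesn't ship an analyzer for the performance log, so here is a minimal sketch of the median-deviation check it suggests. It assumes the CSV has exactly the three columns described above (position, size, milliseconds) and no header row; the script name and the 5x threshold are arbitrary choices:

```python
import csv
import statistics
import sys

# usage: python check_perflog.py sdh_perflog.csv
rows = []
with open(sys.argv[1], newline="") as f:
    for pos, size, ms in csv.reader(f):
        rows.append((int(pos), float(ms)))

median = statistics.median(ms for _, ms in rows)
# flag chunks that took more than 5x the median write time
for pos, ms in rows:
    if ms > 5 * median:
        print(f"slow chunk at byte offset {pos}: {ms:.0f} ms (median {median:.0f} ms)")
```

Offsets that come up consistently slow across several runs are the ones worth worrying about; a single slow chunk can just be a cache flush.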
### Chun-Hung Liu (劉俊宏), Asymptotic dimension of minor-closed families and beyond

Zoom ID: 95464969835 (356260)

The asymptotic dimension of metric spaces is an important notion in geometric group theory. The metric spaces considered in this talk are the ones whose underlying spaces are the vertex-sets of (edge-)weighted graphs and whose metrics are the distance functions in weighted graphs. A standard compactness argument shows that it suffices to consider the asymptotic …

### Jeong Ok Choi (최정옥), Various game-theoretic models on graphs

Room B232 IBS (기초과학연구원)

We introduce some well-known game-theoretic graph models and related problems. A contagion game model explains how an innovation diffuses over a given network structure and focuses on finding conditions on the structure under which an innovation becomes epidemic. Regular infinite graphs are interesting examples to explore. We show that regular infinite trees make an innovation least …

### Jaeseong Oh (오재성), A 2-isomorphism theorem for delta-matroids

Room B232 IBS (기초과학연구원)

Whitney's 2-Isomorphism Theorem characterises when two graphs have isomorphic cycle matroids. In this talk, we present an analogue of this theorem for graphs embedded in surfaces by characterising when two graphs in a surface have isomorphic delta-matroids. This is based on joint work with Iain Moffatt.

### Daniel Cranston, Vertex Partitions into an Independent Set and a Forest with Each Component Small

Zoom ID: 934 3222 0374 (ibsdimag)

For each integer $k\ge 2$, we determine a sharp bound on $\operatorname{mad}(G)$ such that $V(G)$ can be partitioned into sets $I$ and $F_k$, where $I$ is an independent set and $G[F_k]$ is a forest in which each component has at most $k$ vertices. For each $k$ we construct an infinite family of examples showing our result is best …

### Casey Tompkins, Extremal forbidden poset problems in Boolean and linear lattices

Room B232 IBS (기초과학연구원)

Extending the classical theorem of Sperner on the maximum size of an antichain in the Boolean lattice, Katona and Tarján introduced a general extremal function $La(n,P)$, defined to be the maximum size of a family of subsets of $[n]$ which does not contain a given poset $P$ among its containment relations. In this talk, I …

### Meike Hatzel, Constant congestion bramble

Zoom ID: 934 3222 0374 (ibsdimag)

In this talk I will present a small result we achieved during a workshop in February this year. My coauthors on this are Marcin Pilipczuk, Paweł Komosa and Manuel Sorge. A bramble in an undirected graph $G$ is a family of connected subgraphs of $G$ such that for every two subgraphs $H_1$ and $H_2$ in the bramble either $V(H_1) \cap \dots$ …

### Yijia Chen (陈翌佳), Graphs of bounded shrub-depth, through a logic lens

Zoom ID: 934 3222 0374 (ibsdimag)

Shrub-depth is a graph invariant often considered as an extension of tree-depth to dense graphs. In this talk I will explain our recent proofs of two results about graphs of bounded shrub-depth. Every graph property definable in monadic second-order logic, e.g., 3-colorability, can be evaluated by Boolean circuits of constant depth and polynomial size, whose …

### Duksang Lee (이덕상), Characterizing matroids whose bases form graphic delta-matroids

Room B232 IBS (기초과학연구원)

We introduce delta-graphic matroids, which are matroids whose bases form graphic delta-matroids. The class of delta-graphic matroids contains graphic matroids as well as cographic matroids and is a proper subclass of the class of regular matroids. We give a structural characterization of the class of delta-graphic matroids. We also show that every forbidden minor for …
We also show that every forbidden minor for ### Da Qi Chen, Bipartite Saturation Zoom ID: 934 3222 0374 (ibsdimag) In extremal graph theory, a graph G is H-saturated if G does not contain a copy of H but adding any missing edge to G creates a copy of H. The saturation number, sat(n, H), is the minimum number of edges in an n-vertex H-saturated graph. This class of problems was first studied by Zykov Livestream ### Joonkyung Lee (이준경), On Ramsey multiplicity Zoom ID:8628398170 (123450) Ramsey's theorem states that, for a fixed graph$H$, every 2-edge-colouring of$K_n$contains a monochromatic copy of$H$whenever$n$is large enough. Perhaps one of the most natural questions after Ramsey's theorem is then how many copies of monochromatic$H\$ can be guaranteed to exist. To formalise this question, let the Ramsey multiplicity 기초과학연구원 수리및계산과학연구단 이산수학그룹 대전 유성구 엑스포로 55 (우) 34126 IBS Discrete Mathematics Group (DIMAG) Institute for Basic Science (IBS) 55 Expo-ro Yuseong-gu Daejeon 34126 South Korea E-mail: dimag@ibs.re.kr, Fax: +82-42-878-9209
Proceedings from the roundtable on "the role of coxibs in successful pain management" | Volume 24, ISSUE 1, SUPPLEMENT 1, S38-S47, July 2002

# The Impact of Pain Management on Quality of Life

Open Access

## Abstract

Although its inclusion in medical research is relatively recent and its interpretation is often variable, quality of life is increasingly being recognized as one of the most important parameters to be measured in the evaluation of medical therapies, including those for pain management. Pain, when it is not effectively treated and relieved, has a detrimental effect on all aspects of quality of life. This negative impact has been found to span every age and every type and source of pain in which it has been studied. Effective analgesic therapy has been shown to improve quality of life by relieving pain. Opioid analgesics, cyclooxygenase (COX)-2 inhibitors (or coxibs), and several adjuvant analgesics for neuropathic pain have been demonstrated to significantly improve quality-of-life scores in patients with pain. Coxibs provide effective, well-tolerated analgesia without some of the issues faced with opioids, benefits that should translate into improved quality of life. Recent studies have demonstrated that the COX-2 inhibitor rofecoxib significantly improves quality of life in patients with osteoarthritis and chronic lower back pain. Quality-of-life measurements, especially symptom distress scales, can also be used as sensitive means of differentiating one agent from another in the same class. In future pharmacotherapeutic research, quality of life should be included as an outcome domain, as are the traditionally measured variables of efficacy and safety. In particular, future studies of coxibs should include symptom distress scores as important quality-of-life measurements, to identify meaningful differences between this new class of analgesics and nonselective nonsteroidal anti-inflammatory drugs.

## Introduction

Pain is not only a highly noxious experience per se, but it can also have an overwhelmingly negative effect on nearly every other aspect of life, including mood and capacity to function in daily roles. According to a study by the World Health Organization, individuals who live with persistent pain are four times more likely than those without pain to suffer from depression or anxiety, and more than twice as likely to have difficulty working [Gureje O, Von Korff M, Simon GE, et al. Persistent pain and well-being: a World Health Organization study in primary care].

Pain is one of the most significant healthcare crises in the United States. Nearly half of Americans see a physician with a primary complaint of pain each year [MayoClinic.com. Managing pain: attitude, medication and therapy are keys to control. Mayo Clinic Web Site, June 21, 2001. Available at: http://www.mayoclinic.com/invoke.cfm?id=HQ01055. Accessed September 19, 2001], making pain the single most frequent reason for physician consultation in the United States [Abbott FV, Fraser MI. Use and abuse of over-the-counter analgesic agents]. Even this fact belies the true magnitude of the problem, since a substantial number of people with pain do not consult a physician. In one of the largest survey studies on the subject of pain, 18% of respondents who rated their pain as severe or unbearable had not visited any healthcare professional, because they did not think that anyone could relieve their suffering [Sternbach RA. Survey of pain in the United States: The Nuprin Pain Report].
The costs associated with pain are extremely high, both to the healthcare system and to society at large. Not only do individuals with pain have a greater rate of utilization of the healthcare system, but their productivity is substantially diminished. It has been estimated that more than 4 billion workdays are lost to pain annually. If one assumes a very conservative median US income of $23,000, then pain costs society $55 billion in lost productivity for full-time workers alone [Sternbach RA. Survey of pain in the United States: The Nuprin Pain Report].

While these costs are enormous, one of the greatest tolls exacted by pain is on quality of life. Pain is widely accepted to be one of the most important determinants of quality of life, which can be defined as an individual's ability to perform a range of roles in society and to reach an acceptable level of satisfaction from functioning in those roles [Rummans TA, Frost M, Suman VJ, et al. Quality of life and pain in patients with recurrent breast and gynecologic cancer; Anderson RB, Hollenberg NK, Williams GH. Physical Symptoms Distress Index: a sensitive tool to evaluate the impact of pharmacological agents on quality of life]. However, quality-of-life research is, relatively speaking, in its infancy, and the effect of symptoms such as pain on quality of life is just beginning to be understood [Ehrich EW, Bolognese JA, Watson DJ, et al. Effect of rofecoxib on measures of health-related quality of life in patients with osteoarthritis; Wang XS, Cleeland CS, Mendoza TR, et al. The effects of pain severity on health-related quality of life: a study of Chinese cancer patients]. Increasingly, however, quality of life is coming to be accepted as one of the most important outcome domains to be measured in the evaluation of any therapy or health-related intervention [Skevington SM. Investigating the relationship between pain and discomfort and quality of life, using the WHOQOL]. Quality of life is a more subtle indicator than the typically measured variables of efficacy and safety, but it is arguably more indicative of treatment value and may be more relevant to both patient satisfaction and willingness to adhere to treatment.

## Measuring Quality of Life: The Scales and Beyond

Quality of life can be measured in a wide variety of ways, and an array of instruments has been developed to evaluate and attempt to quantify it. Several questions need to be answered to select the optimal instrument for any given circumstance [Esper P, Redman BG. Supportive care, pain management, and quality of life in advanced prostate cancer]. In the present context, it is assumed that we are referring to health-related quality of life, which is more specific than general quality of life.

### Which Is More Applicable, a Disease-Specific or a Generic Instrument?

Specific instruments are designed to measure quality of life in a particular disease state, such as cancer or arthritis. Numerous specific instruments are available in nearly every disease category; for example, there are at least four instruments that are specific to prostate cancer alone. The disadvantage of specific instruments is that their use makes it impossible to compare findings across disease states.

Generic instruments are intended to measure quality of life in any disease state and across disease states as well. Their advantage is that they allow for groups of patients with various conditions to be compared with one another.
Their disadvantage, however, is that because they involve many different types of constructs, and are so general, they are often not very effective at measuring improvement in a specific disease state as a consequence of an intervention. Thus, they may not pick up subtle but important shifts in quality of life resulting from a given treatment. The classic example of a generic quality-of-life instrument is the Medical Outcomes Study Short Form 36, or SF-36. • Esper P. • Redman B.G. Supportive care, pain management, and quality of life in advanced prostate cancer. The SF-36 is a 36-item survey of general health status that was designed to combine the comprehensiveness of much longer surveys with the brevity of relatively coarse single-item surveys. It can be self-completed, administered by computer, or conducted by a trained interviewer in person or over the telephone. • Ware Jr, J.E. The SF-36 health survey.

### What Dimensions of Quality of Life Need to Be Measured?

Quality of life is inherently a multidimensional phenomenon, and most useful quality-of-life instruments reflect this. There are domain-specific quality-of-life instruments, which measure a single aspect of quality of life, such as physical function or anxiety. However, multidomain instruments are generally preferred, since an instrument that does not include several dimensions will make it impossible to determine the nature of a score change. • Esper P. • Redman B.G. Supportive care, pain management, and quality of life in advanced prostate cancer. Although some instruments have more domains, • Skevington S.M. Investigating the relationship between pain and discomfort and quality of life, using the WHOQOL. most acceptable quality-of-life assessment strategies address several or all of the following domains: physical, psychological, social, somatic, and spiritual. • Esper P. • Redman B.G. Supportive care, pain management, and quality of life in advanced prostate cancer. The SF-36 includes 8 domains: physical functioning, role-physical, bodily pain, general health, vitality, social functioning, role-emotional, and mental health. • Ware Jr, J.E. The SF-36 health survey.

### How Much Responder Burden Is Acceptable?

Responder burden refers to the amount of effort that the patient must expend to complete the evaluation. The number of instruments, the number of questions in the instruments, and the conceptual difficulty of the response task must be considered. This is particularly important when measuring quality of life for patients in pain, because of the debilitating nature of the condition. While some patients are grateful for the caring and concern implied by the effort to solicit their feelings about their quality of life, others may be too incapacitated to fully comply. Less-than-full compliance can lead to inaccurate results. • Esper P. • Redman B.G. Supportive care, pain management, and quality of life in advanced prostate cancer.

### What Are the Administrative Issues That Need to Be Considered?

Most quality-of-life evaluations require measurement at a minimum of two intervals—the baseline and then at a later point, typically after some form of treatment has been administered. Therefore, arrangements need to be made to ensure that the greatest possible number of patients complete follow-up evaluations. In addition, decisions need to be made about whether patients must complete their own evaluations (which may be difficult for patients who suffer from severe pain) or whether proxy completion will be permitted.
It is generally agreed that self-report data are preferred because they decrease the possibility that proxies may unwittingly bias the results by allowing their own feelings and opinions related to quality of life to be reflected in their responses. Instruments written at low-literacy levels or in multiple languages can help decrease the need for proxy involvement, as can allowing the option of audiotaped or computerized questionnaire completion, particularly for patients whose movement is limited by severe pain. • Esper P. • Redman B.G. Supportive care, pain management, and quality of life in advanced prostate cancer.

### Has the Instrument Proven to Be Both Valid and Reliable?

Validity and reliability are crucial characteristics of a useful quality-of-life questionnaire. A valid instrument is one that actually measures what it is intended to measure, whereas a reliable questionnaire is one that provides a reproducible result. A questionnaire administered on Day 1 and repeated a few days later should produce a nearly identical score, provided that no new variables have been introduced in the interim. • Esper P. • Redman B.G. Supportive care, pain management, and quality of life in advanced prostate cancer.

### Newer Approaches to Quality-of-Life Assessment

New approaches to the field of quality-of-life research add to the usefulness and interpretability of quality-of-life questionnaires. The Symptom Distress Inventory method, for example, involves providing patients with a checklist that allows them to indicate which disease-specific symptoms they have and how much distress each symptom produces. The magnitude of symptom distress has been found to be strongly correlated with traditional quality-of-life assessment tools and may in some cases be the most sensitive way to address health-related quality of life. • Hollenberg N.K. • Williams G.H. • Anderson R. Medical therapy, symptoms, and the distress they cause: relation to quality of life in patients with angina pectoris and/or hypertension. Furthermore, symptom distress methods have been shown to be more sensitive than traditional quality-of-life instruments in differentiating the impact of various drugs on quality of life. • Testa M.A. • Anderson R.B. • Nackley J.F. • et al. Quality of life and antihypertensive therapy in men: a comparison of captopril with enalapril. Thus, when two drugs have equivalent efficacy but different side effect profiles (a common situation), these distinctions in side effects, picked up most sensitively by the symptom distress method, may underlie important differences in quality of life for those on the medications.

Utility methods enable the evaluation of treatment-related factors that affect quality of life (e.g., degree of pain relief or propensity to cause a side effect such as nausea) in the context of patient preferences. For example, patients may be asked to weigh the relative importance of various symptoms or other health-related factors (e.g., cost of treatment, life prolongation). Different treatments are then compared according to improvement in overall utility, rather than using a simple unidimensional outcome variable. Finally, calibration methods allow changes in quality of life to be evaluated comparatively against other stressful life events (e.g., job loss), thereby providing a comparative gauge of what magnitude of change on a quality-of-life (or symptom distress) scale is significant.
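Because the symptom distress approach recurs throughout this discussion, a minimal sketch of how such a checklist might be scored may be useful. The scoring convention below (0-4 distress ratings, mean over reported symptoms) is a hypothetical illustration, not the scoring rule of any validated instrument:

```python
# Minimal sketch of a symptom-distress checklist score. The 0-4 scale,
# the symptom list, and the mean-over-reported-symptoms rule are all
# illustrative assumptions, not a validated instrument.
ratings = {            # 0 = none ... 4 = extreme distress; hypothetical patient
    "drowsiness": 3,
    "nausea": 1,
    "constipation": 2,
    "dizziness": 0,
}
present = {s: r for s, r in ratings.items() if r > 0}   # symptoms actually reported
distress_index = sum(present.values()) / len(present)    # mean distress over them
print(f"symptoms reported: {sorted(present)}; mean distress = {distress_index:.2f}")
```

Summing instead of averaging would weight symptom count as well as severity; validated instruments make that design choice explicitly.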
## The Impact of Uncontrolled Pain on Quality of Life

Pain and quality of life are phenomena that share several fundamental characteristics. Pain has been defined by the American Pain Society as “an unpleasant sensory and emotional experience associated with actual or potential tissue damage.” Similarly, the Joint Commission on Accreditation of Healthcare Organizations notes that pain is a common experience that has adverse physiological and psychological effects when unrelieved. • Joint Commission on Accreditation of Healthcare Organizations. Hence, pain involves cognitive, motivational, affective, behavioral, and physical components. Quality of life, a construct that incorporates all factors that impact on an individual's life, has a similar all-encompassing nature. • Rummans T.A. • Frost M. • Suman V.J. • et al. Quality of life and pain in patients with recurrent breast and gynecologic cancer. • Torrance G.W. Utility approach to measuring health-related quality of life. Indeed, the World Health Organization's list of the domains and facets that comprise quality of life confirms the all-embracing nature of the concept (Table 1). • Skevington S.M. Investigating the relationship between pain and discomfort and quality of life, using the WHOQOL.

Table 1. Domains and Facets of Quality of Life, as Defined by the World Health Organization

Domain I: Physical
- Pain and discomfort.
- Energy and fatigue.
- Sexual activity.
- Sleep and rest.
- Sensory functions.

Domain II: Psychological
- Positive feelings.
- Thinking, learning, memory, and concentration (cognitions).
- Self-esteem.
- Body image and appearance.
- Negative feelings.

Domain III: Level of Independence
- Mobility.
- Activities of daily living.
- Dependence on medication or treatment.
- Dependence on nonmedicinal substances.
- Communication capacity.
- Working capacity.

Domain IV: Social Relationships
- Personal relationships.
- Practical social support.
- Activities as provider/supporter.

Domain V: Environmental Health
- Physical safety and security.
- Home environment.
- Work satisfaction.
- Financial resources.
- Health and social care; availability and quality (services).
- Opportunities for acquiring new information and skills.
- Participation in and opportunities for recreation and leisure activities.
- Physical environments.
- Transport.

Domain VI: Spirituality
- Spirituality, religion, and personal beliefs.

General Facet
- Overall perceptions of health and quality of life.

Source: World Health Organization Quality of Life Group, 1995. • Skevington S.M. Investigating the relationship between pain and discomfort and quality of life, using the WHOQOL.

Pain, when it is ongoing and uncontrolled, has a detrimental, deteriorative effect on virtually every aspect of a patient's life. It produces anxiety and emotional distress; undermines well-being; interferes with functional capacity; and hinders the ability to fulfill family, social, and vocational roles. With such broad-based effects, it is apparent that pain would have the effect of diminishing quality of life. The deteriorative effect on quality of life is universal; it spans every age and stage of life and occurs regardless of the pain's type or source. For example, in a study of 49,971 elderly nursing home residents with disorders of nearly every kind, Won and colleagues found that more than one in four (26.3%) experienced pain on a daily basis. • Won A. • Lapane K.
• Gambassi G. • et al. Correlates and management of nonmalignant pain in the nursing home. A strong association was found between daily pain and indices of poor quality of life: patients who suffered from daily pain were more likely to have impairment in activities of daily living (odds ratio [OR]: 2.47), mood disorders (OR: 1.66), and decreased involvement in activities (OR: 1.35). These associations persisted even after the investigators adjusted for the potentially confounding effects of age, gender, race, cognitive status, and such debilitating conditions as arthritis, stroke, congestive heart failure, and Parkinson's disease. • Won A. • Lapane K. • Gambassi G. • et al. Correlates and management of nonmalignant pain in the nursing home.

The younger end of the age spectrum is equally vulnerable to the detrimental effects of pain on quality of life. In a study of 128 adolescents with chronic pain, Hunfeld and colleagues found that quality of life decreased as intensity and frequency of pain increased. • Hunfeld J.A. • Perquin C.W. • Duivenvoorden H.J. • et al. Chronic pain and its impact on quality of life in adolescents and their families. The domains of psychological functioning (including feeling less at ease), physical status (including an increase in the incidence of other somatic complaints), and functional status (defined as greater impediments to leisure and daily activities) were particularly affected. Notably, surveys of the patients' mothers revealed that the adolescents' pain reduced their families' quality of life as well.

The damaging effects of pain on quality of life have been demonstrated for nearly every kind of pain, including neuropathic pain, other chronic nonmalignant pain such as that associated with arthritis, and malignant pain. • Rummans T.A. • Frost M. • Suman V.J. • et al. Quality of life and pain in patients with recurrent breast and gynecologic cancer. • Wang X.S. • Cleeland C.S. • Mendoza T.R. • et al. The effects of pain severity on health-related quality of life: a study of Chinese cancer patients. • Haythornthwaite J.A. • Benrud-Larson L.M. Psychological aspects of neuropathic pain. • Becker N. • Thomsen A.B. • Olsen A.K. • et al. Pain epidemiology and health related quality of life in chronic non-malignant pain patients referred to a Danish multidisciplinary pain center. • Hill C.L. • Parsons J. • Taylor A. • et al. Health related quality of life in a population sample with arthritis. For example, in a study of 150 patients with chronic pain, including pain of neuropathic, somatic, psychogenic, and visceral origins, Becker and colleagues found that scores on both the Psychological General Well-Being (PGWB) scale and the SF-36 were significantly reduced compared with scores in the normal population (P < 0.001). • Becker N. • Thomsen A.B. • Olsen A.K. • et al. Pain epidemiology and health related quality of life in chronic non-malignant pain patients referred to a Danish multidisciplinary pain center. (The PGWB is a 22-item instrument designed to measure subjective psychological well-being in population-based studies. It includes six parameters: anxiety, depression, vitality, positive well-being, self-control, and general health.) • Becker N. • Thomsen A.B. • Olsen A.K. • et al. Pain epidemiology and health related quality of life in chronic non-malignant pain patients referred to a Danish multidisciplinary pain center.
All eight SF-36 subscores (bodily pain, general health, mental health, physical functioning, role-emotional, role-physical, social functioning, and vitality) were significantly reduced compared with subscores for individuals without pain (Figure 1). • Becker N. • Thomsen A.B. • Olsen A.K. • et al. Pain epidemiology and health related quality of life in chronic non-malignant pain patients referred to a Danish multidisciplinary pain center. Furthermore, 40% of patients with pain had scores on the Hospital Anxiety and Depression scale that indicated the presence of a depressive disorder, whereas 50% had scores indicating a comorbid anxiety disorder.

The impact of malignant pain on quality of life is similarly severe. Rummans and coworkers studied the effect of pain on quality of life in 117 patients with recurrent breast or gynecological cancer. • Rummans T.A. • Frost M. • Suman V.J. • et al. Quality of life and pain in patients with recurrent breast and gynecologic cancer. The investigators found a substantial correlation between the presence of pain and the physical and social dimensions of quality of life. To their surprise, however, they found a weaker correlation between pain and the psychiatric and spiritual quality-of-life domains. They attributed this aberrant finding to the fact that the majority of these patients were experiencing mild to moderate pain and none were experiencing severe, incapacitating pain. • Rummans T.A. • Frost M. • Suman V.J. • et al. Quality of life and pain in patients with recurrent breast and gynecologic cancer.

The majority of studies have demonstrated that there is a dose-response relationship between pain and quality of life: as one increases, the other proportionately decreases. • Wang X.S. • Cleeland C.S. • Mendoza T.R. • et al. The effects of pain severity on health-related quality of life: a study of Chinese cancer patients. • Hunfeld J.A. • Perquin C.W. • Duivenvoorden H.J. • et al. Chronic pain and its impact on quality of life in adolescents and their families. • Becker N. • Thomsen A.B. • Olsen A.K. • et al. Pain epidemiology and health related quality of life in chronic non-malignant pain patients referred to a Danish multidisciplinary pain center. • Cleeland C.S. • Ryan K.M. Pain assessment: global use of the Brief Pain Inventory. For example, in their study of 216 adults with various forms of cancer grouped by level of pain severity, Wang and colleagues found that those with moderate or severe pain had consistently lower SF-36 scores than patients with no pain or mild pain (Figure 2). • Wang X.S. • Cleeland C.S. • Mendoza T.R. • et al. The effects of pain severity on health-related quality of life: a study of Chinese cancer patients. All mean Mental and Physical Component Summary scores declined as pain severity increased (P < 0.001 for both), and this relationship was found to exist independent of Eastern Cooperative Oncology Group (ECOG) performance status. • Wang X.S. • Cleeland C.S. • Mendoza T.R. • et al. The effects of pain severity on health-related quality of life: a study of Chinese cancer patients. Cleeland and Ryan stated that it is more important to know the intensity of a patient's pain than to know merely whether or not pain is present. • Cleeland C.S. • Ryan K.M. Pain assessment: global use of the Brief Pain Inventory. “Many adults, including cancer patients, function quite effectively with background levels of pain which, for the most part, are not attended to.
As pain increases, however, it passes a threshold beyond which it can no longer be ignored. At this point, it becomes disruptive to many aspects of the person's life.” According to their model, a progressively greater number of quality-of-life domains are impacted as pain becomes progressively worse (Table 2). • Cleeland C.S. • Ryan K.M. Pain assessment: global use of the Brief Pain Inventory.

Table 2. Activities/Quality-of-Life Domains Impaired by Increasing Pain Severity

| Worst pain rating | Activities/domains impaired |
| --- | --- |
| 3 | **Enjoy** |
| 4 | **Work**, Enjoy |
| 5 | **Sleep**, **Active**, **Mood**, Work, Enjoy |
| 6 | Sleep, Active, Mood, Work, Enjoy |
| 7 | **Walk**, Sleep, Active, Mood, Work, Enjoy |
| 8 | **Relate**, Walk, Sleep, Active, Mood, Work, Enjoy |

Note: Boldface indicates an additional dimension that is impaired at the given level of pain severity. Adapted with permission from Ref. • Cleeland C.S. • Ryan K.M. Pain assessment: global use of the Brief Pain Inventory.

The direct and unambiguous association that exists between pain and quality of life would seem to highlight the importance of treating and effectively relieving pain. Unfortunately, the evidence overwhelmingly demonstrates that despite the availability of effective analgesic pharmacotherapy, pain is often undertreated and poorly controlled. • Wang X.S. • Cleeland C.S. • Mendoza T.R. • et al. The effects of pain severity on health-related quality of life: a study of Chinese cancer patients. • Cleeland C.S. • Gonin R. • Hatfield A.K. • et al. Pain and its treatment in outpatients with metastatic cancer. • Von Roenn J.H. • Cleeland C.S. • Gonin R. • et al. Physician attitudes and practice in cancer pain management: a survey from the Eastern Cooperative Oncology Group. The inadequacy of current efforts at pain control, which is widely acknowledged by physicians, is perhaps particularly striking in the field of oncology. In the aforementioned study of cancer patients conducted by Wang and colleagues, 59% of patients who received treatment for pain had a negative Pain Management Index, meaning that their analgesic treatment did not meet the minimum standards of the World Health Organization guidelines. • Wang X.S. • Cleeland C.S. • Mendoza T.R. • et al. The effects of pain severity on health-related quality of life: a study of Chinese cancer patients. Similarly, in a study supported by ECOG, the National Cancer Institute, the National Institutes of Health, and the Department of Health and Human Services, Cleeland and colleagues asked a group of 1308 outpatients with metastatic cancer from 54 ECOG-affiliated locations to rate the severity of their cancer pain during the preceding week, the degree of pain-related functional impairment they experienced, and the degree of relief provided by their analgesic regimens. • Cleeland C.S. • Gonin R. • Hatfield A.K. • et al. Pain and its treatment in outpatients with metastatic cancer. Of the group, 871 of 1308 (67%) reported that they had experienced pain or taken analgesics in the week preceding the study, and 475 of 1306 (36%) said that their pain was severe enough that it impaired their ability to function. Of the 597 patients for whom complete information was available, 250 (42%) received inadequate analgesia.
Factors associated with poor pain management included minority race/ethnicity, a greater discrepancy between patient and physician in judging the degree of pain interference with activity, pain unrelated to cancer, older age, female sex, and better ECOG performance status (i.e., the physician's judgment that the patient was relatively less ill).

When queried, physicians admit that their efforts at pain management are largely inadequate. In a survey of all ECOG-affiliated physicians with pain management responsibility, Von Roenn and coworkers found that only 51% believed that pain control in their own practice settings was good or very good; 31% described it as fair, and 18% said that it was poor or very poor. • Von Roenn J.H. • Cleeland C.S. • Gonin R. • et al. Physician attitudes and practice in cancer pain management: a survey from the Eastern Cooperative Oncology Group.

## Effective Pain Control: Its Salutary Effect on Quality of Life

If poorly controlled pain has a deteriorative effect on quality of life, then the implication is that analgesics, by decreasing pain, will increase quality of life. Several recent studies have demonstrated that this intuitive association is true. • Ehrich E.W. • Bolognese J.A. • Watson D.J. • et al. Effect of rofecoxib on measures of health-related quality of life in patients with osteoarthritis. • McCarberg B.H. • Barkin R.L. Long-acting opioids for chronic pain: pharmacotherapeutic opportunities to enhance compliance, quality of life, and analgesia. • Rowbotham M. • Harden N. • Stacey B. • et al. Gabapentin for the treatment of postherpetic neuralgia: a randomized controlled trial. Katz N, Davis MW, Dworkin R. Topical lidocaine patch produces a significant improvement in mean pain scores in treated PHN patients: results of a multicenter open-label trial. In: Posters of the 20th Annual Scientific Meeting of the American Pain Society; Phoenix, AZ; April 19–22, 2001; Poster 741. For example, our group measured changes in Brief Pain Inventory scores in 332 patients with postherpetic neuralgia treated with a 5% lidocaine patch for 28 days. Katz N, Davis MW, Dworkin R. Topical lidocaine patch produces a significant improvement in mean pain scores in treated PHN patients: results of a multicenter open-label trial. In: Posters of the 20th Annual Scientific Meeting of the American Pain Society; Phoenix, AZ; April 19–22, 2001; Poster 741. We found that treatment was associated with decreased pain-related interference with quality of life in all domains examined (Figure 3). Katz N, Davis MW, Dworkin R. Topical lidocaine patch produces a significant improvement in mean pain scores in treated PHN patients: results of a multicenter open-label trial. In: Posters of the 20th Annual Scientific Meeting of the American Pain Society; Phoenix, AZ; April 19–22, 2001; Poster 741. Rowbotham and colleagues had similar results in their study of 229 patients with postherpetic neuralgia who were randomized to receive gabapentin or placebo for four weeks. • Rowbotham M. • Harden N. • Stacey B. • et al. Gabapentin for the treatment of postherpetic neuralgia: a randomized controlled trial. At the conclusion of the study, average daily pain scores were reduced from 6.3 to 4.2 points in the gabapentin-treated patients, compared with a change from 6.5 to 6.0 points in the placebo group (P < 0.001).
Simultaneously, SF-36 measures relating to physical functioning, role-physical, bodily pain, vitality, and mental health were all significantly better in the gabapentin group than in the placebo group (P ≤ 0.01). Gabapentin-treated patients also had significantly greater improvements than patients in the placebo group in Profile of Mood States assessments of depression-dejection, anger-hostility, fatigue-inertia, confusion-bewilderment, and total mood disturbance (P ≤ 0.01). • Rowbotham M. • Harden N. • Stacey B. • et al. Gabapentin for the treatment of postherpetic neuralgia: a randomized controlled trial.

The link between new treatments for arthritis and patient quality of life has also been evaluated. Ehrich and coworkers recently reported the effect of a cyclooxygenase-2-selective inhibitor, rofecoxib, on health-related quality of life in 672 patients with osteoarthritis of the knee or hip. • Ehrich E.W. • Bolognese J.A. • Watson D.J. • et al. Effect of rofecoxib on measures of health-related quality of life in patients with osteoarthritis. Patients were randomized to receive once-daily placebo or rofecoxib at doses of 5, 12.5, 25, or 50 mg, and the SF-36 was administered at baseline and at the conclusion of week 6 of treatment. • Ehrich E.W. • Bolognese J.A. • Watson D.J. • et al. Effect of rofecoxib on measures of health-related quality of life in patients with osteoarthritis. All doses of rofecoxib were significantly superior to placebo in relieving arthritis pain. This improvement in arthritis symptoms was found to correlate directly with improvements in quality of life. Adjusted within-group mean change scores demonstrated that all doses of rofecoxib brought about significant improvement on both the mental and physical component summary scores (Figure 4), • Ehrich E.W. • Bolognese J.A. • Watson D.J. • et al. Effect of rofecoxib on measures of health-related quality of life in patients with osteoarthritis. as well as on all eight physical and mental health domains of the SF-36. These improvements were significantly greater (P < 0.05) than those obtained with placebo in all domains except general health. A dose-response relationship was noted, such that the mean changes in quality of life for the 12.5-, 25-, and 50-mg groups were of a larger magnitude than that for the 5-mg group. • Ehrich E.W. • Bolognese J.A. • Watson D.J. • et al. Effect of rofecoxib on measures of health-related quality of life in patients with osteoarthritis. The investigators hypothesized that the improvement in overall emotional well-being experienced by the rofecoxib-treated patients was probably due to increased ability to perform and enjoy routine tasks and leisure activities as a result of relief of osteoarthritis signs and symptoms. • Ehrich E.W. • Bolognese J.A. • Watson D.J. • et al. Effect of rofecoxib on measures of health-related quality of life in patients with osteoarthritis.

## Quality of Life as a Differentiator: When Efficacy Is Similar, Can Quality-of-Life Measures Be Used to Show the Superiority of One Medication Over Another?

Quality of life is clearly an important variable to measure in and of itself. However, another use for quality-of-life measurement is increasingly being recognized. In any therapeutic area, drugs within the same pharmacologic class often have similar efficacy profiles. In such cases, quality of life and other such indicators have been used successfully to differentiate one agent from another.
At least two studies have made quality-of-life comparisons in the area of antihypertensive therapy. • Anderson R.B. • Hollenberg N.K. • Williams G.H. Physical Symptoms Distress Index: a sensitive tool to evaluate the impact of pharmacological agents on quality of life. • Testa M.A. • Anderson R.B. • Nackley J.F. • et al. Quality of life and antihypertensive therapy in men: a comparison of captopril with enalapril. In one, a study conducted by Testa and colleagues comparing quality of life in 379 patients being treated with either captopril or enalapril, no differences were found between the two agents in either efficacy or adverse effects. • Testa M.A. • Anderson R.B. • Nackley J.F. • et al. Quality of life and antihypertensive therapy in men: a comparison of captopril with enalapril. Nevertheless, captopril-treated patients were found to have significantly better quality-of-life scores than enalapril-treated patients. • Testa M.A. • Anderson R.B. • Nackley J.F. • et al. Quality of life and antihypertensive therapy in men: a comparison of captopril with enalapril. In the other study, a comparison of verapamil and nifedipine, no difference in efficacy between the two agents was reported. • Anderson R.B. • Hollenberg N.K. • Williams G.H. Physical Symptoms Distress Index: a sensitive tool to evaluate the impact of pharmacological agents on quality of life. However, a significant distinction was noted in Physical Symptom Distress Index scores (a measure of the distress caused by drug-related adverse effects) in favor of verapamil (P = 0.002), which corresponded to a difference between the two groups in quality-of-life scores. The variations in symptom distress scores tended to predict adherence; there were more discontinuations in the nifedipine group than in the verapamil group. The investigators concluded that measurement of symptom distress is a sensitive technique for evaluating the effect of antihypertensive therapy on quality of life. • Anderson R.B. • Hollenberg N.K. • Williams G.H. Physical Symptoms Distress Index: a sensitive tool to evaluate the impact of pharmacological agents on quality of life. Typically, adverse events are captured by recording symptoms spontaneously reported by patients; this is an insensitive method compared with prospectively capturing relevant side effects and their magnitude. • Anderson R.B. • Testa M.A. Symptom distress checklists as a component of quality of life measurement: comparing prompted reports by patient and physician with concurrent adverse event reports via the physician.

A similar relationship between a more favorable adverse-effect profile and quality of life was reported in a randomized crossover trial in which transdermal fentanyl was compared with sustained-release morphine in patients with chronic noncancer-related pain. Allan L, Milligan K. Randomized, crossover and open-label trials demonstrate the efficacy of transdermal fentanyl (Duragesic®) for the treatment of chronic non-cancer pain. European League Against Rheumatism; Prague, Czech Republic; June 13–16, 2001. Abstract SAT0127. In addition to more effective pain relief, fentanyl-treated patients reported significantly less trouble with side effects than those receiving morphine (P < 0.001). They also had significantly higher SF-36 scores in bodily pain, vitality, social functioning, and mental health (P < 0.005). These results suggest that tolerability may be a critical marker for quality of life.
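When two drugs are expected to differ mainly in side-effect burden, the comparison these studies describe reduces, in its simplest form, to comparing symptom-distress scores between treatment groups. The sketch below is a hypothetical illustration of that step only (made-up scores, two independent groups, Welch's t-test); it is not a reanalysis of any study cited here:

```python
import numpy as np
from scipy import stats

# Hypothetical symptom-distress scores (higher = more distress) for two
# drugs with similar efficacy. The values are invented for illustration.
drug_a = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4])
drug_b = np.array([1.9, 1.6, 2.2, 1.8, 2.0, 1.7, 2.1, 1.5])

# Welch's t-test (no equal-variance assumption) on the group means.
t, p = stats.ttest_ind(drug_a, drug_b, equal_var=False)
print(f"mean A = {drug_a.mean():.2f}, mean B = {drug_b.mean():.2f}, p = {p:.4f}")
```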
Because NSAIDs, including cyclooxygenase-2 inhibitors (coxibs), provide analgesia by nonopioid mechanisms, they may have an opioid-sparing effect in patients treated with both agents. Opioid sparing via use of coxibs can be expected to decrease common quality-of-life-impairing adverse effects associated with opioids, including drowsiness, dizziness, constipation, nausea, and tolerance. These studies support the notion that drug therapies for pain can potentially be differentiated in terms of overall impact on quality of life, and that the most relevant driver of quality of life in this setting may be symptom distress due to medicinal side effects.

### Future Directions in Quality-of-Life Research on Analgesics: Beyond Efficacy

The pain-relief efficacy of the coxibs is approximately equivalent to that of the nonselective nonsteroidal anti-inflammatory drugs (NSAIDs). However, the lower risk of gastrointestinal (GI) adverse effects associated with the coxibs compared with traditional NSAIDs is an important quality-of-life consideration. Both rofecoxib and celecoxib have been shown, in very large randomized clinical trials (n = 8076 and 8059, respectively), to result in a significantly lower rate of GI ulcers, blood loss, intolerability, and other GI events relative to conventional NSAIDs. • Silverstein F.E. • Faich G. • Goldstein J.L. • et al. Gastrointestinal toxicity with celecoxib vs nonsteroidal anti-inflammatory drugs for osteoarthritis and rheumatoid arthritis—the CLASS study: a randomized controlled trial. • Bombardier C. • Laine L. • Reicin A. • et al. Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. During nine months of follow-up, rofecoxib was found to be associated with 2.1 confirmed GI events per 100 patient-years compared with 4.5 events per 100 patient-years with naproxen (P < 0.001). • Bombardier C. • Laine L. • Reicin A. • et al. Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. Celecoxib was associated with an annualized incidence rate of upper GI ulcer complications of 0.76% compared with a rate of 1.45% for ibuprofen or diclofenac (P = 0.09). • Silverstein F.E. • Faich G. • Goldstein J.L. • et al. Gastrointestinal toxicity with celecoxib vs nonsteroidal anti-inflammatory drugs for osteoarthritis and rheumatoid arthritis—the CLASS study: a randomized controlled trial. Rather than depend on surrogate markers, however, future research should compare coxibs and nonselective NSAIDs with regard to effect on quality of life, focusing on symptom distress measurements. It is critical to examine whether overall quality of life is improved with coxibs compared with nonselective NSAIDs, based on improved tolerability. Similar comparisons of various opioids or modes of opioid administration should focus on quality of life.

## Conclusion

Uncontrolled pain has a universal and profoundly negative effect on quality of life; no age group is spared, and no type of pain is excepted. There is ample evidence, however, that effective analgesia improves quality of life—a finding that makes inadequate efforts at pain control unacceptable. Opioid analgesics have been shown to improve quality of life for patients with chronic pain, as have nonnarcotic agents, including nonselective NSAIDs and coxibs.
In addition to providing a realistic indicator of how a given treatment will affect patients' lives, quality-of-life indicators can also be used to differentiate two agents of the same pharmacologic class. Future research in the area of analgesic pharmacotherapy should include quality of life as a key variable. Analgesic agents should be compared both within and between classes, incorporating the use of symptom distress scales, which may be the most sensitive way of discriminating among analgesics in effects on quality of life. The meaningfulness of these differences can be addressed directly by methods that calibrate symptom distress to stressful real-life events or by using utility-based methods.

## References

- Gureje O, Von Korff M, Simon GE, et al. Persistent pain and well-being: a World Health Organization study in primary care. JAMA. 1998;280:147-151.
- MayoClinic.com. Managing pain: attitude, medication and therapy are keys to control. Mayo Clinic Web Site. June 21, 2001. Available at: http://www.mayoclinic.com/invoke.cfm?id=HQ01055. Accessed September 19, 2001.
- Abbott FV, Fraser MI. Use and abuse of over-the-counter analgesic agents. J Psychiatry Neurosci. 1998;23:13-34.
- Sternbach RA. Survey of pain in the United States: the Nuprin Pain Report. Clin J Pain. 1986;2:49-53.
- Rummans TA, Frost M, Suman VJ, et al. Quality of life and pain in patients with recurrent breast and gynecologic cancer. Psychosomatics. 1998;39:437-445.
- Anderson RB, Hollenberg NK, Williams GH. Physical Symptoms Distress Index: a sensitive tool to evaluate the impact of pharmacological agents on quality of life. Arch Intern Med. 1999;159:693-700.
- Ehrich EW, Bolognese JA, Watson DJ, et al. Effect of rofecoxib on measures of health-related quality of life in patients with osteoarthritis. Am J Manag Care. 2001;7:609-616.
- Wang XS, Cleeland CS, Mendoza TR, et al. The effects of pain severity on health-related quality of life: a study of Chinese cancer patients. Cancer. 1999;86:1848-1855.
- Skevington SM. Investigating the relationship between pain and discomfort and quality of life, using the WHOQOL. Pain. 1998;76:395-406.
- Esper P, Redman BG. Supportive care, pain management, and quality of life in advanced prostate cancer. Urol Clin North Am. 1999;26:375-389.
- Ware JE Jr. The SF-36 health survey. In: Spilker B, ed. Quality of Life and Pharmacoeconomics. 2nd ed. Philadelphia, PA: Lippincott-Raven Publishers; 1996:337-345.
- Hollenberg NK, Williams GH, Anderson R. Medical therapy, symptoms, and the distress they cause: relation to quality of life in patients with angina pectoris and/or hypertension. Arch Intern Med. 2000;160:1477-1483.
- Testa MA, Anderson RB, Nackley JF, et al. Quality of life and antihypertensive therapy in men: a comparison of captopril with enalapril. N Engl J Med. 1993;328:907-913.
- Joint Commission on Accreditation of Healthcare Organizations. Assessment of persons with pain. In: Pain Assessment and Management: An Organizational Approach. Oakbrook Terrace, IL: Joint Commission on Accreditation of Healthcare Organizations; 2000:13-25.
- Torrance GW. Utility approach to measuring health-related quality of life. J Chron Dis. 1987;40:593-600.
- Won A, Lapane K, Gambassi G, et al. Correlates and management of nonmalignant pain in the nursing home. J Am Geriatr Soc. 1999;47:936-942.
- Hunfeld JA, Perquin CW, Duivenvoorden HJ, et al. Chronic pain and its impact on quality of life in adolescents and their families. J Pediatr Psychol. 2001;26:145-153.
- Haythornthwaite JA, Benrud-Larson LM. Psychological aspects of neuropathic pain. Clin J Pain. 2000;16:S101-S105.
- Becker N, Thomsen AB, Olsen AK, et al. Pain epidemiology and health related quality of life in chronic non-malignant pain patients referred to a Danish multidisciplinary pain center. Pain. 1997;73:393-400.
- Hill CL, Parsons J, Taylor A, et al. Health related quality of life in a population sample with arthritis. J Rheumatol. 1999;26:2029-2035.
- Cleeland CS, Ryan KM. Pain assessment: global use of the Brief Pain Inventory. Ann Acad Med Singapore. 1994;23:129-138.
- Cleeland CS, Gonin R, Hatfield AK, et al. Pain and its treatment in outpatients with metastatic cancer. N Engl J Med. 1994;330:592-596.
- Von Roenn JH, Cleeland CS, Gonin R, et al. Physician attitudes and practice in cancer pain management: a survey from the Eastern Cooperative Oncology Group. Ann Intern Med. 1993;119:121-126.
- McCarberg BH, Barkin RL. Long-acting opioids for chronic pain: pharmacotherapeutic opportunities to enhance compliance, quality of life, and analgesia. Am J Ther. 2001;8:181-186.
- Rowbotham M, Harden N, Stacey B, et al. Gabapentin for the treatment of postherpetic neuralgia: a randomized controlled trial. JAMA. 1998;280:1837-1842.
- Katz N, Davis MW, Dworkin R. Topical lidocaine patch produces a significant improvement in mean pain scores in treated PHN patients: results of a multicenter open-label trial. In: Posters of the 20th Annual Scientific Meeting of the American Pain Society; Phoenix, AZ; April 19-22, 2001; Poster 741.
- Anderson RB, Testa MA. Symptom distress checklists as a component of quality of life measurement: comparing prompted reports by patient and physician with concurrent adverse event reports via the physician. Drug Inf J. 1994;28:89-114.
- Allan L, Milligan K. Randomized, crossover and open-label trials demonstrate the efficacy of transdermal fentanyl (Duragesic®) for the treatment of chronic non-cancer pain. European League Against Rheumatism; Prague, Czech Republic; June 13-16, 2001. Abstract SAT0127.
- Silverstein FE, Faich G, Goldstein JL, et al. Gastrointestinal toxicity with celecoxib vs nonsteroidal anti-inflammatory drugs for osteoarthritis and rheumatoid arthritis—the CLASS study: a randomized controlled trial. JAMA. 2000;284:1247-1255.
- Bombardier C, Laine L, Reicin A, et al. Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. N Engl J Med. 2000;343:1520-1528.
# Contents

## Idea

The zeta function naturally associated to a Riemann surface/complex curve, hence the zeta function of an elliptic differential operator for the Laplace operator on the Riemann surface (and hence essentially the Feynman propagator for the scalar fields on that surface) is directly analogous to the zeta functions associated with arithmetic curves, notably the Artin L-functions.

(Minakshisundaram-Pleijel 49) considered the zeta function of an elliptic differential operator for the Laplace operator on a Riemann surface. Motivated by the resemblance of the Selberg trace formula to Weil’s formula for the sum of zeros of the Riemann zeta function, (Selberg 56) defined for any compact hyperbolic Riemann surface a zeta function-like expression, the Selberg zeta function of a Riemann surface (e.g. Bump, below theorem 19). Much of this is more generally defined/considered on higher dimensional hyperbolic manifolds.

That the Selberg zeta function is indeed proportional to the zeta function of a Laplace operator is due to (D’Hoker-Phong 86, Sarnak 87), and that it is similarly related to the eta function of a Dirac operator on the given Riemann surface/hyperbolic manifold goes back to (Milson 78), with further development including (Park 01). For a review of the literature on this relation see also the beginning of (Friedman 06).

## Examples

### For a complex torus / complex elliptic curve

For $\mathbb{C}/(\mathbb{Z}\oplus \tau \mathbb{Z})$ a complex torus (complex elliptic curve) equipped with its standard flat Riemannian metric, the zeta function of the corresponding Laplace operator $\Delta$ is

$\zeta_{\Delta}(s) = (2\pi)^{-2 s} E(s) \coloneqq (2\pi)^{-2 s} \underset{(k,l)\in \mathbb{Z}\times\mathbb{Z}-(0,0)}{\sum} \frac{1}{{\vert k +\tau l\vert}^{2s}} \,.$

The corresponding functional determinant is

$\exp( -\zeta^\prime_{\Delta}(0) ) = (Im \tau)^2 {\vert \eta(\tau)\vert}^4 \,,$

where $\eta$ is the Dedekind eta function (recalled e.g. in Todorov 03, page 3).

### Of Dirac operators twisted by a flat connection

For $A$ a flat connection on a Riemannian manifold, write $D_A$ for the Dirac operator twisted by this connection. On a suitable hyperbolic manifold, the partition function/theta function for $D_A$ appears in (Bunke-Olbrich 94, prop. 6.3) (and, for the odd-dimensional case, in (Bunke-Olbrich 94a, def. 3.1)). The corresponding Selberg zeta formula is (Bunke-Olbrich 94a, def. 4.1). This has a form analogous to that of Artin L-functions, with the flat connection replaced by a Galois representation.

## Properties

### Analogy with Artin L-function

That the Selberg/Ruelle zeta function is equivalently an Euler product of characteristic polynomials is due to (Gangolli 77, (2.72); Fried 86, prop. 5). That it is in particular the Euler product of characteristic polynomials of the monodromies/holonomies of the flat connection corresponding to the given group representation is (Bunke-Olbrich 94, prop. 6.3) for the even-dimensional case and (Bunke-Olbrich 94a) for the odd-dimensional case.

Notice that this is analogous to the standard definition of an Artin L-function if one interprets a Frobenius map $Frob_p$ (as discussed there) as an element of the arithmetic fundamental group of an arithmetic curve and a Galois representation as a flat connection.
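Returning for a moment to the torus example above: the formula $(\mathrm{Im}\,\tau)^2 \vert\eta(\tau)\vert^4$ is easy to evaluate numerically. The sketch below (Python, used purely for illustration; the truncation level of the $q$-product is an arbitrary choice) computes $\eta(\tau)$ from its product formula and checks the value at $\tau = i$ against the closed form $\eta(i) = \Gamma(1/4)/(2\pi^{3/4})$:

```python
import cmath
import math

def dedekind_eta(tau: complex, terms: int = 200) -> complex:
    """Dedekind eta via its product formula: q^(1/24) * prod_{n>=1} (1 - q^n)."""
    q = cmath.exp(2j * cmath.pi * tau)
    eta = cmath.exp(2j * cmath.pi * tau / 24)
    for n in range(1, terms + 1):
        eta *= 1 - q**n
    return eta

def torus_determinant(tau: complex) -> float:
    """(Im tau)^2 * |eta(tau)|^4, the functional determinant from the formula above."""
    return (tau.imag ** 2) * abs(dedekind_eta(tau)) ** 4

tau = 1j  # the square torus
print(torus_determinant(tau))                        # ~0.348300
print((math.gamma(0.25) / (2 * math.pi ** 0.75)) ** 4)  # |eta(i)|^4, same value
```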
### Function field analogy

| number fields (“function fields of curves over $\mathbb{F}_1$”) | function fields of curves over finite fields $\mathbb{F}_q$ (arithmetic curves) | Riemann surfaces/complex curves |
| --- | --- | --- |
| **affine and projective line** | | |
| $\mathbb{Z}$ (integers) | $\mathbb{F}_q[z]$ (polynomials, function algebra on affine line $\mathbb{A}^1_{\mathbb{F}_q}$) | $\mathcal{O}_{\mathbb{C}}$ (holomorphic functions on complex plane) |
| $\mathbb{Q}$ (rational numbers) | $\mathbb{F}_q(z)$ (rational functions) | meromorphic functions on complex plane |
| $p$ (prime number/non-archimedean place) | $x \in \mathbb{F}_p$ | $x \in \mathbb{C}$ |
| $\infty$ (place at infinity) | | $\infty$ |
| $Spec(\mathbb{Z})$ (Spec(Z)) | $\mathbb{A}^1_{\mathbb{F}_q}$ (affine line) | complex plane |
| $Spec(\mathbb{Z}) \cup place_{\infty}$ | $\mathbb{P}_{\mathbb{F}_q}$ (projective line) | Riemann sphere |
| $\partial_p \coloneqq \frac{(-)^p - (-)}{p}$ (Fermat quotient) | | $\frac{\partial}{\partial z}$ (coordinate derivation) |
| genus of the rational numbers = 0 | | genus of the Riemann sphere = 0 |
| **formal neighbourhoods** | | |
| $\mathbb{Z}_p$ (p-adic integers) | $\mathbb{F}_q[ [ t -x ] ]$ (power series around $x$) | $\mathbb{C}[ [z-x] ]$ (holomorphic functions on formal disk around $x$) |
| $Spf(\mathbb{Z}_p)\underset{Spec(\mathbb{Z})}{\times} X$ (“$p$-arithmetic jet space” of $X$ at $p$) | | formal disks in $X$ |
| $\mathbb{Q}_p$ (p-adic numbers) | $\mathbb{F}_q((z-x))$ (Laurent series around $x$) | $\mathbb{C}((z-x))$ (holomorphic functions on punctured formal disk around $x$) |
| $\mathbb{A}_{\mathbb{Q}} = \underset{p\; place}{\prod^\prime}\mathbb{Q}_p$ (ring of adeles) | $\mathbb{A}_{\mathbb{F}_q((t))}$ (adeles of function field) | $\underset{x \in \mathbb{C}}{\prod^\prime} \mathbb{C}((z-x))$ (restricted product of holomorphic functions on all punctured formal disks, finitely many of which do not extend to the unpunctured disks) |
| $\mathbb{I}_{\mathbb{Q}} = GL_1(\mathbb{A}_{\mathbb{Q}})$ (group of ideles) | $\mathbb{I}_{\mathbb{F}_q((t))}$ (ideles of function field) | $\underset{x \in \mathbb{C}}{\prod^\prime} GL_1(\mathbb{C}((z-x)))$ |
| **theta functions** | | |
| Jacobi theta function | | |
| **zeta functions** | | |
| Riemann zeta function | Goss zeta function | |
| **branched covering curves** | | |
| $K$ a number field ($\mathbb{Q} \hookrightarrow K$ a possibly ramified finite dimensional field extension) | $K$ a function field of an algebraic curve $\Sigma$ over $\mathbb{F}_p$ | $K_\Sigma$ (sheaf of rational functions on complex curve $\Sigma$) |
| $\mathcal{O}_K$ (ring of integers) | | $\mathcal{O}_{\Sigma}$ (structure sheaf) |
| $Spec_{an}(\mathcal{O}_K) \to Spec(\mathbb{Z})$ (spectrum with archimedean places) | $\Sigma$ (arithmetic curve) | $\Sigma \to \mathbb{C}P^1$ (complex curve being branched cover of Riemann sphere) |
| $\frac{(-)^p - \Phi(-)}{p}$ (lift of Frobenius morphism/Lambda-ring structure) | | $\frac{\partial}{\partial z}$ |
| genus of a number field | genus of an algebraic curve | genus of a surface |
| **formal neighbourhoods** | | |
| $v$ prime ideal in ring of integers $\mathcal{O}_K$ | $x \in \Sigma$ | $x \in \Sigma$ |
| $K_v$ (formal completion at $v$) | | $\mathbb{C}((z_x))$ (function algebra on punctured formal disk around $x$) |
| $\mathcal{O}_{K_v}$ (ring of integers of formal completion) | | $\mathbb{C}[ [ z_x ] ]$ (function algebra on formal disk around $x$) |
| $\mathbb{A}_K$ (ring of adeles) | | $\prod^\prime_{x\in \Sigma} \mathbb{C}((z_x))$ (restricted product of function rings on all punctured formal disks around all points in $\Sigma$) |
| $\mathcal{O}$ | | $\prod_{x\in \Sigma} \mathbb{C}[ [z_x] ]$ (function ring on all formal disks around all points in $\Sigma$) |
| $\mathbb{I}_K = GL_1(\mathbb{A}_K)$ (group of ideles) | | $\prod^\prime_{x\in \Sigma} GL_1(\mathbb{C}((z_x)))$ |
| **Galois theory** | | |
| Galois group | | $\pi_1(\Sigma)$ fundamental group |
| Galois representation | | flat connection (“local system”) on $\Sigma$ |
| **class field theory** | | |
| class field theory | | geometric class field theory |
| Hilbert reciprocity law | Artin reciprocity law | Weil reciprocity law |
| $GL_1(K)\backslash GL_1(\mathbb{A}_K)$ (idele class group) | | |
| $GL_1(K)\backslash GL_1(\mathbb{A}_K)/GL_1(\mathcal{O})$ | | $Bun_{GL_1}(\Sigma)$ (moduli stack of line bundles, by Weil uniformization theorem) |
| **non-abelian class field theory and automorphy** | | |
| number field Langlands correspondence | function field Langlands correspondence | geometric Langlands correspondence |
| $GL_n(K) \backslash GL_n(\mathbb{A}_K)//GL_n(\mathcal{O})$ (constant sheaves on this stack form unramified automorphic representations) | | $Bun_{GL_n(\mathbb{C})}(\Sigma)$ (moduli stack of bundles on the curve $\Sigma$, by Weil uniformization theorem) |
| Tamagawa-Weil for number fields | Tamagawa-Weil for function fields | |
| **theta functions** | | |
| Hecke theta function | | functional determinant line bundle of Dirac operator/chiral Laplace operator on $\Sigma$ |
| **zeta functions** | | |
| Dedekind zeta function | Weil zeta function | zeta function of a Riemann surface/of the Laplace operator on $\Sigma$ |
| **higher dimensional spaces** | | |
| zeta functions | Hasse-Weil zeta function | |

| context/function field analogy | theta function $\theta$ | zeta function $\zeta$ (= Mellin transform of $\theta(0,-)$) | L-function $L_{\mathbf{z}}$ (= Mellin transform of $\theta(\mathbf{z},-)$) | eta function $\eta$ | special values of L-functions |
| --- | --- | --- | --- | --- | --- |
| physics/2d CFT | partition function $\theta(\mathbf{z},\mathbf{\tau}) = Tr(\exp(-\mathbf{\tau} \cdot (D_{\mathbf{z}})^2))$ as function of complex structure $\mathbf{\tau}$ of worldsheet $\Sigma$ (hence polarization of phase space) and background gauge field/source $\mathbf{z}$ | analytically continued trace of Feynman propagator $\zeta(s) = Tr_{reg}\left(\frac{1}{(D_{0})^2}\right)^s = \int_{0}^\infty \tau^{s-1} \,\theta(0,\tau)\, d\tau$ | analytically continued trace of Feynman propagator in background gauge field $\mathbf{z}$: $L_{\mathbf{z}}(s) \coloneqq Tr_{reg}\left(\frac{1}{(D_{\mathbf{z}})^2}\right)^s = \int_{0}^\infty \tau^{s-1} \,\theta(\mathbf{z},\tau)\, d\tau$ | analytically continued trace of Dirac propagator in background gauge field $\mathbf{z}$: $\eta_{\mathbf{z}}(s) = Tr_{reg} \left(\frac{sgn(D_{\mathbf{z}})}{ {\vert D_{\mathbf{z}} \vert} }\right)^s$ | regularized 1-loop vacuum amplitude $pv\, L_{\mathbf{z}}(1) = Tr_{reg}\left(\frac{1}{(D_{\mathbf{z}})^2}\right)$ / regularized fermionic 1-loop vacuum amplitude $pv\, \eta_{\mathbf{z}}(1)= Tr_{reg} \left( \frac{D_{\mathbf{z}}}{(D_{\mathbf{z}})^2} \right)$ / vacuum energy $-\frac{1}{2}L_{\mathbf{z}}^\prime(0) = Z_H = \frac{1}{2}\ln\;det_{reg}(D_{\mathbf{z}}^2)$ |
| Riemannian geometry (analysis) | | zeta function of an elliptic differential operator | zeta function of an elliptic differential operator | eta function of a self-adjoint operator | functional determinant, analytic torsion |
| complex analytic geometry | section $\theta(\mathbf{z},\mathbf{\tau})$ of line bundle over Jacobian variety $J(\Sigma_{\mathbf{\tau}})$ in terms of covering coordinates $\mathbf{z}$ on $\mathbb{C}^g \to J(\Sigma_{\mathbf{\tau}})$ | zeta function of a Riemann surface | Selberg zeta function | Dedekind eta function | |
| arithmetic geometry for a function field | | Goss zeta function (for arithmetic curves) and Weil zeta function (in higher dimensional arithmetic geometry) | | | |
| arithmetic geometry for a number field | Hecke theta function, automorphic form | Dedekind zeta function (being the Artin L-function $L_{\mathbf{z}}$ for $\mathbf{z} = 0$ the trivial Galois representation) | Artin L-function $L_{\mathbf{z}}$ of a Galois representation $\mathbf{z}$, expressible “in coordinates” (by Artin reciprocity) as a finite-order Hecke L-function (for 1-dimensional representations) and generally (via Langlands correspondence) by an automorphic L-function (for higher dimensional reps) | | class number $\cdot$ regulator |
| arithmetic geometry for $\mathbb{Q}$ | Jacobi theta function ($\mathbf{z} = 0$) / Dirichlet theta function ($\mathbf{z} = \chi$ a Dirichlet character) | Riemann zeta function (being the Dirichlet L-function $L_{\mathbf{z}}$ for Dirichlet character $\mathbf{z} = 0$) | Artin L-function of a Galois representation $\mathbf{z}$, expressible “in coordinates” (via Artin reciprocity) as a Dirichlet L-function (for 1-dimensional Galois representations) and generally (via Langlands correspondence) as an automorphic L-function | | |

## References

Original articles include:

- S. Minakshisundaram, Å. Pleijel, Some properties of the eigenfunctions of the Laplace-operator on Riemannian manifolds, Canadian Journal of Mathematics 1 (1949) 242–256, doi:10.4153/CJM-1949-021-5, ISSN 0008-414X, MR 0031145 (web)
- Atle Selberg, Harmonic analysis and discontinuous groups in weakly symmetric Riemannian spaces with applications to Dirichlet series, Journal of the Indian Mathematical Society 20 (1956) 47-87.
- John Milson, Closed geodesics and the $\eta$-invariant, Ann. of Math. 108 (1978) 1-39

Expression of the Selberg/Ruelle zeta function as an Euler product of characteristic polynomials is due to:

- Ramesh Gangolli, Zeta functions of Selberg’s type for compact space forms of symmetric spaces of rank one, Illinois J. Math. 21 no. 1 (1977) 1-41. (Euclid)
- David Fried, The zeta functions of Ruelle and Selberg. I, Annales scientifiques de l’École Normale Supérieure, Sér. 4, 19 no. 4 (1986) 491-517 (Numdam)

Discussion of the relation between, on the one hand, zeta functions of Laplace operators/eta functions of Dirac operators and, on the other hand, Selberg zeta functions includes:

- Eric D'Hoker, Duong Phong, On determinants of Laplacians on Riemann surfaces, Communications in Mathematical Physics 104 no. 4 (1986) 537-545 (Euclid)
- Peter Sarnak, Determinants of Laplacians, Communications in Mathematical Physics 110 no. 1 (1987) 113-120. (Euclid)
- Ulrich Bunke, Martin Olbrich, Andreas Juhl, The wave kernel for the Laplacian on the classical locally symmetric spaces of rank one, theta functions, trace formulas and the Selberg zeta function, Annals of Global Analysis and Geometry 12 no. 1 (1994) 357-405
- Ulrich Bunke, Martin Olbrich, Theta and zeta functions for locally symmetric spaces of rank one (arXiv:dg-ga/9407013)

and for odd-dimensional spaces also in
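As a concrete numerical instance of the relation recorded in the table above — the zeta function as the Mellin transform of $\theta(0,-)$ — the completed Riemann zeta function satisfies $\pi^{-s/2}\Gamma(s/2)\zeta(s) = \int_0^\infty t^{s/2-1}\,\tfrac{\theta(i t)-1}{2}\,dt$ for $Re(s) > 1$, with $\theta$ the Jacobi theta function. The sketch below checks this numerically at $s = 4$ (assumptions: SciPy quadrature, a truncated theta sum, and $s = 4$ chosen to keep the integrand bounded near $t = 0$):

```python
import math
from scipy.integrate import quad

def psi(t: float, nmax: int = 200) -> float:
    """(theta(i t) - 1)/2 = sum_{n>=1} exp(-pi n^2 t), truncated at nmax."""
    return sum(math.exp(-math.pi * n * n * t) for n in range(1, nmax + 1))

s = 4.0
# Upper limit 50 instead of infinity: the integrand decays like exp(-pi t).
lhs, _ = quad(lambda t: t ** (s / 2 - 1) * psi(t), 0.0, 50.0)
rhs = math.pi ** (-s / 2) * math.gamma(s / 2) * (math.pi ** 4 / 90)  # zeta(4) = pi^4/90
print(lhs, rhs)  # both ~ pi^2/90 ~ 0.1096623
```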
## Linear relations between polynomial orbits

### Resource details

Belongs to: Project Euclid (Hosted at Cornell University Library)

Description: We study the orbits of a polynomial $f\in \mathbb {C}[X]$, namely, the sets $\{\alpha,f(\alpha),f(f(\alpha)),\ldots\}$ with $\alpha\in \mathbb {C}$. We prove that if two nonlinear complex polynomials $f,g$ have orbits with infinite intersection, then $f$ and $g$ have a common iterate. More generally, we describe the intersection of any line in $\mathbb {C}^{d}$ with a $d$-tuple of orbits of nonlinear polynomials, and we formulate a question which generalizes both this result and the Mordell–Lang conjecture.

Author(s): Ghioca, Dragos; Tucker, Thomas J.; Zieve, Michael E.

Language: English. Keywords: 37F10. Rights: Copyright 2012 Duke University Press.

Published in: Duke Math. J. 161, no. 7 (2012), 1379-1410. doi:10.1215/00127094-1598098
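To make the objects concrete: a forward orbit is just the sequence of iterates of a point, and the theorem above concerns when two such orbits can share infinitely many points. The toy sketch below (plain Python over the integers, with $g = f\circ f$ chosen so that a common iterate exists by construction) illustrates the easy converse direction — a common iterate forces an infinite intersection:

```python
def orbit(f, alpha, length):
    """First `length` points of the forward orbit {alpha, f(alpha), f(f(alpha)), ...}."""
    points, x = [], alpha
    for _ in range(length):
        points.append(x)
        x = f(x)
    return points

f = lambda x: x**2 + 1   # a nonlinear polynomial; integer arithmetic is exact
g = lambda x: f(f(x))    # g = f∘f, so f and g share the iterate f^2 by construction

orb_f = set(orbit(f, 0, 17))   # 0, 1, 2, 5, 26, 677, ...
orb_g = set(orbit(g, 0, 8))    # 0, 2, 26, ... (every other f-iterate)
print(sorted(orb_f & orb_g)[:4])   # [0, 2, 26, 458330] -- g's orbit sits inside f's
```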
# Similar Artists

Martin Hubert (http://www.martinhubert.eu/) is a science journalist, working among others for WDR and Deutschlandradio. His areas of specialization…
# Conjugacy problem in a conjugacy separable group

Here is a question that has been bothering me for some time:

Let G be a finitely generated conjugacy separable group with solvable word problem. Does it follow that the conjugacy problem in G is solvable?

Background. A group G is said to be conjugacy separable if for any two non-conjugate elements x, y in G there is a homomorphism from G to a finite group F such that the images of x and y are not conjugate in F. Equivalently, G is conjugacy separable if each conjugacy class is closed in the profinite topology on G.

A well-known theorem of Mal'cev states that a finitely presented conjugacy separable group has solvable conjugacy problem (in this case it is possible to recursively enumerate all the finite quotients, simultaneously checking conjugacy of the images of two given elements in each of them).

On the first page of the paper 'Conjugacy separability of certain torsion groups' (Arch. Math. (Basel) 68 (1997), no. 6, 441--449), Wilson and Zalesskii claim that the conjugacy problem is solvable in finitely generated recursively presented conjugacy separable groups (which, of course, implies a positive answer to my question), and refer to a work of J. McKinsey, 'The decision problem for some classes of sentences without quantifiers' (J. Symbolic Logic 8, 61--76 (1943)). However, I could not find anything in the latter paper that would allow one to deal with infinite recursive presentations.

Moreover, the corresponding property for residually finite groups simply fails. More precisely, there exist finitely generated residually finite recursively (infinitely!) presented groups with unsolvable word problem (cf. 'A Finitely Generated Residually Finite Group with an Unsolvable Word Problem' by S. Meskin, Proceedings of the American Mathematical Society, Vol. 43, No. 1 (Mar., 1974), pp. 8-10).

- Ashot, could you say where Mal'cev proved this? I thought the reference for this result was [W. Mostowski, On the decidability of some problems in special classes of groups. Fund. Math. 59 (1966), 123--135.] – Igor Belegradek Jul 19 '10 at 18:14
- Igor, the reference is [A. I. Mal'cev, On homomorphisms onto finite groups (Russian). Uchen. Zap. Ivanovskogo Gos. Ped. Inst. 18 (1958), 49-60. English translation in: Amer. Math. Soc. Transl. Ser. 2, 119 (1983), 67-79.] Mostowski cites Mal'cev's announcement of this result but seems to be unaware of the main paper. In fact, Mal'cev's proof is also based on the above-mentioned work of McKinsey. – Ashot Minasyan Jul 19 '10 at 18:36
- I'm struggling to think of a known construction of finitely generated, infinitely presented, conjugacy separable groups. Ashot, do you have any candidates in mind? – HJRW Jul 19 '10 at 20:58
- @Henry: I think that an appropriately chosen version of the Bowditch-Mess construction has the desired properties. ams.org/mathscinet-getitem?mr=1240944 – Ian Agol Jul 19 '10 at 22:39
- Henry, plenty of such groups exist, e.g. $\mathbb{Z} \wr \mathbb{Z}$, lamplighter groups and, more generally, wreath products of finitely generated abelian groups with conjugacy separable groups (such that the acting group has separable cyclic subgroups) -- by a theorem of V. Remeslennikov [Finite approximability of groups with respect to conjugacy (Russian). Sibirsk. Mat. Ž. 12 (1971), 1085--1099.] I also proved [see Cor. 2.9 in personal.soton.ac.uk/am4x07/rs/RAAG-conj_sep.pdf] that all Bestvina-Brady groups are conjugacy separable. – Ashot Minasyan Jul 20 '10 at 11:09
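Since Mal'cev's argument is algorithmic, here is a small brute-force illustration (my own sketch, not taken from any of the papers cited above) of the 'negative' half of the procedure: searching for a finite symmetric-group quotient in which the images of two given elements are non-conjugate. The presentation, the word encoding and all names are invented for this example; the complementary semi-procedure (enumerating potential conjugators and applying the solvable word problem) is omitted.

```python
# Enumerate homomorphisms G -> S_n (images of generators satisfying the
# relators) and look for one in which the images of x and y are non-conjugate.
from itertools import permutations, product

def compose(p, q):                     # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def evaluate(word, images):            # word: list of +/-(k+1) for generator k
    result = tuple(range(len(images[0])))
    for letter in word:
        g = images[abs(letter) - 1]
        result = compose(result, g if letter > 0 else inverse(g))
    return result

def conjugate_in_Sn(a, b, n):          # brute force over all conjugators
    return any(compose(compose(s, a), inverse(s)) == b
               for s in permutations(range(n)))

def find_separating_quotient(num_gens, relators, x, y, max_n=5):
    for n in range(2, max_n + 1):
        identity = tuple(range(n))
        for images in product(permutations(range(n)), repeat=num_gens):
            if all(evaluate(r, images) == identity for r in relators):
                if not conjugate_in_Sn(evaluate(x, images),
                                       evaluate(y, images), n):
                    return n, images   # a witness of non-conjugacy
    return None

# Infinite dihedral group <a, b | a^2, b^2>: a and b are not conjugate,
# and already a quotient in S_2 separates them.
print(find_separating_quotient(2, [[1, 1], [2, 2]], x=[1], y=[2]))
```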
# source:branches/2011/dev_NEMO_MERGE_2011/DOC/TexFiles/Chapters/Chap_LDF.tex@3221

Last change on this file since 3221 was 3221, checked in by agn, 9 years ago. File size: 25.2 KB

% ================================================================
% Chapter: Lateral Ocean Physics (LDF)
% ================================================================
\chapter{Lateral Ocean Physics (LDF)}
\label{LDF}
\minitoc

\newpage
$\$\newline    % force a new line

The lateral physics terms in the momentum and tracer equations have been described in \S\ref{PE_zdf} and their discrete formulation in \S\ref{TRA_ldf} and \S\ref{DYN_ldf}. In this section we further discuss each lateral physics option. Choosing one lateral physics scheme means for the user defining: (1) the space and time variations of the eddy coefficients; (2) the direction along which the lateral diffusive fluxes are evaluated (model level, geopotential or isopycnal surfaces); and (3) the type of operator used (harmonic or biharmonic operators, and, for tracers only, eddy induced advection on tracers). These three aspects of the lateral diffusion are set through namelist parameters and CPP keys (see the \textit{nam\_traldf} and \textit{nam\_dynldf} below).

%-----------------------------------nam_traldf - nam_dynldf--------------------------------------------
\namdisplay{namtra_ldf}
\namdisplay{namdyn_ldf}
%--------------------------------------------------------------------------------------------------------------

% ================================================================
% Lateral Mixing Coefficients
% ================================================================
\section [Lateral Mixing Coefficient (\textit{ldftra}, \textit{ldfdyn})]
        {Lateral Mixing Coefficient (\mdl{ldftra}, \mdl{ldfdyn}) }
\label{LDF_coef}

Introducing a space variation in the lateral eddy mixing coefficients changes the model core memory requirement, adding up to four extra three-dimensional arrays for the geopotential or isopycnal second order operator applied to momentum. Six CPP keys control the space variation of eddy coefficients: three for momentum and three for tracers. The three choices allow: a space variation in the three space directions (\key{traldf\_c3d}, \key{dynldf\_c3d}), in the horizontal plane (\key{traldf\_c2d}, \key{dynldf\_c2d}), or in the vertical only (\key{traldf\_c1d}, \key{dynldf\_c1d}). The default option is a constant value over the whole ocean on both momentum and tracers.

The number of additional arrays that have to be defined and the gridpoint position at which they are defined depend on both the space variation chosen and the type of operator used. The resulting eddy viscosity and diffusivity coefficients can be a function of more than one variable. Changes in the computer code when switching from one option to another have been minimized by introducing the eddy coefficients as statement functions (include files \hf{ldftra\_substitute} and \hf{ldfdyn\_substitute}). The functions are replaced by their actual meaning during the preprocessing step (CPP). The specification of the space variation of the coefficient is made in \mdl{ldftra} and \mdl{ldfdyn}, or more precisely in the include files \textit{traldf\_cNd.h90} and \textit{dynldf\_cNd.h90}, with N=1, 2 or 3. The user can modify these include files as he/she wishes.
The way the mixing coefficients are set in the reference version can be briefly described as follows:

\subsubsection{Constant Mixing Coefficients (default option)}
When none of the \textbf{key\_dynldf\_...} and \textbf{key\_traldf\_...} keys are defined, a constant value is used over the whole ocean for momentum and tracers, which is specified through the \np{rn\_ahm0} and \np{rn\_aht0} namelist parameters.

\subsubsection{Vertically varying Mixing Coefficients (\key{traldf\_c1d} and \key{dynldf\_c1d})}
The 1D option is only available when using the $z$-coordinate with full step. Indeed, in all the other types of vertical coordinate the depth is a 3D function of (\textbf{i},\textbf{j},\textbf{k}), and therefore introducing depth-dependent mixing coefficients would require 3D arrays. In the 1D option, a hyperbolic variation of the lateral mixing coefficient is introduced, in which the surface value is \np{rn\_aht0} (\np{rn\_ahm0}), the bottom value is 1/4 of the surface value, and the transition takes place around z=300~m with a width of 300~m ($i.e.$ both the depth and the width of the inflection point are set to 300~m). This profile is hard coded in file \hf{traldf\_c1d}, but can be easily modified by users.

\subsubsection{Horizontally Varying Mixing Coefficients (\key{traldf\_c2d} and \key{dynldf\_c2d})}
By default the horizontal variation of the eddy coefficient depends on the local mesh size and the type of operator used (a standalone numerical sketch is given at the end of this subsection):
\begin{equation} \label{Eq_title}
  A_l = \left\{
   \begin{aligned}
         & \frac{\max(e_1,e_2)}{e_{max}} A_o^l           & \text{for laplacian operator } \\
         & \frac{\max(e_1,e_2)^{3}}{e_{max}^{3}} A_o^l          & \text{for bilaplacian operator }
   \end{aligned}    \right.
\end{equation}
where $e_{max}$ is the maximum of $e_1$ and $e_2$ taken over the whole masked ocean domain, and $A_o^l$ is the \np{rn\_ahm0} (momentum) or \np{rn\_aht0} (tracer) namelist parameter. This variation is intended to reflect the lesser need for subgrid-scale eddy mixing where the grid size is smaller in the domain. It was introduced in the context of the DYNAMO modelling project \citep{Willebrand_al_PO01}. Note that such a grid-scale dependence of the mixing coefficients significantly increases the range of stability of model configurations that present large changes in grid spacing, such as global ocean models. Indeed, in such a case a constant mixing coefficient can lead to a blow-up of the model, due to a coefficient that is large compared to the smallest grid size (see \S\ref{STP_forward_imp}), especially when using a bilaplacian operator.

Other formulations can be introduced by the user for a given configuration. For example, in the ORCA2 global ocean model (\key{orca\_r2}), the laplacian viscosity operator uses \np{rn\_ahm0}~= 4.10$^4$ m$^2$/s poleward of 20$^{\circ}$ north and south and decreases linearly to \np{rn\_aht0}~= 2.10$^3$ m$^2$/s at the equator. This specification can be found in routine \rou{ldf\_dyn\_c2d\_orca} defined in \mdl{ldfdyn\_c2d}. Similar modified horizontal variations can be found with the Antarctic or Arctic sub-domain options of ORCA2 and ORCA05 (\key{antarctic} or \key{arctic} defined, see \hf{ldfdyn\_antarctic} and \hf{ldfdyn\_arctic}).
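As a quick illustration of the scaling in \eqref{Eq_title}, here is a standalone Python/numpy sketch. It is not NEMO code: the function name and test values are invented, and the mask handling is only schematic.

```python
# Sketch of A_l = (max(e1,e2)/e_max)^p * A0, with p=1 (laplacian)
# or p=3 (bilaplacian); e1, e2 are local horizontal scale factors.
import numpy as np

def lateral_coeff(e1, e2, A0, operator="laplacian", mask=None):
    e = np.maximum(e1, e2)
    # e_max is taken over the (masked) ocean domain, as in the text
    e_max = e.max() if mask is None else e[mask].max()
    p = 1 if operator == "laplacian" else 3
    return (e / e_max) ** p * A0

e1 = np.array([[100e3, 50e3], [25e3, 12.5e3]])  # made-up grid spacings (m)
print(lateral_coeff(e1, e1, A0=4.0e4))          # coefficient shrinks with the mesh
```

Note how, with the bilaplacian exponent, halving the grid spacing divides the coefficient by eight, which is what keeps fine-mesh regions stable.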
\subsubsection{Space Varying Mixing Coefficients (\key{traldf\_c3d} and \key{dynldf\_c3d})}

The 3D space variation of the mixing coefficient is simply the combination of the 1D and 2D cases, $i.e.$ a hyperbolic tangent variation with depth associated with a grid size dependence of the magnitude of the coefficient.

\subsubsection{Space and Time Varying Mixing Coefficients}

There is no default specification of space and time varying mixing coefficients. The only case available is specific to the ORCA2 and ORCA05 global ocean configurations (\key{orca\_r2} or \key{orca\_r05}). It provides only a tracer mixing coefficient for eddy induced velocity (ORCA2) or both iso-neutral and eddy induced velocity (ORCA05) that depends on the local growth rate of baroclinic instability. This specification is actually used when an ORCA key and both \key{traldf\_eiv} and \key{traldf\_c2d} are defined.

$\$\newline    % force a new line

A space variation in the eddy coefficient calls for several remarks:

(1) the momentum diffusion operator acting along model level surfaces is written in terms of the curl and divergent components of the horizontal current (see \S\ref{PE_ldf}). Although the eddy coefficient could be set to different values in these two terms, this option is not available.

(2) with a horizontally varying viscosity, the quadratic integral constraints on enstrophy and on the square of the horizontal divergence for operators acting along model surfaces are no longer satisfied (Appendix~\ref{Apdx_dynldf_properties}).

(3) for isopycnal diffusion on momentum or tracers, an additional purely horizontal background diffusion with uniform coefficient can be added by setting a non-zero value of \np{rn\_ahmb0} or \np{rn\_ahtb0}, a background horizontal eddy viscosity or diffusivity coefficient (namelist parameters whose default values are $0$). However, the technique used to compute the isopycnal slopes is intended to get rid of such a background diffusion, since it introduces spurious diapycnal diffusion (see \S\ref{LDF_slp}).

(4) when an eddy induced advection term is used (\key{traldf\_eiv}), $A^{eiv}$, the eddy induced coefficient, has to be defined. Its space variations are controlled by the same CPP variable as for the eddy diffusivity coefficient ($i.e.$ \textbf{key\_traldf\_cNd}).

(5) the eddy coefficient associated with a biharmonic operator must be set to a \emph{negative} value.

(6) it is possible to use both the laplacian and biharmonic operators concurrently.

(7) it is possible to run without explicit lateral diffusion on momentum (\np{ln\_dynldf\_lap} = \np{ln\_dynldf\_bilap} = false). This is recommended when using the UBS advection scheme and can be useful for testing purposes.

% ================================================================
% Direction of lateral Mixing
% ================================================================
\section  [Direction of Lateral Mixing (\textit{ldfslp})]
      {Direction of Lateral Mixing (\mdl{ldfslp})}
\label{LDF_slp}

%%%
\gmcomment{  we should emphasize here that the implementation is a rather old one. Better work can be achieved by using \citet{Griffies_al_JPO98, Griffies_Bk04} iso-neutral scheme. }

A direction for lateral mixing has to be defined when the desired operator does not act along the model levels.
This occurs when $(a)$ horizontal mixing is required on tracers or momentum (\np{ln\_traldf\_hor} or \np{ln\_dynldf\_hor}) in $s$- or mixed $s$-$z$-coordinates, and $(b)$ iso-neutral mixing is required, whatever the vertical coordinate is. This direction of mixing is defined by its slopes in the \textbf{i}- and \textbf{j}-directions at the face of the cell of the quantity to be diffused. For a tracer, this leads to the following four slopes: $r_{1u}$, $r_{1w}$, $r_{2v}$, $r_{2w}$ (see \eqref{Eq_tra_ldf_iso}), while for momentum the slopes are $r_{1t}$, $r_{1uw}$, $r_{2f}$, $r_{2uw}$ for $u$ and $r_{1f}$, $r_{1vw}$, $r_{2t}$, $r_{2vw}$ for $v$.

%gm% add here a figure of the slope in i-direction

\subsection{Slopes for tracer geopotential mixing in the $s$-coordinate}

In $s$-coordinates, for geopotential mixing ($i.e.$ horizontal mixing), $r_1$ and $r_2$ are the slopes between the geopotential and computational surfaces. Their discrete formulation is found by locally solving \eqref{Eq_tra_ldf_iso} when the diffusive fluxes in the three directions are set to zero and $T$ is assumed to be horizontally uniform, $i.e.$ a linear function of $z_T$, the depth of a $T$-point.
%gm { Steven : My version is obviously wrong since I'm left with an arbitrary constant which is the local vertical temperature gradient}

\begin{equation} \label{Eq_ldfslp_geo}
\begin{aligned}
 r_{1u} &= \frac{e_{3u}}{ \left( e_{1u}\;\overline{\overline{e_{3w}}}^{\,i+1/2,\,k} \right)}
           \;\delta_{i+1/2}[z_t]
      &\approx \frac{1}{e_{1u}}\; \delta_{i+1/2}[z_t]
\\
 r_{2v} &= \frac{e_{3v}}{\left( e_{2v}\;\overline{\overline{e_{3w}}}^{\,j+1/2,\,k} \right)}
           \;\delta_{j+1/2} [z_t]
      &\approx \frac{1}{e_{2v}}\; \delta_{j+1/2}[z_t]
\\
 r_{1w} &= \frac{1}{e_{1w}}\;\overline{\overline{\delta_{i+1/2}[z_t]}}^{\,i,\,k+1/2}
      &\approx \frac{1}{e_{1w}}\; \delta_{i+1/2}[z_{uw}]
\\
 r_{2w} &= \frac{1}{e_{2w}}\;\overline{\overline{\delta_{j+1/2}[z_t]}}^{\,j,\,k+1/2}
      &\approx \frac{1}{e_{2w}}\; \delta_{j+1/2}[z_{vw}]
\\
\end{aligned}
\end{equation}

%gm%  caution I'm not sure the simplification was a good idea!

These slopes are computed once in \rou{ldfslp\_init} when \np{ln\_sco}=True, and either \np{ln\_traldf\_hor}=True or \np{ln\_dynldf\_hor}=True.

\subsection{Slopes for tracer iso-neutral mixing}\label{LDF_slp_iso}
In iso-neutral mixing, $r_1$ and $r_2$ are the slopes between the iso-neutral and computational surfaces. Their formulation does not depend on the vertical coordinate used. Their discrete formulation is found using the fact that the diffusive fluxes of locally referenced potential density ($i.e.$ $in situ$ density) vanish.
So, substituting $T$ by $\rho$ in \eqref{Eq_tra_ldf_iso} and setting the diffusive fluxes in the three directions to zero leads to the following definition for the neutral slopes:

\begin{equation} \label{Eq_ldfslp_iso}
\begin{split}
 r_{1u} &= \frac{e_{3u}}{e_{1u}}\; \frac{\delta_{i+1/2}[\rho]}
                        {\overline{\overline{\delta_{k+1/2}[\rho]}}^{\,i+1/2,\,k}}
\\
 r_{2v} &= \frac{e_{3v}}{e_{2v}}\; \frac{\delta_{j+1/2}\left[\rho \right]}
                        {\overline{\overline{\delta_{k+1/2}[\rho]}}^{\,j+1/2,\,k}}
\\
 r_{1w} &= \frac{e_{3w}}{e_{1w}}\;
         \frac{\overline{\overline{\delta_{i+1/2}[\rho]}}^{\,i,\,k+1/2}}
             {\delta_{k+1/2}[\rho]}
\\
 r_{2w} &= \frac{e_{3w}}{e_{2w}}\;
         \frac{\overline{\overline{\delta_{j+1/2}[\rho]}}^{\,j,\,k+1/2}}
             {\delta_{k+1/2}[\rho]}
\\
\end{split}
\end{equation}

%gm% rewrite this as the explanation is not very clear !!!
%In practice, \eqref{Eq_ldfslp_iso} is of little help in evaluating the neutral surface slopes. Indeed, for an unsimplified equation of state, the density has a strong dependency on pressure (here approximated as the depth), therefore applying \eqref{Eq_ldfslp_iso} using the $in situ$ density, $\rho$, computed at T-points leads to a flattening of slopes as the depth increases. This is due to the strong increase of the $in situ$ density with depth.

%By definition, neutral surfaces are tangent to the local $in situ$ density \citep{McDougall1987}, therefore in \eqref{Eq_ldfslp_iso}, all the derivatives have to be evaluated at the same local pressure (which in decibars is approximated by the depth in meters).

%In the $z$-coordinate, the derivative of the \eqref{Eq_ldfslp_iso} numerator is evaluated at the same depth \nocite{as what?} ($T$-level, which is the same as the $u$- and $v$-levels), so the $in situ$ density can be used for its evaluation.

As the mixing is performed along neutral surfaces, the gradient of $\rho$ in \eqref{Eq_ldfslp_iso} has to be evaluated at the same local pressure (which, in decibars, is approximated by the depth in meters in the model). Therefore \eqref{Eq_ldfslp_iso} cannot be used as such; a further transformation is needed, depending on the vertical coordinate used:

\begin{description}

\item[$z$-coordinate with full step:] in \eqref{Eq_ldfslp_iso} the densities appearing in the $i$ and $j$ derivatives are taken at the same depth, thus the $in situ$ density can be used. This is not the case for the vertical derivatives: $\delta_{k+1/2}[\rho]$ is replaced by $-\rho N^2/g$, where $N^2$ is the local Brunt-V\"{a}is\"{a}l\"{a} frequency evaluated following \citet{McDougall1987} (see \S\ref{TRA_bn2}).

\item[$z$-coordinate with partial step:] this case is identical to the full step case except that at partial step levels the \emph{horizontal} density gradient is evaluated as described in \S\ref{TRA_zpshde}.

\item[$s$- or hybrid $s$-$z$-coordinate:] in the current release of \NEMO, there is no specific treatment for iso-neutral mixing in the $s$-coordinate. In other words, iso-neutral mixing will only be accurately represented with a linear equation of state (\np{nn\_eos}=1 or 2). In the case of a ``true'' equation of state, the evaluation of the $i$ and $j$ derivatives in \eqref{Eq_ldfslp_iso} will include a pressure-dependent part, leading to the wrong evaluation of the neutral slopes.
%gm%
Note: the solution for the $s$-coordinate passes through the use of a different (and better) expression for the constraint on iso-neutral fluxes. Following \citet{Griffies_Bk04}, instead of specifying directly that there is a zero neutral diffusive flux of locally referenced potential density, we stay in the $T$-$S$ plane and consider the balance between the neutral-direction diffusive fluxes of potential temperature and salinity:
\begin{equation}
\alpha \ \textbf{F}(T) = \beta \ \textbf{F}(S)
\end{equation}
%gm{  where vector F is ....}

This constraint leads to the following definition for the slopes:

\begin{equation} \label{Eq_ldfslp_iso2}
\begin{split}
 r_{1u} &= \frac{e_{3u}}{e_{1u}}\; \frac
      {\alpha_u \;\delta_{i+1/2}[T] - \beta_u \;\delta_{i+1/2}[S]}
      {\alpha_u \;\overline{\overline{\delta_{k+1/2}[T]}}^{\,i+1/2,\,k}
       -\beta_u \;\overline{\overline{\delta_{k+1/2}[S]}}^{\,i+1/2,\,k} }
\\
 r_{2v} &= \frac{e_{3v}}{e_{2v}}\; \frac
      {\alpha_v \;\delta_{j+1/2}[T] - \beta_v \;\delta_{j+1/2}[S]}
      {\alpha_v \;\overline{\overline{\delta_{k+1/2}[T]}}^{\,j+1/2,\,k}
       -\beta_v \;\overline{\overline{\delta_{k+1/2}[S]}}^{\,j+1/2,\,k} }
\\
 r_{1w} &= \frac{e_{3w}}{e_{1w}}\; \frac
      {\alpha_w \;\overline{\overline{\delta_{i+1/2}[T]}}^{\,i,\,k+1/2}
       -\beta_w \;\overline{\overline{\delta_{i+1/2}[S]}}^{\,i,\,k+1/2} }
      {\alpha_w \;\delta_{k+1/2}[T] - \beta_w \;\delta_{k+1/2}[S]}
\\
 r_{2w} &= \frac{e_{3w}}{e_{2w}}\; \frac
      {\alpha_w \;\overline{\overline{\delta_{j+1/2}[T]}}^{\,j,\,k+1/2}
       -\beta_w \;\overline{\overline{\delta_{j+1/2}[S]}}^{\,j,\,k+1/2} }
      {\alpha_w \;\delta_{k+1/2}[T] - \beta_w \;\delta_{k+1/2}[S]}
\\
\end{split}
\end{equation}
where $\alpha$ and $\beta$, the thermal expansion and saline contraction coefficients introduced in \S\ref{TRA_bn2}, have to be evaluated at the three velocity points. In order to save computation time, they should be approximated by the mean of their values at $T$-points (for example, in the case of $\alpha$: $\alpha_u=\overline{\alpha_T}^{\,i+1/2}$, $\alpha_v=\overline{\alpha_T}^{\,j+1/2}$ and $\alpha_w=\overline{\alpha_T}^{\,k+1/2}$).

Note that such a formulation could also be used in the $z$-coordinate and $z$-coordinate with partial steps cases.

\end{description}

This implementation is a rather old one. It is similar to the one proposed by Cox [1987], except for the background horizontal diffusion. Indeed, the Cox implementation of isopycnal diffusion in GFDL-type models requires a minimum background horizontal diffusion for numerical stability reasons. To overcome this problem, several techniques have been proposed in which the numerical schemes of the ocean model are modified \citep{Weaver_Eby_JPO97, Griffies_al_JPO98}. Here, another strategy has been chosen \citep{Lazar_PhD97}: a local filtering of the iso-neutral slopes (made on 9 grid-points) prevents the development of grid-point noise generated by the iso-neutral diffusion operator (Fig.~\ref{Fig_LDF_ZDF1}). This allows an iso-neutral diffusion scheme without additional background horizontal mixing. This technique can be viewed as a diffusion operator that acts along large-scale (2~$\Delta$x) \gmcomment{2deltax doesnt seem very large scale} iso-neutral surfaces.
The diapycnal diffusion required for numerical stability is thus minimized, and its net effect on the flow is quite small when compared to the effect of a horizontal background mixing.

Nevertheless, this iso-neutral operator does not ensure that variance cannot increase, contrary to the \citet{Griffies_al_JPO98} operator, which has that property.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]      \begin{center}
\includegraphics[width=0.70\textwidth]{./TexFiles/Figures/Fig_LDF_ZDF1.pdf}
\caption {    \label{Fig_LDF_ZDF1}
Averaging procedure for the isopycnal slope computation.}
\end{center}    \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

%First the expression for the rotation tensor has been obtained assuming the "small slope" approximation, so a bound has to be imposed on slopes.
%Second, numerical stability issues also require a bound on slopes.
%Third, the question of boundary conditions specified on slopes...

%from griffies: chapter 13.1....

% In addition and also for numerical stability reasons \citep{Cox1987, Griffies_Bk04},
% the slopes are bounded by $1/100$ everywhere. This limit decreases linearly
% to zero between $70$ meters depth and the surface (the fact that the eddies "feel" the
% surface motivates this flattening of isopycnals near the surface).

For numerical stability reasons \citep{Cox1987, Griffies_Bk04}, the slopes must also be bounded by $1/100$ everywhere. This constraint is applied in a piecewise linear fashion, increasing from zero at the surface to $1/100$ at $70$ metres and thereafter decreasing to zero at the bottom of the ocean (the fact that the eddies ``feel'' the surface motivates this flattening of isopycnals near the surface).

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]     \begin{center}
\includegraphics[width=0.70\textwidth]{./TexFiles/Figures/Fig_eiv_slp.pdf}
\caption {     \label{Fig_eiv_slp}
Vertical profile of the slope used for lateral mixing in the mixed layer:
\textit{(a)} in the real ocean the slope is the iso-neutral slope in the ocean interior, which has to be adjusted at the surface boundary ($i.e.$ it must tend to zero at the surface, since there is no mixing across the air-sea interface: wall boundary condition). Nevertheless, the profile between the surface zero value and the interior iso-neutral one is unknown, and especially the value at the base of the mixed layer;
\textit{(b)} profile of the slope using a linear tapering of the slope near the surface and imposing a maximum slope of 1/100;
\textit{(c)} profile of the slope actually used in \NEMO: a linear decrease of the slope from zero at the surface to its ocean interior value computed just below the mixed layer. Note the huge change in the slope at the base of the mixed layer between \textit{(b)} and \textit{(c)}.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

\colorbox{yellow}{add here a discussion about the flattening of the slopes vs. tapering the coefficient.}

\subsection{Slopes for momentum iso-neutral mixing}

The iso-neutral diffusion operator on momentum is the same as the one used on tracers but applied to each component of the velocity separately (see \eqref{Eq_dyn_ldf_iso} in section~\ref{DYN_ldf_iso}).
The slopes between the surface along which the diffusion operator acts and the surface of computation ($z$- or $s$-surfaces) are defined at $T$-, $f$-, and \textit{uw}-points for the $u$-component, and at $T$-, $f$- and \textit{vw}-points for the $v$-component. They are computed from the slopes used for tracer diffusion, $i.e.$ \eqref{Eq_ldfslp_geo} and \eqref{Eq_ldfslp_iso}:

\begin{equation} \label{Eq_ldfslp_dyn}
\begin{aligned}
&r_{1t}\ \ = \overline{r_{1u}}^{\,i}       &&&    r_{1f}\ \ &= \overline{r_{1u}}^{\,i+1/2} \\
&r_{2f} \ \ = \overline{r_{2v}}^{\,j+1/2} &&&   r_{2t}\ &= \overline{r_{2v}}^{\,j} \\
&r_{1uw}  = \overline{r_{1w}}^{\,i+1/2} &&\ \ \text{and} \ \ &   r_{1vw}&= \overline{r_{1w}}^{\,j+1/2} \\
&r_{2uw}= \overline{r_{2w}}^{\,i+1/2} &&&         r_{2vw}&= \overline{r_{2w}}^{\,j+1/2}\\
\end{aligned}
\end{equation}

The major remaining issue is the specification of the boundary conditions. The same boundary conditions are chosen as those used for lateral diffusion along model level surfaces, $i.e.$ using the shear computed along the model levels and with no additional friction at the ocean bottom (see \S\ref{LBC_coast}).

% ================================================================
% Eddy Induced Mixing
% ================================================================
\section  [Eddy Induced Velocity (\textit{traadv\_eiv}, \textit{ldfeiv})]
      {Eddy Induced Velocity (\mdl{traadv\_eiv}, \mdl{ldfeiv})}
\label{LDF_eiv}

When Gent and McWilliams [1990] diffusion is used (\key{traldf\_eiv} defined), an eddy induced tracer advection term is added, the formulation of which depends on the slopes of the iso-neutral surfaces. Contrary to the case of iso-neutral mixing, the slopes used here are referenced to the geopotential surfaces, $i.e.$ \eqref{Eq_ldfslp_geo} is used in $z$-coordinates, and the sum \eqref{Eq_ldfslp_geo} + \eqref{Eq_ldfslp_iso} in $s$-coordinates. The eddy induced velocity is given by:
\begin{equation} \label{Eq_ldfeiv}
\begin{split}
 u^* & = \frac{1}{e_{2u}e_{3u}}\; \delta_k \left[e_{2u} \, A_{uw}^{eiv} \; \overline{r_{1w}}^{\,i+1/2} \right]\\
 v^* & = \frac{1}{e_{1v}e_{3v}}\; \delta_k \left[e_{1v} \, A_{vw}^{eiv} \; \overline{r_{2w}}^{\,j+1/2} \right]\\
 w^* & = \frac{1}{e_{1w}e_{2w}}\; \left\{ \delta_i \left[e_{2u} \, A_{uw}^{eiv} \; \overline{r_{1w}}^{\,i+1/2} \right] + \delta_j \left[e_{1v} \, A_{vw}^{eiv} \; \overline{r_{2w}}^{\,j+1/2} \right] \right\} \\
\end{split}
\end{equation}
where $A^{eiv}$ is the eddy induced velocity coefficient, whose value is set through \np{rn\_aeiv}, a \textit{nam\_traldf} namelist parameter. The three components of the eddy induced velocity are computed and added to the Eulerian velocity in \mdl{traadv\_eiv}. This has been preferred to a separate computation of the advective trends associated with the eiv velocity, since it allows us to take advantage of all the advection schemes offered for the tracers (see \S\ref{TRA_adv}) and not just the $2^{nd}$ order advection scheme, as in previous releases of OPA \citep{Madec1998}. This is particularly useful for passive tracers, where \emph{positivity} of the advection scheme is of paramount importance.

At the surface, lateral and bottom boundaries, the eddy induced velocity, and thus the advective eddy fluxes of heat and salt, are set to zero.
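To make \eqref{Eq_ldfeiv} concrete, here is a toy single-column numpy sketch of the zonal component $u^*$. It is not NEMO code: scale factors are taken uniform and all values are invented test data.

```python
# Toy sketch of u* = (1/(e2u*e3u)) * delta_k[ e2u * A_eiv * r1w ]
import numpy as np

nk = 5
e2u, e3u = 50e3, 10.0                  # uniform horizontal/vertical scale factors (m)
A_eiv = 2000.0                         # eddy induced velocity coefficient (m^2/s)
r1w = np.linspace(0.0, 1e-4, nk + 1)   # slopes at w-points, zero at the surface

psi = e2u * A_eiv * r1w                # the quantity differenced by delta_k
u_star = np.diff(psi) / (e2u * e3u)    # one u* value per model level
print(u_star)
```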
I ran into this error with a student of mine, running Astro on Windows (we could not replicate it on macOS), after running npm run dev. After much 🤔 we solved it by renaming the parent folder, which apparently had a strange character in its name, perhaps non-ASCII. If you run into this problem, try renaming the folder you are running in (or perhaps a parent folder in the path) to plain ASCII text, like "test".
Use MathJax offline in org-mode export

Is there any way to use MathJax in emacs org-mode HTML export without an internet connection? I know that MathJax can be downloaded, but when I provide org-html-mathjax-options with the path to MathJax.js, the HTML file doesn't use MathJax at all. Any help would be appreciated. Thanks!

Update -- this has been answered. In org-html-mathjax-options, set the path to /<path-to-mathjax>/MathJax/MathJax.js?config=TeX-AMS_HTML,local/local

• Interesting, I never used MathJax with org-mode. So, thinking out loud: if getting to MathJax.js through the web works, I would try to get to it through a local server, i.e. something like http://localhost/<path-to-mathjax-folder>/MathJax.js (it depends on how the server is set up) Jan 30 '17 at 20:52
• @RolazaroAzeveires strangely that does not work. Nor does file:///<path-to-mathjax-folder>/MathJax.js. Jan 30 '17 at 22:07

This works for me, as a local setting: instead of

#+HTML_MATHJAX: path:"http://localhost/<path-to-mathjax>/MathJax.js"

use

#+HTML_MATHJAX: path:"http://localhost/<path-to-mathjax>/MathJax.js?config=TeX-AMS_HTML"

I do not know why; both manuals, from emacs and from org-mode, do not use this config option, so any further explanation is welcome. I got there by noting that the default value includes this config option, not a plain path. According to user14743's comment, in org-html-mathjax-options, set the path to /<path-to-mathjax>/MathJax/MathJax.js?config=TeX-AMS-MML_HTMLorMML,local/local. I note that file://... does not work either way.

• You just had to put that at the top of your file? I tried and still, strangely, nothing. I'm glad you were able to make it work in principle, at least, even though I can't get it to work over here. That makes me a little optimistic that I can get it set up. I had thought maybe it was due to one of my Firefox extensions, but this failed even after disabling them. Feb 1 '17 at 3:51
• @user14743, yes. I am not sure what else to suggest... Can you see the MathJax.js file if you use the path in your browser? The path must be the path seen by the local host, not the one on the hard disk. Feb 1 '17 at 21:58
• I'm unable to post an answer for some reason. I can't see the CAPTCHA box so that I can click it and prove I'm a human. Anyway, I was able to solve it. In org-html-mathjax-options, set the path to /<path-to-mathjax>/MathJax/MathJax.js?config=TeX-AMS-MML_HTMLorMML,local/local Feb 2 '17 at 18:14
• Feel free to update your answer to reflect this and I'll mark it as solved. This will help future people find the answer to this question. Feb 2 '17 at 18:15

Thanks for your post. I find this works for me,

#+HTML_MATHJAX: path:"file:///<path-to-mathjax-folder>/MathJax.js?config=TeX-AMS_HTML"

However, this does not work,

#+HTML_MATHJAX: path:"http:/localhost/<path-to-mathjax-folder>/MathJax.js?config=TeX-AMS_HTML"

I don't know why. This just provides an alternative way.

• The second version should be http:// not http:/. That may be the reason it didn't work for you. Dec 10 '19 at 13:37
• I tried both http:// and https://. Neither of them worked. Dec 12 '19 at 10:29
• I also tried /<path-to-mathjax-folder>/ without file:///. It unexpectedly worked. I tried all of these on a Mac. The third one looks like the version with file:///. To get http:// to work, it may need some other configuration. Dec 12 '19 at 10:42
## Uniform estimates for a variational problem with small parameters. (English) Zbl 0793.49019

The author studies the constrained variational problem $\inf\{J_I[u] : u\in H_2(I),\ \langle u\rangle_I = a\}$, where $I$ is a bounded interval on the line and
$$J_I[u] = \frac{1}{|I|}\int_I \big(u''^2 - \mu u'^2 + \psi(u)\big)\,dt \quad\text{and}\quad \langle u\rangle_I = \frac{1}{|I|}\int_I u\,dt.$$
Here $\mu$ is a positive number and $\psi$ is a double well potential, e.g. $\psi(u) = (u^2-1)^2$. This problem (which we denote by $(P^a_I)$) was introduced in B. D. Coleman, M. Marcus and V. J. Mizel [Arch. Ration. Mech. Anal. 117, No. 4, 321-347 (1992)] as a model for the determination of the thermodynamical equilibrium states of unidimensional bodies. In the above-mentioned paper the authors were interested in studying the patterns of equilibrium states of large bodies. For this purpose they investigated a version of the above model in which the underlying domain is the whole line. In this version one defines $J_R[u]$ (the energy of a state $u\in H^{\mathrm{loc}}_2(R)$) as $\liminf_{T\to\infty} J_{(-T,T)}[u]$ and the average mass $\langle u\rangle_R$ as the limit of $\langle u\rangle_{(-T,T)}$ (which is assumed to exist). In the present paper the author investigates the relation between the formally limiting problem $(P^a_R)$ and the problems $(P^a_I)$ as $|I|\to\infty$. The main part of the paper is devoted to the derivation of uniform a priori estimates for equilibrium states of problem $(P^a_I)$, which are crucial to this investigation.

Reviewer: M. Marcus (Haifa)

### MSC:

49S05 Variational principles of physics

### References:

[1] Adams, R. A., Sobolev Spaces, Academic Press, 1975.
[2] Attouch, H., Variational Convergence for Functions and Operators, Pitman, 1984. · Zbl 0561.49012
[3] Coleman, B. D., Marcus, M. & Mizel, V. J., On the thermodynamics of periodic phases, Arch. Rational Mech. Anal. 117 (1992), 321-347. · Zbl 0788.73015
[4] Leizarowitz, A., Infinite horizon autonomous systems with unbounded cost, Appl. Math. Optim. 13 (1985), 19-43. · Zbl 0591.93039
[5] Leizarowitz, A. & Mizel, V. J., One dimensional infinite-horizon variational problems arising in continuum mechanics, Arch. Rational Mech. Anal. 106 (1989), 161-194. · Zbl 0672.73010
[6] Marcus, M. & Mizel, V. J., Higher order variational problems related to a model in thermodynamics (in preparation).
# global section of local system from direct image

Deligne has a theorem in "Théorie de Hodge II" as follows:

Let $S$ be a smooth separated scheme, and $f:X\to S$ a smooth proper morphism. Let $\bar{X}$ be a nonsingular compactification of $X$. Then the canonical morphism
$$H^n(\bar{X},\mathbf{Q})\to H^0(S,\mathbf{R}^nf_* \mathbf{Q})$$
is surjective.

My question is: if we replace the constant sheaf $\mathbf{Q}$ over $\bar{X}$ with a local system $E$ (a locally constant sheaf), does the theorem still hold? Thank you!

-

Yes, this is still true, if we assume that $E$ is the underlying local system of a polarized variation of Hodge structure on $\overline X$, which takes care of most local systems of "algebro-geometric origin". Deligne's result comes from a combination of the following three results:

1. The Leray spectral sequence for $f \colon X \to S$ and the sheaf $\mathbf Q$ degenerates.
2. The map $H^n(\overline X,\mathbf Q) \to W_nH^n(X,\mathbf Q)$ is surjective.
3. $H^0(S,R^nf_\ast \mathbf Q)$ is pure.

To generalize to a local system we need a suitable formalism of mixed sheaves. Since you have $\mathbf Q$-coefficients, it looks like you're working over $\mathbf C$, so we should use Saito's theory of mixed Hodge modules. Then all of these remain true with coefficients in $E$ instead.

For the first one, Saito proves that the perverse Leray spectral sequence degenerates for a proper morphism and a pure Hodge module. For a morphism which is in addition smooth, the perverse Leray sequence is just the ordinary one, and if $E$ is a PVHS then it is a pure Hodge module.

For the second, it is enough to prove dually that $\mathrm{gr}^W_{n+k} H^n_c(X,E) \to \mathrm{gr}^W_{n+k} H^n_c(\overline X, E)$ is injective (where $k$ is the weight of the sheaf $E$). But the long exact sequence of a pair identifies the kernel of this map with a quotient of $\mathrm{gr}^W_{n+k} H^{n-1}_c(\overline X \setminus X,E)$, which vanishes by the fact that $Rf_!$ decreases weights. (For a more general statement see Peters and Saito, "Lowest weights in cohomology of variations of Hodge structure".)

The third follows because $H^0(S,R^nf_\ast E)$ injects into $H^n(X_s,E)$ (as the monodromy invariants), which is pure because $X_s$ is smooth and proper and because the restriction of $E$ to $X_s$ is still pure; again everything is due to Saito's theory.

- Can you give a more detailed reference? Or has somebody proved that theorem in a published paper? We want to cite it. – Lan Sep 9 '13 at 13:27
- I don't know a paper proving this specific fact; most likely you'll have to include it as a lemma with a proof along the lines of what I wrote above. Saito's papers are notoriously difficult to read, but two references which could be useful for you are the first few pages of Saito's "Introduction to mixed Hodge modules" (which should contain enough information to fill in the details of what I wrote) and the treatment in the book of Peters and Steenbrink. Or you could read Beilinson-Bernstein-Deligne (for the l-adic story), or Brylinski and Zucker's "An overview of recent advances in Hodge theory". – Dan Petersen Sep 9 '13 at 16:10
# K-Fold Cross-Validation in TensorFlow when using flow_from_directory for image recognition

Disclaimer: I have very little experience with TensorFlow. I have a custom dataset with 20 categories with 100+ images in each. I am doing 5-fold cross-validation using InceptionV3 for transfer learning. The easiest way to load this dataset into TensorFlow that I was able to find was flow_from_directory. The method works for one fold, but not for 5 folds, since you can't set the folds. How would I go about dividing the generators into 5 folds? Should I use an alternative method of importing data instead of flow_from_directory? There was a similar question where the answer was seemingly just importing the data in a different way.

    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    # preprocess_input for the InceptionV3 model mentioned above
    from tensorflow.keras.applications.inception_v3 import preprocess_input

    datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
                                 validation_split=0.2)
    train_generator = datagen.flow_from_directory('/content/dataset',
                                                  target_size=(299, 299),
                                                  color_mode='rgb',
                                                  batch_size=32,
                                                  class_mode='categorical',
                                                  shuffle=True,
                                                  subset='training')
    val_generator = datagen.flow_from_directory('/content/dataset',
                                                target_size=(299, 299),
                                                color_mode='rgb',
                                                batch_size=32,
                                                class_mode='categorical',
                                                shuffle=True,
                                                subset='validation')

The easiest way I found was replacing the flow_from_directory command with flow_from_dataframe (see the Keras documentation for more information on this command). That way you can split the dataframe. You just have to make a dataframe with image paths and labels.

    import pandas as pd
    from sklearn.model_selection import KFold
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    i = 1
    df_metrics = pd.DataFrame()
    kf = KFold(n_splits=10, shuffle=True, random_state=None)
    # batch_size, img_width and img_height are assumed to be defined earlier
    for train_index, test_index in kf.split(dataframe):
        trainData = dataframe.iloc[train_index]
        testData = dataframe.iloc[test_index]
        print('Initializing Kfold %s' % str(i))
        print('Train shape:', trainData.shape)
        print('Test shape:', testData.shape)
        epochs = 30
        train_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)
        test_datagen = ImageDataGenerator(rescale=1./255)
        train_generator = train_datagen.flow_from_dataframe(
            dataframe=trainData, directory="./train/",
            x_col="id", y_col="label", subset="training",
            batch_size=batch_size, shuffle=True,
            class_mode="categorical", target_size=(img_width, img_height))
        validation_generator = train_datagen.flow_from_dataframe(
            dataframe=trainData, directory="./train/",
            x_col="id", y_col="label", subset="validation",
            batch_size=batch_size, shuffle=True,
            class_mode="categorical", target_size=(img_width, img_height))
        test_generator = test_datagen.flow_from_dataframe(
            dataframe=testData, directory="./test/",
            x_col="id", y_col="label",
            batch_size=1, shuffle=False,
            class_mode="categorical", target_size=(img_width, img_height))
        . . .
        i += 1
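One step the answer above glosses over is building `dataframe` itself. Here is a minimal sketch, assuming a flow_from_directory-style layout with one subfolder per class; the helper name is mine, and the column names simply match the answer's x_col="id" and y_col="label" choices.

    import os
    import pandas as pd

    def make_dataframe(root):
        # one row per image: (path relative to root, class label)
        rows = [(os.path.join(label, fname), label)
                for label in sorted(os.listdir(root))
                if os.path.isdir(os.path.join(root, label))
                for fname in sorted(os.listdir(os.path.join(root, label)))]
        return pd.DataFrame(rows, columns=["id", "label"])

    dataframe = make_dataframe("./train/")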
### Gold Again

Without using a calculator, computer or tables, find the exact values of $\cos 36^\circ \cos 72^\circ$ and also $\cos 36^\circ - \cos 72^\circ$.

### Pythagorean Golden Means

Show that the arithmetic mean, geometric mean and harmonic mean of $a$ and $b$ can be the lengths of the sides of a right-angled triangle if and only if $a = bx^3$, where $x$ is the Golden Ratio.

### Golden Triangle

Three triangles ABC, CBD and ABD (where D is a point on AC) are all isosceles. Find all the angles. Prove that the ratio of AB to BC is equal to the golden ratio.

# Golden Eggs

##### Stage: 5 Challenge Level:

(1) An ellipse with semi-axes $a$ and $b$ fits between two circles of radii $a$ and $b$ (where $b > a$) as shown in the diagram. If the area of the ellipse is equal to the area of the annulus, what is the ratio $b:a$?

(2) Find the value of $R$ if this sequence of 'nested square roots' continues indefinitely: $$R=\sqrt{1 + \sqrt{1 + \sqrt {1 + \sqrt {1 + ...}}}}.$$
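As a numerical sanity check on part (2) (this is not part of the problem, and does not replace the required exact argument): iterating $x \mapsto \sqrt{1+x}$ converges to the positive root of $R^2 = 1 + R$, which is the golden ratio.

```python
# Iterate the nested-radical recurrence and compare with (1 + sqrt(5)) / 2
import math

x = 1.0
for _ in range(50):
    x = math.sqrt(1.0 + x)
print(x, (1 + math.sqrt(5)) / 2)  # both approximately 1.6180339887
```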
# A 3 cm long, 2 mm × 2 mm rectangular cross-section aluminium fin [k = 237 W/m°C] is attached to a surface. If the fin efficiency is 65%, the effectiveness of this single fin is:

This question was previously asked in TNTRB 2017 ME Official Question Paper. View all TN TRB ME Papers >

1. 30
2. 24
3. 8
4. 39

Option 4: 39

## Detailed Solution

Concept:

The relation between the efficiency of the fin and the effectiveness of the fin is given by

$$\frac{\eta}{\varepsilon} = \frac{\text{cross-section area of the fin } (A_c)}{\text{surface area of the fin } (A_s)}$$

Calculation:

Given: length of the fin, L = 3 cm = 30 mm; side of the square cross-section, a = b = 2 mm; efficiency of the fin, η = 65% = 0.65; thermal conductivity, $k = 237~\mathrm{W/(m\,^\circ C)}$ (not needed here, since the efficiency is given directly).

$A_s = p \times L = 2 \times (a + b) \times L = 2 \times (2 + 2) \times 30 = 240~\mathrm{mm^2}$

$A_c = a \times b = 2 \times 2 = 4~\mathrm{mm^2}$

Therefore,

$$\frac{0.65}{\varepsilon} = \frac{4}{240} \;\Rightarrow\; \varepsilon = 39$$

Important points:

- The Biot number of a fin with good effectiveness should be less than 1.
- The fin material should have a convection resistance higher than its conduction resistance.
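The same arithmetic as in the solution, as a tiny Python check (variable names are mine):

```python
# Effectiveness from efficiency: eta/epsilon = Ac/As  =>  epsilon = eta * As / Ac
L, a, b = 30.0, 2.0, 2.0      # fin length and cross-section sides (mm)
eta = 0.65                    # given fin efficiency
As = 2 * (a + b) * L          # lateral surface area = perimeter * length
Ac = a * b                    # cross-section area
print(eta * As / Ac)          # -> 39.0
```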
## Barry Halliwell and John M. C. Gutteridge

Print publication date: 2015
Print ISBN-13: 9780198717478
Published to Oxford Scholarship Online: October 2015
DOI: 10.1093/acprof:oso/9780198717478.001.0001

# Appendix: Some basic chemistry

Source: Free Radicals in Biology and Medicine
Publisher: Oxford University Press

# A1 Atomic structure

Atoms consist of a positively charged nucleus surrounded by one or more negatively charged electrons. The nucleus contains two types of particle of approximately equal mass, the positively charged proton and the uncharged neutron. By comparison with these particles, the mass of the electron is negligible, so that virtually all of the mass of the atom is contributed by its nucleus. The atomic number of an element is the number of protons in its nucleus; the mass number is the number of protons plus neutrons. In the neutral atom, the atomic number also equals the number of electrons. The simplest atom is that of hydrogen, containing one proton (atomic number one, mass number one) and one electron. All other elements contain neutrons in the nucleus. Some elements exist as isotopes, in which the atoms contain the same number of protons and electrons, but different numbers of neutrons. These isotopes can be stable (e.g. $^{15}$N) or unstable, the unstable ones undergoing radioactive decay at various rates (e.g. $^{14}$C). In this process, the nucleus of the radioactive isotope changes, and a new element forms. Carbon, hydrogen, nitrogen, and oxygen exist almost exclusively as one isotopic form in nature (Table A1), whereas chlorine is a mixture.

Table A1 Isotopes of some common elements.

| Element | Isotope | Number of protons in nucleus | Number of neutrons in nucleus | Notes |
|---|---|---|---|---|
| Chlorine | $^{35}_{17}$Cl | 17 | 18 | Both isotopes are stable and occur naturally, $^{35}$Cl being more abundant. |
| | $^{37}_{17}$Cl | 17 | 20 | |
| Carbon | $^{12}_{6}$C | 6 | 6 | Over 90% of naturally occurring carbon is $^{12}_{6}$C. Small amounts of the radioactive isotope $^{14}_{6}$C are formed by the bombardment of atmospheric CO$_2$ with cosmic rays (streams of neutrons arising from outer space). This isotope undergoes slow radioactive decay (50% decay after 5600 years). |
| | $^{13}_{6}$C | 6 | 7 | |
| | $^{14}_{6}$C | 6 | 8 | |
| Nitrogen | $^{14}_{7}$N | 7 | 7 | $^{15}$N is a stable isotope of nitrogen often used as a 'tracer', e.g. $^{15}$NO$_3^-$ can be fed to humans to study its metabolism. |
| | $^{15}_{7}$N | 7 | 8 | |
| Oxygen | $^{16}_{8}$O | 8 | 8 | Over 90% of naturally occurring oxygen is the isotope $^{16}_{8}$O. |
| | $^{17}_{8}$O | 8 | 9 | |
| | $^{18}_{8}$O | 8 | 10 | |
| Hydrogen | $^{1}_{1}$H | 1 | 0 | Over 99% of hydrogen is $^{1}_{1}$H. Deuterium ($^{2}_{1}$H) is a stable isotope, whereas tritium ($^{3}_{1}$H) is radioactive. Deuterium oxide is known as 'heavy water', and is used in detecting singlet O$_2$ (Section 6.8.3). |
| | $^{2}_{1}$H | 1 | 1 | |
| | $^{3}_{1}$H | 1 | 2 | |

The superscript number on the left of the symbol for the element is the mass number, and the subscript the atomic number.

Electrons are negatively charged. Since they do not spiral into the positively charged nucleus, they must possess energy to counteract its attractive force. Electrons exist in specific orbits, or 'electron shells', each associated with a particular energy level.
The 'K'-shell electrons, lying closest to the nucleus, have the lowest energy, and the energy successively increases as one proceeds outwards to the so-called L-, M- and N-shells. The K-shell can hold a maximum of two electrons, the L-shell 8, the M-shell 18, and the N-shell 32. Table A2 shows the location of electrons in each of these shells for the elements up to atomic number 36.

Table A2 Location of electrons in shells for the elements with atomic numbers 1 to 36.

| Atomic number | Element | Symbol | Shell K | Shell L | Shell M | Shell N |
|---|---|---|---|---|---|---|
| 1 | Hydrogen | H | 1 | | | |
| 2 | Helium | He | 2 | | | |
| 3 | Lithium | Li | 2 | 1 | | |
| 4 | Beryllium | Be | 2 | 2 | | |
| 5 | Boron | B | 2 | 3 | | |
| 6 | Carbon | C | 2 | 4 | | |
| 7 | Nitrogen | N | 2 | 5 | | |
| 8 | Oxygen | O | 2 | 6 | | |
| 9 | Fluorine | F | 2 | 7 | | |
| 10 | Neon | Ne | 2 | 8 | | |
| 11 | Sodium | Na | 2 | 8 | 1 | |
| 12 | Magnesium | Mg | 2 | 8 | 2 | |
| 13 | Aluminium | Al | 2 | 8 | 3 | |
| 14 | Silicon | Si | 2 | 8 | 4 | |
| 15 | Phosphorus | P | 2 | 8 | 5 | |
| 16 | Sulphur | S | 2 | 8 | 6 | |
| 17 | Chlorine | Cl | 2 | 8 | 7 | |
| 18 | Argon | Ar | 2 | 8 | 8 | |
| 19 | Potassium | K | 2 | 8 | 8 | 1 |
| 20 | Calcium | Ca | 2 | 8 | 8 | 2 |
| 21 | Scandium | Sc | 2 | 8 | 9 | 2 |
| 22 | Titanium | Ti | 2 | 8 | 10 | 2 |
| 23 | Vanadium | V | 2 | 8 | 11 | 2 |
| 24 | Chromium | Cr | 2 | 8 | 13 | 1 |
| 25 | Manganese | Mn | 2 | 8 | 13 | 2 |
| 26 | Iron | Fe | 2 | 8 | 14 | 2 |
| 27 | Cobalt | Co | 2 | 8 | 15 | 2 |
| 28 | Nickel | Ni | 2 | 8 | 16 | 2 |
| 29 | Copper | Cu | 2 | 8 | 18 | 1 |
| 30 | Zinc | Zn | 2 | 8 | 18 | 2 |
| 31 | Gallium | Ga | 2 | 8 | 18 | 3 |
| 32 | Germanium | Ge | 2 | 8 | 18 | 4 |
| 33 | Arsenic | As | 2 | 8 | 18 | 5 |
| 34 | Selenium | Se | 2 | 8 | 18 | 6 |
| 35 | Bromine | Br | 2 | 8 | 18 | 7 |
| 36 | Krypton | Kr | 2 | 8 | 18 | 8 |

Electrons have some of the properties of a particle, and some of the properties of a wave. The position of an electron at a given time cannot be specified precisely, but only the region of space where it is most likely to be. These regions are called orbitals. Each electron in an atom has its energy defined by four quantum numbers. The first, or principal quantum number ($n$), defines the main energy level that the electron occupies. For the K-shell, $n=1$; for L, $n=2$; for M, $n=3$; and for N, $n=4$. The second, or azimuthal quantum number ($l$), governs the shape of the orbital and has values from zero up to $n-1$. When $l=0$, the electrons are called 's' electrons; when $l=1$, they are 'p' electrons; $l=2$ gives 'd' electrons; and $l=3$ gives 'f' electrons. The third quantum number is the magnetic quantum number ($m$) and, for each value of $l$, $m$ has values of $l, l-1, \ldots, 0, \ldots, -l$. Finally, the fourth quantum number, or spin quantum number, can only have values of $+1/2$ or $-1/2$.

Table A3 shows how electrons with these different quantum numbers fill the electron shells. Pauli's principle states that 'no two electrons can have the same four quantum numbers'. Since the spin quantum number has only two possible values $\pm 1/2$, it follows that an orbital can hold a maximum of two electrons (Table A3).

Table A3 Orbitals available in the principal electron shells.

| Shell | $n$ | Orbitals available | Maximum number of electrons |
|---|---|---|---|
| K | 1 | one 1s | 2 |
| L | 2 | one 2s, three 2p | 8 |
| M | 3 | one 3s, three 3p, five 3d | 18 |
| N | 4 | one 4s, three 4p, five 4d, seven 4f | 32 |

In filling the available orbitals, electrons enter the orbitals with the lowest energy first (Aufbau principle). The order is $1s < 2s < 2p < 3s < 3p < 4s < 3d < 4p \ldots$ Table A4 gives the electronic energy configurations of the elements with atomic numbers 1 to 32. When the elements are arranged in the periodic table (Figure A1), elements with similar electronic arrangements fall into similar groups (vertical rows), e.g. the group II elements all have two electrons in their outermost electron shell, and the group IV elements have four. Since the 4s-orbital is of lower energy than the 3d-orbitals, these latter orbitals remain empty until the 4s-orbital is filled (e.g. see potassium and calcium in Table A4). In subsequent elements the five 3d-orbitals receive electrons, creating the first row of the d-block in the periodic table (Figure A1).
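The filling order quoted above follows the so-called Madelung $(n+l)$ rule. The short script below (my own illustration, not from the book) generates it; note that real atoms can deviate, as the chromium and copper rows of Table A4 below show.

```python
# Generate the Aufbau order by sorting orbitals on (n+l), then n (Madelung rule)
def aufbau_order(max_n=4):
    labels = "spdf"
    orbitals = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{labels[l]}" for n, l in orbitals]

print(aufbau_order())  # ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '4d', '4f']
```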
Some of these d-block elements are transition elements, meaning elements in which an inner shell of electrons is incomplete (in this case there are electrons in the fourth shell, but all the d-orbitals of the third shell are not yet full). The term transition element, as defined above, applies to scandium and subsequent elements as far as nickel, although it is often extended to include the whole of the first row of the d-block.

Figure A1 The periodic table.

Table A4 Electronic configuration of the elements.

| Element | Atomic number | Symbol | Configuration | Place in periodic table |
|---|---|---|---|---|
| Hydrogen | 1 | H | $1s^1$ | Uncertain |
| Helium | 2 | He | $1s^2$ | Group 0 (inert gases) |
| Lithium | 3 | Li | $1s^22s^1$ | Group I (alkali metals) |
| Beryllium | 4 | Be | $1s^22s^2$ | Group II (alkaline-earth metals) |
| Boron | 5 | B | $1s^22s^22p^1$ | Group III |
| Carbon | 6 | C | $1s^22s^22p^2$ | Group IV |
| Nitrogen | 7 | N | $1s^22s^22p^3$ | Group V |
| Oxygen | 8 | O | $1s^22s^22p^4$ | Group VI |
| Fluorine | 9 | F | $1s^22s^22p^5$ | Group VII (halogen elements) |
| Neon | 10 | Ne | $1s^22s^22p^6$ | Group 0 |
| Sodium | 11 | Na | $1s^22s^22p^63s^1$ | Group I |
| Magnesium | 12 | Mg | $1s^22s^22p^63s^2$ | Group II |
| Aluminium | 13 | Al | $1s^22s^22p^63s^23p^1$ | Group III |
| Silicon | 14 | Si | $1s^22s^22p^63s^23p^2$ | Group IV |
| Phosphorus | 15 | P | $1s^22s^22p^63s^23p^3$ | Group V |
| Sulphur | 16 | S | $1s^22s^22p^63s^23p^4$ | Group VI |
| Chlorine | 17 | Cl | $1s^22s^22p^63s^23p^5$ | Group VII |
| Argon | 18 | Ar | $1s^22s^22p^63s^23p^6$ | Group 0 |
| Potassium | 19 | K | $1s^22s^22p^63s^23p^64s^1$ | Group I |
| Calcium | 20 | Ca | $1s^22s^22p^63s^23p^64s^2$ | Group II |
| Scandium | 21 | Sc | $1s^22s^22p^63s^23p^64s^23d^1$ | d-block |
| Titanium | 22 | Ti | $1s^22s^22p^63s^23p^64s^23d^2$ | d-block |
| Vanadium | 23 | V | $1s^22s^22p^63s^23p^64s^23d^3$ | d-block |
| Chromium | 24 | Cr | $1s^22s^22p^63s^23p^64s^13d^5$ | d-block |
| Manganese | 25 | Mn | $1s^22s^22p^63s^23p^64s^23d^5$ | d-block |
| Iron | 26 | Fe | $1s^22s^22p^63s^23p^64s^23d^6$ | d-block |
| Cobalt | 27 | Co | $1s^22s^22p^63s^23p^64s^23d^7$ | d-block |
| Nickel | 28 | Ni | $1s^22s^22p^63s^23p^64s^23d^8$ | d-block |
| Copper | 29 | Cu | $1s^22s^22p^63s^23p^64s^13d^{10}$ | d-block |
| Zinc | 30 | Zn | $1s^22s^22p^63s^23p^64s^23d^{10}$ | d-block |
| Gallium | 31 | Ga | $1s^22s^22p^63s^23p^64s^23d^{10}4p^1$ | Group III |
| Germanium | 32 | Ge | $1s^22s^22p^63s^23p^64s^23d^{10}4p^2$ | Group IV |

If orbitals of equal energy are available, for example the three 2p-orbitals in the L-shell, or the five 3d-orbitals in the M-shell (Table A3), each is filled with one electron before any receives two (Hund's rule). Hence one can further analyse the electronic configurations in Table A4. For example, boron has two 1s, two 2s, and one 2p electrons. Three 2p-orbitals of equal energy are available (Table A3), often written as 2p$_x$, 2p$_y$, and 2p$_z$. If we represent each orbital as a box and an electron as an arrow, boron can be represented as

1s [↑↓]  2s [↑↓]  2p [↑][ ][ ]

For the next element, carbon, the extra electron enters another 2p-orbital in compliance with Hund's rule,

1s [↑↓]  2s [↑↓]  2p [↑][↑][ ]

And for nitrogen,

1s [↑↓]  2s [↑↓]  2p [↑][↑][↑]

Further electrons will now begin to 'pair up' to fill the 2p-orbitals, for example the oxygen atom,

1s [↑↓]  2s [↑↓]  2p [↑↓][↑][↑]

Table A5 uses the same 'electrons-in-boxes' notation for the elements in the first row of the d-block. Each of the five 3d-orbitals receives one electron before any receives two.

Table A5 Electronic configuration of the elements scandium to zinc in the first row of the d-block of the periodic table.

# A2 Bonding between atoms

The description of chemical bonding below is the simplest possible needed to understand this book.

## A2.1 Ionic bonding

Essentially two types of chemical bond exist. The first is ionic bonding, which occurs when electropositive elements combine with electronegative ones. Electropositive elements, such as those in groups I and II of the periodic table (Figure A1), tend to lose their outermost electrons easily, whereas electronegative elements (group VII, and oxygen and sulphur in group VI) tend to accept extra electrons.
By doing so, both gain the electronic configuration of the inert gases, which seems to be a particularly stable configuration in view of the lack of reactivity of these elements. Consider, for example, the combination of an atom of sodium with one of chlorine. Sodium, an electropositive group I element, has the electronic configuration $1s^22s^22p^63s^1$. If a sodium atom loses one electron, it now has the configuration $1s^22s^22p^6$, that of the inert gas neon. It is still sodium because its nucleus is unchanged, but the loss of an electron leaves the atom with a positive charge, forming an ion or, more specifically, a cation (positively charged ion). For chlorine, configuration $1s^22s^22p^63s^23p^5$, acceptance of one electron gives the argon electron configuration $1s^22s^22p^63s^23p^6$, and produces a negatively charged ion (anion) Cl$^-$. In the case of a group II element such as magnesium, it must lose two electrons to gain an inert gas electron configuration. Thus one atom of magnesium can provide electrons for acceptance by two chlorine atoms, giving magnesium chloride the formula MgCl$_2$,

$$\mathrm{Mg \to Mg^{2+} + 2e^-}; \qquad \mathrm{2Cl + 2e^- \to 2Cl^-}$$

An atom of oxygen, however, can accept two electrons and combine with magnesium to form an oxide MgO,

$$\mathrm{Mg \to Mg^{2+} + 2e^-}; \qquad \mathrm{O + 2e^- \to O^{2-}}$$

Once formed, anions and cations are held together by the attraction of their opposite charges. Each ion will exert an effect on each other ion in its vicinity, and these forces cause the ions to pack together into an ionic crystal lattice. As an example, in crystals of NaCl, each Na$^+$ ion is surrounded by six Cl$^-$ ions, and vice versa. Once the lattice has formed, it cannot be said that any one Na$^+$ ion 'belongs' to any one Cl$^-$ ion, nor can 'molecules' of sodium chloride be said to exist. The formula of an ionic compound merely indicates the ratio of the ions present. A lot of energy is needed to disrupt all the electrostatic forces between the many millions of ions in a crystal of an ionic compound, so such compounds are usually solids with high melting-points. Ionic compounds are mostly soluble in water, and the solutions conduct electricity because of the presence of ions to carry the current. The properties of an ionic compound in solution are those of its constituent ions.

## A2.2 Covalent bonding

This involves sharing a pair of electrons between the two bonded atoms. Usually each atom contributes one electron to the shared pair, but in dative covalent bonding, one atom contributes both. For example, hydrogen usually occurs as covalently bonded diatomic molecules, H$_2$. If we represent the electron of each hydrogen atom by a cross (×), we can write

H× + ×H → H××H

where ×× is the shared pair of electrons. Many other gaseous elements, including oxygen and chlorine, exist as covalently bonded diatomic molecules.

For the covalent compound ammonia, NH$_3$, let us represent the outermost electrons of the N as circles and those of hydrogen as crosses: in the resulting structure each of the three N–H bonds holds one circle and one cross, with a lone pair of circles remaining on the nitrogen. Each atom contributes one electron to the bond. Ammonia also undergoes dative covalent bonding, using the spare pair (lone pair) of electrons on the nitrogen. For example, it forms a covalent bond with a proton (H$^+$). H$^+$ is formed by loss of one electron from a hydrogen atom, and so has no electrons; the nitrogen lone pair supplies both electrons of the new bond. Once formed, each of the four covalent bonds in NH$_4^+$ is indistinguishable from the others.

Covalent compounds are usually gases, liquids, or low-melting-point solids at room temperature, because the forces of interaction between the molecules are weak. By contrast, covalent bonds themselves are usually strong and hard to break.
Covalent bonds, unlike ionic bonds, have definite directions in space, and so their length, and the angles between them, can be measured. Orbital theory (Section A1) also applies to covalent compounds, the bonding electrons occupying molecular orbitals formed by interaction of the atomic orbitals in which they were originally located. Various possible interactions produce molecular orbitals of different energy levels, each of which can hold a maximum of two electrons with opposite values of the spin quantum number (Pauli’s principle). In the simplest case, H₂, two possible molecular orbitals can form by interaction of the 1s atomic orbitals of each H atom. The lowest energy orbital is the bonding molecular orbital (often written as σ1s), in which the electron is most likely to be found between the two nuclei. There is also an antibonding molecular orbital (written as σ*1s) of higher energy, in which there is little chance of finding an electron between the two nuclei. A bonding molecular orbital is more stable than the atomic orbitals, whereas an antibonding molecular orbital is less stable. The two electrons in H₂ have opposite spin, and both occupy the bonding molecular orbital. Hence H₂ is much more stable than two H atoms. p-type atomic orbitals can produce two types of molecular orbital (σ and π) by overlapping in different ways. Hence, for a 2p-orbital (say 2px) combining with another one, there will be two bonding molecular orbitals, σ2px and π2px, and two antibonding molecular orbitals, σ*2px and π*2px. Energy increases in the order

σ2p < π2p < π*2p < σ*2p

With this in mind, we can consider bonding in two more complex cases: the gases nitrogen and oxygen. The nitrogen atom has the configuration 1s²2s²2p³. If two atoms join to form N₂, the four 1s-electrons (two from each atom) fully occupy a σ1s bonding and a σ*1s antibonding orbital, and so there is no net bonding. The four 2s-electrons similarly occupy σ2s and σ*2s molecular orbitals, again giving no net bonding. Six electrons are left, located in two 2px, two 2py, and two 2pz atomic orbitals. If the axis of the bond between the atoms is taken to be that of the 2px orbitals, they can overlap along this axis to produce a bonding σ2px molecular orbital that can hold both electrons. The 2py and 2pz atomic orbitals cannot overlap along their axes, but they can overlap laterally to give bonding π2py and π2pz molecular orbitals, each of which holds two electrons. The 2p antibonding orbitals are not occupied; the net result is a triple covalent bond: one σ covalent bond and two π covalent bonds. The N₂ molecule is thus far more stable than individual N atoms. The oxygen atom (configuration 1s²2s²2p⁴) has one more electron, and so when O₂ forms there are two more electrons to consider. These must occupy the next highest molecular orbital in terms of energy. In fact, there are two such orbitals of equal energy, π*2py and π*2pz. By Hund’s rule, each receives one electron. Since the presence of two electrons in antibonding orbitals energetically cancels out one of the π2p bonding orbitals, the two oxygen atoms are effectively joined by a double bond, that is O=O (also see Fig. 1.14). The fluorine molecule contains two more electrons than does O₂, and so the π*2py and π*2pz orbitals are both full. Since three bonding and two antibonding molecular orbitals are occupied, the F₂ molecule effectively contains a single bond, F–F.
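This electron bookkeeping can be condensed into the conventional bond-order formula (a standard textbook result, stated here for convenience rather than taken from the text above):

$$\text{bond order} = \frac{1}{2}\left(n_{\text{bonding}} - n_{\text{antibonding}}\right)$$

Counting valence electrons: N₂ has 8 bonding and 2 antibonding, giving (8 − 2)/2 = 3, the triple bond; O₂ has 8 bonding and 4 antibonding, giving 2, the double bond; and F₂ has 8 bonding and 6 antibonding, giving 1, the single bond — in agreement with the discussion above.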
## A2.3 Non-ideal character of bonds

The discussion so far has implied an equal sharing of the bonding electrons between two atoms joined by a covalent bond. However, this only occurs when both atoms have a similar attraction for the electrons, i.e. are equally electronegative. This is often not the case. Consider, for example, the water molecule, which contains two oxygen–hydrogen covalent bonds. Oxygen is more electronegative than hydrogen, and so takes a slightly greater ‘share’ of the bonding electrons, giving it a slight negative charge (written as δ−). The hydrogen thus has a slight positive charge, δ+. These charges give water many of its properties. They attract water molecules to each other, making it harder to separate them and so raising the boiling point of water to 100 °C at normal atmospheric pressure. These weak electrostatic attractions are called hydrogen bonds. The δ+ and δ− charges also allow water to hydrate ions; water molecules cluster around ions and help to stabilize them. The energy released when ions become hydrated helps to provide the energy needed to disrupt the crystal lattice when ionic compounds dissolve in water. In some cases the energy of hydration is too small to disrupt the lattice, resulting in an ionic compound insoluble in water.

## A2.4 Hydrocarbons and electron delocalization

Carbon has four electrons in its outermost shell (Table A4), and normally forms four covalent bonds. Carbon atoms can covalently bond to each other to form long chains. For example, butane has the structure CH₃–CH₂–CH₂–CH₃. Butane is a hydrocarbon, that is, it contains only carbon and hydrogen. Two other hydrocarbons, ethane and pentane, are released as gases during lipid peroxidation (Section 5.12.5.1). They have the structures CH₃–CH₃ and CH₃–CH₂–CH₂–CH₂–CH₃. Carbon atoms can also form double covalent bonds (written as C=C) and triple covalent bonds with each other. A double bond consists of four shared electrons (two pairs), and a triple bond has six shared electrons (three pairs). The simplest hydrocarbon containing a double bond is the gas ethene, sometimes called ethylene, CH₂=CH₂. Ethene is produced in several assays for the detection of hydroxyl radicals (Table 6.14). Ethyne, sometimes called acetylene, contains a triple bond and has the structure H–C≡C–H. Organic compounds containing carbon–carbon double or triple bonds are said to be unsaturated, for example PUFAs (Section 5.11.2). The organic liquid benzene has the formula C₆H₆. Given that carbon forms four covalent bonds, the structure of benzene might be drawn as a six-membered ring containing three carbon–carbon single bonds and three alternating double bonds. This structure cannot be correct, however, since benzene does not show the characteristic chemical reactions of compounds containing double bonds. A carbon–carbon single bond is normally 0.154 nm long (one nanometre, nm, is 10⁻⁹ metre), and a carbon–carbon double bond 0.134 nm; yet all the bond lengths between the carbon atoms in benzene are equal at 0.139 nm, that is, intermediate between the double and single bond lengths. The six electrons, which should have formed three double bonds, appear to be ‘spread around’ all six bonds. This is often drawn as a hexagon with a circle inside it to represent the delocalized electrons. Compounds containing the benzene ring or similar ring structures are called aromatic compounds. Delocalization of electrons over several bonds greatly increases the stability of a molecule. Other examples can be seen in haem rings (Section 1.10.3), which show extensive delocalization of electrons, and in several ions such as nitrate (NO₃⁻) and carbonate (CO₃²⁻).
In each case the negative charge is spread over all of the bonds: in nitrate each O has, on average, one-third of the negative charge, and in carbonate each O has, on average, two-thirds of a negative charge.

# A3 Moles and molarity

One mole of a substance is its relative molecular mass (‘molecular weight’) expressed in grams. Thus one mole of hydrogen (H₂) is 2 g, one mole of water 18 g, and one mole of sodium hydroxide (NaOH) 40 g. One mole of any covalently bonded substance contains the same number of molecules, 6.023 × 10²³ to be precise (Avogadro’s number). Thus 6.023 × 10²³ molecules are found in 2 g of hydrogen, and 6.023 × 10²³ water molecules in 18 g of water. One mole of the ionic solid NaOH will contain 6.023 × 10²³ Na⁺ ions and the same number of OH⁻ ions. Whereas moles are amounts, molarity is a concentration. Solution concentrations are usually expressed in molar terms because this relates to the actual number of ions or molecules present in the solution. A molar solution has one mole of solute (the substance dissolved) present in 1 dm³ (or 1 litre) of solution. One millimole (1 mmol) is 10⁻³ moles. Thus a millimolar (1 mM) solution has 1 mmol of solute per dm³. One micromole (1 μmol) is 10⁻⁶ moles. Thus a micromolar (1 μM) solution has 1 μmol of solute per dm³. A 1 mM solution has 1 μmol of solute per cm³ (ml). One nanomole (1 nmol) is 10⁻⁹ moles. Thus a nanomolar (1 nM) solution has 1 nmol of solute per dm³. A 1 μM solution has 1 nmol of solute per cm³ (ml).

# A4 pH and pKa

The pH of a solution is a measure of its acidity; pH 7.0 is neutral, pH <7 acidic, and pH >7 alkaline. Most cells operate at pH values at or close to 7.4, but ‘physiological pH’ ranges from <2 in the stomach to >8 in the stroma of illuminated chloroplasts. The term pH is defined as

pH = −log₁₀[H⁺]

where the square brackets denote concentration. Thus pure water at 25 °C contains 10⁻⁷ moles/dm³ of H⁺ ions and its pH is 7. As temperature rises, heterolytic fission of water (Fig. 1.13) is favoured, [H⁺] rises and pH falls, so pure water at 37 °C is not neutral but slightly acidic. An acid may be (somewhat simplistically) defined as a donor of hydrogen ions. Strong acids (HCl, HNO₃, H₂SO₄) are completely ionized when mixed with water to give dilute aqueous solutions (but not as the pure acids, which are covalently bonded). However, most acids in living systems (e.g. HNO₂, HOCl, HO₂•) are only partially ionized (so-called weak acids) and exist in an equilibrium:

HA ⇌ H⁺ + A⁻

A⁻ is the conjugate base of the weak acid HA; a base is a hydrogen ion acceptor. The acid dissociation constant, Ka, is the ratio of the concentrations,

Ka = [H⁺][A⁻] / [HA]

at equilibrium. The bigger the value of Ka, the stronger the acid. Values of Ka are affected by temperature. Another term often used is pKa,

pKa = −log₁₀ Ka

Thus, the higher Ka, the smaller is pKa. Mixtures of weak acids and their conjugate bases form buffer solutions; their pH changes only slightly when acid or alkali (in moderate amounts) are added. The equation governing the behaviour of buffers is the Henderson–Hasselbalch equation:

pH = pKa + log₁₀([A⁻]/[HA])

Thus if equal amounts of a weak acid and its conjugate base are mixed, the pH of the resulting solution equals the pKa of the acid. If extra H⁺ is added, it is buffered by movement of the equilibrium

HA ⇌ H⁺ + A⁻

towards the left; if alkali is added, [H⁺] falls and it is replaced by movement of the reaction towards the right. This is the essence of buffer action.
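As a quick worked illustration of the Henderson–Hasselbalch equation (the numbers are invented for this example, not taken from the text): for a weak acid with pKa = 4.8, a buffer containing ten times as much conjugate base as undissociated acid has

$$\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{A^-}]}{[\mathrm{HA}]} = 4.8 + \log_{10}10 = 5.8$$

while equal amounts of acid and conjugate base give pH = pKa = 4.8, as stated above.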
## Final Review

The final is in the testing center and is closed book and closed notes. It consists of one multiple-guess problem and 5 essay problems. Here is a comprehensive list of topics:

• Given a temporal logic formula, identify if it is LTL, CTL, or CTL* only.
• Create Kripke structures that satisfy temporal logic formulas (Homework 7)
• Write temporal logic formulas for specifications expressed in English — be sure you know both CTL and LTL sub-logics (Homework 7)
• Prove (or disprove) if two temporal logic formulas are equivalent (Homework 8).
• Write LTL properties given a Promela model and specification using remoterefs (Homework 9).
• Convert arbitrary CTL formulas to formulas that only have EG, EU, and EX for operators other than normal Boolean operators (Homework 10) — see the equivalences listed at the end of this page.
• Given a Kripke structure and a set of CTL formulas, determine which states are labeled with which formulas (Homework 10).
• Play computer and show how BDDs are created and managed with the ITE given a program using the Cudd interface. Be sure to show the unique table and the recursive trees tracking the ITE calls (Homework 11 and Homework 12).
• Perform SwapVariables on a given BDD (unique table included) to replace certain variables (Homework 11).
• Create a Boolean expression for a transition relation from a simple program (Homework 13)
• Write a Boolean function describing the initial state of a system and perform forward reachability analysis using that function and a given transition relation (Homework 13, but doing it abstractly as in class rather than with BDDs).
• Perform CTL model checking using Boolean functions and fix-point computations (abstractly, as in class, rather than with BDDs).

I expect the test to take 2 hours of student time.

## Midterm Review

One page of notes is allowed for the exam. You are responsible for knowing the testing center hours: double check the schedule for Saturday! Below is a comprehensive list of topics on the exam. Please note that some of the topics were not covered directly by the homework, so you will want to perhaps work a few problems on your own to prepare.

• Translate if-statements and while-statements into PROMELA
• Create a PROMELA verification model to solve a problem that uses shared memory for coordinating processes
• Create a PROMELA verification model to solve a problem that uses message passing for coordinating processes. Be familiar with all the different forms of interacting with a channel, including the ability to poll, insert sorted, pattern match (including the eval() function), and copy values from the channel.
• Write safety properties and create traces that violate the property.
• Write liveness properties and create traces that violate the property.
• Convert a state transition system into a Büchi automaton
• Given a Büchi automaton, write a regular expression that includes the $\omega$-operator describing the language accepted by the automaton.
• Compute the intersection of two Büchi automata.
• Perform double-depth-first search to detect cycles in a given Büchi automaton. Indicate pre-order traversal numbers on both searches and show the evolution of the runtime stack.
• Given a correctness property, write a never claim to detect when the property is violated.

I expect the test to take at least 1.5 hours of student time. The total time limit on the test is 2 hours.
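For the CTL-conversion topic, these are the standard textbook equivalences for rewriting the universal operators in terms of EX, EG, and EU (listed here as a study reminder; they are standard results, not taken from the course page):

$$\begin{aligned} \mathrm{EF}\,f &\equiv \mathrm{E}[\mathit{true}\ \mathrm{U}\ f]\\ \mathrm{AX}\,f &\equiv \lnot \mathrm{EX}\,\lnot f\\ \mathrm{AG}\,f &\equiv \lnot \mathrm{E}[\mathit{true}\ \mathrm{U}\ \lnot f]\\ \mathrm{AF}\,f &\equiv \lnot \mathrm{EG}\,\lnot f\\ \mathrm{A}[f\ \mathrm{U}\ g] &\equiv \lnot \mathrm{E}[\lnot g\ \mathrm{U}\ (\lnot f \land \lnot g)] \;\land\; \lnot \mathrm{EG}\,\lnot g \end{aligned}$$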
# Noncollinear calculations for metallic nanowires

Version: 2016.3

In the tutorial Introduction to noncollinear spin you learned how to perform a simple noncollinear calculation. In this tutorial you will apply the same procedure to more advanced systems. In the first example, you will consider a metal nanowire connecting two electrodes of the same material. In the second example, you will consider an infinite wire and study the effect of spin-orbit coupling and spin orientation on the electronic structure. The calculations will be very similar to the two works published by Czerner et al.: [CYM08][CYM10].

## Building the device

### Set up the Ni(111) surface

Open the Builder, click on Add ‣ From Database, search for the Nickel fcc bulk structure and add it to the Stash. Open Builders ‣ Surface (Cleave), enter Miller indices (111) as shown in the figure below, and click Next. In the next window, define a $$2 \times 2$$ surface supercell as shown below, then click Next. Set the out-of-plane direction to Periodic and normal (electrode), set the thickness to 6 layers (see image below) and click Finish. Finally, add some vacuum above the (111) surface. Go to Bulk Tools ‣ Lattice Parameters and set the C vector length to 40 Å. Be careful to select “Keep Cartesian coordinates constant when changing the lattice”.

Note In principle, you can also create the surface by adding the vacuum in the Surface (Cleave) tool by choosing a slab configuration. The procedure followed here will make the visualization of the next steps easier.

### Set up the Ni wire between two Ni(111) surfaces

Select three Ni atoms at the surface and click the centroid plugin on the toolbar at the top of the Builder to add a Ni atom in the geometric center of the selected atoms. Then go to Coordinate Tools ‣ Translate to move this atom by 2.035 Å above the surface. This distance corresponds approximately to the Ni(111) interlayer distance. To create the wire, use Coordinate Tools ‣ Translate again, but this time select the Copy option and translate by 2.49248 Å, which corresponds to the Ni–Ni distance. Do it twice and you are halfway to creating a 5-atom wire embedded between two Ni electrodes. Take note of the Z coordinate of the last Ni atom added to the chain (19.2306 Å) and select all the Ni atoms in your system except this last Ni atom. Then go to Coordinate Tools ‣ Mirror, select the predefined xy mirror axes and enter 19.2306 Å as the Z coordinate of the mirror point P. Remember to check the Copy box to make a copy of the mirrored object, press Apply, and you will obtain the structure shown below.

### Create the device

Click on Device Tools ‣ Device From Bulk and keep the predefined electrode lengths corresponding to three Ni layers.

## Setting up the collinear calculation and analyzing the results

In this step you will run a collinear calculation and save the corresponding state in an HDF5 file. In the next step, you will run a noncollinear calculation by using the collinear state as an initial guess. In this way, you will save a lot of computational time, since the convergence of a noncollinear calculation can be hard to achieve otherwise.
Send the device structure you have previously created to the Script Generator and add the following blocks:

• Add a New Calculator with the following parameters:
  • LDA exchange-correlation functional
  • Spin-polarized calculation
  • $$4 \times 4 \times 100$$ grid for k-points sampling
  • SingleZetaPolarized basis set
  • Electron temperature set to 2400 K
• Add an Initial State with the following options:
  • Select User spin and check that Spin for Nickel is set to 1.
• Add an Analysis ‣ MullikenPopulation
• Change the output file name to ‘ni5_collinear.hdf5‘.

Note The high electron temperature considerably improves convergence.

Send the script to the Job Manager and run the calculation. Once the job is done (in serial it can take up to three hours), the LabFloor will be populated by the DeviceConfiguration and MullikenPopulation objects. Select the MullikenPopulation and drag and drop it on the Viewer to visualize the converged collinear spin state. The direction of the arrows is not so interesting or surprising — this is a collinear calculation with the electrodes polarized parallel to each other. We do, however, see a strong net spin polarization in the atomic wire compared to the surface layers.

## Setting up the noncollinear calculation

In the tutorial Introduction to noncollinear spin you have learned how to set up a noncollinear calculation. Here you will learn how to set up the initial spins of a noncollinear calculation based on the collinear converged result. In order to set the initial spin state you need to specify the spin direction in physical spherical coordinates (r, θ, φ), with the following important definitions (see also the figure below):

• θ is the angle with the z axis
• φ is the polar angle in the xy plane relative to the x-axis

Warning If you start your job by reading a previously converged spin state, as described below in this tutorial, the reference axis (Z in the picture above) for each spin of each atom is the axis of the corresponding converged spin state.

Drag and drop the DeviceConfiguration from the LabFloor to the Script Generator and modify the following parameters:

• In the existing New Calculator, set the spin option to ‘Noncollinear‘
• Add an Initial State with the following options:
  • Select User spin and check that Spin for Nickel is set to 1
  • Check the option Use old calculation and use as filename ‘ni5_collinear.hdf5‘

Send the script to the Editor and modify the Initial State block as follows:

# -------------------------------------------------------------
# Initial State
# -------------------------------------------------------------
# Define the spin rotation
theta = 180*Degrees
# Left electrode + extension (atoms 0-23): spins at theta=90, phi=0
left_spins = [(i, 1, 90*Degrees, 0*Degrees) for i in range(24)]
# Central 5-atom wire (atoms 24-28): theta rotated stepwise out of the xy plane
center_spins = [(i+24, 1, 90*Degrees-theta*(i+1)/6, 0*Degrees) for i in range(5)]
# Right electrode + extension (atoms 29-52): rotated by -theta relative to the left
right_spins = [(i+29, 1, 90*Degrees-theta, 0*Degrees) for i in range(24)]
spin_list = left_spins+center_spins+right_spins
initial_spin = InitialSpin(scaled_spins=spin_list)
device_configuration.setCalculator(
    calculator,
    initial_spin=initial_spin,
    initial_state=old_calculation,
)
device_configuration.update()
nlsave('ni5_noncollinear_out-plane.hdf5', device_configuration)
nlprint(device_configuration)

The setup above corresponds to an out-of-plane spin rotation between two anti-ferromagnetic electrodes, as reported in Refs. [CYM08] and [CYM10] (cf. Figure 1 in Ref. [CYM08]). In a later section you will also consider an in-plane rotation.
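For reference, a spin specified with angles (θ, φ) points along the usual physical spherical-coordinate direction (a standard convention, matching the θ/φ definitions above and ignoring the relative-axis caveat in the warning):

$$\hat{n} = (\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta)$$

so θ = 0 lies along z, and θ = 90°, φ = 0 lies along x. In the script above, the wire spins therefore sweep from +x through +z to −x, which is exactly the out-of-plane rotation just described.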
Note The scaled_spin argument follows the format (atom index, initial scaled spin, θ, φ) as documented in the InitialSpin entry in the Reference Manual. Remember that the spin orientation you enter here is relative to the collinear spin state read before. In this case, the collinear state is ferromagnetic, which corresponds to θ=0 and φ=0.

Save the script in your project directory, check that the ‘ni5_collinear.hdf5‘ file is present in the same directory, and run the script.

## Analyzing the results

### Antiparallel configuration, out-of-plane rotation

From the LabFloor, drag and drop the MullikenPopulation object contained in the ‘ni5_noncollinear_out-plane.hdf5‘ file into the Viewer to visualize the converged noncollinear spin state. For a more detailed analysis, open the text representation of the MullikenPopulation. Here you can read the spin-up, spin-down, θ and φ components for each atom. The figure below shows the direction and size of the spin state of the atoms in the wire (red arrows) and also of two atoms in the electrode extension regions (black arrows) for comparison. You can compare this plot to the results reported in Refs. [CYM08] and [CYM10].

### Antiparallel configuration, in-plane rotation

Another possibility is to rotate the spins in the xy plane by changing the φ angle. To achieve this, modify the script above as indicated below:

# Define the spin rotation
phi = 180*Degrees
# Left electrode + extension (atoms 0-23): spins in the xy plane along +x
left_spins = [(i, 1, 90*Degrees, 0*Degrees) for i in range(24)]
# Central 5-atom wire (atoms 24-28): phi rotated stepwise within the xy plane
center_spins = [(i+24, 1, 90*Degrees, 0*Degrees+phi*(i+1)/6) for i in range(5)]
# Right electrode + extension (atoms 29-52): rotated by -phi relative to the left
right_spins = [(i+29, 1, 90*Degrees, 0*Degrees-phi) for i in range(24)]
spin_list = left_spins+center_spins+right_spins
initial_spin = InitialSpin(scaled_spins=spin_list)

Run the calculation for this initial spin configuration and display the corresponding MullikenPopulation in the Viewer:

## Including spin-orbit coupling in noncollinear calculations

ATK also allows you to perform noncollinear calculations including spin-orbit coupling (SOC). In this example, you will investigate the effects of noncollinear spin and spin-orbit coupling on the simple case of an infinite Ni chain [CYM10]. In order to improve the convergence of the calculation, you will first run a spin-polarized (collinear) calculation and only afterwards perform a noncollinear calculation and, finally, one including SOC, starting from the previously converged state.

### Building the Ni wire

Open the Builder and create a bulk configuration corresponding to an infinite chain of Ni atoms with an interatomic distance of 2.49248 Å (the Ni–Ni distance).

### Collinear calculation

Send the structure to the Script Generator, and set up the calculation as follows:

• Add a New Calculator with the following parameters:
  • PBE exchange-correlation functional
  • Spin-polarized calculation
  • $$1 \times 1 \times 13$$ k-points sampling
  • SG15-Medium basis set
• Add an Initial State: select User spin and check that Spin for Nickel is set to 1.
• Change the output file name to ‘ni_collinear.hdf5‘.

Note In order to run a calculation including spin-orbit later on, you need fully relativistic pseudopotentials. In this case you will use the SG15 pseudopotentials and basis sets.

Once you are ready, run the calculation.
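Before turning the coupling on in the next step, it may help to recall the generic form of the spin–orbit term (a textbook expression, not specific to ATK):

$$\hat{H}_{\mathrm{SO}} = \xi(r)\,\hat{L}\cdot\hat{S}$$

Because the L·S term ties the spin direction to the orbital motion, the band energies are no longer independent of the spin orientation — which is why the parallel and perpendicular spin configurations computed next give different band structures.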
### Noncollinear calculation with spin-orbit coupling

Drag and drop the BulkConfiguration contained in the ‘ni_collinear.hdf5‘ file from the LabFloor to the Script Generator and modify the following parameters:

• In the existing New Calculator, set the spin option to ‘Noncollinear Spin-Orbit‘
• In the Initial State block, select User spin and check that Spin for Nickel is set to 1; also check the option Use old calculation and use as filename ‘ni_collinear.hdf5‘
• Add an Analysis ‣ Bandstructure with 200 points along the G,Z path

Send the script to the Editor and modify the Initial State block as follows:

# -------------------------------------------------------------
# Initial State
# -------------------------------------------------------------
# Define the spin rotation
theta = 0*Degrees
# Single Ni atom in the unit cell; spin along the wire (C) axis
left_spins = [(0, 1, theta, 0*Degrees)]
initial_spin = InitialSpin(scaled_spins=left_spins)
bulk_configuration.setCalculator(
    calculator,
    initial_spin=initial_spin,
    initial_state=old_calculation,
)
bulk_configuration.update()
nlsave('Ni_noncollinear_phi0.hdf5', bulk_configuration)
nlprint(bulk_configuration)

Then run the calculation. Notice that in this calculation the direction of the spin is parallel to the C axis (theta = 0*Degrees). Once the calculation is done, repeat it with the spin direction perpendicular to C (theta = 90*Degrees), and compare the two resulting band structures (see figure below). The results are in good agreement with the observations reported in [CYM10]. In particular, the effect of the SOC is clearly visible from the band splitting when the spin is oriented parallel to the wire axis. For both spin orientations you can also observe anticrossing of bands.

## References

[CYM08] Michael Czerner, Bogdan Yu. Yavorsky, and Ingrid Mertig. Fully relaxed magnetic structure of transition metal nanowires: first-principles calculations. Phys. Rev. B, 77:104411, 2008. doi:10.1103/PhysRevB.77.104411.

[CYM10] Michael Czerner, Bogdan Yu. Yavorsky, and Ingrid Mertig. The role of noncollinear magnetic order and magnetic anisotropy for the transport properties through nanowires. Phys. Stat. Sol. B, 247:2594, 2010. doi:10.1002/pssb.201046190.
• Computing Depreciation under Alternative Methods - Sterling Steel Inc. purchased a new stamping... (Solved) March 29, 2015

of the machine was 260,000 units. Actual annual production was as follows:

| Year | Units |
|---|---|
| 1 | 73,000 |
| 2 | 62,000 |
| 3 | 30,000 |
| 4 | 53,000 |
| 5 | 42,000 |

Required: 1. Complete a separate depreciation schedule for each

Answer preview: Depreciation = (Cost − Salvage Value) / Life = (580,000 − 60,000) / 5 = $104,000. Straight Line Method: Year | Beginning Value | Depreciation Expense | Accumulated Depreciation...

• Financial Accounting (Solved) June 11, 2014

The company’s fiscal year ends on December 31. Using the following information, compute depreciation for this machine for each of the 4 years using each of the following methods: straight-line method, sum-of-years'-digits method, double-declining method, units-of-production. Year | Machine Hours | 20X1

Answer Preview: The expenses are generally classified under two basic heads, namely capital expenses and revenue expenses. Any enterprise should follow the basic principles governing the type of...

• unit 4 4-3 & 4-4 (Solved) August 08, 2014

| | Machine 1 | Machine 2 | Machine 3 |
|---|---|---|---|
| Amount paid for asset | $21,000 | $30,750 | $8,000 |
| Installation cost | $500 | $1,000 | $200 |
| Renovation costs prior to use | $2,000 | $1,000 | $1,500 |

By the end of the first year, each machine had been operating 4,800 hours. Depreciation estimates are shown in Table 2 (Problem 4-3). The depreciation recorded for the first year is based on the different methods of depreciation shown above. The straight-line method of depreciation uses the useful life of the asset, the...
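Spelled out, the straight-line computation quoted in the first answer preview above is:

$$\text{annual depreciation} = \frac{\text{cost} - \text{salvage value}}{\text{useful life}} = \frac{580{,}000 - 60{,}000}{5} = \$104{,}000 \text{ per year}$$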
# OPERATION RAGDOLL // Swat Soldiers

New toy! Hey gang! I have a couple of ideas to play around with these guys…! I am planning to implement some MOCAP + FIELDS to create explosions and smash reactions… we will see how it goes!

1 Like

Operation ragdoll commence!

1 Like

First Round: “Operation RAGDOLL”
• Set up the RAGDOLL
• Play with values

MOCAP vs Ragdoll secondary details. (Animation has NOT been baked into the RIG yet.) At first I could not see the difference.

It’s subtle, but that last fall onto the ground got a really nice upgrade compared to the motion capture, which is somewhat exaggerated. Especially the legs bouncing up towards the end, and the immediate lack of motion at the end of its animation clip. These deserve some close-ups, with shadows to highlight the contacts.

1 Like

Hey Marcus! Agreed…! Some upgrades from the RG simulation. My idea is to create a more dynamic scene; I want to see if, now that I have set up the RG, I can reference this file a couple of times and create a more interesting shot where the soldiers interact and react to fields, columns and stuff like that! Will keep the WIP posted!

1 Like

Testing and learning some Fields…! It is tricky! … the visual representation of the fields and the amount of magnitude that needs to be entered takes time… But it is really fun to play with!

Very nice! What was the most tricky aspect you found? Cranking up Magnitude? It’s been difficult finding a good default, since every character has a different size and weight. For very heavy things, you need more magnitude. I’ve considered making it such that the size and weight of objects are ignored, such that the defaults make sense for everything. It would be less realistic, and it would not look natural when there are multiple different sizes of objects within the same field; but perhaps it would be easier to work with?

For feedback, there’s a part where the character lands at the end that looks like a wooden puppet landing on a marble floor, because the contacts are so hard. I would make both of those environment boxes Dynamic and Pin them in worldspace, such that they are somewhat soft. I’d probably split up the long box into a few smaller boxes, so that the whole thing isn’t moving as one. That would make the landing a bit more soft and possibly more realistic.

Hey Marcus! Here’s my 10 cents of “feedback”:

1 - “Magnitude”: Yes… hard to crank it… As an idea, it would be nice to have a Scale attr, as we have on the solver, where we can ×10 or ×100 the value… so the values are not that crazy in the Graph Editor.

2 - Direction vector and visualization vector: with the “Gizmo” (Sphere, Box, None, etc.) it is really hard to guess where the direction is going, especially with “newton”; Radial was more friendly, but still… An icon like, for example, the one Maya’s Directional Lights have would help. The size of the icon could also be connected to the attr that scales the magnitude ×10 – ×100.

3 - I did try to use two (2) Radial forces, but even when one of them was set to ZERO, it seemed it was still adding force to the vector… So I decided to use just one and animate the position of the Gizmo.

Thanks for your feedback… it will help! I will keep playing around with this… My idea is to use more than three Soldiers doing different actions from the MOCAP lib (idle – walking – running) and give them some big burst to make them blow away!

1 Like

… continuing with “Operation RagDoll”… I started designing the shot; this is the first rough staging/layout that I got to do.
These guys are about to “BOOOMMM…” I still have plenty of cleaning, blending and transitioning of all the clips to do, but for now I have the idea planned. Lots of rigid body details, weapon and ragdoll work on this shot! Let’s see where I can get this shot in my spare time… Cheers!

1 Like

BOOOOOMMM indeed! Looking forward to seeing this one evolve.

1 Like

Sharing new WIP… Started integrating the Field explosion. Choreography of the actions and setting up some rigid bodies to make this more chaotic. Still have to set up some better collisions and start adding some post-pose animation to make the motion on the characters less stiff and more dynamic. MOCAP still very rough …

@marcus … one question… Again the RG license converts the file to Trial/non-commercial… Also, is it possible to submit bigger videos? 8 MB is kind of small to show a longer version.

1 Like

Neat! What could look cool is to dial down the pose strength after the explosion so they go looser, then dial it back up if some are not fully dead and move around afterwards.

1 Like

Awesome stuff. Sorry about that, there is a fix here:

There is not, but there are ways to make videos smaller. Maya does a terrible job compressing video. Here’s your 6.94 MB file with much better compression, at 0.40 MB; quality is near identical. For this, I use FFMPEG, and there’s also HandBrake, which is free and has a nice UI. You’d playblast, and then drop your video in here. If you’re a techie, you can use ffmpeg and the command line with this .bat script.

convert.bat

echo off
c:\path\to\ffmpeg.exe -i %1 -q:v 0 %1.mp4

1 Like

Awesome, I will give it a try…!! What I did really quickly was animating the Stiffness at the moment of the explo…! Is that what you are saying with “pose strength”? Thanks, Jason

Thank you, Sir Marcus!

Just started creating more chaos and also baking on layers and keeping what I like…

Baked version! I am going to start saving on layers what I like and tweaking values to give specific movement to the Swat soldiers. So far I like the one on the roof and the ladder… I want to work on the other two

2 Likes

That guy getting stuck in the ladder is golden. Oh yeah, that guy’s arms are tearing off. Instant R-rating on this show.

1 Like
# Category Archives: Data Analysis

Data analysis is one of the big topics of the century. While its big brother, data science, engages with larger chunks of data, data analysis also concerns itself with normal data-sets. Here I will show you how to conduct data analysis. I especially enjoy the visualisation of data. In my opinion there is nothing more pleasing than a nice plot! Otherwise I will show you how to clean data and do other operations on it. Most of the time I will use the R programming language for this purpose, as it is well suited for the task of data analysis. Sometimes I will also make a detour to machine learning and data mining. Although this category is called data analysis, I might also cover topics from big data in the future. As I have already worked with big data in the form of RNA sequencing data, I might be qualified to do so. Although I'm originally from bioinformatics, I also plan to take a look at other fields of application. The underlying statistics are nevertheless always the same. If you want me to cover a specific topic, just write me. One of the topics I already covered on demand was seat allocation methods in elections. I like to play the explainer for mathematical concepts. So try me!

## D’Hondt Method

In my last post I showed you the seat allocation method called the Sainte-Laguë/Schepers method. I recommend reading it before you continue with this post, as this post about the D’Hondt method heavily builds upon it. There are actually two seat allocation methods that are pretty similar to Sainte-Laguë/Schepers. One of them is the D’Hondt method. The other one is the Adams method. Both are really remarkably similar to the method from my last post, with really the only difference being the way they round the seats. While Sainte-Laguë/Schepers uses standard rounding, D’Hondt uses the floor function, meaning that it always rounds to the next lower integer. And Adams uses the ceiling function, which rounds to the next higher integer. Now immediately you should scream: “STOP! What?! Adams uses the ceiling function?! So does this mean that every party that gets votes gets at least one seat?!” The answer would be yesish. Yes, if your election doesn’t have an election threshold, every party that gets votes would get at least one seat. “Well, isn’t that incredibly unfavorable?” Yepp… But there are cases where an allocation method like this could make sense. Not for regular elections in my opinion, but for elections in parliaments. Let’s say you have 20 mandates you want to distribute in a parliament with 300 elected politicians. Then the consideration could be made that it would be fair if every party gets at least one mandate. However, the Adams method is incredibly uncommon. The Wikipedia article, which also served as my source for the method, only mentions the French parliament as an example. The D’Hondt method on the other side is pretty common. It’s actually the most common one in this year’s EU election. And my source was also the corresponding German Wikipedia article.

## Implementation D’Hondt Method

Luckily I don’t have to do much to implement those two methods. I just have to change a little bit about my function from last time.
seatAllocation <- function(votes, seats, roundMethod = round){
  ## calculate the initial divisor
  divisor = sum(votes) / seats
  ## get the initial seats per party
  seatsPerParty = roundMethod(votes / divisor)
  ## if they already satisfy the seats to be assigned, return the seat allocation
  if(sum(seatsPerParty) == seats){
    return(seatsPerParty)
  }
  ## otherwise increment or decrement the divisor until
  ## the result fits and then return it
  if(sum(seatsPerParty) < seats){
    while(sum(seatsPerParty) < seats){
      divisor = divisor - 1
      seatsPerParty = roundMethod(votes / divisor)
    }
    return(seatsPerParty)
  }else{
    while(sum(seatsPerParty) > seats){
      divisor = divisor + 1
      seatsPerParty = roundMethod(votes / divisor)
    }
    return(seatsPerParty)
  }
}

You see what I did there? And why I love functional programming? Now by default it’s the Sainte-Laguë/Schepers method, and by giving the parameter roundMethod either the floor or ceiling function, we can make the D’Hondt and, respectively, the Adams method out of it. And we could even come up with some other rounding function in the future and use it.

## Test and Compare The Methods

And without further ado, let’s test and compare the methods on our previous example.

votes <- c(AP = 11345, CVP = 563342, EP = 618713, OSP = 305952, PDP = 95001)
seatsSLS <- seatAllocation(votes, seats = 310, roundMethod = round)
seatsDH <- seatAllocation(votes, seats = 310, roundMethod = floor)
seatsA <- seatAllocation(votes, seats = 310, roundMethod = ceiling)
library(data.table)
DT <- rbind(data.table(party = names(seatsA), seats = seatsA, method = "Adams"),
            data.table(party = names(seatsSLS), seats = seatsSLS, method = "Sainte-Laguë/Schepers"),
            data.table(party = names(seatsDH), seats = seatsDH, method = "D'Hondt"))
library(ggplot2)
g <- ggplot(DT, aes(x = party, y = seats, fill = method))
g <- g + geom_bar(stat = "identity", position = "dodge")
g <- g + geom_text(aes(label=seats), position=position_dodge(width=0.9), vjust=-0.25)
g

Thanks, stackoverflow! And you see… the actual difference isn’t big at all. The only thing one could say is that Adams gives a bonus to the small parties. The D’Hondt method favors the big ones a bit. And Sainte-Laguë/Schepers is somehow in the middle. And for me at least it’s really hard to say which one is favorable. Sainte-Laguë/Schepers seems like a good compromise. However, the differences more or less only affect small parties. But for them the difference is important. This doesn’t mean that there’s no difference for large parties. One seat could mean the difference between majority and, well… not majority. Especially if you factor coalitions into the mix. Maybe we will talk about possible problems in one of my next posts. I’m beginning to like this topic. I’m already thinking about becoming a lobbyist… lol.

# For Allocation of Seats in the EU Parliament

On Monday I had a talk over Discord with Boris Biba, who himself runs a blog. We had wanted to do a cooperation for some time. The focus of his blog is philosophy and politics. And as I told him that I’m interested in crunching numbers, the coming EU elections are the perfect opportunity for a cooperation. First we talked about doing something regarding the Wahl-O-Mat. Now in hindsight it was probably good that we decided for something else, as the Wahl-O-Mat was taken offline just today. Then Boris brought up that he wanted to do a post about the seat allocation method called the Sainte-Laguë/Schepers method, which is used for German votes in the EU election. And I thought to myself that this is wonderful, as voting is basically a paradigm for statistics. So I would be able to implement a small algorithm.
So also be sure to check out the post from Boris, which you can find here, if you’re able to read German! What I’ll be doing in this post is explaining the seat allocation method called Sainte-Laguë/Schepers and then giving you a demonstrative example for it. And as an easter egg I’ll throw in some election posters for the imaginary parties I’ll use in the example. I created those posters with Adobe Spark. As a main source for my post, I took the corresponding article from the German Wahl-Lexikon.

## Description of the Method

So there are basically three variants of this method, which all deliver the same result. Two of them work by ranking the voting result, the other one by simple division, which is the one used for the German part of the EU election. It is called either the iterative or the divisor method. The simple idea behind this divisor method is to find a divisor for the voting result which delivers you the right amount of total seats if you divide the voting results by it and then round them by standard rounding. To find the right divisor, first the total amount of votes is divided by the number of seats to be assigned. $$divisor = \frac{\#votesTotal}{\#seats}$$ Then for each party the number of votes is divided by this divisor. $$seatsParty_{i} = \frac{\#votesParty_{i}}{divisor}$$ And if the sum of the seats of all parties matches up with the amount to be assigned, we’re already done! If not, we have to either increment or decrement the divisor, depending on whether we have too few or too many seats. Just think about it… If you increase the divisor, the amount of seats shrinks. And vice versa, if you decrease the divisor, the amount of seats increases. And so the divisor is adjusted and the final seats per party are obtained.

## Implementation of the Sainte-Laguë/Schepers method

And of course it wouldn’t be me if I wouldn’t also implement the method. Here we go…

seatAllocation <- function(votes, seats){
  ## calculate the initial divisor
  divisor = sum(votes) / seats
  ## get the initial seats per party
  seatsPerParty = round(votes / divisor)
  ## if they already satisfy the seats to be assigned, return the seat allocation
  if(sum(seatsPerParty) == seats){
    return(list(divisor = divisor, seatsPerParty = seatsPerParty))
  }
  ## otherwise increment or decrement the divisor until
  ## the result fits and then return it
  if(sum(seatsPerParty) < seats){
    while(sum(seatsPerParty) < seats){
      divisor = divisor - 1
      seatsPerParty = round(votes / divisor)
    }
    return(list(divisor = divisor, seatsPerParty = seatsPerParty))
  }else{
    while(sum(seatsPerParty) > seats){
      divisor = divisor + 1
      seatsPerParty = round(votes / divisor)
    }
    return(list(divisor = divisor, seatsPerParty = seatsPerParty))
  }
}

The function is basically the same as what I described under the last point in plain text. As always, if you have some questions or remarks regarding my implementation, feel free to write me a comment!

## Example with the Sainte-Laguë/Schepers method

Now to test the method, let’s just come up with some arbitrary voting result for our imaginary parties introduced earlier. And of course plot them as a pie chart!

votes <- c(AP = 11345, CVP = 563342, EP = 618713, OSP = 305952, PDP = 95001)

Subsequently, let’s test what result the method delivers and if the percentages match up approximately.

result <- seatAllocation(votes, 310)

OK, first let’s visualize the result. But let’s not use a pie chart again, because to be honest they can be misleading. This time we will use a waffle chart, which displays the actual seats. Of course we also need to do some preprocessing. We want the parties ordered by their size and we want their percentage of seats in the legend.
seatsPerParty <- result$seatsPerParty
seatsPerParty <- sort(seatsPerParty, decreasing = TRUE)
names(seatsPerParty) <- paste0(names(seatsPerParty), " (", format(seatsPerParty/sum(seatsPerParty) * 100, digits = 2), "%)")
waffle::waffle(seatsPerParty)

Well, there’s some difference in the percentages, but that’s to be expected, as you can’t distribute fractions of seats between the parties.

## Outlook

Of course there are many other methods for allocating seats in an election. Some are equivalent to this one and others are not. And if you’re interested in them, I would encourage you to write me. If you like, we can look at a bunch of them and then compare them. And we could also take a look at things like overhang seats or different kinds of voting. I think it’s a nice topic for making plots. By the way, if you also want to read this post in German, check the following link out!

# Map Plots About the Global Burden of Disease

## A practical example

As promised in another post, I will show you how to do map plots with R. For this purpose I will use the ggmap package, which makes this a relatively easy task. But before I begin with the actual code, let me give you a short motivation.

## Why use map plots

Motivations for using map plots can be various. For example, if you’re a journalist and, let’s say, you want to visualize a kind of event (like, say, earthquakes) in a regional context, this is a very demonstrative way of presenting your data. Or if you want to present some kind of data about places or countries, map plots are always a good option. The first time I did a map plot was actually part of an awesome lecture I had back in Munich at the TUM. Afterwards I got the chance to use this skill right away in the next semester for my Bachelor’s thesis. As some of you might know, the area where I applied the algorithm which I improved and implemented for my thesis was mental disorders. During the writing process, I found it a good starting point for my thesis and the accompanying presentation to emphasize the prevalence of mental disorders in the world. In order to do so, I used a map plot. That’s basically also what I will do now, but this time with the case of cancer. But on a side note, I’m not saying that you should do those kinds of plots for each thesis or presentation regarding a disease topic. It’s just one possible starting point and not necessarily the best. So please don’t just mindlessly copy what I’m doing here. 🙂

## Getting the data for disease-related map plots

First let’s load all the packages we will need for this little exercise.

library(XLConnect)
library(data.table)
library(ggplot2)
library(ggthemes)
library(maps)

XLConnect is a package for loading Excel sheets, which we will need. That I like to use data.table you probably already noticed. It’s just super fast and comfy for some procedures and it has some nice synergies with ggplot2. The maps package contains, as the name suggests, map data which can be used to plot. Alternatively one could also use ggmap. And ggthemes contains a neat theme for maps, which I will use. First let’s load our world map. This data.table contains region names and the boundaries of those regions as longitudes and latitudes. ggplot can plot those as polygons.

mapdata <- data.table(map_data("world"))
knitr::kable(mapdata[1:5])

| long | lat | group | order | region | subregion |
|-----------|----------|-------|-------|--------|-----------|
| -69.89912 | 12.45200 | 1 | 1 | Aruba | NA |
| -69.89571 | 12.42300 | 1 | 2 | Aruba | NA |
| -69.94219 | 12.43853 | 1 | 3 | Aruba | NA |
| -70.00415 | 12.50049 | 1 | 4 | Aruba | NA |
| -70.06612 | 12.54697 | 1 | 5 | Aruba | NA |

OK, done.
Now we need to download the data on 2004’s mortality from the WHO.

download.file("www.who.int/entity/healthinfo/global_burden_disease/gbddeathdalycountryestimates2004.xls", "gbd.xls")
tmp <- readWorksheetFromFile(file = "gbd.xls", sheet = "Deaths 2004")
causes <- tmp$Col1[14:143]
countries <- unname(tmp[6,7:198])
deathRates <- tmp[14:143,7:198]

You should probably take a look at the Excel file yourself to understand it and what I’m doing later. The file is made for humans to look at and not directly for machines to read, which is why we have to do some cleaning and transforming. In my experience as a bioinformatics student, this is something you have to do almost always. Even if you have a machine-readable format, there’s no perfect data-set. You will always have some missing data or have to transform your data in some way. And this isn’t necessarily a trivial step. Often you will spend a lot of time here. And that’s good. If cleaning data was trivial, then we wouldn’t need data scientists.

## Cleaning data

To begin with, we have to transform the death rates to numeric values… because they’re characters (strings) right now. For this purpose we also have to remove the separating comma at the thousands position. Then we set the column names to the countries and transform the matrix together with the vector of causes to a data.table. You see? What’s done to make the data more human-readable makes it less machine-readable. That’s often the case.

deathRatesNum <- matrix(as.numeric(gsub(",", "", as.matrix(deathRates))), nrow = dim(deathRates)[1])

## Warning in matrix(as.numeric(gsub(",", "", as.matrix(deathRates))), nrow =
## dim(deathRates)[1]): NAs introduced by coercion

colnames(deathRatesNum) <- countries
DT <- data.table(causes = causes, deathRatesNum)

Now we want a clean, or so-called long, data-set. In this new data set we will have only three columns: two variables (causes and region), which uniquely identify the value, the death rate. Similar to a database, we can also set those variable columns as keys, which makes the table very fast to search.

DTclean <- melt(DT, id.vars = "causes", variable.name = "region", value.name = "deathRate")
setkey(DTclean, causes, region)

Next let us see if we have some regions in our data.table that aren’t in our map.

DTclean[!region %in% mapdata$region, unique(region)]

## [1] Antigua and Barbuda
## [2] Brunei Darussalam
## [3] Congo
## [4] Côte d'Ivoire
## [5] Democratic People's Republic of Korea
## [6] Iran (Islamic Republic of)
## [7] Lao People's Democratic Republic
## [8] Libyan Arab Jamahiriya
## [9] Micronesia (Federated States of)
## [10] Republic of Korea
## [11] Republic of Moldova
## [12] Russian Federation
## [13] Saint Kitts and Nevis
## [14] Saint Vincent and the Grenadines
## [15] Serbia and Montenegro
## [16] Syrian Arab Republic
## [17] The former Yugoslav Republic of Macedonia
## [19] Tuvalu
## [20] United Kingdom
## [21] United Republic of Tanzania
## [22] United States of America
## [23] Venezuela (Bolivarian Republic of)
## [24] Viet Nam
## 192 Levels: Afghanistan Albania Algeria Andorra ... Zimbabwe

As expected, there are 24 regions from the WHO sheet not in the mapdata. Even though there’s probably a more elegant solution, I will change them manually. It’s a work that has to be done once, and it’s probably only necessary to fill it in for the big countries. So this is bearable.
DTclean[region == "Brunei Darussalam", region := "Brunei"]
DTclean[region == "Congo", region := "Republic of Congo"]
DTclean[region == "Democratic People's Republic of Korea", region := "North Korea"]
DTclean[region == "Iran (Islamic Republic of)", region := "Iran"]
DTclean[region == "Côte d'Ivoire", region := "Ivory Coast"]
DTclean[region == "Lao People's Democratic Republic", region := "Laos"]
DTclean[region == "Libyan Arab Jamahiriya", region := "Libya"]
DTclean[region == "The former Yugoslav Republic of Macedonia", region := "Macedonia"]
DTclean[region == "Micronesia (Federated States of)", region := "Micronesia"]
DTclean[region == "Republic of Moldova", region := "Moldova"]
DTclean[region == "Republic of Korea", region := "South Korea"]
DTclean[region == "Russian Federation", region := "Russia"]
DTclean[region == "Serbia and Montenegro", region := "Serbia"]
DTclean[region == "Syrian Arab Republic", region := "Syria"]
DTclean[region == "United Republic of Tanzania", region := "Tanzania"]
DTclean[region == "United Kingdom", region := "UK"]
DTclean[region == "United States of America", region := "USA"]
DTclean[region == "Venezuela (Bolivarian Republic of)", region := "Venezuela"]
DTclean[region == "Viet Nam", region := "Vietnam"]

And yea, of course the work isn’t done completely yet. We also should check if there are regions in the mapdata that aren’t in the WHO data-set. This could be due to various reasons… One being that a region isn’t a member of the WHO and therefore the WHO doesn’t publish data on them. Or, more likely, that a country from the WHO data-set spans more than one region on the map, Serbia and Montenegro being such a case. However, I’m lazy now and I won’t do this today. How about you doing it and writing me a comment? 😛 Let it be a team effort.

## Making the map plots

OK, before we do the actual plotting, let’s first calculate what percentage of all deaths in each country is caused by cancer. In detail, I do this by joining the data.table with itself. On a side note: W000 is the WHO code for all death causes combined, and W060 for Malignant neoplasms, which is a more formal name for cancer. Then we need to join the data.table with the map on the region name.

DTcaused <- DTclean[causes == "W000"][DTclean[causes == "W060"], on = "region"][, .(region, percentageCaused = i.deathRate / deathRate)]
deathrateMap <- mapdata[DTcaused, on = "region", allow.cartesian=TRUE, nomatch = 0]

And finally we can do our plot. For this purpose we first plot all regions in grey, and as an overlay we fill the countries that we have data on with a color between grey and red, depending on how frequent cancer is as a death cause.

g <- ggplot() + geom_polygon(data = mapdata, aes(long, lat, group = group), fill = "grey")
g <- g + geom_polygon(data = deathrateMap, aes(long, lat, group = group, fill = percentageCaused))
g <- g + scale_fill_gradient(low = "grey", high = "red", aesthetics = "fill", name = "Percentage of\ndeaths caused\nby cancer")
g + ggthemes::theme_map()

And of course there’s one thing about this plot that could be misleading. Given that regions with missing data and regions with a very low prevalence of cancer deaths will both be grey, you hopefully see the potential problem here. It’s not necessarily wrong or bad to do so. But I hope you recognize how someone could make a plot this way to mislead his audience. That’s why I recommend, when it comes to looking at plots, not only to think about what is shown, but also about what isn’t shown.
Since no large data-set is complete, ask the person who presents it to you how she/he handled missing data points. So what does this map actually say? From my perspective, nothing surprising. At the moment this data set was captured, cancer was (and probably still is) mostly a problem of industrialized countries, and it doesn’t seem to be connected to geography primarily (can you see how Israel, Japan and South Korea pop up?). Although the difference between the USA and Canada could be something interesting. But this map, in my opinion, shows very clearly that cancer is one of the leading causes of death in the developed world, which is also the reason why we spend so much money on researching it. However, the main purpose of this post was to show you how to make such plots, and not to discuss the reasons for different causes of mortality. Ultimately I hope that this post has helped you. Of course it is important that you mention your sources (cite them if you write a paper). This is because your approach has to be reproducible, and you have to give those people who did the preliminary work credit for it. In R you can get the proper citations for the packages you used the following way:

citation("ggmap")

##
## To cite ggmap in publications, please use:
##
## D. Kahle and H. Wickham. ggmap: Spatial Visualization with
## ggplot2. The R Journal, 5(1), 144-161. URL
## http://journal.r-project.org/archive/2013-1/kahle-wickham.pdf
##
## A BibTeX entry for LaTeX users is
##
## @Article{,
## author = {David Kahle and Hadley Wickham},
## title = {ggmap: Spatial Visualization with ggplot2},
## journal = {The R Journal},
## year = {2013},
## volume = {5},
## number = {1},
## pages = {144--161},
## url = {https://journal.r-project.org/archive/2013-1/kahle-wickham.pdf},
## }

citation("maps")

##
## To cite package 'maps' in publications use:
##
## Original S code by Richard A. Becker, Allan R. Wilks. R version
## by Ray Brownrigg. Enhancements by Thomas P Minka and Alex
## Deckmyn. (2018). maps: Draw Geographical Maps. R package version
## 3.3.0. https://CRAN.R-project.org/package=maps
##
## A BibTeX entry for LaTeX users is
##
## @Manual{,
## title = {maps: Draw Geographical Maps},
## author = {Original S code by Richard A. Becker and Allan R. Wilks. R version by Ray Brownrigg. Enhancements by Thomas P Minka and Alex Deckmyn.},
## year = {2018},
## note = {R package version 3.3.0},
## url = {https://CRAN.R-project.org/package=maps},
## }
##
## ATTENTION: This citation information has been auto-generated from
## the package DESCRIPTION file and may need manual editing, see
## 'help("citation")'.

You get the idea. Also cite the other packages if you use them in your publication or thesis. The output is in BibTeX format, so I hope you know what to do with it. 😛 Of course you have to cite the data on the global burden of disease as well. Thus I’ll give you the formatted citation for it: WHO. (2004). The global burden of disease: 2004 update: causes of death. 2004 Update, 8–26. And last but not least, please also mention me. This, however, is not a necessity, but a sign of respect towards my work. By all means, respect is an important thing, unfortunately not often enough given in our society.
## Files in this item

5682.pdf (21 kB), PDF, no description provided

## Description

Title: An ab Initio Study of Electronically Excited States of SiN and SO
Author(s): Kim, Gap-Sue
Contributor(s): Yurchenko, Sergei N.; Semenov, Mikhail; Somogyi, Wilfrid; Clark, Nicholas; Brady, Ryan
Subject(s): Linelists

Abstract: CASSCF + MRCI calculations for the diatomic molecules SiN and SO have been performed using the C$_{2v}$ point group symmetry. For SiN, the five lowest bound electronic states were considered, $X$~$^2\Sigma^+$, $A$~$^2\Pi$, $B$~$^2\Sigma^+$, $a$~$^4\Pi$ and $b$~$^4\Sigma^+$, while for SO nine electronic states were selected, $X$~$^3\Sigma^-$, $A$~$^3\Pi$, $A'$~$^3\Delta$, $A''$~$^3\Sigma^+$, $B$~$^3\Sigma^-$, $C$~$^3\Pi$, $a$~$^1\Delta$, $b$~$^1\Sigma^+$ and $c$~$^1\Sigma^-$, because of their importance for spectroscopic applications in the IR, visible and UV regions. For all states, potential energy, electronic angular momentum, spin-orbit and (transition) dipole moment curves were generated. We use these \textit{ab initio} curves to predict rovibronic spectra of SO and SiN as well as their lifetimes. We aim to construct accurate molecular line lists for these molecules, which will require an empirical refinement of the \textit{ab initio} curves in order to improve the quality of the predictions of experimental spectra.

Issue Date: 2021-06-22
Publisher: International Symposium on Molecular Spectroscopy
Genre: Conference Paper / Presentation
Type: Text
Language: English
URI: http://hdl.handle.net/2142/111489
Date Available in IDEALS: 2021-09-24
## Notes on Pressure (Grade 7 Science)

#### Introduction

The total perpendicular force exerted by a body on the surface in contact is called thrust. Pressure is defined as the thrust per unit area of a surface. The SI unit of pressure is the pascal (Pa), which is newton per square metre (N/m2). If 'P' is the pressure exerted by a body of area 'A' when a force 'F' is applied, then

P = $$\frac{F}{A}$$

Pressure plays a significant role in our day-to-day activities. Sometimes we should increase pressure and sometimes we should decrease it. A drawing pin is broad at the thumb side but sharp and pointed at the other end. This is done to reduce pressure on the thumb and to increase pressure on the drawing board: the force of our hand acts over the large area of the pin's head but is concentrated at the sharp point, which produces high pressure there. The effect of the same force on different areas is different.

#### Measurement of pressure

If 'F' is the force and 'A' is the area on which it acts, then the pressure exerted by the body is given by

Pressure (P) = $$\frac{Force (F)}{Area (A)}$$

Or, P = $$\frac{F}{A}$$

From the above equation we can conclude that:

- Pressure is directly proportional to the applied force, and
- Pressure is inversely proportional to the area over which the force acts.

The pressure thus depends on the force applied (thrust) and on the area over which the force acts. Less pressure is exerted when a force acts over a large area of a surface, and more pressure is exerted when the same force acts on a small area. The same force can therefore produce different pressures depending on the area over which it acts (a worked comparison with concrete numbers follows after the summary points below).

Example

A large brick of 10N occupies 1m2 of surface area. Calculate the pressure exerted.

Solutions:

We have,
Force (F) = 10N
Area (A) = 1m2
Pressure (P) = ?

According to the formula,
Pressure (P) = $$\frac{Force (F)}{Area (A)}$$
Or, P = $$\frac{F}{A}$$
Or, P = $$\frac{10}{1}$$
Or, P = 10 Pa

$$\therefore$$ The pressure exerted by the brick is 10 Pa.

#### Application of Pressure

- A sharp knife has a very small surface area on its cutting edge so that high pressure can be exerted to cut the meat.
- The studs on a football boot have only a small area of contact with the ground. The pressure under the studs is high enough for them to sink into the ground, which gives extra grip.
- Nails, needles and pins have very sharp ends with a very small surface area. When a force is applied to the head of a nail, the pressure drives its sharp end into a piece of wood easily.
- Skis have a large area to reduce the pressure on the snow so that they do not sink in too far.
- A tractor moving on soft ground has wide tires to reduce the pressure on the ground so that it does not sink in.

Activity

Bring a knife and a potato (or any other vegetable). Cut the potato with both edges of the knife, i.e. with the blunt edge and with the sharp, pointed edge. What difference do you observe while cutting with the two edges? What do you conclude from this activity?

#### Things to remember

- The force acting per unit area of a surface is called pressure.
- The pressure depends on the force applied (thrust) and on the area over which the force acts.
- A sharp knife has a very small surface area on its cutting edge so that high pressure can be exerted to cut the meat.
- Nails, needles and pins have very sharp ends with a very small surface area; when a force is applied to the head of the nail, the pressure drives its sharp end into a piece of wood easily.
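To see the area effect in numbers, here is a quick worked comparison. The numbers are chosen purely for illustration and are not from the original notes.

Example

The same force of 10N acts first on an area of 1m2 and then on an area of 0.01m2. Calculate the pressure in both cases.

Solutions:

For A = 1m2: P = $$\frac{F}{A}$$ = $$\frac{10}{1}$$ = 10 Pa
For A = 0.01m2: P = $$\frac{10}{0.01}$$ = 1000 Pa

$$\therefore$$ Reducing the contact area to one hundredth multiplies the pressure a hundredfold, which is why sharp points pierce easily while wide tires and skis do not sink.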
#### Questions and answers (question wordings inferred from the answers)

What is pressure?
Force acting per unit area is called pressure.

On which factors does pressure depend?
Pressure depends on the following factors:
1. the force applied (thrust), and
2. the area over which the force acts.

What are the differences between force and pressure?

| Force | Pressure |
| --- | --- |
| It is a pull or push acting on a body. | It is the thrust acting per unit area. |
| Its SI unit is the newton (N). | Its SI unit is the pascal (Pa). |
| It is the cause of pressure. | It is the effect of force. |

Why is it easier to cut with a sharp knife than with a blunt one?
Because the same force acts on a much smaller area, the pressure due to a sharp knife is greater than that due to a blunt one.

Why are wooden sleepers kept below the railway line?
Wooden sleepers are kept below the railway line so that the train's weight is spread over a larger area; the pressure on the ground is less and the railway line does not sink into the ground.

Why can camels walk easily on sand?
Camels can walk easily on sand in the desert, compared to horses or donkeys, because they have broad, flat soles which exert less pressure on the sand.

Numerical problems (problem statements reconstructed from the given data):

1. A force of 500N acts on a square box of area 5m2. Find the pressure.

Solutions:
We have,
Force (F) = 500N
Area (A) = 5m2
Pressure (P) = ?
According to the formula,
Pressure (P) = $$\frac{Force (F)}{Area (A)}$$
Or, P = $$\frac{F}{A}$$
Or, P = $$\frac{500}{5}$$
Or, P = 100 Pa
$$\therefore$$ The pressure exerted by the square box is 100 Pa.

2. A brick presses on the ground with a force of 200N over an area of 2m2. Find the pressure.

Solutions:
We have,
Force (F) = 200N
Area (A) = 2m2
Pressure (P) = ?
According to the formula,
Pressure (P) = $$\frac{Force (F)}{Area (A)}$$
Or, P = $$\frac{F}{A}$$
Or, P = $$\frac{200}{2}$$
Or, P = 100 Pa
$$\therefore$$ The pressure exerted by the brick is 100 Pa.

3. A body exerts a pressure of 50 Pa over an area of 10m2. Find the force.

Solutions:
We have,
Force (F) = ?
Area (A) = 10m2
Pressure (P) = 50 Pa
According to the formula,
Pressure (P) = $$\frac{Force (F)}{Area (A)}$$
Or, P = $$\frac{F}{A}$$
Or, 50 = $$\frac{F}{10}$$
Or, F = 50 $$\times$$ 10
Or, F = 500N
$$\therefore$$ The force exerted by the body is 500N.

4. A brick pressed with a force of 180N exerts a pressure of 60 Pa. Find the area on which it rests.

Solutions:
We have,
Force (F) = 180N
Area (A) = ?
Pressure (P) = 60 Pa
According to the formula,
Pressure (P) = $$\frac{Force (F)}{Area (A)}$$
Or, P = $$\frac{F}{A}$$
Or, 60 = $$\frac{180}{A}$$
Or, A = $$\frac{180}{60}$$
Or, A = 3m2
$$\therefore$$ The area of the brick is 3m2.

5. A square box of side 3m weighs 81N. Find the pressure it exerts.

Solutions:
We have,
Force (F) = 81N
Area (A) = $$l^2$$ = $$3^2$$ = 9m2
Pressure (P) = ?
According to the formula,
Pressure (P) = $$\frac{Force (F)}{Area (A)}$$
Or, P = $$\frac{F}{A}$$
Or, P = $$\frac{81}{9}$$
Or, P = 9 Pa
$$\therefore$$ The pressure exerted by the square box is 9 Pa.
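All the numerical solutions above use the same relation rearranged in three ways; it may help to see them side by side (this is just a restatement of the formula already given in these notes):

$$P = \frac{F}{A}$$, $$F = P \times A$$ and $$A = \frac{F}{P}$$

For instance, problem 4 uses the third form: A = $$\frac{180}{60}$$ = 3m2.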
#### Quiz

(The stems of questions 2, 6 and 7 were missing in the source and have been inferred from the options and the notes above.)

1. Pressure is defined as
- force per second
- force per unit area
- force per unit area per second
- upthrust per second

2. The SI unit of pressure is
- pascal
- candela
- newton
- watt

3. The pressure exerted by a body depends on
- force and area
- force and mass
- area and length
- force and volume

4. When the force acting on a body increases, the pressure
- increases
- remains the same
- multiplies
- decreases

5. When the force acting on a body decreases, the pressure
- increases
- decreases
- remains the same
- multiplies

6. The thrust acting per unit area of a surface is called
- heat
- pressure
- upthrust
- force

7. Which animal can walk easily on sand in the desert?
- horse
- dog
- donkey
- camel

8. The cutting edges of knives are sharpened to
- increase the pressure
- apply less pressure
- decrease the pressure
- apply small pressure

9. A broad steel belt is provided over the wheels of army tanks to
- exert large pressure
- exert more pressure
- exert less pressure
- multiply the pressure

10. Studs are kept on a football player's boots to
- apply small pressure
- increase the pressure
- decrease the pressure
- apply less pressure

11. The rear wheels of tractors are made large and flat to
- exert large pressure on the ground
- multiply the pressure
- exert more pressure on the ground
- exert less pressure on the ground