# Piezo Linear Motor

Model force-speed characteristics of linear piezoelectric traveling wave motor

## Library

Translational Actuators

## Description

The Piezo Linear Motor block represents the force-speed characteristics of a linear piezoelectric traveling wave motor. The block represents the force-speed relationship of the motor at a level that is suitable for system-level modeling. To simulate the motor, the block uses the following models:

### Mass and Friction Model for Unpowered Motor

The motor is unpowered when the physical signal input v is zero. This corresponds to applying zero RMS volts to the motor. In this scenario, the block models the motor using the following elements:

• A mass whose value is the Plunger mass parameter value.
• A friction component whose characteristics you specify using the parameter values in the Motor-Off Friction tab.

The block uses a Simscape™ Translational Friction block to model the friction component. For detailed information about the friction model, see the Translational Friction block reference page.

### Resonant Circuit Model for Powered Motor

When the motor is active, the Piezo Linear Motor block represents the motor characteristics using an equivalent circuit model. In this equivalent circuit:

• The AC voltage source represents the block's physical signal input of frequency f and magnitude v.
• The resistor R provides the main electrical and mechanical damping term.
• The inductor L represents the rotor vibration inertia.
• The capacitor C represents the piezo crystal stiffness.
• The capacitor Cp represents the phase capacitance. This is the electrical capacitance associated with each of the two motor phases.
• The force constant kf relates the RMS current i to the resulting mechanical force.
• The quadratic mechanical damping term, $\lambda \dot{x}^2$, shapes the force-speed curve predominantly at speeds close to the maximum speed. $\dot{x}$ is the linear speed.
• The term $M\ddot{x}$ represents the plunger inertia.

At model initialization, the block calculates the model parameters R, L, C, kf and λ to ensure that the steady-state force-speed curve matches the values for the following user-specified parameters:

• Rated force
• Rated speed
• Maximum (stall) force

These parameter values are defined for the Rated RMS voltage and Motor natural frequency (or rated frequency) parameter values.

The quadratic mechanical damping term produces a quadratic force-speed curve. Piezoelectric motor force-speed curves can typically be approximated more accurately by a quadratic function than by a linear one, because the force-speed gradient becomes steeper as the motor approaches its maximum speed.

If the plunger mass M is not specified on the datasheet, you can select a value that provides a good match to the quoted response time. The response time is often defined as the time the rotor takes to reach maximum speed from rest, under no-load conditions.

The quality factor that you specify using the Resonance quality factor parameter relates to the equivalent circuit model parameters as follows:

$$Q=\frac{1}{R}\sqrt{\frac{L}{C}}$$

This term is not usually provided on a datasheet. You can calculate its value by matching the sensitivity of force to driving frequency.

To reverse the motor direction of operation, make the physical signal input v negative.
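Assuming the equivalent circuit resonates as a standard series RLC network at the motor natural frequency (an assumption consistent with, though not stated in, the description above), the quality-factor relation can be inverted to recover L and C once R is fixed:

$$\omega_0 = 2\pi f_0 = \frac{1}{\sqrt{LC}}, \qquad Q = \frac{1}{R}\sqrt{\frac{L}{C}} \quad\Longrightarrow\quad L = \frac{QR}{\omega_0}, \qquad C = \frac{1}{QR\,\omega_0}.$$

Multiplying and dividing the two relations $\sqrt{L/C} = QR$ and $\sqrt{LC} = 1/\omega_0$ yields the expressions for L and C, which is one way the initialization step can pin down the reactive elements from the user-specified Q and natural frequency.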
## Basic Assumptions and Limitations

The block has the following limitations:

• When the motor is powered, the model is valid only between zero and maximum speed, for the following reasons:
  • Datasheets do not provide information for operation outside of the normal range.
  • Piezoelectric motors are not designed to operate in the powered braking and generating regions.

  The block behaves as follows outside the valid operating region:
  • Below zero speed, the model maintains a constant force with a zero-speed value. The zero-speed value is the Maximum (stall) force parameter value if the RMS input voltage equals the Rated RMS voltage parameter value and the frequency input equals the Motor natural frequency parameter value.
  • Above maximum speed, the model produces the negative force predicted by the equivalent circuit model, but limits the absolute value of the force to the zero-speed maximum force.
• The force-speed characteristics are most representative when the model operates close to the rated voltage and resonant frequency.

## Dialog Box and Parameters

### Electrical Force Tab

Motor natural frequency

Frequency at which the piezoelectric crystal naturally resonates. For most applications, set the input signal at port `f` to this frequency. To slow down the motor, for example in closed-loop speed control, use a frequency slightly less than the motor natural frequency. The default value is `92` kHz.

Rated RMS voltage

Voltage at which the motor is designed to operate. The default value is `5.7` V.

Rated force

Force the motor delivers at the rated RMS voltage. The default value is `0.1` N.

Rated speed

Motor speed when the motor drives a load at the rated force. The default value is `50` mm/s.

No-load maximum speed

Motor speed when driving no load and powered at the rated voltage and driving frequency. The default value is `150` mm/s.

Maximum (stall) force

Maximum force the motor delivers when actively driving a load and powered at the rated voltage and frequency. The default value is `0.15` N.

Note: The Holding force parameter value, the load force the motor holds when stationary, may be greater than the Maximum (stall) force parameter value.

Resonance quality factor

Quality factor Q that specifies how force varies as a function of driving frequency. Increasing the quality factor results in a much more rapid decrease in force as the driving frequency moves away from the natural frequency. The default value is `100`.

Capacitance per phase

Electrical capacitance associated with each of the two motor phases. The default value is `5` nF.

### Mechanical Tab

Plunger mass

Mass of the moving part of the motor. The default value is `0.3` g.

Initial rotor speed

Rotor speed at the start of the simulation. The default value is `0` mm/s.

### Motor-Off Friction Tab

Holding force

The sum of the Coulomb and static frictions. It must be greater than or equal to the Coulomb friction force parameter value. The default value is `0.3` N.

Coulomb friction force

The friction that opposes motion with a constant force at any velocity. The default value is `0.15` N.

Viscous friction coefficient

Proportionality coefficient between the friction force and the relative velocity. The parameter value must be greater than or equal to zero. The default value is `1e-05` s*N/mm.

Transition approximation coefficient

The parameter sets the coefficient value that is used to approximate the transition between the static and Coulomb frictions.
For detailed information about the coefficient, cv, see the Simscape Translational Friction block reference page. The default value is `0.1` s/mm.

Linear region velocity threshold

The parameter sets the small vicinity near zero velocity within which the friction force is considered to be linearly proportional to the relative velocity. MathWorks recommends that you use values between `1e-6` and `1e-4` mm/s. The default value is `0.1` mm/s.

## Ports

The block has the following ports:

`f`

Physical signal input value specifying the motor driving frequency in Hz.

`v`

Physical signal input whose magnitude specifies the RMS supply voltage and whose sign specifies the direction of motion. If `v` is positive, then a positive force acts from port C to port R.

`i`

Physical signal output value that is the RMS phase current.

`vel`

Physical signal output value that is the linear speed of the rotor.

`C`

Mechanical translational conserving port.

`R`

Mechanical translational conserving port.
# Equilibrium in risk-sharing games

Michail Anthropelos, Department of Banking and Financial Management, University of Piraeus, and Constantinos Kardaras, Statistics Department, London School of Economics

July 13, 2019

###### Abstract.

The large majority of risk-sharing transactions involve few agents, each of whom can heavily influence the structure and the prices of securities. This paper proposes a game where agents' strategic sets consist of all possible sharing securities and pricing kernels that are consistent with Arrow-Debreu sharing rules. First, it is shown that agents' best response problems have unique solutions. The risk-sharing Nash equilibrium admits a finite-dimensional characterisation, and it is proved to exist for an arbitrary number of agents and to be unique in the two-agent game. In equilibrium, agents declare beliefs on future random outcomes different than their actual probability assessments, and the risk-sharing securities are endogenously bounded, implying (among other things) loss of efficiency. In addition, an analysis regarding extremely risk tolerant agents indicates that they profit more from the Nash risk-sharing equilibrium as compared to the Arrow-Debreu one.

###### Key words and phrases:

Nash equilibrium, risk sharing, heterogeneous beliefs, reporting of beliefs

M. Anthropelos acknowledges support from the Research Center of the University of Piraeus. C. Kardaras acknowledges support from the MC grant FP7-PEOPLE-2012-CIG, 334540.

## Introduction

The structure of securities that optimally allocate risky positions under heterogeneous beliefs of agents has been a subject of ongoing research. Starting from the seminal works of [Bor62], [Arr63], [BJ79] and [Buh84], the existence and characterisation of welfare risk sharing of random positions in a variety of models has been extensively studied—see, among others, [BEK05], [JST08], [Acc07], [FS08]. On the other hand, discrepancies amongst agents regarding their assessments of the probability of future random outcomes reinforce the existence of mutually beneficial trading opportunities (see e.g. [Var85], [Var89], [BCGT00]). However, market imperfections—such as asymmetric information, transaction costs and oligopolies—spur agents to act strategically and prevent markets from reaching maximum efficiency. In the financial risk-sharing literature, the impact of asymmetric or private information has been addressed under both static and dynamic models (see, among others, [NN94], [MR00], [Par04], [Axe07], [Wil11]). The importance of frictions like transaction costs has been highlighted in [AG91]; see also [CRW12].

The present work aims to contribute to the risk-sharing literature by focusing on how over-the-counter (OTC) transactions with a small number of agents motivate strategic behaviour. The vast majority of real-world sharing instances involves only a few participants, each of whom may influence the way heterogeneous risks and beliefs are going to be allocated. (The seminal papers [Kyl89] and [Vay99] highlight such transactions.) As an example, two financial institutions with possibly different beliefs, and in possession of portfolios with random future payoffs, may negotiate and design innovative asset-backed securities that mutually share their defaultable assets. Broader discussion of risk-sharing innovative securities is given in the classic reference [AG94] and in [Tuf03]; a list of widely used such securities is provided in [Fin92].
As has been extensively pointed out in the literature (see, for example, [Var89] and [SX03]), it is reasonable, and perhaps even necessary, to assume that agents have heterogeneous beliefs, which we identify with subjective probability measures on the considered state space. In fact, differences in subjective beliefs do not necessarily stem from asymmetric information; agents usually apply different tools or models for the analysis and interpretation of common sets of information.

Formally, a risk-sharing transaction consists of security payoffs and their prices, and since only a few institutions (typically, two) are involved, it is natural to assume that no social planner for the transaction exists, and that the equilibrium valuation and payoffs will result as the outcome of a symmetric game played among the participating institutions. Since institutions' portfolios are (at least approximately) known, the main ingredient of risk-sharing transactions leaving room for strategic behaviour is the beliefs that each institution reports for the sharing. We propose a novel way of modelling such strategic actions, where the agents' strategic set consists of the beliefs that each one chooses to declare (as opposed to their actual ones), aiming to maximise individual utility, and the induced game leads to an equilibrium sharing. Our main insights are summarised below.

### Main contributions

The payoff and valuation of the risk-sharing securities are endogenously derived as an outcome of agents' strategic behaviour, under constant absolute risk-aversion (CARA) preferences. To the best of our knowledge, this work is the first instance that models the way agents choose the beliefs on future uncertain events that they are going to declare to their counterparties, and studies whether such strategic behaviour results in equilibrium. Our results demonstrate how the game leads to risk-sharing inefficiency and security mispricing, both of which are quantitatively characterised in analytic form. More importantly, it is shown that equilibrium securities have endogenous limited liability, a feature that, while usually suboptimal, is in fact observed in practice.

Although the agents' set of strategic choices is infinite-dimensional, one of our main contributions is to show that Nash equilibrium admits a finite-dimensional characterisation, with the dimensionality being one less than the number of participating agents. Not only does our characterisation provide a concrete algorithm for calculating the equilibrium transaction, it also allows us to prove existence of Nash equilibrium for an arbitrary number of players. In the important case of two participating agents, we even show that Nash equilibrium is unique. It has to be pointed out that the aforementioned results are obtained in complete generality on the probability space and the involved random payoffs—no extra assumption apart from CARA preferences is imposed. While certain qualitative analysis could potentially be carried out without the latter assumption on the entropic form of agent utilities, the advantage of the CARA preferences utilised in the present paper is that they also allow for substantial quantitative analysis, as workable expressions are obtained for Nash equilibrium.

Our notion of Nash risk-sharing equilibrium highlights the importance of agents' risk tolerance level.
More precisely, one of the main findings of this work is that agents with sufficiently low risk aversion will prefer the risk-sharing game to the outcome of an Arrow-Debreu equilibrium that would have resulted from the absence of strategic behaviour. Interestingly, the result is valid irrespective of their actual risky position or their subjective beliefs. It follows that even risk-averse agents, as long as their risk aversion is sufficiently low, will prefer risk-sharing markets that are thin (i.e., where participating agents are few and have the power to influence the transaction), resulting in aggregate loss of risk-sharing welfare.

### Discussion

Our model is introduced in Section 1, and consists of a two-period financial economy with uncertainty, containing possibly infinitely many states of the world. Such infinite-dimensionality is essential in our framework, since in general the risks that agents encounter do not have a-priori bounds, and we do not wish to enforce any restrictive assumption on the shape of the probability distribution or the support of agents' positions. Let us also note that, even if the analysis were carried out in the simpler set-up of a finite state space, there would not be any significant simplification in the mathematical treatment.

In the economy we consider a finite number of agents, each of whom has subjective beliefs (a probability measure) about the events at the time of uncertainty resolution. We also allow agents to be endowed with a (cumulative, up to the point of uncertainty resolution) random endowment. Agents seek to increase their expected utilities through trading securities that allocate the discrepancies of their beliefs and risky exposures in an optimal way. The possible disagreement on agents' beliefs is assumed on the whole probability space, and not only on the laws of the shared-to-be risky positions. Such potential disagreement is important: it alone can give rise to mutually beneficial trading opportunities, even if agents have no risky endowments to share, by actually designing securities with payoffs written on the events where probability assessments differ.

Each sharing rule consists of the security payoff that each agent is going to obtain and a valuation measure under which all imaginable securities are priced. The sharing rules that efficiently allocate any submitted discrepancy of beliefs and risky exposures are the ones stemming from Arrow-Debreu equilibrium. (Under CARA preferences, the optimal sharing rules have been extensively studied—see, for instance, [Bor62], [BJ79] and [BEK05].) In principle, participating agents would opt for the highest possible aggregate benefit from the risk-sharing transaction, as this would increase their chance for personal gain. However, in the absence of a social planner that could potentially impose a truth-telling mechanism, it is reasonable to assume that agents do not negotiate the rules that will allocate the submitted endowments and beliefs. In fact, we assume that agents adopt the specific sharing rules that are consistent with the ones resulting from Arrow-Debreu equilibrium, treating reported beliefs as actual ones, since we regard these sharing rules to be the most natural and universally regarded as efficient.
Agreement on the structure of risk-sharing securities is also consistent with what is observed in many OTC transactions involving security design, where the contracts signed by institutions are standardised and adjusted according to required inputs (in this case, the agents' reported beliefs). Such pre-agreement on sharing rules reduces negotiation time, hence the related transaction costs. Examples are asset-backed securities, whose payoffs are backed by issuers' random incomes, traded among banks and investors in a standardised form, as well as credit derivatives, where portfolios of defaultable assets are allocated among financial institutions and investors.

Combinations of strategic and competitive stages are widely used in the literature of financial innovation and risk-sharing, under a variety of different guises. The majority of this literature distinguishes participants between designers (or issuers) of securities and investors who trade them. In [DJ89], a security-design game is played among exchanges, each aiming to maximise internal transaction volume; while security design across exchanges is the outcome of non-competitive equilibrium, investors trade securities in a competitive manner. Similarly, in [Bis98], Nash equilibrium determines not only the designed securities among financial intermediaries, but also the bid-ask spread that price-taking investors have to face in the second (perfect competition) stage of market equilibrium. In [CRW12], it is entrepreneurs who strategically design securities that investors with non-securitised hedging needs competitively trade. In [RZ09], the role of security designers is played by arbitrageurs who issue innovated securities in segmented markets. A mixture of strategic and competitive stages has also been used in models with asymmetric information. For instance, in [Bra05] a two-stage equilibrium game is used to model security design among agents with private information regarding their effort. In a first stage, agents strategically issue novel financial securities; in the second stage, equilibrium on the issued securities is formed competitively.

Our framework models oligopolistic OTC security design, where participants are not distinguished regarding their information or ability to influence market equilibrium. Agents mutually agree to apply Arrow-Debreu sharing rules, since these optimally allocate whatever is submitted for sharing, and also strategically choose the inputs of the sharing rules (their beliefs, in particular). Given the agreed-upon rules, agents propose accordingly consistent securities and valuation measures, aiming to maximise their own expected utility. As explicitly explained in the text, proposing risk-sharing securities and a valuation kernel is in fact equivalent to agents reporting beliefs to be considered for sharing. Knowledge of the probability assessments of the counterparties may result in a readjustment of the probability measure an agent is going to report for the transaction. In effect, agents form a game by responding to other agents' submitted probability measures; the fixed point of this game (if it exists) is called Nash risk-sharing equilibrium.

The first step in analysing Nash risk-sharing equilibria is to address the well-posedness of an agent's best response problem, which is the purpose of Section 2. Agents have motive to exploit other agents' reported beliefs and hedging needs and drive the sharing transaction so as to maximise their own utility.
Each agent's strategic choice set consists of all possible probability measures (equivalent to a baseline measure), and the optimal one is called the best probability response. Although this is a highly non-trivial infinite-dimensional maximisation problem, we use a bare-hands approach to establish that it admits a unique solution. It is shown that the beliefs that an agent declares coincide with the actual ones only in the special case where the agent's position cannot be improved by any transaction with the other agents.

By resorting to examples, one may gain more intuition on how future risk appears under the lens of agents' reported beliefs. Consider, for instance, two financial institutions adopting distinct models for estimating the likelihood of the involved risks. The sharing contract designed by the institutions will result from individual estimation of the joint distribution of the shared-to-be risky portfolios. According to the best probability response procedure, each institution tends to use a less favourable assessment of its own portfolio than the one based on its actual beliefs, and understates the downside risk of its counterparty's portfolio. Example 2.8 contains an illustration of such a case.

An important consequence of applying the best probability response is that the corresponding security that the agent wishes to acquire has bounded liability. If only one agent applies the proposed strategic behaviour, the received security payoff is bounded below (but not necessarily bounded above). This situation applies to markets where one large institution trades with a number of small agents, each of whom has negligible market power. In fact, the arguments and results of the best response problem receive extra attention and discussion in the paper, since they demonstrate in particular the value of the proposed strategic behaviour in terms of utility increase.

A Nash-type game occurs when all agents apply the best probability response strategy. In Section 3, we characterise Nash equilibrium as the solution of a certain finite-dimensional problem. Based on this characterisation, we establish existence of a Nash risk-sharing equilibrium for an arbitrary (finite) number of agents. In the special case of two-agent games, the Nash equilibrium is shown to be unique. The finite-dimensional characterisation of Nash equilibrium also provides an algorithm that can be used to approximate the Nash equilibrium transaction by standard numerical procedures, such as Monte Carlo simulation.

Having characterised Nash equilibrium, we are able to further perform a joint qualitative and quantitative analysis. Not only do we verify the expected fact that, in any non-trivial case, Nash risk-sharing securities are different from the Arrow-Debreu ones, but we also provide analytic formulas for their shapes. Since the securities that correspond to the best probability response are bounded from below, the application of such strategy by all the agents yields that the Nash risk-sharing market-clearing securities are also bounded from above. This comes in stark contrast to Arrow-Debreu equilibrium, and implies in particular an important loss of efficiency. We measure the risk-sharing inefficiency caused by the game via the difference between the aggregate monetary utilities at Arrow-Debreu and Nash equilibria, and provide an analytic expression for it. (Note that inefficient allocation of risk in symmetric-information thin market models may also occur when securities are exogenously given—see e.g. [RW15].
When securities are endogenously designed, [CRW12] highlights that imperfect competition among issuers results in risk-sharing inefficiency, even if securities are traded among perfectly competitive investors.)

One may wonder whether the agents' subjective beliefs revealed in Nash equilibrium are far from their actual subjective probability measures, which would be unappealing from a modelling viewpoint. Extreme departures from actual beliefs are endogenously excluded in our model, as the distance of reported beliefs in Nash equilibrium from the truth admits a-priori bounds. Even though agents are free to choose any probability measure that supposedly represents their beliefs in a risk-sharing transaction, and they do indeed end up choosing probability measures different than their actual ones, this departure cannot be arbitrarily large if the market is to reach equilibrium.

Turning our attention to Nash-equilibrium valuation, we show that the pricing probability measure can be written as a certain convex combination of the individual agents' marginal indifference valuation measures. The weights of this convex combination depend on the agents' relative risk tolerance coefficients, and, as it turns out, the Nash-equilibrium valuation measure is closer to the marginal valuation measures of the more risk-averse agents. This fact highlights the importance of risk tolerance coefficients in assessing the gain or loss of utility for individual agents in Nash risk-sharing equilibrium; in fact, it implies that more risk tolerant agents tend to get better cash compensation as a result of the Nash game than what they would get in Arrow-Debreu equilibrium.

Inspired by the involvement of the risk tolerance coefficients in the agents' utility gain or loss, in Section 4 we focus on induced Arrow-Debreu and Nash equilibria of two-agent games, when one of the agents' preferences approach risk neutrality. We first establish that both equilibria converge to well-defined limits. Notably, it is shown that an extremely risk tolerant agent drives the market to the same equilibrium regardless of whether the other agent acts strategically or plainly submits true subjective beliefs. In other words, extremely risk tolerant agents tend to dominate the risk-sharing transaction. The study of limiting equilibria indicates that, although there is loss of aggregate utility when agents act strategically, there is always utility gain in the Nash transaction as compared to Arrow-Debreu equilibrium for the extremely risk tolerant agent, regardless of the risk tolerance level and subjective beliefs of the other agent. Extremely risk tolerant agents are willing to undertake more risk in exchange for better cash compensation; under the risk-sharing game, they respond to the risk-averse agent's hedging needs and beliefs by driving the market to a higher price for the security they short. This implies that agents with sufficiently high risk tolerance—although still not risk-neutral—will prefer thin markets. The case where both acting agents uniformly approach risk neutrality is also treated, where it is shown that the limiting Nash equilibrium sharing securities equal half of the limiting Arrow-Debreu equilibrium securities, hinting at the fact that Nash risk-sharing equilibrium results in loss of trading volume.

For convenience of reading, all the proofs of the paper are placed in Appendix A.

## 1. Optimal Sharing of Risk
### 1.1. Notation

The symbols "$\mathbb{N}$" and "$\mathbb{R}$" will be used to denote the sets of all natural and real numbers, respectively. As will be evident subsequently in the paper, we have chosen to use the symbol "$R$" to denote (reported, or revealed) probabilities.

In all that follows, random variables are defined on a standard probability space $(\Omega, \mathcal{F}, \mathbb{P})$. We stress that no finiteness restriction is enforced on the state space $\Omega$. We use $\mathcal{P}$ for the class of all probabilities that are equivalent to the baseline probability $\mathbb{P}$. For $Q \in \mathcal{P}$, we use "$\mathbb{E}_Q$" to denote expectation under $Q$. The space $L^0$ consists of all (equivalence classes, modulo almost sure equality) finitely-valued random variables, endowed with the topology of convergence in probability. This topology does not depend on the representative probability from $\mathcal{P}$, and $L^0$ may be infinite-dimensional. For $Q \in \mathcal{P}$, $L^1(Q)$ consists of all $X \in L^0$ with $\mathbb{E}_Q[|X|] < \infty$. We use $L^\infty$ for the subset of $L^0$ consisting of essentially bounded random variables. Whenever $Q_1 \in \mathcal{P}$ and $Q_2 \in \mathcal{P}$, $dQ_2/dQ_1$ denotes the (strictly positive) density of $Q_2$ with respect to $Q_1$. The relative entropy of $Q_2$ with respect to $Q_1$ is defined via

$$H(Q_2 \,|\, Q_1) := \mathbb{E}_{Q_1}\left[\frac{dQ_2}{dQ_1}\log\left(\frac{dQ_2}{dQ_1}\right)\right] = \mathbb{E}_{Q_2}\left[\log\left(\frac{dQ_2}{dQ_1}\right)\right] \in [0, \infty].$$

For $X \in L^0$ and $Y \in L^0$, we write $X \sim Y$ if and only if there exists $c \in \mathbb{R}$ such that $X = Y + c$. In particular, we shall use this notion of equivalence to ease notation on probability densities: for $Q \in \mathcal{P}$ and $X \in L^0$, we shall write $\log dQ \sim X$ to mean that $\exp(X) \in L^1(\mathbb{P})$ and $dQ/d\mathbb{P} = \exp(X)/\mathbb{E}_{\mathbb{P}}[\exp(X)]$.

### 1.2. Agents and preferences

We consider a market with a single future period, where all uncertainty is resolved. In this market, there are $n \in \mathbb{N}$ economic agents, where $n \geq 2$; for concreteness, define the index set $I := \{0, \ldots, n-1\}$. Agents derive utility only from the consumption of a numéraire in the future, and all considered security payoffs are expressed in units of this numéraire. In particular, future deterministic amounts have the same present value for all agents. The preference structure of agent $i \in I$ over future random outcomes is numerically represented via the concave exponential utility functional

(1.1) $$L^0 \ni X \mapsto U_i(X) := -\delta_i \log \mathbb{E}_{P_i}\left[\exp\left(-X/\delta_i\right)\right] \in [-\infty, \infty),$$

where $\delta_i \in (0,\infty)$ is the agent's risk tolerance and $P_i \in \mathcal{P}$ represents the agent's subjective beliefs. For any $X \in L^0$, agent $i$ is indifferent between the cash amount $U_i(X)$ and the corresponding risky position $X$; in other words, $U_i(X)$ is the certainty equivalent of $X$ for agent $i$. Note that the functional $-U_i$ is an entropic risk measure in the terminology of the convex risk measure literature—see, amongst others, [FS04, Chapter 4].

Define the aggregate risk tolerance $\delta := \sum_{i \in I}\delta_i$, as well as the relative risk tolerances $\lambda_i := \delta_i/\delta$ for all $i \in I$. Note that $\sum_{i \in I}\lambda_i = 1$. Finally, set $\delta_{-i} := \delta - \delta_i$ and $\lambda_{-i} := 1 - \lambda_i$, for all $i \in I$.

### 1.3. Subjective probabilities and endowments

Preference structures that are numerically represented via (1.1) are rich enough to include the possibility of already existing portfolios of random positions for the acting agents. To wit, suppose that $\bar{P}_i \in \mathcal{P}$ are the actual subjective beliefs of agent $i \in I$, who also carries a risky future payoff in units of the numéraire. Following standard terminology, we call this cumulative (up to the point of resolution of uncertainty) payoff random endowment, and denote it by $E_i$. In this set-up, adding a payoff $X$ on top of $E_i$ for agent $i$ results in numerical utility equal to $\bar{U}_i(E_i + X)$, where $\bar{U}_i$ is defined as in (1.1), with $\bar{P}_i$ in place of $P_i$. Assume that $\exp(-E_i/\delta_i) \in L^1(\bar{P}_i)$, i.e., that $\bar{U}_i(E_i) > -\infty$. Defining $P_i \in \mathcal{P}$ via $\log(dP_i/d\bar{P}_i) \sim -E_i/\delta_i$ and $U_i$ via (1.1), $\bar{U}_i(E_i + X) = \bar{U}_i(E_i) + U_i(X)$ holds for all $X \in L^0$. Hence, hereafter, the probability $P_i$ is understood to incorporate any possible random endowment of agent $i$, and utility is measured in relative terms, as the difference from the baseline level $\bar{U}_i(E_i)$.

Taking the above discussion into account, we stress that agents are completely characterised by their risk tolerance level and (endowment-modified) subjective beliefs, i.e., by the collection of pairs $(\delta_i, P_i)_{i \in I}$.
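Indeed, the claimed decomposition of utility follows by a direct computation from the definitions above:

$$\bar{U}_i(E_i + X) = -\delta_i\log\mathbb{E}_{\bar{P}_i}\!\left[e^{-E_i/\delta_i}\,e^{-X/\delta_i}\right] = -\delta_i\log\mathbb{E}_{\bar{P}_i}\!\left[e^{-E_i/\delta_i}\right] - \delta_i\log\mathbb{E}_{P_i}\!\left[e^{-X/\delta_i}\right] = \bar{U}_i(E_i) + U_i(X),$$

where the middle equality uses $dP_i/d\bar{P}_i = e^{-E_i/\delta_i}\big/\mathbb{E}_{\bar{P}_i}\big[e^{-E_i/\delta_i}\big]$, which is the definition of $P_i$.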
In other aspects, and unless otherwise noted, agents are considered symmetric (regarding information, bargaining power, cost of risk-sharing participation, etc.).

### 1.4. Geometric-mean probability

We introduce a method that produces a geometric mean of probabilities, which will play a central role in our discussion. Fix $(R_i)_{i \in I} \in \mathcal{P}^I$. In view of Hölder's inequality, $\mathbb{E}_{\mathbb{P}}\big[\prod_{i \in I}(dR_i/d\mathbb{P})^{\lambda_i}\big] \leq \prod_{i \in I}\big(\mathbb{E}_{\mathbb{P}}[dR_i/d\mathbb{P}]\big)^{\lambda_i} = 1 < \infty$ holds. Therefore, one may define $Q \in \mathcal{P}$ via $dQ/d\mathbb{P} := \prod_{i \in I}(dR_i/d\mathbb{P})^{\lambda_i}\big/\mathbb{E}_{\mathbb{P}}\big[\prod_{i \in I}(dR_i/d\mathbb{P})^{\lambda_i}\big]$. Since $\log(dQ/d\mathbb{P}) \sim \sum_{i \in I}\lambda_i\log(dR_i/d\mathbb{P})$, one is allowed to formally write

(1.2) $$\log dQ \sim \sum_{i \in I}\lambda_i\log dR_i.$$

The fact that $\mathbb{E}_Q[dR_i/dQ] = 1$ implies $\log\mathbb{E}_Q[dR_i/dQ] = 0$, and Jensen's inequality gives $\mathbb{E}_Q[\log(dR_i/dQ)] \leq 0$, for all $i \in I$. Note that (1.2) implies the existence of $c \in \mathbb{R}$ such that $\sum_{i \in I}\lambda_i\log(dR_i/dQ) = c$ holds; therefore, since each summand has non-positive $Q$-expectation, one actually has $\mathbb{E}_Q[\log(dR_i/dQ)] > -\infty$, for all $i \in I$. In particular, $\log(dR_i/dQ) \in L^1(Q)$ holds for all $i \in I$, and

$$H(Q \,|\, R_i) = -\mathbb{E}_Q\left[\log\left(\frac{dR_i}{dQ}\right)\right] < \infty, \quad \forall i \in I.$$

### 1.5. Securities and valuation

Discrepancies amongst agents' preferences provide incentive to design securities, the trading of which could be mutually beneficial in terms of risk reduction. In principle, the ability to design and trade securities in any desirable way essentially leads to a complete market. In such a market, transactions amongst agents are characterised by a valuation measure (that assigns prices to all imaginable securities), and a collection of the securities that will actually be traded. Since all future payoffs are measured under the same numéraire, (no-arbitrage) valuation corresponds to taking expectations with respect to probabilities in $\mathcal{P}$. Given a valuation measure $Q \in \mathcal{P}$, agents agree on a collection $(C_i)_{i \in I}$ of zero-value securities, satisfying the market-clearing condition $\sum_{i \in I}C_i = 0$. The security that agent $i$ takes a long position in as part of the transaction is $C_i$.

As mentioned in the introductory section, our model could find applications in OTC markets. For instance, the design of asset-backed securities involves only a small number of financial institutions; in this case, $P_i$ stands for the subjective beliefs of each institution and, in view of the discussion of §1.3, further incorporates any existing portfolios that back the security payoffs. In order to share their risky positions, the institutions agree on prices of future random payoffs and on the securities they are going to exchange. Other examples are the market of innovated credit derivatives or the market of asset swaps that involve the exchange of a random payoff and a fixed payment.

### 1.6. Arrow-Debreu equilibrium

In the absence of any kind of strategic behaviour in designing securities, the agreed-upon transaction amongst agents will actually form an Arrow-Debreu equilibrium. The valuation measure will determine both trading and indifference prices, and securities will be constructed in a way that maximises each agent's respective utility.

###### Definition 1.1.

A pair $(Q^*, (C^*_i)_{i \in I})$ will be called an Arrow-Debreu equilibrium if:

1. $Q^* \in \mathcal{P}$, as well as $\sum_{i \in I}C^*_i = 0$ and $\mathbb{E}_{Q^*}[C^*_i] = 0$, for all $i \in I$, and
2. for all $C \in L^1(Q^*)$ with $\mathbb{E}_{Q^*}[C] = 0$, $U_i(C) \leq U_i(C^*_i)$ holds for all $i \in I$.

Under risk preferences modelled by (1.1), a unique Arrow-Debreu equilibrium may be explicitly obtained. In other guises, Theorem 1.2 that follows has appeared in many works—see for instance [Bor62], [BJ79] and [Buh84]. Its proof is based on standard arguments; however, for reasons of completeness, we provide a short argument in §A.1.

###### Theorem 1.2.

In the above setting, there exists a unique Arrow-Debreu equilibrium $(Q^*, (C^*_i)_{i \in I})$. In fact, the valuation measure $Q^*$ is such that

(1.3) $$\log dQ^* \sim \sum_{i \in I}\lambda_i\log dP_i,$$

and the equilibrium market-clearing securities are given by

(1.4) $$C^*_i := \delta_i\log\left(\frac{dP_i}{dQ^*}\right) + \delta_i H(Q^* \,|\, P_i), \quad \forall i \in I,$$

where the fact that $H(Q^* \,|\, P_i) < \infty$ holds for all $i \in I$ follows from §1.4.
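To check directly that the securities of (1.4) clear the market, write $\log dQ^* = \sum_{i \in I}\lambda_i\log dP_i + c$ for the normalising constant $c \in \mathbb{R}$ implicit in (1.3); then

$$\sum_{i \in I}\delta_i\log\frac{dP_i}{dQ^*} = \delta\Big(\sum_{i \in I}\lambda_i\log dP_i - \log dQ^*\Big) = -\delta c, \qquad \sum_{i \in I}\delta_i H(Q^* \,|\, P_i) = \mathbb{E}_{Q^*}\Big[\delta\log dQ^* - \delta\sum_{i \in I}\lambda_i\log dP_i\Big] = \delta c,$$

so that $\sum_{i \in I}C^*_i = 0$, in accordance with Definition 1.1.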
The securities that agents obtain at the Arrow-Debreu equilibrium described in (1.4) provide higher payoff on events where their individual subjective probabilities are higher than the "geometric mean" probability of (1.3). In other words, discrepancies in beliefs result in allocations where agents receive higher payoff on their corresponding relatively more likely events. Note also that the securities traded at Arrow-Debreu equilibrium have an interesting decomposition. Since $U_i(\delta_i\log(dP_i/dQ^*)) = 0$, agent $i$ is indifferent between no trading and the first "random" part $\delta_i\log(dP_i/dQ^*)$ of the security $C^*_i$. The second "cash" part $\delta_i H(Q^* \,|\, P_i)$ of $C^*_i$ is always nonnegative, and represents the monetary gain of agent $i$ resulting from the Arrow-Debreu transaction. After this transaction, the position of agent $i$ has certainty equivalent

(1.5) $$u^*_i := U_i(C^*_i) = \delta_i H(Q^* \,|\, P_i), \quad \forall i \in I.$$

The aggregate agents' monetary value resulting from the Arrow-Debreu transaction equals

(1.6) $$u^* := \sum_{i \in I}u^*_i = \sum_{i \in I}\delta_i H(Q^* \,|\, P_i).$$

###### Remark 1.3.

In the setting and notation of §1.3, let $(E_i)_{i \in I}$ be the collection of agents' random endowments. Furthermore, suppose that agents share common subjective beliefs; for concreteness, assume that $\bar{P}_i = \mathbb{P}$, for all $i \in I$. In this case, and setting $E := \sum_{i \in I}E_i$, the equilibrium valuation measure of (1.3) satisfies $\log(dQ^*/d\mathbb{P}) \sim -E/\delta$, and the equilibrium securities of (1.4) are given by $C^*_i \sim \lambda_i E - E_i$, for all $i \in I$. In particular, note the well-known fact that the payoff of each shared security is a linear combination of the agents' random endowments.

###### Remark 1.4.

Since $dP_i/dQ^* = \exp\left(C^*_i/\delta_i - H(Q^* \,|\, P_i)\right)$, it is straightforward to compute

(1.7) $$U_i(C) = u^*_i - \delta_i\log\mathbb{E}_{Q^*}\left[\exp\left(\frac{C^*_i - C}{\delta_i}\right)\right].$$

In particular, an application of Jensen's inequality gives $U_i(C) \leq u^*_i$ for $C \in L^1(Q^*)$ with $\mathbb{E}_{Q^*}[C] = 0$, with equality if and only if $C = C^*_i$. The last inequality shows that $C^*_i$ is indeed the optimally-designed security for agent $i$ under the valuation measure $Q^*$. Furthermore, for any collection $(C_i)_{i \in I}$ with $C_i \in L^1(Q^*)$ and $\mathbb{E}_{Q^*}[C_i] = 0$ for all $i \in I$, it follows that $\sum_{i \in I}U_i(C_i) \leq u^*$. A standard argument using the monotone convergence theorem extends the previous inequality to

$$\sum_{i \in I}U_i(C_i) \leq \sum_{i \in I}U_i(C^*_i), \quad \forall (C_i)_{i \in I} \in (L^0)^I \text{ with } \sum_{i \in I}C_i = 0,$$

with equality if and only if $C_i \sim C^*_i$ for all $i \in I$. Therefore, $(C^*_i)_{i \in I}$ is a maximiser of the functional $(C_i)_{i \in I} \mapsto \sum_{i \in I}U_i(C_i)$ over all $(C_i)_{i \in I} \in (L^0)^I$ with $\sum_{i \in I}C_i = 0$. In fact, the collection of all such maximisers is $(C^*_i + z_i)_{i \in I}$, where $(z_i)_{i \in I} \in \mathbb{R}^I$ is such that $\sum_{i \in I}z_i = 0$. It can be shown that all Pareto optimal securities are exactly of this form; see e.g., [JST08, Theorem 3.1] for a more general result. Because of this Pareto optimality, the collection $(Q^*, (C^*_i)_{i \in I})$ usually comes under the appellation of (welfare) optimal securities and valuation measure, respectively. Of course, not every Pareto optimal allocation $(C^*_i + z_i)_{i \in I}$, where $(z_i)_{i \in I} \in \mathbb{R}^I$ is such that $\sum_{i \in I}z_i = 0$, is economically reasonable. A minimal "fairness" requirement that has to be imposed is that the position of each agent after the transaction is at least as good as the initial state. Since utility comes only at the terminal time, we obtain the requirement $U_i(C^*_i + z_i) \geq 0$, for all $i \in I$. While there may be many choices satisfying the latter requirement in general, the choice $z_i = 0$, $i \in I$, of Theorem 1.2 has the cleanest economic interpretation in terms of complete financial market equilibrium.

###### Remark 1.5.

If we ignore potential transaction costs, the cases where an agent has no motive to enter a risk-sharing transaction are extremely rare. Indeed, agent $i \in I$ will not take part in the Arrow-Debreu transaction if and only if $C^*_i = 0$, which happens when $P_i = Q^*$. In particular, agents will already be in Arrow-Debreu equilibrium and no transaction will take place if and only if they all share the same subjective beliefs.

## 2. Agents' Best Probability Response
### 2.1. Strategic behaviour in risk sharing

In the Arrow-Debreu setting, the resulting equilibrium is based on the assumption that agents do not apply any kind of strategic behaviour. However, in the majority of practical risk-sharing situations, the modelling assumption of absence of agents' strategic behaviour is unreasonable, resulting, amongst other things, in overestimation of market efficiency. When securities are negotiated among agents, their design and valuation will depend not only on the agents' existing risky portfolios, but also on the beliefs about future outcomes that they will report for sharing. In general, agents have an incentive to report subjective beliefs that may differ from their true views about future uncertainty; in fact, these will also depend on the subjective beliefs reported by the other parties.

As discussed in §1.6, for a given set of agents' subjective beliefs, the optimal sharing rules are governed by the mechanism resulting in Arrow-Debreu equilibrium, as these are the rules that efficiently allocate discrepancies of risks and beliefs among agents. It is then reasonable to assume that, in the absence of a social planner, agents adopt this sharing mechanism for any collection of subjective probabilities they choose to report—see also the related discussion in the introductory section. More precisely, in accordance with (1.3) and (1.4), for reported probabilities $(R_i)_{i \in I} \in \mathcal{P}^I$ the agreed-upon valuation measure $Q$ is such that $\log dQ \sim \sum_{i \in I}\lambda_i\log dR_i$, and the collection of securities that agents will trade are $C_i = \delta_i\log(dR_i/dQ) + \delta_i H(Q \,|\, R_i)$, $i \in I$.

Given the sharing rules consistent with Arrow-Debreu equilibrium, agents respond to the subjective beliefs that other agents have reported, with the goal of maximising their individual utility. In this way, a game is formed, with the probability family $\mathcal{P}$ being the agents' set of strategic choices. The subject of the present Section 2 is to analyse the behaviour of individual agents, establish their best response problem and show its well-posedness. The definition and analysis of the Nash risk-sharing equilibrium is taken up in Section 3.

### 2.2. Best response

We shall now describe how agents respond to the reported subjective probability assessments of their counterparties. For the purposes of §2.2, we fix an agent $i \in I$ and a collection $R_{-i} := (R_j)_{j \in I \setminus \{i\}}$ of reported probabilities of the remaining agents, and seek the subjective probability that is going to be submitted by agent $i$. According to the rules described in §2.1, a reported probability $R_i \in \mathcal{P}$ from agent $i$ will lead to entering a long position on the security with payoff $\delta_i\log(dR_i/dQ(R_{-i},R_i)) + \delta_i H(Q(R_{-i},R_i) \,|\, R_i)$, where $Q(R_{-i},R_i)$ is such that

$$\log dQ(R_{-i},R_i) \sim \lambda_i\log dR_i + \sum_{j \in I \setminus \{i\}}\lambda_j\log dR_j.$$

By reporting subjective beliefs $R_i$, agent $i$ also indirectly affects the geometric-mean valuation probability $Q(R_{-i},R_i)$, resulting in a highly non-linear overall effect on the security. With the above understanding, and given $R_{-i}$, the response function of agent $i$ is defined to be

$$\mathcal{P} \ni R_i \mapsto V_i(R_i; R_{-i}) := U_i\Big(\delta_i\log\big(dR_i/dQ(R_{-i},R_i)\big) + \delta_i H\big(Q(R_{-i},R_i) \,|\, R_i\big)\Big),$$

where the fact that $H(Q(R_{-i},R_i) \,|\, R_i) < \infty$ follows from the discussion of §1.4. The problem of agent $i$ is to report the subjective probability that maximises the certainty equivalent of the resulting position after the transaction, i.e., to identify $R^r_i \in \mathcal{P}$ such that

(2.1) $$V_i(R^r_i; R_{-i}) = \sup_{R_i \in \mathcal{P}}V_i(R_i; R_{-i}).$$

Any $R^r_i \in \mathcal{P}$ satisfying (2.1) shall be called a best probability response. In contrast to the majority of the related literature, the agent's strategic set of choices in our model may be infinite-dimensional.
This generalisation is important from a methodological viewpoint; for example, in the setting of §1.3 it allows for random endowments with infinite support, such as ones with a Gaussian distribution or arbitrarily fat tails, a substantial feature in the modelling of risk.

###### Remark 2.1.

The best response problem (2.1) imposes no constraints on the shape of the agent's reported subjective probability, as long as it belongs to $\mathcal{P}$. In principle, it is possible for agents to report subjective views that are considerably far from their actual ones. Such severe departures may be deemed unrealistic and are undesirable from a modelling point of view. However, as will be argued in §3.3.2, extreme responses are endogenously excluded in our set-up.

We shall show in the sequel (Theorem 2.7) that best responses in (2.1) exist and are unique. We start with a result which gives necessary and sufficient conditions for best probability response.

###### Proposition 2.2.

Fix $i \in I$ and $R_{-i} \in \mathcal{P}^{I \setminus \{i\}}$. Then, $R^r_i \in \mathcal{P}$ is a best probability response for agent $i$ given $R_{-i}$ if and only if the random variable $C^r_i := \delta_i\log\big(dR^r_i/dQ(R_{-i},R^r_i)\big) + \delta_i H\big(Q(R_{-i},R^r_i) \,|\, R^r_i\big)$ is such that $C^r_i > -\delta_{-i}$ and

(2.2) $$\frac{C^r_i}{\delta_i} + \log\left(1 + \frac{C^r_i}{\delta_{-i}}\right) \sim \log\left(\frac{dP_i}{dQ(R_{-i},R^r_i)}\right).$$

The proof of Proposition 2.2 is given in §A.2. The necessity of the stated conditions for best response follows from applying first-order optimality conditions. Establishing the sufficiency of the stated conditions is certainly non-trivial, due to the fact that it is far from clear (and, in fact, not known to us) whether the response function is concave.

###### Remark 2.3.

In the context of Proposition 2.2, rewriting (2.2) we obtain that

(2.3) $$\log\left(\frac{dQ(R_{-i},R^r_i)}{dP_i}\right) \sim -\frac{C^r_i}{\delta_i} - \log\left(1 + \frac{C^r_i}{\delta_{-i}}\right).$$

Using also the fact that $\log\big(dR^r_i/dQ(R_{-i},R^r_i)\big) \sim C^r_i/\delta_i$, it follows that

(2.4) $$\log\left(\frac{dR^r_i}{dP_i}\right) \sim -\log\left(1 + \frac{C^r_i}{\delta_{-i}}\right).$$

Hence, $R^r_i = P_i$ holds if and only if $C^r_i$ is constant, which holds if and only if $C^r_i = 0$. (Note that a constant $C^r_i$ implies $C^r_i = 0$, since the expectation of $C^r_i$ under $Q(R_{-i},R^r_i)$ equals zero.) In words, the best probability response and actual subjective probabilities of an agent agree if and only if the agent has no incentive to participate in the risk-sharing transaction, given the reported subjective beliefs of the other agents. Hence, in any non-trivial case, agents' strategic behaviour implies a departure from reporting their true beliefs. Plugging (2.4) back into (2.3), and using also (2.2), we obtain

(2.5) $$\log\left(\frac{dQ(R_{-i},R^r_i)}{dP_i}\right) \sim -\frac{C^r_i}{\delta_i} - \log\left(1 + \frac{C^r_i}{\delta_{-i}}\right) \sim -\lambda_i\log\left(1 + \frac{C^r_i}{\delta_{-i}}\right) + \sum_{j \in I \setminus \{i\}}\lambda_j\log\left(\frac{dR_j}{dP_i}\right),$$

providing directly the valuation measure in terms of the security $C^r_i$.

###### Remark 2.4.

A message from (2.4) is that, according to their best response process, agents will report beliefs that understate (resp., overstate) the probability of their payoff being high (resp., low) relative to their true beliefs. Such behaviour is clearly driven by a desired post-transaction utility increase. More importantly, and in sharp contrast to the securities formed in Arrow-Debreu equilibrium, the security that agent $i$ wishes to enter, after taking into account the aggregate reported beliefs of the rest and declaring the subjective probability $R^r_i$, has limited liability, as it is bounded from below by the constant $-\delta_{-i}$.

###### Remark 2.5.

Additional insight regarding best probability responses may be obtained by resorting to the discussion of §1.3, where $P_i$ incorporates the random endowment $E_i$ of agent $i$, in the sense that $\log(dP_i/d\bar{P}_i) \sim -E_i/\delta_i$, where $\bar{P}_i$ denotes the subjective probability of agent $i$. It follows from (2.4) that $\log(dR^r_i/d\bar{P}_i) \sim -E_i/\delta_i - \log(1 + C^r_i/\delta_{-i})$. It then becomes apparent that, when agents share their risky endowment, they tend to put more weight on the probability of the downside of their risky exposure, rather than the upside. For an illustrative situation, see Example 2.8 later on.

###### Remark 2.6.
In the course of the proof of Proposition 2.2, the constant in the equivalence (2.2) is explicitly computed; see (A.3). This constant has a particularly nice economic interpretation in the case of two agents. To wit, let $I = \{0,1\}$, and suppose that $R_1 \in \mathcal{P}$ is given. Then, from the vantage point of agent 0, (2.2) becomes

$$\frac{C^r_0}{\delta_0} + \lambda_1\log\left(1 + \frac{C^r_0}{\delta_1}\right) = \zeta_0 - \lambda_1\log\left(\frac{dR_1}{dP_0}\right),$$

where the constant $\zeta_0 \in \mathbb{R}$ is such that

$$\zeta_0 = -\log\mathbb{E}_{P_0}\left[\exp\left(-\frac{C^r_0}{\delta_0}\right)\right] + \log\mathbb{E}_{R_1}\left[\exp\left(\frac{C^r_0}{\delta_1}\right)\right] = \frac{U_0(C^r_0)}{\delta_0} - \frac{U_1(-C^r_0; R_1)}{\delta_1},$$

where $U_1(\cdot\,; R_1)$ denotes the utility functional of a "fictitious" agent with representative pair $(\delta_1, R_1)$. In words, $\zeta_0$ is the post-transaction difference, denominated in units of risk tolerance, between the utility of agent 0 and the utility of agent 1 (who obtains the security $-C^r_0$), provided that the latter utility is measured with respect to the reported, as opposed to subjective, beliefs of agent 1. In particular, when agent 1 does not behave strategically, in which case $R_1 = P_1$, it holds that $\zeta_0 = U_0(C^r_0)/\delta_0 - U_1(-C^r_0)/\delta_1$.

Proposition 2.2 sets a roadmap for proving existence and uniqueness in the best response problem via a one-dimensional parametrisation. Indeed, in accordance with (2.2), in order to find a best response we consider for each $\zeta \in \mathbb{R}$ the unique random variable $C^\zeta > -\delta_{-i}$ that satisfies the equation

$$\frac{C^\zeta}{\delta_i} + \lambda_{-i}\log\left(1 + \frac{C^\zeta}{\delta_{-i}}\right) = \zeta - \sum_{j \in I \setminus \{i\}}\lambda_j\log\left(\frac{dR_j}{dP_i}\right);$$

then, upon defining a candidate valuation measure $Q^\zeta$ in accordance with (2.5), we seek $\zeta \in \mathbb{R}$ such that $C^\zeta \in L^1(Q^\zeta)$ and $\mathbb{E}_{Q^\zeta}[C^\zeta] = 0$ hold. It turns out that there is a unique such choice; once found, one simply defines $R^r_i$ via (2.4), to obtain the unique best response of agent $i$ given $R_{-i}$. The technical details of the proof of Theorem 2.7 below are given in §A.3.

###### Theorem 2.7.

For $i \in I$ and $R_{-i} \in \mathcal{P}^{I \setminus \{i\}}$, there exists a unique $R^r_i \in \mathcal{P}$ such that (2.1) holds.

### 2.3. The value of strategic behaviour

The increase in agents' utility that is caused by following the best probability response procedure can be regarded as a measure of the value of the strategic behaviour induced by problem (2.1). Consider for example the case where only a single agent (agent 0, say) applies the best probability response strategy and the rest of the agents report their true beliefs, i.e., $R_i = P_i$ holds for $i \in I \setminus \{0\}$. As mentioned in the introductory section, this is a potential model of a transaction where only agent 0 possesses meaningful market power. Based on the results of §2.2, we may calculate the gains, relative to the Arrow-Debreu transaction, that agent 0 obtains by incorporating such strategic behaviour (which, among other things, implies limited liability of the security the agent takes a long position in). The main insights are illustrated in the following two-agent example.

###### Example 2.8.

Suppose that $I = \{0,1\}$ and $\delta_0 = 1 = \delta_1$. We shall use the set-up of §1.3, where for simplicity it is assumed that agents have the same subjective probability measure. The agents are exposed to random endowments $E_0$ and $E_1$ that (under the common probability measure) have Gaussian laws with mean zero and common variance $\sigma^2$, while $\rho$ denotes the correlation coefficient of $E_0$ and $E_1$. In this case, it is straightforward to check that $C^*_0 \sim (E_1 - E_0)/2$; therefore, after the Arrow-Debreu transaction, the position of agent 0 is $E_0 + C^*_0 \sim (E_0 + E_1)/2$. On the other hand, if agent 1 reports true beliefs, from (2.2) the security $C^r_0$ corresponding to the best probability response of agent 0 should satisfy

$$C^r_0 + \frac{1}{2}\log\left(1 + C^r_0\right) = \zeta_0 + \frac{1}{2}\left(E_1 - E_0\right)$$

for an appropriate $\zeta_0 \in \mathbb{R}$ that is coupled with $C^r_0$. For given values of $\sigma$ and $\rho$, straightforward Monte Carlo simulation allows the numerical approximation of the probability density functions (pdf) of $E_0$ and $E_1$ under the best response probability $R^r_0$, illustrated in Figure 1. As is apparent, the best probability response drives agent 0 to overstate the downside risk of $E_0$ and understate the downside risk of $E_1$.
The effect of following such strategic behaviour is depicted in Figure 2, where the probability density functions of the position of agent 0 are compared under (i) no trading; (ii) the Arrow-Debreu transaction; and (iii) the transaction following the application of best response strategic behaviour. As compared to the Arrow-Debreu position, the lower bound of the security guarantees a heavier right tail of the agent's position after the best response transaction.

## 3. Nash Risk-Sharing Equilibrium

We shall now consider the situation where every single agent follows the strategic behaviour indicated by the best response problem of Section 2. As previously mentioned, sharing securities are designed following the sharing rules determined by Theorem 1.2 for any collection of reported subjective views. With the well-posedness of the best response problem established, we are now ready to examine whether the game among agents has an equilibrium point.

In view of the analysis of Section 2, individual agents have motive to declare subjective beliefs different than the actual ones. (In particular, in the setting of §1.3, agents will tend to overstate the probability of their random endowments taking low values.) Each agent will act according to the best response mechanism as in (2.1), given what other agents have reported as subjective beliefs. In a sense, the best response mechanism indicates a negotiation scheme, the fixed point of which (if such exists) will produce the Nash equilibrium valuation measure and risk-sharing securities.

Let us emphasise that the actual subjective beliefs of individual players are not necessarily assumed to be private knowledge; rather, what is assumed here is that agents have agreed upon the rules that associate any reported subjective beliefs to securities and prices, even if the reported beliefs are not the actual ones. In fact, even if subjective beliefs constitute private knowledge initially, certain information about them will necessarily be revealed in the negotiation process which leads to Nash equilibrium. There are two relevant points to consider here. Firstly, it is unreasonable for participants to attempt to invalidate the negotiation process based on the claim that other parties do not report their true beliefs, as the latter is, after all, a subjective matter. This particular point is reinforced by the a posteriori fact that reported subjective beliefs in Nash equilibrium do not deviate far from the true ones, as was pointed out in Remark 2.1 and is further elaborated in §3.3.2. Secondly, it is exactly the limited number of participants, rather than private or asymmetric information, that gives rise to strategic behaviour: agents recognise their ability to influence the market, since securities and valuation become outputs of the collectively reported beliefs.

Even under the appreciation that other agents will not report true beliefs and that the negotiation will not produce an Arrow-Debreu equilibrium, agents will still want to reach a Nash equilibrium, as they will improve their initial position. In fact, transactions with a limited number of participants typically equilibrate far from their competitive equivalents, as has also been highlighted in other models of thin financial markets with symmetric information structure, like the ones in [CRW12] and [RW15]—see also the related discussion in the introductory section.
### 3.1. Revealed subjective beliefs

Considering the model from a more pragmatic point of view, one may argue that agents do not actually report subjective beliefs, but rather agree on a valuation measure and zero-price sharing securities that clear the market. However, there is a one-to-one correspondence between reporting subjective beliefs and proposing a valuation measure and securities, as described below.

From the discussion of §2.1, a collection $(R_i)_{i \in I} \in \mathcal{P}^I$ of subjective probabilities gives rise to a valuation measure $Q \in \mathcal{P}$ such that $\log dQ \sim \sum_{i \in I}\lambda_i\log dR_i$, and a collection of securities $(C_i)_{i \in I}$ such that $C_i = \delta_i\log(dR_i/dQ) + \delta_i H(Q \,|\, R_i)$, for all $i \in I$. Of course, $\sum_{i \in I}C_i = 0$, and $\mathbb{E}_Q[C_i] = 0$ holds for all $i \in I$. A further technical observation is that $\exp(C_i/\delta_i) \in L^1(Q)$ holds for all $i \in I$, which is then a necessary condition that an arbitrary collection of market-clearing securities must satisfy with respect to an arbitrary valuation probability in order to be consistent with the aforementioned risk-sharing mechanism.

The previous observations lead to a definition: for $Q \in \mathcal{P}$, we define the class of securities that clear the market and are consistent with the valuation measure $Q$ via

$$\mathcal{C}_Q := \left\{(C_i)_{i \in I} \in (L^0)^I \;\middle|\; \sum_{i \in I}C_i = 0, \text{ and } \exp(C_i/\delta_i) \in L^1(Q),\ \mathbb{E}_Q[C_i] = 0,\ \forall i \in I\right\}.$$

Note that all expectations of $C_i$ under $Q$ in the definition of $\mathcal{C}_Q$ above are well defined. Indeed, the fact that $\exp(C_i/\delta_i) \in L^1(Q)$ in the definition of $\mathcal{C}_Q$ implies that $\mathbb{E}_Q[C_i^+] < \infty$ for all $i \in I$. From $\sum_{i \in I}C_i = 0$, we obtain $C_i^- = \big(\sum_{j \in I \setminus \{i\}}C_j\big)^+ \leq \sum_{j \in I \setminus \{i\}}C_j^+$ and hence $\mathbb{E}_Q[C_i^-] < \infty$ for all $i \in I$.

Starting from a given valuation measure $Q \in \mathcal{P}$ and securities $(C_i)_{i \in I} \in \mathcal{C}_Q$, one may define a collection $(R_i)_{i \in I} \in \mathcal{P}^I$ via $\log(dR_i/dQ) \sim C_i/\delta_i$ for $i \in I$, and note that this is the unique collection in $\mathcal{P}^I$ that results in the valuation probability $Q$ and securities $(C_i)_{i \in I}$. In this way, the probabilities $(R_i)_{i \in I}$ can be considered as revealed by the valuation measure and securities. Hence, agents proposing risk-sharing securities and a valuation measure is equivalent to them reporting probability beliefs in the transaction. This viewpoint justifies and underlies Definition 3.1 that follows: the objects of Nash equilibrium are the valuation measure and designed securities, in consistency with the definition of Arrow-Debreu equilibrium.

### 3.2. Nash equilibrium and its characterisation

Following classic literature, we give the formal definition of a Nash risk-sharing equilibrium.

###### Definition 3.1.

The collection $(Q^\diamond, (C^\diamond_i)_{i \in I})$ will be called a Nash equilibrium if $(C^\diamond_i)_{i \in I} \in \mathcal{C}_{Q^\diamond}$ and, with $(R^\diamond_i)_{i \in I}$ denoting the corresponding revealed subjective beliefs, and $R^\diamond_{-i} := (R^\diamond_j)_{j \in I \setminus \{i\}}$ for $i \in I$, it holds that $V_i(R^\diamond_i; R^\diamond_{-i}) = \sup_{R_i \in \mathcal{P}}V_i(R_i; R^\diamond_{-i})$, for all $i \in I$.

A use of Proposition 2.2 results in the characterisation Theorem 3.2 below, the proof of which is given in §A.4. For this, we need to introduce the $(|I|-1)$-dimensional Euclidean space

(3.1) $$\Delta^I = \Big\{z \in \mathbb{R}^I \;\Big|\; \sum_{i \in I}z_i = 0\Big\}.$$

###### Theorem 3.2.

The collection $(Q^\diamond, (C^\diamond_i)_{i \in I})$ is a Nash equilibrium if and only if the following three conditions hold:

1. $C^\diamond_i > -\delta_{-i}$ for all $i \in I$, and there exists $z^\diamond \in \Delta^I$ such that

(3.2) $$C^\diamond_i + \delta_i\log\left(1 + \frac{C^\diamond_i}{\delta_{-i}}\right) = z^\diamond_i + C^*_i + \delta_i\sum_{j \in I}\lambda_j\log\left(1 + \frac{C^\diamond_j}{\delta_{-j}}\right), \quad \forall i \in I;$$

2. with $Q^*$ as in (1.3), i.e., such that $\log dQ^* \sim \sum_{i \in I}\lambda_i\log dP_i$, it holds that

(3.3) $$\log\left(\frac{dQ^\diamond}{dQ^*}\right) \sim -\sum_{j \in I}\lambda_j\log\left(1 + \frac{C^\diamond_j}{\delta_{-j}}\right);$$

3. $\mathbb{E}_{Q^\diamond}[C^\diamond_i] = 0$ holds for all $i \in I$.

###### Remark 3.3.

Suppose that the agents' preferences and risk exposures are such that no trade occurs in Arrow-Debreu equilibrium, which happens when all $P_i$ are the same (and equal to, say, $P$) for all $i \in I$—see Remark 1.5. In this case, $Q^* = P$ and $C^*_i = 0$ for all $i \in I$. It is then straightforward from Theorem 3.2 to see that a Nash equilibrium is also given by $Q^\diamond = P$ and $C^\diamond_i = 0$ (as well as $z^\diamond_i = 0$) for all $i \in I$. In fact, as will be argued in §3.3.4, this is the unique Nash equilibrium in this case. Conversely, suppose that a Nash equilibrium is given by $Q^\diamond$ and $C^\diamond_i = 0$ for all $i \in I$. Then, (3.3) shows that $Q^\diamond = Q^*$, and (3.2) implies that $C^*_i = -z^\diamond_i$, which means that $C^*_i = 0$ for all $i \in I$.
In words, the Nash risk-sharing equilibrium involves no risk transfer if and only if the agents are already in a Pareto optimal situation.

In the important case of two acting agents, since $C^\diamond_1 = -C^\diamond_0$, applying simple algebra in (3.2) (using $\delta_0\lambda_1 = \delta_0\delta_1/\delta$) we obtain that a Nash equilibrium risk-sharing security is such that $-\delta_1 < C^\diamond_0 < \delta_0$ and satisfies

(3.4) $$C^\diamond_0 + \frac{\delta_0\delta_1}{\delta}\log\left(\frac{1 + C^\diamond_0/\delta_1}{1 - C^\diamond_0/\delta_0}\right) = z^\diamond_0 + C^*_0.$$

In Theorem 3.7, existence of a unique Nash equilibrium for the two-agent case will be shown. Furthermore, a one-dimensional root-finding algorithm presented in §3.4 allows one to calculate the Nash equilibrium, and further to calculate and compare the final position of each individual agent. Consider for instance Example 2.8 and the symmetric situation illustrated in Figure 2, where the limited liability of the security implies less variability and a flatter right tail of the agent's position. Under the Nash equilibrium, as will be argued in §3.3.1, the security is further bounded from above, which implies that the probability density function of the agent's final position is shifted to the left. This fact is illustrated in Figure 3.

Despite the above symmetric case, it is not necessarily true that all agents suffer a loss of utility at the Nash equilibrium risk sharing. As we will see in Section 4, for agents with sufficiently large risk tolerance the negotiation game results in higher utility compared to the one gained through the Arrow-Debreu equilibrium.

### 3.3. Within equilibrium

According to Theorem 3.7, Nash equilibria in the sense of Definition 3.1 always exist. Throughout §3.3, we assume that $(Q^\diamond, (C^\diamond_i)_{i \in I})$ is a Nash equilibrium and provide a discussion of certain aspects of it, based on the characterisation of Theorem 3.2.

#### 3.3.1. Endogenous bounds on traded securities

As was pointed out in Remark 2.4, the security that each agent enters resulting from the best response procedure is bounded from below. When all participating agents follow the same strategic behaviour, Nash equilibrium securities are bounded from above as well. Indeed, since the market clears, the security that an agent takes a long position in is shorted by the rest of the agents, who similarly intend to bound their liabilities. Mathematically, since $C^\diamond_j > -\delta_{-j}$ is valid for all $j \in I$ and $\sum_{j \in I}C^\diamond_j = 0$ holds, it also follows that $C^\diamond_i = -\sum_{j \in I \setminus \{i\}}C^\diamond_j < \sum_{j \in I \setminus \{i\}}\delta_{-j}$, for all $i \in I$. Therefore, a consequence of the agents' strategic behaviour is that Nash risk-sharing securities are endogenously bounded. This fact is in sharp contrast with the Arrow-Debreu equilibrium of (1.4), where the risk transfer may involve securities with unbounded payoffs. An immediate consequence of the bounds on the securities is that the potential gain from the Nash risk-sharing transaction is also endogenously bounded. Naturally, the resulting endogenous bounds are an indication of how the game among agents restricts the risk-sharing transaction, which in turn may be a source of large loss of efficiency. The next example is an illustration of such inefficiency in a simple symmetric setting. Later on, in Figure 3, the loss of utility in another two-agent example is visualised.

###### Example 3.4.

Let $X$ have the standard (zero mean, unit standard deviation) Gaussian law under the baseline probability $\mathbb{P}$, and let $\delta_0 = 1 = \delta_1$. For $b \in \mathbb{R}$, define $P^b \in \mathcal{P}$ via $\log(dP^b/d\mathbb{P}) \sim bX$; under $P^b$, $X$ has the Gaussian law with mean $b$ and unit standard deviation. Fix $\beta > 0$, and set $P_0 := P^{\beta}$ and $P_1 := P^{-\beta}$. In this case, it is straightforward to compute that $Q^* = \mathbb{P}$ and $C^*_0 = \beta X = -C^*_1$. It also follows that $u^*_0 = \beta^2/2 = u^*_1$. If $\beta$ is large, the discrepancy between the agents' beliefs results in large monetary profits to both after the Arrow-Debreu transaction.
On the other hand, as will be established in Theorem 3.7, in the case of two agents there exists a unique Nash equilibrium. In fact, in this symmetric case we have that $z_0^\diamond = 0$, and it can be checked (see also (3.4)) that

$$C_0^\diamond + \frac{1}{2}\log\left(\frac{1 + C_0^\diamond}{1 - C_0^\diamond}\right) = \beta X.$$

The loss of efficiency caused by the game becomes greater with increasing values of $\beta$. In fact, if $\beta$ converges to infinity, it can be shown that $C_0^\diamond$ converges to $\mathrm{sgn}(X)$; in particular, the Nash security stays bounded while the Arrow-Debreu security $C_0^* = \beta X$ grows without bound, which demonstrates the tremendous inefficiency of the Nash equilibrium transaction as compared to the Arrow-Debreu one. Note that the endogenous bounds depend only on the risk-tolerance profile of the agents, and not on their actual beliefs (or risk exposures). In addition, these bounds become stricter in games where quite risk-averse agents are playing, as they become increasingly hesitant towards undertaking risk.
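The symmetric equation above is strictly increasing in $C_0^\diamond$ for each realised value of $\beta X$, so the kind of one-dimensional root finding mentioned for §3.4 can be illustrated directly. Here is a minimal Python sketch (not the paper's actual algorithm; the function name and sample values of $\beta X$ are made up for illustration) that inverts the equation pointwise with a bracketing solver:

```python
import numpy as np
from scipy.optimize import brentq

def nash_security(t):
    """Solve c + 0.5*log((1+c)/(1-c)) = t for c in (-1, 1).

    The left-hand side is strictly increasing and maps (-1, 1) onto the
    whole real line, so a bracketing root-finder always succeeds.
    """
    f = lambda c: c + 0.5 * np.log((1.0 + c) / (1.0 - c)) - t
    return brentq(f, -1.0 + 1e-12, 1.0 - 1e-12)

# pointwise values of the Nash security for a few realisations of beta*X
for t in (-3.0, -0.5, 0.5, 3.0):
    print(t, nash_security(t))
```

Note how the output stays inside $(-1, 1)$ however large $t$ is, which is exactly the endogenous boundedness discussed in §3.3.1.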
nnls {RcppML} R Documentation

## Non-negative least squares

### Description

Solves the equation a %*% x = b for x subject to x >= 0.

### Usage

nnls(a, b, cd_maxit = 100L, cd_tol = 1e-08, fast_nnls = FALSE, L1 = 0)

### Arguments

a: symmetric positive definite matrix giving coefficients of the linear system

b: matrix giving the right-hand side(s) of the linear system

cd_maxit: maximum number of coordinate descent iterations

cd_tol: stopping criterion, the difference in x across consecutive solutions over the sum of x

fast_nnls: initialize coordinate descent with a FAST NNLS approximation

L1: L1/LASSO penalty to be subtracted from b

### Details

This is a very fast implementation of non-negative least squares (NNLS), suitable for very small or very large systems.

Algorithm. Sequential coordinate descent (CD) is at the core of this implementation, and requires an initialization of x. There are two supported methods for initializing x:

1. Zero-filled initialization when fast_nnls = FALSE and cd_maxit > 0. This is generally very efficient for well-conditioned and small systems.
2. Approximation with FAST when fast_nnls = TRUE. Forward active set tuning (FAST), described below, finds an approximate active set using unconstrained least squares solutions found by Cholesky decomposition and substitution. To use only the FAST approximation, set cd_maxit = 0.

a must be symmetric positive definite if FAST NNLS is used, but this is not checked.

See our BioRXiv manuscript (references) for benchmarking against Lawson-Hanson NNLS and for a more technical introduction to these methods.

Coordinate Descent NNLS. Least squares by sequential coordinate descent is used to ensure the solution returned is exact. This algorithm was introduced by Franc et al. (2005), and our implementation is a vectorized and optimized rendition of that found in the NNLM R package by Xihui Lin (2020).

FAST NNLS. Forward active set tuning (FAST) is an exact or near-exact NNLS approximation initialized by an unconstrained least squares solution. Negative values in this unconstrained solution are set to zero (the "active set"), and all other values are added to a "feasible set". An unconstrained least squares solution is then solved for the "feasible set"; any negative values in the resulting solution are set to zero, and the process is repeated until the feasible-set solution is strictly positive. The FAST algorithm has a definite convergence guarantee because the feasible set will either converge or become smaller with each iteration. The result is generally exact or nearly exact for small well-conditioned systems (< 50 variables) within 2 iterations, and thus sets up coordinate descent for very rapid convergence. The FAST method is similar to the first phase of the so-called "TNT-NN" algorithm (Myre et al., 2017), but the latter half of that method relies heavily on heuristics to refine the approximate active set, which we avoid by using coordinate descent instead.

### Value

vector or matrix giving solution for x

### Author(s)

Zach DeBruine

### References

DeBruine, ZJ, Melcher, K, and Triche, TJ. (2021). "High-performance non-negative matrix factorization for large single-cell data." BioRXiv.

Franc, VC, Hlavac, VC, and Navara, M. (2005). "Sequential Coordinate-Wise Algorithm for the Non-negative Least Squares Problem." Proc. Int'l Conf. Computer Analysis of Images and Patterns.

Lin, X, and Boutros, PC (2020). "Optimization and expansion of non-negative matrix factorization." BMC Bioinformatics.

Myre, JM, Frahm, E, Lilja DJ, and Saar, MO.
(2017) "TNT-NN: A Fast Active Set Method for Solving Large Non-Negative Least Squares Problems". Proc. Computer Science. nmf, project ### Examples ## Not run: # compare solution to base::solve for a random system X <- matrix(runif(100), 10, 10) a <- crossprod(X) b <- crossprod(X, runif(10)) unconstrained_soln <- solve(a, b) nonneg_soln <- nnls(a, b) unconstrained_err <- mean((a %*% unconstrained_soln - b)^2) nonnegative_err <- mean((a %*% nonneg_soln - b)^2) unconstrained_err nonnegative_err all.equal(solve(a, b), nnls(a, b)) # example adapted from multiway::fnnls example 1 X <- matrix(1:100,50,2) y <- matrix(101:150,50,1) beta <- solve(crossprod(X)) %*% crossprod(X, y) beta beta <- nnls(crossprod(X), crossprod(X, y)) beta ## End(Not run) [Package RcppML version 0.3.7 Index]
# What is the surface area produced by rotating f(x)=x^2lnx, x in [0,3] around the x-axis?

Oct 20, 2016

$\approx 311.4$

#### Explanation:

If we consider a small strip of width $dx$, it will have radius $y(x)$ as it is revolved about the x axis, and thus circumference $2 \pi y$. The arc length $ds$ of the tip of the strip $dx$ is:

$ds = \sqrt{1 + (y')^2}\, dx$

With $y' = x(1 + 2\ln x)$, the surface area of the element is

$dS = 2\pi y\, ds = 2\pi x^2 \ln x \sqrt{1 + (x(1 + 2\ln x))^2}\, dx$

For $x \in [1, 3]$, the surface area $S$ is therefore:

$S = 2\pi \int_1^3 x^2 \ln x \sqrt{1 + (x(1 + 2\ln x))^2}\, dx$

However, because $y < 0$ for $x \in (0, 1)$, which would generate a negative radius, we must attach a minus sign to the integral over that interval. The total surface area is therefore

$S = 2\pi \left( \int_1^3 x^2 \ln x \sqrt{1 + (x(1 + 2\ln x))^2}\, dx - \int_0^1 x^2 \ln x \sqrt{1 + (x(1 + 2\ln x))^2}\, dx \right)$
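To sanity-check the quoted value, the two integrals can be evaluated numerically; a short Python check (the lower limit is nudged away from 0 only to keep the logarithm well defined at the endpoint):

```python
import numpy as np
from scipy.integrate import quad

def integrand(x):
    yp = x * (1 + 2 * np.log(x))                 # y' for y = x^2 ln x
    return x**2 * np.log(x) * np.sqrt(1 + yp**2)

inner, _ = quad(integrand, 1e-12, 1)             # y < 0 here, negated below
outer, _ = quad(integrand, 1, 3)
print(2 * np.pi * (outer - inner))               # ~311.4
```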
## College Algebra (10th Edition) domain: $\left\{\text{Bob, John, Chuck}\right\}$ range: $\left\{\text{Beth, Dianne, Linda, Marcia}\right\}$ The domain is the set of the first coordinates while the range is the set of second coordinates. Thus, the given relation has: domain: $\left\{\text{Bob, John, Chuck}\right\}$ range: $\left\{\text{Beth, Dianne, Linda, Marcia}\right\}$
# Revision history [back]

There is a solution. Create a class that has a MoveGroup attribute, initialized in the constructor. Moreover, you can implement several service functions in that class which act on the same move_group attribute. This is my post and answer related to how to do it. Here is my working example for a cyton gamma 1500 robot arm.

EDIT: If you really want to share objects across applications, have a look at this (not really related to ROS, but C++).
Paired-end merging and statistical output

1
0
Entering edit mode
4.5 years ago
jomo018 ▴ 620

1. I am looking for a paired-end merging utility similar to FLASH or PEAR that also outputs statistical information, mainly the number of pair mismatches.
2. Is this type of information available from a SAM file after alignment with BOWTIE, BWA or some other aligner?

paired ends alignment overlap
• 1.5k views

0 Entering edit mode
0 Entering edit mode
I am looking for mismatches at base resolution, not read resolution. Pairs can be merged (or aligned concordantly) even if some overlapping bases disagree. I am looking for the number or rate of these mismatching bases.
0 Entering edit mode
Do you mean variant calling?
0 Entering edit mode
You can call it variant calling where one mate declares a different base than the other.

2 Entering edit mode
4.5 years ago

BBMerge can do this, with the "ecco" flag, which rather than combining the reads, just does error-correction via overlap:

bbmerge.sh in=reads.fq ecco mix out=corrected.fq

Total time: 1.890 seconds.

Pairs: 1000000
Joined: 182539 18.254%
Ambiguous: 817461 81.746%
No Solution: 0 0.000%
Too Short: 0 0.000%
Errors Corrected: 6994
Avg Insert: 159.9
Standard Deviation: 21.5
Mode: 187
Insert range: 100 - 191
90th percentile: 186
75th percentile: 178
50th percentile: 164
25th percentile: 145
10th percentile: 128

0 Entering edit mode
In this example reads must be interleaved. @Brian: Is that a requirement?
0 Entering edit mode
I am just reading the manual. They also allow in1 and in2 paired inputs.
0 Entering edit mode
Yep, the syntax can also be:
bbmerge.sh in1=r1.fq in2=r2.fq ecco mix out1=corrected1.fq out2=correct2.fq
...but I normally show the interleaved version of the command for conciseness.
0 Entering edit mode
"Errors Corrected" is the number of pairs corrected rather than the number of bases corrected. Right?
0 Entering edit mode
No, it is the total number of bases corrected.
0 Entering edit mode
Thank you Brian. Can you clarify the tag trimq=xx (as opposed to qtrim...). For example, suppose you have a low quality base in the middle of a read; is it considered an N or something else?
0 Entering edit mode
"qtrim" tells the program which end to trim, while "trimq" specifies the quality threshold. Only ends can be trimmed (and mainly the right end is the important one for trimming with respect to merging). "qtrim=r trimq=15" will trim the bases on the right end such that the region trimmed has an average quality below 15, while the remaining region has an average quality of at least 15. For example, if the last 5 base qualities were "20, 0, 17, 19, 16", then the last 4 bases would be trimmed, as that region has an average quality below 15 (note that 0 is an N), but the 20 would not be trimmed. It's a little hard to calculate by eye because the scores are first transformed to the probability scale (where 16 is roughly a 2.5% chance of error) before being averaged. Low-quality bases are not considered N. Rather, if two reads mismatch at a location and one is higher quality than the other, the base with the higher quality is assumed correct and the resulting quality score is the higher minus the lower.
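The quality-averaging detail at the end is easy to reproduce. A small Python illustration of averaging phred scores on the probability scale (a toy reading of the description above, not BBMerge's actual trimming code):

```python
import math

def phred_to_p(q):
    """Probability of error corresponding to a phred score."""
    return 10 ** (-q / 10)

def avg_quality(quals):
    """Average on the probability scale, converted back to phred."""
    p = sum(phred_to_p(q) for q in quals) / len(quals)
    return -10 * math.log10(p)

print(avg_quality([16]))                  # 16: roughly a 2.5% error chance
print(avg_quality([0, 17, 19, 16]))       # ~5.8, far below a threshold of 15
print(avg_quality([20, 0, 17, 19, 16]))   # ~6.7, the 0 (an N) dominates
```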
# Easy, but seems complicated

1. Oct 16, 2004

### Leaping antalope

Could someone solve these equations for me? It seems complex, but I believe there is an easy way to find a, b, c, and d...

(1) 8a+4b+2c+d=11
(2) 27a+9b+3c+d=44
(3) 64a+16b+4c+d=110
(4) 125a+25b+5c+d=220
(5) 216a+36b+6c+d=385

Find a, b, c, and d. Thanks~

2. Oct 16, 2004

### Tide

At first glance it appears you have too many equations. Other than that, why don't you just try elimination?

3. Oct 16, 2004

### JasonRox

Also, some of them can be divided down into smaller numbers, too. Some are also proportional, and for that reason you can get rid of one equation.

Honestly, take it nice and slow, so you make no mistakes, and you'll get it.

Note: Soon you'll learn about matrices and thank "godmath" for it.

4. Oct 16, 2004

### Tom McCurdy

Keep multiplying the equations by whole numbers to cancel out variables by subtraction, like
2-1=A
3-2=B
4-3=C
then result
B-A=X
C-B=Y
Y-X
That should leave you with one variable equal to a number.

5. Oct 16, 2004

### Tom McCurdy

Solution
(1) 8a+4b+2c+d=11
(2) 27a+9b+3c+d=44
(3) 64a+16b+4c+d=110
(4) 125a+25b+5c+d=220
(5) 216a+36b+6c+d=385

(2)-(1)= (A)
(3)-(2)= (B)
(4)-(3)= (C)

(A)= 19a + 5b + c = 33
(B)= 37a + 7b + c = 66
(C)= 61a + 9b + c = 110

(B)-(A)=(X)
(C)-(B)=(Y)

(X)= 18a + 2b = 33
(Y)= 24a + 2b = 44

(Z) = (Y)-(X)
(Z)= 6a=11
a=11/6

Therefore by equation (X):
18(11/6) + 2b = 33
b=0

Therefore by equation (A):
19(11/6) + 5(0) + c = 33
c=-11/6

Therefore by equation (1):
8(11/6)+4(0)+2(-11/6)+d=11
d=0

Summary
$$a=11/6$$
$$b=0$$
$$c= -11/6$$
$$d= 0$$

6. Oct 16, 2004

### Tom McCurdy

Indeed, you need x equations to solve for x variables; in this case I needed 4 equations since there were four variables being solved for: a, b, c, d.

7. Oct 16, 2004

### Prometheus

Take all of the equations and make them of the form d = ... Then, you can put the two non-d sides of the equations together to create 2 pairs in which d is eliminated entirely. You can repeat with these 2 equations to eliminate one of the other variables. This will leave you with 2 variables. You can use the 5th equation to start over with one of the other 4, to obtain another formula using 2 variables. Then, add them up to eliminate one of the variables. Once you have the value of one of the variables, you can fill it in the others, and repeat to discover the others.
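For anyone who wants to double-check the elimination above mechanically, here is a quick numpy sketch that solves equations (1)-(4) and then tests (5) for consistency:

```python
import numpy as np

# coefficient rows for equations (1)-(4); the fifth serves as a check
A = np.array([[8, 4, 2, 1],
              [27, 9, 3, 1],
              [64, 16, 4, 1],
              [125, 25, 5, 1]], dtype=float)
rhs = np.array([11, 44, 110, 220], dtype=float)

a, b, c, d = np.linalg.solve(A, rhs)
print(a, b, c, d)                   # 11/6, 0, -11/6, 0
print(216*a + 36*b + 6*c + d)       # 385, so equation (5) is consistent
```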
# Why does using the following definition of $\sin(x)$ result in the wrong integral for $\int \sin(x)dx$? Using the following definition of $\sin(x)$ $$\sin(x) \stackrel{\text{def}}{=} \frac{1}{2}\left(e^{ix} - e^{-ix}\right)$$ Results in the following integral \begin{align} \int \sin(x)\ dx &= \frac{1}{2}\int\left(e^{ix} - e^{-ix}\right) \ dx \\ &= \frac{1}{2i}\left(e^{ix} + e^{-ix}\right) + C \end{align} But $\int \sin(x)\ dx = -\cos(x) + C \iff \int \sin(x)\ dx = \frac{1}{2}\int\left(e^{ix} + e^{-ix}\right) + C$. Thus the $\frac{1}{i}$ multiplicand is the term here is what is producing the wrong integral. Is it only possible to integrate $\sin(x)$ and prove $\int \sin(x) = -\cos(x) + C$ via the Taylor Series definition of $\sin(x)$? $$\sin(x) \stackrel{\text{def}}{=} \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}x^{2n+1} \ \ \ \ \ \text{(Taylor Series Definition)}$$ • Your starting definition is missing $i$ at the denominator. – Yves Daoust Aug 8 '16 at 9:30 • Hint: Your definition of the sine function is wrong, so is your antiderivative of $\sin (x)$. – Nigel Overmars Aug 8 '16 at 9:30 • Also, $\int\sin(x)dx$ is not $\cos(x) + C$... – 5xum Aug 8 '16 at 9:43 ## 3 Answers You made the usual mistake in mathematics: sloppyness! Using the following definition of $\sin(x)$ $$\sin(x) \stackrel{\text{def}}{=} \frac{1}{2}\left(e^{ix} - e^{-ix}\right)$$ Wrong. The formula you wrote evaluates to $$\frac12(\cos x + i\sin x - (\cos(-x) + i\sin(-x))) = \frac12(\cos x + i\sin x - \cos x + i\sin x) =\frac12 (2i\sin x) = i\sin x \neq \sin x$$ Using the correct formula for $\sin x$ (which is $\frac1{2i}(e^{ix}-e^{-ix}$) will get you: $$\int \sin x dx = \frac{1}{2i}\int(e^{ix}-e^{-ix})dx =\\ =\frac{1}{2i}\left(\int e^{ix} dx - \int e^{-ix} dx \right)=\\ =\frac{1}{2i}\left(\frac{1}{i}e^{ix} - \frac{1}{-i} e^{-ix}\right)+C=\\ =\frac{1}{2i}\cdot \frac{1}{i}\left(e^{ix} + e^{-ix}\right)+C=\\ =\frac{1}{-2}(\cos x + i\sin x + \cos(-x) + i\sin(-x))+C=\\ =\frac{1}{-2}(\cos x + i\sin x + \cos(x) - i\sin(x))=\\ =\frac{1}{-2}\cdot2\cdot \cos x+C = -\cos x+C$$ Which works out to what you would expect. • I like "You made the usual mistake in mathematics: sloppyness!" :) +1 – 6005 Aug 8 '16 at 10:11 Notice, Euler's formula (and de Moivre formula): $$e^{\theta i}=\cos(\theta)+\sin(\theta)i$$ And use: • $$\cos(-\theta)=\cos(\theta)$$ • $$\sin(-\theta)=-\sin(\theta)$$ So, we get: $$e^{\theta i}-e^{-\theta i}=\left(\cos(\theta)+\sin(\theta)i\right)-\left(\cos(-\theta)+\sin(-\theta)i\right)=$$ $$\cos(\theta)+\sin(\theta)i-\cos(\theta)+\sin(\theta)i=2\sin(\theta)i$$ So: $$\sin(\theta)=\frac{e^{\theta i}-e^{-\theta i}}{2i}$$ Now, the integral become: $$\int\sin(\theta)\space\text{d}\theta=\int\frac{e^{\theta i}-e^{-\theta i}}{2i}\space\text{d}\theta=\frac{1}{2i}\left[\int e^{\theta i}\space\text{d}\theta-\int e^{-\theta i}\space\text{d}\theta\right]=$$ $$\frac{-ie^{\theta i}-ie^{-\theta i}}{2i}+\text{C}=\text{C}-\cos(\theta)$$ By the Euler and de Moivre formulas, $$e^{ix}-e^{-ix}=(\cos x+i\sin x)-(\cos x-i\sin x)=2i\sin x.$$ Other check: $$(e^{ix}-e^{-ix})^2=e^{2ix}-2+e^{-i2x}=2\cos(2x)-2,$$ which is a negative number ! The developments of $e^{ix}$ and $e^{-ix}$ differ in the sign of the terms of odd power, so that when you subtract them, only the odd powers remain, and $i^{2k+1}=\pm i$. You can establish two integrals in a single go, as follows: $$e^{ix}=\cos x+i\sin x,$$ then omitting the constant, $$\int e^{ix}dx=\frac{e^{ix}}i=\sin x-i\cos x.$$ Then equate the real and imaginary parts. 
• $2 \cos (2x)-2$ isn't always negative, it is certainly always non-positive though :-) – Kevin Aug 8 '16 at 9:47 • @Bacon: with $x$ uniformly spread in $[0,2\pi)$, the expression is almost certainly negative. :) – Yves Daoust Aug 8 '16 at 9:51
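With the corrected $\frac{1}{2i}$ factor, the whole computation can be verified symbolically; a short sympy check (both printed values should be 0):

```python
import sympy as sp

x = sp.symbols('x', real=True)
sin_exp = (sp.exp(sp.I * x) - sp.exp(-sp.I * x)) / (2 * sp.I)

print(sp.simplify(sin_exp - sp.sin(x)))          # 0: the definition is right
antiderivative = sp.integrate(sin_exp, x)
print(sp.simplify(antiderivative + sp.cos(x)))   # 0: integral is -cos(x) + C
```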
Question

# Prove that the area of the parallelogram formed by the lines 3x − 4y + a = 0, 3x − 4y + 3a = 0, 4x − 3y − a = 0 and 4x − 3y − 2a = 0 is $\frac{2}{7}{a}^{2}$ sq. units.

Solution

The given lines are
3x − 4y + a = 0 ... (1)
3x − 4y + 3a = 0 ... (2)
4x − 3y − a = 0 ... (3)
4x − 3y − 2a = 0 ... (4)

$\text{Area of the parallelogram} = \left|\frac{(c_1 - d_1)(c_2 - d_2)}{a_1 b_2 - a_2 b_1}\right| = \left|\frac{(a - 3a)(2a - a)}{-9 + 16}\right| = \frac{2a^2}{7} \text{ square units}$
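The formula can be cross-checked by intersecting the four lines for a sample value of $a$ and applying the shoelace formula; the numeric sketch below (not part of the textbook solution) uses $a = 7$, so the expected area is $\frac{2}{7}\cdot 49 = 14$:

```python
import numpy as np

a = 7.0   # sample value; the area should come out as 2*a**2/7 = 14

def meet(l1, l2):
    """Intersection of lines given as (A, B, C) for A*x + B*y + C = 0."""
    M = np.array([l1[:2], l2[:2]], dtype=float)
    rhs = -np.array([l1[2], l2[2]], dtype=float)
    return np.linalg.solve(M, rhs)

L1, L2 = (3, -4, a), (3, -4, 3 * a)
L3, L4 = (4, -3, -a), (4, -3, -2 * a)

# vertices in cyclic order around the parallelogram
P = [meet(L1, L3), meet(L1, L4), meet(L2, L4), meet(L2, L3)]
x, y = np.array(P).T
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
print(area, 2 * a**2 / 7)   # both 14.0
```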
Approximating freeness under constraints Sorin PopaUniversity of California, Los Angeles (UCLA)Math I will discuss a method for constructing unitary elements $u$ in a subalgebra $B$ of a II$_1$ factor $M$ that are as independent as possible’’ (approximately) with respect to a given finite set of elements in $M$. This technique had most surprising applications over the years, e.g., to Kadison-Singer type problems, to proving vanishing cohomology results for II$_1$ factors (like compact valued derivations, or L\$^2^-cohomology), as well as to subfactor theory (notably, to the discovery of the proper axiomatisation of the group-like objects arising from subfactors). After explaining this method, which I call {\it incremental patching}, I will comment on all these applications and its potential for future use. Back to Workshop I: Expected Characteristic Polynomial Techniques and Applications
Why is the answer different with energy conservation vs forces?

First, I am assuming that there is no kinetic friction acting on the insect as it moves up the bowl. If kinetic friction were involved, you would have energy dissipation, but I will not consider that here.

Your mistake is in assuming that the static friction force is equal to its maximum value during the entire process. $\mu N$ only determines the maximum magnitude the static friction force can have before slipping occurs; it doesn't give the actual static friction force in general. Before slipping, the static friction force is just equal to the force needed to prevent slipping, i.e. $mg\sin\theta$.

Doing this correctly, you will then see that the integral gives you a true expression, but it won't help you find where the insect slips: the resulting energy balance holds for every angle $\alpha$ before slipping occurs, and it says nothing about when the static friction force fails. In other words, energy conservation holds at every angle, not only at the slipping angle, so it cannot single out $\alpha$ for you.

Also, technically the static friction force can't do work, because the point of contact between the insect and the bowl doesn't move as the force is being applied; but that point isn't important here, as the (correct) integral will still give the work done by the insect's legs on the rest of the insect, even if the physical interpretation isn't exact.

The difference in energy between the two static equilibrium positions may only be some potential energy difference. You may assume the friction force is $F = \mu N$ during sliding, where $\mu$ is the kinetic friction coefficient (taken equal to the static friction coefficient), but since this force is non-conservative, the work done by this force will not account for any potential energy change; instead, it is lost. The balance in energy between the two positions will thus only tell you that the change in potential energy is the work of the weight force, which is not helpful for determining $\alpha$.
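Concretely, if the bowl is spherical the normal force is $N = mg\cos\theta$, so the force balance $mg\sin\alpha = \mu\, mg\cos\alpha$ gives the slipping angle $\alpha = \arctan\mu$; a two-line numeric illustration with a made-up coefficient:

```python
import math

mu = 0.4                      # hypothetical static friction coefficient
alpha = math.atan(mu)         # mg*sin(alpha) = mu*mg*cos(alpha) at slipping
print(math.degrees(alpha))    # ~21.8 degrees; below this, friction < mu*N
```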
# H2 - CS123

CS123, February 2, 2010, Youssef. Homework 2. Due Date: February 25, 2009.

Problem 1: (20 points) Let R denote the set of real numbers, and Z the set of integers. Let also f : R → R, g : R → R, and h : Z → R be 3 functions defined as follows:

f(x) = 8x − 3
g(x) = 3x² + 4
h(x) = (4x + 3)/2

a) Is f one-to-one? Onto? Prove your answer. If f is one-to-one and onto, find f⁻¹.
b) Calculate g(0), g(1), g(2).
c) Given two sets E and F, and a function u : E → F, for every y ∈ F define u⁻¹(y) to be the following set: u⁻¹(y) = {x ∈ E | u(x) = y}. Determine g⁻¹(0), g⁻¹(4), g⁻¹(16), g⁻¹(−1).
d) Is g one-to-one? Onto? Prove your answer.
e) Is h one-to-one? Onto? Prove your answer.
f) Assume now that h : R → R, but h has otherwise the same definition. Calculate g∘f(x), h∘g(x), h∘(g∘f)(x), (h∘g)∘f(x).
# Math Help - tricky derivative?

1. ## tricky derivative?

Find the derivative of $y = x^{4^x}$.

2. We use the rule that $a^r=e^{r\ln a}$ for $a>0$.

3. Originally Posted by Scott H
We use the rule that $a^r=e^{r\ln a}$ for $a>0$.
yea I've been doing that but I'm not getting the right answer.. Must be doing something wrong.

4. Originally Posted by tbenne3
Find the derivative of $y = x^{4^x}$
$\ln{y}=4^x\ln{x}$
$\frac{1}{y}\,\frac{dy}{dx}=4^x\,\frac{1}{x}+\ln{x}\,(4^x\,\ln{4})$
Are you able to finish from there?

5. I imagined that this derivative would be a bit tricky. We can rewrite the function as $y=x^{4^x}=e^{4^x\ln x}$ and differentiate using the Chain Rule.

6. Originally Posted by ione
$\ln{y}=4^x\ln{x}$
$\frac{1}{y}\,\frac{dy}{dx}=4^x\,\frac{1}{x}+\ln{x}\,(4^x\,\ln{4})$
Are you able to finish from there?
honestly.. no.. our teacher doesn't really know how to teach and I haven't had time to sit down and teach it to myself yet

7. I recommend Scott's method. Change the function to:
$y = e^{4^x\ln{x}}$
Then you differentiate:
$y' = e^{4^x\ln{x}}\frac{d}{dx}[4^x\ln{x}]$
All you use now is the product rule.
$y' = e^{4^x\ln{x}}\left(4^x\ln{4}\,\ln{x} + \frac{4^x}{x}\right)$
We remember that:
$e^{4^x\ln{x}} = x^{4^x}$
$y' = 4^xx^{4^x}\left(\ln{x}\,\ln{4} + \frac{1}{x}\right)$

8. Originally Posted by Aryth
I recommend Scott's method. [...]
thanks
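The final expression can be confirmed symbolically, e.g. with a short sympy check:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = x ** (4 ** x)

claimed = 4**x * x**(4**x) * (sp.log(x) * sp.log(4) + 1 / x)
print(sp.simplify(sp.diff(y, x) - claimed))   # 0, so the answer checks out
```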
# zbMATH — the first resource for mathematics

Landmarks in graphs. (English) Zbl 0865.68090

Summary: Navigation can be studied in a graph-structured framework in which the navigating agent (which we assume to be a point robot) moves from node to node of a "graph space". The robot can locate itself by the presence of distinctively labeled "landmark" nodes in the graph space. For a robot navigating in Euclidean space, visual detection of a distinctive landmark provides information about the direction to the landmark, and allows the robot to determine its position by triangulation. On a graph, however, there is neither the concept of direction nor that of visibility. Instead, we assume that a robot navigating on a graph can sense the distances to a set of landmarks. Evidently, if the robot knows its distances to a sufficiently large set of landmarks, its position on the graph is uniquely determined. This suggests the following problem: given a graph, what is the fewest number of landmarks needed, and where should they be located, so that the distances to the landmarks uniquely determine the robot's position on the graph? This is actually a classical problem about metric spaces. A minimum set of landmarks which uniquely determines the robot's position is called a "metric basis", and the minimum number of landmarks is called the "metric dimension" of the graph. We present some results about this problem. Our main new results are that the metric dimension of a graph with $n$ nodes can be approximated in polynomial time within a factor of $O(\log n)$, and some properties of graphs with metric dimension two.

##### MSC:
68R10 Graph theory (including graph drawing) in computer science
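For tiny graphs, the definition can be checked directly by exhaustive search over landmark sets. The Python sketch below (plain BFS plus brute force, exponential in the worst case, so illustration only; the paper's polynomial-time $O(\log n)$-approximation is a different, greedy algorithm) finds a metric basis:

```python
from itertools import combinations
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def metric_dimension(adj):
    """Smallest k such that some k landmarks give every node a distinct
    vector of distances; brute force, suitable only for small graphs."""
    nodes = sorted(adj)
    d = {u: bfs_dist(adj, u) for u in nodes}
    for k in range(1, len(nodes) + 1):
        for S in combinations(nodes, k):
            sigs = {tuple(d[u][v] for v in S) for u in nodes}
            if len(sigs) == len(nodes):
                return k, S
    return None  # unreachable for a connected graph

# a 6-cycle: cycles have metric dimension two
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(metric_dimension(adj))   # (2, (0, 1)) or another valid basis
```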
IBC 2018: Meet with the developers

During IBC 2018, Dr Gregory Clarke and Philip Hodgetts will be available to meet with anyone interested in Lumberjack but uncertain how it might apply to their specific workflow and production technology. (Remember, Lumberjack System is for FCP X.)

Meeting times:

Friday September 14: 10:00, 11:00, 14:00, 15:00, 16:00 and 17:00
Saturday September 15: 10:00, 11:00, 14:00, 15:00, 16:00 and 17:00
Sunday September 16: 10:00, 11:00, 14:00, 15:00, 16:00 and 17:00
Monday September 17: 10:00, 11:00, 14:00, 15:00, 16:00 and 17:00

Use the form below to sign up and tell us your preferred time in the message. We'll get back to you confirming the time. Meeting place will be advised closer to the conference.
# Why is the derivative of the activation functions in neural networks important? I'm new to NN. I am trying to understand some of its foundations. One question that I have is: why the derivative of an activation function is important (not the function itself), and why it's the derivative which is tied to how the network performs learning? For instance, when we say a constant derivative isn't good for learning, what is the intuition behind that? Is the activation function somehow like a hash function that needs to well differentiate small variance in inputs? • Can you cite some sources so that we can get a much more detailed picture? – DuttaA Aug 14 '19 at 23:15 • towardsdatascience.com/… – Tina J Aug 14 '19 at 23:27 • The 'constant.....' statement is not really correct in my opinion, or atleast the constant derivative means the model is not learning conclusion is incorrect. But the author really doesn't delve into details nor provide proper explanation, so the author probably might have a different way of interpreting it. Also it is kind of sketchy to talk about learning when the details of a learning objective commonly known as loss function is not provided. – DuttaA Aug 14 '19 at 23:42 If what you are asking is what is the intuition for using the derivative in backpropagation learning, instead of an in-depth mathematical explanation: Recall that the derivative tells you a function's sensitivity to change with respect to a change in its input. A high (absolute) value for the derivative at a certain point means that the function is very steep, and a small change in input may result in a drastic change in its output; conversely, a low absolute value means little change, so not steep at all, with the extreme case that the function is constant when the derivative is zero. Training a neural network essentially amounts to an optimization problem where one wants to minimize a certain value, in this case the error produced by the network on the given training examples. Backpropagation learning can be viewed as a case of gradient descent (the inverse of hill climbing). If for a moment we assume that your input is only 2-dimensional (just for illustration, the mathematics of course also work for higher dimensions), you could imagine the error function as a landscape with hills, mountains, valleys, ridges etc. You are standing at a high point and want to get down as far as possible. Gradient descent means that, in discrete steps, you always walk down in the direction that has the steepest slope downwards from where you are currently standing, until you eventually reach a (local) minimum. In order to determine where that steepest slope is, you need the derivative of the activation function. Basically, you want to sort out how much each unit in your network contributes to an error, and adjust in the direction that contributes the most. Edit: Regarding constant values for a derivative, in the landscape metaphor it would mean that the gradient is the same no matter where you are, so you'll always go in the same direction and never reach an optimum. However, multi-layer networks with linear activation function are kind of besides the point anyhow when you consider that each cell computes a linear combination of its inputs, which then is again a linear function, so the output of the last layer will ultimately be a linear function of the inputs at the first layer. That is to say, anything you can do with a multi-layer net with linear activation functions, you could also achieve with just a single layer. 
• Thanks. It was a good starter explanation. I understand we want to minimize the whole loss function. But why we need a local minimum at each function?! – Tina J Aug 15 '19 at 1:43 • @Tina J: I am not sure what you are asking. You are correct that we try to find a single minimum for the error of the entire network. What backpropagation does is to split the observed error up into the parts contributed by each single unit and connection. So we don't minimize at each single unit, but for each training example, we (potentially) adjust every edge's weight, depending on how much it affected the outcome to be wrong. Each weight is a dimension of the "landscape", and one traversal of the net is a single step in gradient descent, which is repeated until reaching convergence. – Jens Classen Aug 15 '19 at 2:09 Consider a dataset $$\mathcal{D}=\{x^{(i)},y^{(i)}:i=1,2,\ldots,N\}$$ where $$x^{(i)}\in\mathbb{R}^3$$ and $$y^{(i)}\in\mathbb{R}$$ $$\forall i$$ The goal is to fit a function that best explains our dataset.We can fit a simple function, as we do in linear regression. But that's different about neural networks, where we fit a complex function, say: \begin{align}h(x) & = h(x_1,x_2,x_3)\\ & =\sigma(w_{46}\times\sigma(w_{14}x_1+w_{24}x_2+w_{34}x_3+b_4)+w_{56}\times\sigma(w_{15}x_1+w_{25}x_2+w_{35}x_3+b_5)+b_6)\end{align} where, $$\theta = \{w_{14},w_{24},w_{34},b_4,w_{15},w_{25},w_{35},b_5,w_{46},w_{56},b_6\}$$ is the set of the respective coefficients we have to determine such that we minimize: $$J(\theta) = \frac{1}{2}\sum_{i=1}^N (y^{(i)}-h(x^{(i)}))^2$$ The above optimization problem can be easily solved with gradient descent. Just initiate $$\theta$$ with random values and with proper learning parameter $$\eta$$, update as follows till convergence: $$\theta:=\theta-\eta\frac{\partial J}{\partial \theta}$$ In order to get the gradients, we express the above function as a neural network as follows: Let's calculate the gradient, say w.r.t. $$w_{14}$$. $$\frac{\partial J}{\partial w_{14}} = \sum_{i=1}^N \Big[\big(h(x^{(i)})-y^{(i)}\big)\frac{\partial h(x^{(i)})}{\partial w_{14}}\Big]$$ Let $$p(x) = w_{14}x_1+w_{24}x_2+w_{34}x_3+b_4$$ , and Let $$q(x) = w_{46}\times\sigma(p(x))+w_{56}\times\sigma(w_{15}x_1+w_{25}x_2+w_{35}x_3+b_5)+b_6)$$ $$\therefore \frac{\partial h(x)}{\partial w_{14}} = \frac{\partial h(x)}{\partial q(x)}\times\frac{\partial q(x)}{\partial p(x)}\times\frac{\partial p(x)}{\partial w_{14}} = \frac{\partial\sigma(q(x))}{\partial q(x)}\times\frac{\partial\sigma(p(x))}{\partial p(x)}\times\frac{\partial p(x)}{\partial w_{14}}$$ We see that the derivative of the activation function is important for getting the gradients and so for the learning of the neural network. A constant derivative will not help in the gradient descent and we won't be able to learn the optimal parameters. The basic (and usual) algorithm used to update the weights of the artificial neural network (ANN) is an iterative, numerical and optimization algorithm, called gradient descent, which is based on and requires the computation of the derivative of the function you want to find the minimum of. If the function you want to find the minimum of is multivariable, then, rather than the derivative, gradient descent requires the gradient, which is a vector where the $$i$$th element contains the partial derivative of the function with respect to the $$i$$th variable. Hence the name gradient descent, where the derivative of a function of one variable can be considered the gradient of the function. 
In the case of ANNs, we usually have a loss function that we want to minimize: for example, the mean squared error (MSE). Therefore, in order to apply gradient descent to find the minimum of the MSE, we need to find the derivative or, more precisely, the gradient of the MSE. To do it, the back-propagation (an algorithm based on the chain rule) is often used, given that the MSE is a function of the ANN, which is a composite function of multiple non-linear functions, the activation functions, whose main purpose is thus to introduce non-linearity, or, in other words, it makes the ANN powerful. Given that the MSE is a function of the parameters of the ANN, then we need to find the partial derivative of the MSE with respect to all parameters of the ANN. In this process, we will also need to find the derivatives of the activation functions that each neuron applies to its linear combination of weights: to fully see this, you will need to learn the details of back-propagation! Hence the importance of the derivatives of the activation functions. A constant derivative would always give the same learning signal, independently of the error, but this is not desirable. To fully understand all these statements, I recommend you learn about back-propagation and gradient descent in detail, which requires a little bit of effort! • Comments are not for extended discussion; this conversation has been moved to chat. – Ben N Aug 15 '19 at 19:54
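As a concrete miniature of everything above, here is gradient descent on the MSE of a single sigmoid neuron in plain numpy; the data and learning rate are invented for illustration, and the line computing `grad` is where the activation's derivative $\sigma'(z) = \sigma(z)(1-\sigma(z))$ enters the chain rule:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                  # toy inputs
y = sigmoid(X @ np.array([1.5, -2.0, 0.5]))    # targets from a known neuron

w = np.zeros(3)
eta = 0.5
for _ in range(5000):
    p = sigmoid(X @ w)
    # dMSE/dw: the error signal (p - y) is scaled by sigmoid'(z) = p*(1-p);
    # a constant derivative here would make the update blind to z
    grad = X.T @ ((p - y) * p * (1 - p)) / len(y)
    w -= eta * grad

print(w)   # drifts toward [1.5, -2.0, 0.5] as the MSE shrinks
```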
Partition structures derived from Brownian motion and stable subordinators

Report Number: 346
Author: Jim Pitman
Citation: Bernoulli 3, 79-96, 1997

Abstract: Explicit formulae are obtained for the distribution of various random partitions of a positive integer $n$, both ordered and unordered, derived from the zero set $M$ of a Brownian motion by the following scheme: pick $n$ points uniformly at random from $[0,1]$, and classify them by whether they fall in the same or different component intervals of the complement of $M$. Corresponding results are obtained for $M$ the range of a stable subordinator and for bridges defined by conditioning on $1 \in M$. These formulae are related to discrete renewal theory by a general method of discretizing a subordinator using the points of an independent homogeneous Poisson process.
### AIMS Mathematics

2022, Issue 12: 20767-20780. doi: 10.3934/math.20221138

Research article

# Regular local hyperrings and hyperdomains

- Received: 14 July 2022; Revised: 09 September 2022; Accepted: 14 September 2022; Published: 26 September 2022
- MSC: 20N20, 13E05

This paper falls in the area of hypercompositional algebra. In particular, it focuses on the class of Krasner hyperrings and studies the regular local hyperrings. These are Krasner hyperrings $R$ with a unique maximal hyperideal $M$ having dimension equal to the dimension of the vectorial hyperspace $\frac{M}{M^2}$. The aim of the paper is to show that any regular local hyperring is a hyperdomain. To prove this, we make use of the relationship existing between the dimensions of the vectorial hyperspaces related to the hyperring $R$ and to the quotient hyperring $\overline{R} = \frac{R}{\langle a\rangle}$, where $a$ is an element in $M\setminus M^2$, and of the regularity of $\overline{R}$.

Citation: Hashem Bordbar, Sanja Jančič-Rašovič, Irina Cristea. Regular local hyperrings and hyperdomains[J]. AIMS Mathematics, 2022, 7(12): 20767-20780. doi: 10.3934/math.20221138

© 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
Probl. Peredachi Inf., 2000, Volume 36, Issue 4, Pages 3–24 (Mi ppi490)

Information Theory

On the Relation between the Code Spectrum and the Decoding Error Probability

M. V. Burnashev

Abstract: We show how to lower bound the best decoding error probability (or upper bound the reliability function) given some estimates for the code spectrum. Bounds thus obtained are better than previously known ones.

English version: Problems of Information Transmission, 2000, 36:4, 285–304

UDC: 621.391.15

Citation: M. V. Burnashev, "On the Relation between the Code Spectrum and the Decoding Error Probability", Probl. Peredachi Inf., 36:4 (2000), 3–24; Problems Inform. Transmission, 36:4 (2000), 285–304
# Vector fields Learning objectives • Interpret functions $f: {\bf R}^2 \longrightarrow {\bf R}^2$ as vector fields in the plane • Use Mathematica to survey basic linear vector fields: sources, sinks, circles, spirals • Work with flow lines (stream lines) of a vector field Before class • Read Hughes-Hallett sections 7.3, 17.4 • Ponder Hughes-Hallett problems: 17.3: 5, 6, 7, 13, 17; 17.4: 4, 10, 15. • Post to Piazza • Write solutions to problems 17.1: 58 and 17.2: 38 to turn in at the beginning of class. • Complete HW9 on WeBWork • Remember to bring your laptop to class
# Math Help - Arithmetic proof 1. ## Arithmetic proof Prove that, for all n, n^3 + 2n^2 - n - 1 is not divisible by 3. I hate to ask a question like this...but how do I get started here? I don't even know where to start. Could anyone just tell me the first starting steps in order to prove this and/or the concept that would prove this true? Any help would be appreciated. 2. Originally Posted by SterlingM Prove that, for all n, n^3 + 2n^2 - n - 1 is not divisible by 3. I hate to ask a question like this...but how do I get started here? I don't even know where to start. Could anyone just tell me the first starting steps in order to prove this and/or the concept that would prove this true? Any help would be appreciated. You need only check it for the cases $n=0,1,2$ where you get $-1,1,13$ respectively.
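To see why checking $n = 0, 1, 2$ suffices, note that the polynomial's value modulo 3 depends only on $n \bmod 3$; a brute-force Python check (illustrative only):

```python
# residues of n^3 + 2n^2 - n - 1 modulo 3 repeat with period 3,
# so checking n = 0, 1, 2 covers every integer
print([(n, (n**3 + 2*n**2 - n - 1) % 3) for n in range(3)])
# [(0, 2), (1, 1), (2, 1)]  -- never 0, hence never divisible by 3

assert all((n**3 + 2*n**2 - n - 1) % 3 != 0 for n in range(-1000, 1000))
```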
Opened 7 years ago
Closed 7 years ago

# Let PrivateTicketsPlugin act only on certain "marked" tickets

Reported by: HeX
Owned by: Noah Kantrowitz
Priority: low
Component: PrivateTicketsPlugin
Severity: trivial
Trac Release: 0.12

### Description

Just came across this plugin; it is almost what I'm looking for. On our trac system we have a customisation for the access policies, pretty much like the SensitiveTicketsPlugin. But this lacks the feature of PrivateTicketsPlugin that allows reporters and other dedicated groups to view the ticket. So integrating a checkbox and letting PrivateTicketsPlugin act only on "private" tickets would be a nice extension.

### comment:1 Changed 7 years ago by Noah Kantrowitz

Resolution: set to wontfix
Status: changed from new to closed

This is something far too custom to ever see in a general-use plugin. It would be pretty easy for you to add though, probably just use a special keyword and add an `if 'private' in tkt['keywords']:` in the right place.

### comment:2 Changed 7 years ago by HeX

Fair enough, though as for "too custom" I don't think so. The use case is probably quite common: one would like to have some private tickets that only reporters and a special group can see, and others that are commonly "public". But I can live with your decision ;) and thanks for the hint.

### comment:3 Changed 7 years ago by Noah Kantrowitz

The need might be generic, but the implementation wouldn't be. Some might want it to be per-reporter, others per-component, etc etc. Too hard to predict all the possible conditions.
In nature, methane is formed by the microbial activity of organic matter. 9 Questions Show answers. What Are Alkynes? Cracking is used to convert long alkanes into shorter, more useful hydrocarbons. The chain is seven car… In the above image, R1, R2, R3, and R4 are alkyl groups that can be either identical or different from each other. Due to their low boiling points, lower alkanes are highly volatile. We will review their nomenclature, and also learn about the vast possibility of reactions using alkenes and alkynes as starting materials. Natural gas which is a byproduct of petroleum mining contains 70% methane. Certain branched alkanes have common names that are still widely used today. 2) Soda lime (NaOH+CaO)(\ce{NaOH + CaO})(NaOH+CaO), Chemical reaction:\color{#3D99F6}{\text{Chemical reaction:}}Chemical reaction: These are organic molecules that consist only of hydrogen and carbon atoms in a tree-shaped structure (acyclic or not a ring). The simplest alkene is C 2 H 4. Since ethane gas is insoluble in water, it is collected by downward displacement of water. The fun part of chemistry and more in particular organic chemistry is that it is an experimental science: If you do not know what will happen. Alkenes and alkynes can also be halogenated with the halogen adding across the double or triple bond, in a similar fashion to hydrogenation. Already have an account? Save . Log in. CX2HX5COONa+NaOH→CaONaX2COX3+CX2HX6\ce{C_2H_5COONa + NaOH ->[\ce{CaO}] {Na}_2CO_3 + C_2H_6}CX2​HX5​COONa+NaOHCaO​NaX2​COX3​+CX2​HX6​. Combustibility Because alkenes are hydrocarbons, the alkene homologous series starts at ethene C2H4, with at least one carbon-carbon double bond. An alkene is a hydrocarbon with a double bond. There are several uses of alkenes 1. Naming Alkenes and Alkynes DRAFT. 1) Sodium acetate (CX2HX5COONa)(\ce{C_2H_5COONa})(CX2​HX5​COONa) Alkenes having a higher molecular weight are solids. Edit. These common names make use of prefixes, such as iso-, sec-, tert-, and neo-.The prefix iso-, which stands for isomer, is commonly given to 2-methyl alkanes.In other words, if there is methyl group located on the second carbon of a carbon chain, we can use the prefix … In the following experiment, the first test tube contained cyclohexene and the second test tube containes cyclohexane. VOLATILITY. Crude oil is a finite resource. Physical properties of alkenes are quite similar to those of alkanes. An organic molecule is one in which there is at least one atom of carbon, while a hydrocarbon is a molecule which only contain the atoms hydrogen and carbon. Representing structures of organic molecules (Opens a modal) Naming simple alkanes (Opens a modal) Naming alkanes with alkyl groups (Opens a modal) Correction - 2-propylheptane should never be the name! Tetrachloromethane or carbon tetrachloride, https://brilliant.org/wiki/alkanes-alkenes-alkynes/. Why is silicon, another element in group 14 of the periodic table, unable to make the great variety of molecules that carbon atoms can? Click on "Analyze" for help in working out the name. Alkenes and alkynes are named by identifying the longest chain that contains the double or triple bond. 7 minutes ago. In this reaction, the hydrogen atom or atoms in the hydrocarbon are substituted by more reactive atoms such as chlorine, bromine, etc. The general formula means that the number of hydrogen, in an alkane is double the number of carbon atoms, plus two. 
Hydrocarbons In the study of organic chemistry, the organic compounds which are made up of carbon and hydrogen are called hydrocarbons. Each alkene has 2 fewer electrons than the alkane with the same number of carbons. Read about our approach to external linking. Alkane molecules can be represented by displayed formulae in which each atom is shown as its symbol (C or H) and the covalent bonds between them by a straight line. When hydrocarbons like the alkanes burn in plenty of air, what type of reaction takes place? Alkenes are unsaturated and decolourise an orange solution of bromine water. The similar electronegativities of carbon and hydrogen give molecules which are non-polar. Example: Benzene, ether, alcohol, etc. Combustibility The single bond is made up of one sigma (sigma) bond. It follows a definite pecking order. The most common alkyne is ethyne, better known as acetylene. Alkanes, alkenes, and alkynes are all organic hydrocarbons. Certain branched alkanes have common names that are still widely used today. Alkanes are completely saturated compounds. For example, if an alkyne has 2 carbon atoms, then it would be called as ethyne with molecular formula CX2HX2.\ce{C_2H_2}.CX2​HX2​. When it comes to Alkanes, Alkenes, and Alkynes, the acidity is in the order of Alkynes > Alkenes > Alkanes. In general, alkynes are more acidic than alkenes and alkanes, and the boiling point of alkynes also tends to be slightly higher than alkenes and alkanes. Solo Practice. or group of atoms such as OH,SOX4\ce{OH, SO_4}OH,SOX4​, etc. Metal-free boron- and carbon-based catalysts have shown both great fundamental and practical value in oxidative dehydrogenation (ODH) of light alkanes. This quiz is incomplete! The formula for Alkanes is C n H 2n+2, subdivided into three groups – chain alkanes, cycloalkanes, and the branched alkanes. Homework. The straight chain alkanes share the same general formula: The general formula means that the number of hydrogen atoms in an alkane is double the number of carbon atoms, plus two. Figure $$\PageIndex{5}$$:In a column for the fractional distillation of crude oil, oil heated to about 425 … Yet, it nomenclature is not the only difference between alkanes, alkenes, and alkynes. Among isomeric alkanes … The details of which will be explained later. When it comes to naming organic compounds, reference is made to various rings, pendant groups, and bonding combinations. Alkanes, alkenes and alkynes are all hydrocarbons with different structures and thus different physical and chemical properties. In organic chemistry, an alkyne is an unsaturated hydrocarbon containing at least one carbon—carbon triple bond. These are commonly known as paraffins and waxes. Forgot password? with the same number of carbons) the alkane should have a higher boiling point. This longest chain is named by the alkane series convention: “eth-” for two carbons; “prop-” for three carbons; “but-” for four carbons; etc. Generally, the larger and more complicated the organic substance, the higher its boiling and melting points. Using Common Names with Branched Alkanes. It is because the highly flammable material may spark fir… Using Common Names with Branched Alkanes. The carbonaceous catalysts also exhibited impressive behavior in the ODH of light alkanes helped along by surface oxygen-containing functional groups. . Alkynes are … Alkenes are unsaturated and decolourise an orange solution of bromine water. The slideshow shows this process. 
Compared with Ethane which has a pKa of 62 (least acidic) and Ethene of a pKa of 45, Ethyne has a pKa of 26. Then click on "Name" to see the preferred IUPAC name and a highlight of the parent hydrocarbon. Thus the structure: is hept-3-en-1-yne. Hydrohalogenation When it comes to Alkanes, Alkenes, and Alkynes, the acidity is in the order of Alkynes > Alkenes > Alkanes. These are called saturated hydrocarbons or alkanes. Occurrence Alkanes. 0. In an environment of excess oxygen, methane burns with a pale blue flame. Melting Points. The halogenation of an alkene results in a dihalogenated alkane product, while the halogenation of an alkyne can produce a tetrahalogenated alkane. It is an alkyne. These are organic molecules that consist only of hydrogen and carbon atoms in a tree-shaped structure (acyclic or not a ring). Alkenes: Alkenes show similar physical properties of the corresponding Alkane. Therefore, to convert an alkene to an alkyne, you simply need to break the double bond. Edit. Alkenes and alkynes can also be halogenated with the halogen adding across the double or triple bond, in a similar fashion to hydrogenation. Volatility refers to the ability of a liquid to change into vapour state. Alkenes have the formula C n H 2n and alkynes … Writing the formulas for simple alkanes, alkenes and Polarity. Both alkenes and alkynes are hydrocarbons having carbon and hydrogen atoms. The alkanes are also called as paraffins. Let us take a look at few physical properties. Alkenes higher than these are all solids. These common names make use of prefixes, such as iso-, sec-, tert-, and neo-.The prefix iso-, which stands for isomer, is commonly given to 2-methyl alkanes.In other words, if there is methyl group located on the second carbon of a carbon chain, we can use the prefix … In this reaction, the hydrogen atom or atoms in the hydrocarbon are substituted by more reactive atoms such as chlorine, bromine, etc. Organic chemistry is the study of carbon compounds, so the study of organic chemistry is important because all living things are based on carbon compounds. Alkanes are a group of acyclic, saturated hydrocarbons. Just to get the terminology out of the way, we'll be looking at what's known as hydrocarbons, which essentially covers molecules that have only carbon and hydrogen atoms. Since it is also an unsaturated hydrocarbon, some of its properties will be similar to alkenes. To play this quiz, please finish editing it. CONNECTIONS 3.1 Oral Contraceptives 3.3 Skeletal, Positional, and Functional Isomerism in Alkenes and Alkynes Alkenes and alkynes exhibit skeletal isomerism in which the carbon chain is The halogenation of an alkene results in a dihalogenated alkane product, while the halogenation of an alkyne can produce a tetrahalogenated alkane. identify the alkyne that must be used to produce a given alkane or cis alkene by catalytic hydrogenation. Here is a list of the first 10 alkanes. Alkanes are useful as fuels and alkenes are used to make chemicals such as plastic. or group of atoms such as OH,SOX4\ce{OH, SO_4}OH,SOX4​, etc. The alkanes are a homologous series. Like other homologous series, the alkanes show isomerism. Pyrolysis Alkanes are saturated and do not react with bromine water, so the orange colour persists. Alkynes can be named as derivatives of the simplest alkyne, acetylene. Difference Between Alkane, Alkene And Alkyne In Tabular Form. Therefore terminal alkynes must be deprotonated by stronger bases. Alkanes and alkenes are both families of hydrocarbons. 
The generic formula for alkanes is CnH2n+2, where n is the number of carbons identified by the prefix. Here is a list of the first 10 alkanes (their formulas follow the general pattern, as the short sketch below illustrates):

1. methane, CH4
2. ethane, CH3CH3
3. propane, CH3CH2CH3
4. butane, CH3(CH2)2CH3
5. pentane, CH3(CH2)3CH3
6. hexane, CH3(CH2)4CH3
7. heptane, CH3(CH2)5CH3
8. octane, CH3(CH2)6CH3
9. nonane, CH3(CH2)7CH3
10. decane, CH3(CH2)8CH3

If an alkyne has 3 carbon atoms, it is propyne, with formula C3H4. In alkenes, the unsaturation is due to the presence of the double bond; the simplest acyclic alkynes, with only one triple bond and no other functional groups, form a homologous series with the general chemical formula CnH2n-2.

Nomenclature of alkenes and alkynes: alkene and alkyne compounds are named by identifying the longest carbon chain that contains both carbons of the double or triple bond. This longest chain is named by the alkane series convention: "eth-" for two carbons, "prop-" for three carbons, "but-" for four carbons, and so on. For example, an isomer of butane is methylpropane. Ethylene and acetylene are widely used synonyms for ethene and ethyne, respectively.

An alkane, by contrast, is a hydrocarbon with only single bonds; the Lewis structures and models of methane, ethane, and pentane illustrate this (figure omitted). A large and structurally simple class of hydrocarbons comprises those substances in which all the carbon-carbon bonds are single bonds. Alkanes are the typical "oils" used in many non-polar solvents, and they do not mix with water. Alkenes with a very large number of carbon atoms are waxy solids.

The old name for the alkanes was the paraffins. Alkynes may have one or more triple bonds in their structure.
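As a quick illustration of these general formulas (a small sketch added here, not part of the original notes; the function name is ours), the hydrogen count for each family follows directly from the carbon count:

    # Illustrative sketch: hydrogen counts from the general formulas
    # CnH2n+2 (alkanes), CnH2n (alkenes), CnH2n-2 (alkynes).
    def hydrocarbon_formula(n_carbons, family):
        hydrogens = {
            "alkane": 2 * n_carbons + 2,
            "alkene": 2 * n_carbons,      # defined for n >= 2
            "alkyne": 2 * n_carbons - 2,  # defined for n >= 2
        }[family]
        return "C%dH%d" % (n_carbons, hydrogens)

    print(hydrocarbon_formula(2, "alkane"))  # C2H6, ethane
    print(hydrocarbon_formula(2, "alkene"))  # C2H4, ethene
    print(hydrocarbon_formula(2, "alkyne"))  # C2H2, ethyne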
Substitution reaction: in a substitution reaction, the hydrogen atom or atoms in the hydrocarbon are substituted by more reactive atoms such as chlorine or bromine. When the hydrogen atoms of an alkane are substituted by chlorine, the reaction is called chlorination.

Alkanes can be prepared by reduction of haloalkanes with nascent hydrogen:

    CH3I + 2[H] (from Zn/Cu couple) -> CH4 + HI
    C2H5I + 2[H] (from Zn/Cu couple) -> C2H6 + HI

Another method of preparation of methane is the hydrolysis of aluminium carbide:

    Al4C3 + 12 H2O -> 4 Al(OH)3 + 3 CH4

Alkanes, alkenes, alkynes, and cycloalkanes are all hydrocarbons (compounds containing only carbon and hydrogen). An alkane consists of hydrogen and carbon atoms arranged in a tree structure in which all the carbon-carbon bonds are single: every carbon atom bonds to four other atoms, and in methane the valency of the single carbon atom is satisfied by four hydrogen atoms forming single covalent bonds. There can be other substituents attached to these molecules instead of hydrogens. Many of these molecules are used in the production of other materials, such as plastics, but their main use is as a fuel source. Complete combustion needs plenty of air, and no soot is formed.

An alkyne contains a triple bond between two adjacent carbon atoms. Alkenes exist naturally in all three states of matter. Reduction of alkynes is a useful method for the stereoselective synthesis of disubstituted alkenes.

Difference between alkane, alkene, and alkyne (description):

• Alkanes: organic compounds that consist entirely of single-bonded carbon and hydrogen atoms and lack any other functional groups.
• Alkenes: hydrocarbons that contain at least one carbon-carbon double bond.
• Alkynes: hydrocarbons that contain at least one carbon-carbon triple bond.

In organic chemistry, an alkane, or paraffin (a historical name that also has other meanings), is an acyclic saturated hydrocarbon; alkanes are commonly known as paraffins and waxes.

The number of structural isomers grows with the number of carbon atoms:

• 1 carbon: 1 isomer, CH4, methane (methyl hydride; natural gas)
• 2 carbons: 1 isomer, C2H6, ethane (dimethyl; ethyl hydride; methyl methane)
• 3 carbons: 1 isomer, C3H8, propane ...

Alkanes and alkenes are non-polar molecules.
While alkanes and alkenes are both hydrocarbons, the primary difference is that alkanes are saturated molecules, containing only single covalent bonds (sigma bonds) between the carbon atoms, whereas alkenes are unsaturated molecules containing a double covalent bond (a combination of a pi bond and a sigma bond). Alkanes are generally unreactive: they contain only C-H and C-C bonds, which are relatively strong and difficult to break. Alkanes are also much more stable than alkenes, because alkenes carry the reactive C=C double bond. The first member of the alkyne series is ethyne, C2H2.

For the homologous series of alkenes, the general formula is CnH2n, where n is the number of carbon atoms. (Note: there is no alkene with only one carbon atom.) Small alkenes are gases at room temperature; other alkenes are liquids. Melting points of alkenes depend on the packing of the molecules.

The scientific names of alkynes contain the suffix -yne. With alkynes having sp hybridization, they are the most acidic hydrocarbons, and terminal alkynes are more acidic than alkenes and alkanes.

Decalin is an alkane that has two rings fused together, so it is classified as a cyclic alkane, with a general chemical formula of CnH2(n+1-g), where g is the number of rings.

Vinyl is the prefix designation for a two-carbon alkene group, and allyl for a three-carbon one.

Carbon is unique in that it can form up to four bonds in a compound, so carbon atoms can easily bond with other carbon atoms, forming long chains or rings; the number of bonds carbon makes per atom can also vary, and large numbers of molecules are therefore possible. Alkanes are the simplest hydrocarbon chains. Silicon can also make large molecules, called silicones; however, the silicon-silicon bond is much weaker than the carbon-carbon bond, especially when compared with the silicon-oxygen bond.
When a simple hydrocarbon contains a combination of single, double, and triple carbon-carbon bonds, the naming order is that triple bonds take precedence over double bonds, which in turn take precedence over single bonds. The suffix of the compound is "-ene" for an alkene or "-yne" for an alkyne. When naming alkanes, alkenes, and alkynes, students are expected to remember a series of common prefixes for the number of carbons in the parent carbon chain: meth = 1 carbon, eth = 2 carbons, prop = 3 carbons, but = 4 carbons, and so on.

Here are the names and structures of the first five alkanes; notice that molecular models show the bonds are not really at angles of 90° (figure omitted).

Generally, the larger and more complicated the organic substance, the higher its boiling and melting points: as the chain length increases, the boiling point increases, and among alkanes volatility decreases with increase in chain length. Because of their low boiling points, the lower alkanes are gases. The first three alkenes are gases, the next fourteen are liquids, and alkenes higher than these are all solids. Comparing an alkane with its corresponding alkyne (i.e., with the same number of carbons), the alkane occupies the bigger molecular volume, yet, as noted above, alkynes tend to have slightly higher boiling points.

Alkanes, alkenes, and alkynes are similar in name but slightly different in structure, and even though their uses overlap in some cases, each is a distinct family of compounds. Alkenes are soluble in various organic solvents, but all alkenes are insoluble in water: water is a polar molecule, while alkanes and alkenes are non-polar, so neither dissolves in water. Bromine water distinguishes the families: it becomes colourless when shaken with an alkene, but alkanes cannot decolourise it.

Pyrolysis: the process of decomposition of a hydrocarbon into elements on heating in the absence of air is called pyrolysis.

A practical note on alkene-derived materials: in polystyrene foam (styrofoam) the connections between atoms are comparatively loose, which gives the material its foamy, light texture but also makes it flammable, so it is not recommended for closed spaces with no fire-suppression system, such as soundproofed rooms.
The formula of the five-carbon alkane pentane is C5H12; for a five-carbon compound with formula C5H8, the difference in hydrogen content is 4. This difference suggests such compounds may have a triple bond, two double bonds, a ring plus a double bond, or two rings (a quick numerical check of this hydrogen-deficit rule is sketched below).

C16H34 is an alkane which can be used as the starting chemical in cracking. Of the cracking products, one is typically a shorter alkane, and the second product is an alkene, so it follows the rule CnH2n.

Branched alkanes normally exhibit lower boiling points than unbranched alkanes of the same carbon content; this occurs because of the greater van der Waals forces that exist between molecules of the unbranched alkanes.

In an environment of excess oxygen, ethane, like methane, burns with a pale blue flame, and no soot is formed.

Note that acetylene is not an alkene: acetylene is scientifically named ethyne and is an alkyne. Lists of straight-chain and branched alkanes with their common names are usually sorted by number of carbon atoms.
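The hydrogen-deficit reasoning above can be checked numerically. The following small sketch (added for illustration; the function name is ours) computes the degree of unsaturation, which counts rings plus pi bonds:

    # Degree of unsaturation: (2C + 2 - H) / 2 counts rings plus pi bonds.
    def degree_of_unsaturation(c, h):
        return (2 * c + 2 - h) / 2

    print(degree_of_unsaturation(5, 12))  # pentane, C5H12 -> 0.0 (saturated)
    print(degree_of_unsaturation(5, 8))   # C5H8 -> 2.0 (one triple bond, two
                                          # double bonds, ring + double bond,
                                          # or two rings)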
MNIST Notebook

In this notebook we will use the Clustergrammer-Widget to visualize the MNIST dataset. The MNIST dataset contains 70,000 handwritten digits. The handwritten digit images are 28x28 pixels in size, so each digit can be thought of as a 784-dimensional vector.

In [1]:

    # import Pandas and Clustergrammer-Widget
    import pandas as pd
    from clustergrammer_widget import *
    net = Network(clustergrammer_widget)

In [2]:

    # load data and store in DataFrame
    net.load_file('../processed_MNIST/MNIST_row_labels.txt')
    mnist_df = net.export_df()
    print(mnist_df.shape)

    (784, 70000)

Manually Set Digit Colors

Here we are manually setting the category colors of some of the digits.

In [3]:

    net.set_cat_color('col', 1, 'digit: Zero', 'yellow')
    net.set_cat_color('col', 1, 'digit: One', 'blue')
    net.set_cat_color('col', 1, 'digit: Two', 'orange')
    net.set_cat_color('col', 1, 'digit: Three', 'aqua')
    net.set_cat_color('col', 1, 'digit: Four', 'lime')
    net.set_cat_color('col', 1, 'digit: Six', 'purple')
    net.set_cat_color('col', 1, 'digit: Eight', 'red')
    net.set_cat_color('col', 1, 'digit: Nine', 'black')
    net.set_cat_color('col', 1, 'Majority-digit: Zero', 'yellow')
    net.set_cat_color('col', 1, 'Majority-digit: One', 'blue')
    net.set_cat_color('col', 1, 'Majority-digit: Two', 'orange')
    net.set_cat_color('col', 1, 'Majority-digit: Three', 'aqua')
    net.set_cat_color('col', 1, 'Majority-digit: Four', 'lime')
    net.set_cat_color('col', 1, 'Majority-digit: Six', 'purple')
    net.set_cat_color('col', 1, 'Majority-digit: Eight', 'red')
    net.set_cat_color('col', 1, 'Majority-digit: Nine', 'black')

Visualize Random Subsample of MNIST

We cannot directly visualize all 70,000 handwritten digits in the MNIST dataset. Instead we will take two approaches to visualizing the MNIST data: (1) random subsampling from the dataset, and (2) downsampling using K-means. Here we will randomly subsample 300 digits from the dataset. We will filter for the top 500 pixels based on their sum; this removes pixels from the corners of the images, which are always zero or almost always zero.

In [4]:

    net.load_df(mnist_df)
    net.random_sample(300, axis='col', random_state=99)
    net.filter_N_top('row', rank_type='sum', N_top=500)
    net.cluster()
    net.widget()

Clustering According to Digit

Above we see a heatmap with digits as columns and pixels as rows. We see that digits tend to cluster together, e.g. the blue Ones.

Pixel Center Value

Each pixel has a value-based category, 'Center', which is highest for pixels near the center of the image. Reordering based on the Center category highlights broad patterns in pixel distributions, for example that Zeros and Sevens generally have low values for pixels near the center of the image.

Dimensionality Reduction

We can use the "Top rows sum" and "Top rows variance" sliders to filter out rows (pixels) based on sum and variance, and observe how this affects clustering. Filtering based on sum reduces clustering quality more than filtering based on variance.

Visualize Downsampled Version of MNIST

Here we will use K-means clustering as a means to downsample our dataset. We will generate 300 clusters from our 70,000 digits and visualize these clusters using hierarchical clustering. Note that each digit-cluster (column) is labeled by the majority digit present in the cluster, and the 'number in clust' value-based category shows how many digits are in each cluster (cluster sizes range from 50 to 500). This method gives us a broad overview of the entire MNIST dataset.
In [5]:

    net.load_df(mnist_df)
    net.downsample(axis='col', ds_type='kmeans', num_samples=300)
    net.filter_N_top('row', rank_type='sum', N_top=500)
    net.cluster()
    net.widget()

Clustering According to Digit

Again, with the downsampled data we see that digits tend to cluster together. We see clear clustering of Ones (blue), Zeros (yellow), and Twos (orange). Using the dendrogram, we also see mixing of digits that have similar shapes, like:

• Threes, Eights, and Fives
• Fours and Nines
• Sevens, Nines, and Fours

Additional Views

Reordering based on pixel 'Center' value again shows us overall trends in the pixel distributions of different digits. We can also use the sliders to observe the effects of dimensionality reduction on clustering. For instance, we can retain fairly good clustering of Zeros, Sixes, and Ones when keeping only the top 50 most variable pixels.
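For readers who want to reproduce the downsampling step outside the widget, here is a rough scikit-learn sketch of the same K-means idea. This is an illustration of the concept only, not Clustergrammer's internal implementation.

    # Sketch: K-means downsampling of MNIST with scikit-learn (illustrative).
    import numpy as np
    from sklearn.cluster import KMeans

    X = mnist_df.values.T             # shape (70000, 784): one row per digit
    km = KMeans(n_clusters=300, random_state=0).fit(X)  # may take a while
    centers = km.cluster_centers_     # 300 "average digit" images
    sizes = np.bincount(km.labels_)   # number of digits in each cluster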
# Title

Library Files Utilities

# Author

Derick Eddington

# Status

This SRFI is currently in "withdrawn" status. To see an explanation of each status that a SRFI can hold, see here. To provide input on this SRFI, please mail to <srfi minus 104 at srfi dot schemers dot org>. See instructions here to subscribe to the list. You can access previous messages via the archive of the mailing list. You can access post-withdrawal messages via the archive of the mailing list.

• Draft: 2009/09/22-2009/11/22
• Revised: 2009/10/16
• Revised: 2009/12/11
• Draft extended: 2009/12/11-2010/1/11
• Revised: 2010/01/24
• Draft extended: 2010/03/04-2010/04/04
• Withdrawn: 2010/05/23

# Abstract

This SRFI implements SRFI 103: Library Files as a library. It is useful for working with library files.

# Rationale

To assist in working with library files as defined by SRFI 103, this SRFI provides an API for working with the aspects of SRFI 103. E.g., a library-manager application can use this SRFI to work with library files, or a Scheme system can use this SRFI as its means of finding external libraries.

# Specification

Implementations of this SRFI as an R6RS library must be named (srfi :104 library-files-utilities), and there must also be an alias named (srfi :104), following SRFI 97: SRFI Libraries. This specification refers to many aspects of SRFI 103: Library Files, and familiarity with it is assumed.

### Requirements

SRFI 39: Parameter Objects

### Provided Bindings

PARAMETER searched-directories

The sequence of names of directories to search for library files. It must be a list, possibly empty, of non-empty strings. If the host Scheme system implements SRFI 103, the initial value is the system's sequence of searched-directory names; otherwise it is the empty list. Mutating this parameter may or may not affect the sequence used by the host system, and mutating the sequence used by the host system may or may not affect this parameter.

PARAMETER recognized-extensions

The sequence of file-name extensions to recognize when searching for library files. It must be a list, possibly empty, of non-empty strings which do not contain the #\. character. If the host Scheme system implements SRFI 103, the initial value is the system's sequence of recognized file-name extensions; otherwise it is the empty list. Mutating this parameter may or may not affect the sequence used by the host system, and vice versa.

PARAMETER file-name-component-separator

The separator of directory names in file names. It must be the #\/ or the #\\ character. The initial value is the host platform's file-name component separator. Mutating this parameter may or may not affect the separator used by the host Scheme system, and vice versa.

PROCEDURE (directories-from-env-var)

Returns a list, possibly empty, of strings extracted from the current value of the SCHEME_LIB_PATH environment variable, in the same order they occur in the variable. If the variable is not defined, #F is returned.

PROCEDURE (extensions-from-env-var)

Returns a list, possibly empty, of strings extracted from the current value of the SCHEME_LIB_EXTENSIONS environment variable, in the same order they occur in the variable. If the variable is not defined, #F is returned.
PROCEDURE (library-name->file-name library-name extension) Given a library name, which must be a non-empty list of symbols, return a non-empty string which is the relative library-file name which represents the library name. The file-name components are derived from the symbols, encoding characters as necessary. The current value of the file-name-component-separator parameter is used to join the file-name components. The second argument is the extension to use in the file name, and it must be a non-empty string which does not contain the #\. character. Examples: (library-name->file-name '(foo) "ext") => "foo.ext" (library-name->file-name '(foo bar zab) "acme-ext") => "foo/bar/zab.acme-ext" (parameterize ((file-name-component-separator #\\)) (library-name->file-name '(:♥ λ*) "%")) => "%3A%♥\\λ%2A%.%" PROCEDURE (library-file-name-info file-name) Given a file name, which must be a non-empty string, if it is a correctly formed library-file name, return two values: (1) a non-empty list of symbols which is the library name derived from the file name, decoding characters as necessary; (2) a non-empty string which is the file-name extension, without the #\. character, from the file name. The file name should be a relative library-file name, because each file-name component, ignoring the extension, is used to make a library-name symbol. The current value of the file-name-component-separator parameter is used to recognize separate file-name components. If the file name is not a correctly formed library-file name, #F and #F are returned. Examples: (library-file-name-info "foo.ext") => (foo) "ext" (library-file-name-info "f%3C%o%3A%o.ext") => (f (♥ λ) "%2A%%3A%" (parameterize ((file-name-component-separator #\\)) (library-file-name-info "foo\\bar\\zab.ext")) => (foo bar zab) "ext" (library-file-name-info "foo") => #F #F (library-file-name-info "foo.") => #F #F (library-file-name-info ".ext") => #F #F (library-file-name-info "fo:o.ext") => #F #F (library-file-name-info "fo%61%o.ext") => #F #F (library-file-name-info "fo%03A%o.ext") => #F #F (library-file-name-info "fo%3a%o.ext") => #F #F PROCEDURE (find-library-file-names library-name) Given a library name, which must be a non-empty list of symbols, find in the directories specified by the current value of the searched-directories parameter the file names which match the library name and have extensions specified by the current value of the recognized-extensions parameter, and return an association list describing the matching file names, their directories, and their ordering. Each association represents a searched directory which contains at least one match. No association is present for a searched directory which does not contain a match. The key of each association is a non-empty string which is the name of the directory the association represents. The associations are ordered the same as their keys are in the searched-directories parameter. The value of each association is a non-empty list of non-empty strings which are the matching file names from the association's directory, and these file names are relative to that directory, and they are ordered the same as their extensions are in the recognized-extensions parameter. If no matches are found, #F is returned. 
Example: Given this structure of directories and files:

    /sd/a/
      foo/
        bar.acme-ext
        bar.ext
        bar.other-ext
        zab.ext
    sd/b/
      foo/
        bar.png
    sd/c/
      foo/
        bar.ext

    (parameterize ((searched-directories '("sd/c" "sd/b" "/sd/a"))
                   (recognized-extensions '("acme-ext" "ext")))
      (find-library-file-names '(foo bar)))
    =>
    (("sd/c" "foo/bar.ext")
     ("/sd/a" "foo/bar.acme-ext" "foo/bar.ext"))

# Reference Implementation

The reference implementation is provided as an R6RS library. It requires some R6RS bindings, SRFI 39: Parameter Objects, and SRFI 98: An Interface to Access Environment Variables. It can be a built-in library of a Scheme system, or it can be an externally-imported library. As an externally-imported library, it uses system-specific library files. As a built-in library, the system-specific library files are not used and the main library's source code should be changed to not use them.

A test program is provided as an R6RS program. It requires, in addition to the reference implementation, some R6RS bindings, SRFI 39: Parameter Objects, and SRFI 78: Lightweight Testing.

The reference implementation and tests.

# Issues

(Section which points out things to be resolved. This will not appear in the final SRFI.)

• TODO: Anything?

# Acknowledgments

I thank everyone who influenced and commented on this SRFI. I thank the editor for editing this SRFI.

# References

• SRFI 103: Library Files. Derick Eddington. http://srfi.schemers.org/srfi-103/srfi-103.html
• SRFI 39: Parameter Objects. Marc Feeley. http://srfi.schemers.org/srfi-39/srfi-39.html
• Revised6 Report on the Algorithmic Language Scheme. Michael Sperber, et al. (Editors). http://www.r6rs.org/
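As a non-normative illustration of the %HH% percent-encoding seen in the examples above, here is a small Python sketch. The exact set of characters that must be encoded is defined by SRFI 103; the reserved set below is an assumption for illustration only.

    # Illustrative only: encode one library-name component using the %HH%
    # scheme from the examples (two uppercase hex digits between '%' signs).
    # ASSUMPTION: this reserved set is a stand-in; SRFI 103 defines the
    # authoritative set of characters requiring encoding.
    RESERVED = set(':*<>./\\%')

    def encode_component(symbol):
        return ''.join(f'%{ord(ch):02X}%' if ch in RESERVED else ch
                       for ch in symbol)

    print(encode_component(':♥'))  # %3A%♥
    print(encode_component('λ*'))  # λ%2A%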
Aj

Most popular questions and responses by Aj

1. physics: Determine the resulting temperature when 150 g of ice at 0°C is mixed with 300 g of water at 50°C. (A worked numerical sketch appears at the end of this list.)

2. math: Why Was the Engineer Driving the Train Backwards?

3. Maths: 17. Atlanta, Georgia, receives an average of 27 inches of precipitation per year, and there has been 9 inches so far this year. How many more inches, p, of precipitation can Atlanta get and still stay at or below the average? (1 point) p + 9 > 27; p > 18; 9 + ...

4. Physics: To demonstrate standing waves, one end of a string is attached to a tuning fork with frequency 120 Hz. The other end of the string passes over a pulley and is connected to a suspended mass M, as shown. The value of M is such that the standing wave pattern ...

5. arithmetic: If you have 3 cups of pineapple juice, how many total cups of punch can you make? Table: Ginger Ale 40%, OJ 25%, Pineapple juice 20%, Sorbet 15%.

6. math: Oregon is about 400 miles from west to east, and 300 miles from north to south. If a map of Oregon is 15 inches tall (from north to south), about how wide is the map? PLEASE HELP AND EXPLAIN IT. Set up a proportion: 15/300 (the ratio of inches to miles), ...

7. Physics: a) The recommended daily allowance (RDA) of the trace metal magnesium is 410 mg/day for males. Express this quantity in µg/s. b) For adults, the RDA of the amino acid lysine is 12 mg per kg of body weight. How many grams per day ...

8. geo: What are the human activities in the Innuitian Mountains?

9. Geography: Which demographic group is correctly matched with its issue? A. religious people - farm subsidies; B. midwesterners - farm subsidies; C. single parents - affirmative action; D. all are matched correctly.

10. Spelling: Wordly Wise 3000 Book 7 Lesson 7C vocab answers?

11. physics: A gazelle is running in a straight line (the x-axis). The graph in the figure (Figure 1) shows this animal's velocity as a function of time. During the first 12.0 s, find the total distance moved.

12. government: Which institution developed outside the limits of the written Constitution of the United States?

13. Physics: If you are wearing a watch, what energy changes are taking place in it right now?

14. Math: If sin(x) = 1/3 and sec(y) = 29/21, where x and y lie between 0 and π/2, evaluate cos(2y) using trigonometric identities. (Enter an exact answer.)

15. Chemistry: Given H2(g) + (1/2)O2(g) -> H2O(l), ΔH = -286 kJ/mol, determine the standard enthalpy change for the reaction 2H2O(l) -> 2H2(g) + O2(g). [Work shown: reverse and double the formation reaction H2(g) + (1/2)O2(g) -> H2O(l), ΔH = -286 kJ/mol.]

44. Math: 2/3 > 3/__ ; find a number that makes the statement true. I am confused.

45. Microecon: Using the midpoints approach to the cross elasticity of demand, calculate the cross elasticity of the demand for golf at all 3 prices.

    Price  D1  D2  D3
    50     15  10  15
    35     25  15  30
    20     40  20  50

    D1: income $50k per yr, movies $9; D2: income $50k, movies $11; D3: ...

46. geometry: Suppose the diameter of a circle is 20 in. long and a chord is 16 in. long. What is the distance from the center of the circle to the chord? Draw a radius from the center to the end of the chord, then draw a radius from the center perpendicular to the chord.

47. Maths: Write an equation of the line that passes through (-5, -1) and is parallel to the line y = -3x + 6.

48. physics: A 500 g block is released from rest and slides down a frictionless track that begins 2 m above the horizontal.
At the bottom of the track, where the surface is horizontal, the block strikes and sticks to a light spring with a spring constant of 20.0 N/m. Find ...

49. physics: How much heat is absorbed by an electric refrigerator in changing 2 kg of water at 15°C to ice at 0°C?

50. Physics: An AC generator supplies an rms voltage of 5.00 V to an RL circuit. At a frequency of 21.0 kHz the rms current in the circuit is 45.0 mA; at a frequency of 26.0 kHz the rms current is 40.0 mA. Find the values of L and R.

51. Language: In "O Captain! My Captain!" I need help finding out what they are comparing to the Civil War.

How is the number of zeros in the quotient of 420 divided by 7 = 60 related to the number of zeros in the dividend?

53. Physics: A dart is thrown horizontally with an initial speed of 14 m/s toward point P, the bull's-eye on a dartboard. It hits at point Q on the rim, vertically below P, 0.19 s later. (a) What is the distance PQ? (b) How far away from the dartboard is the dart thrown?

54. Physics: A firefighter of mass 73 kg slides down a vertical pole with an acceleration of 5 m/s². The acceleration of gravity is 10 m/s². What is the friction force that acts on him? Answer in units of N.

55. physics: A projectile was fired at 35 degrees above the horizontal. At the highest point in its trajectory its speed was 200 m/s. If air resistance is ignored, what was the horizontal component of the initial velocity? Someone explain it for me.

56. algebra: There is a cylindrical perfume bottle; the height is 12.4 cm and the radius is 6.2 cm. The formula is V = h × 3.14 × r².

57. algebraic modeling: A toy manufacturer determines that the daily cost, C, for producing x units of a dump truck can be approximated by the function C(x) = 0.005x² - x + 109. I got that the manufacturer must produce 100 units per day. What is the minimum daily cost?

58. Maths: There are 150 mangoes in basket P and basket Q. 1/4 of the mangoes in basket P were yellow and the rest were green. 3/5 of the mangoes in basket Q were yellow and the rest green. The number of green mangoes in both baskets was equal. How many yellow ...

71. Chemistry: Write the balanced chemical reaction (showing appropriate symbols and states) for the chemical reaction with enthalpy change equal to and defined by the quantity ΔHformation[NH3(g)]. N2(g) + 3H2(g) -> 2NH3(g) -- is that all the question is asking for?

72. Science: Uranium-238 is less stable than oxygen-16. What accounts for this difference? (A) Uranium is a solid, while oxygen is a gas. (B) Unlike oxygen-16, uranium-238 has a nucleus in which repulsive electric forces surpass the strong nuclear forces. (C) Oxygen-16 ...

73. Physics: A skyrocket explodes 77 m above the ground. Three observers are spaced 106 m apart, with observer A directly under the point of the explosion. Find the ratio of the sound intensity heard by observer A to that heard by observer B, and the ratio of the ...

74. Physics: The time needed for a water wave to change from the equilibrium level to the crest is 0.4185 s. What fraction of a wavelength is this? What is the period of the wave? Answer in units of s. What is the frequency of the wave? Answer in units of Hz.

75. Physics: A length of organ pipe is closed at one end. If the speed of sound is 344 m/s, what length of pipe is needed to obtain a fundamental frequency of 70 Hz? Answer in units of m.

76. calculus: Simpson's Rule. Use Simpson's Rule and all the data in the following table to estimate the value of the integral.

    x: -16  -15  -14  -13  -12  -11  -10
    y:  -8    9    4    9   -5   -9    3

77.
Social Sciences - Social Anthropology: Hi, please help me, I am confused. Am I a Homo sapiens or a Homo sapiens sapiens? Thank you so much.

78. math (hw help): Determine whether the following regular tetrahedrons have an edge length that is a whole number, a rational number, or an irrational number. The surface area of a tetrahedron is given by the equation SA = √3·s². Surface area of regular tetrahedron A = 1.5 ...

79. math: (a) For any connected graph G, all internal nodes of the BFS tree on G have the same number of children. (b) For any connected graph G, the DFS tree on G and the BFS tree on G have the same number of edges.

Describe four advantages of using price as an allocating mechanism.

81. Math: A soccer field has a perimeter of 26 yards. If the width is 8 yards, how long is the length?

82. Algebra: The time, t, required to drive a certain distance varies inversely with the speed, r. If it takes 12 hours to drive the distance at 60 miles per hour, how long will it take to drive the same distance at 85 miles per hour? A. About 7.08 hours; B. About 5.00 ...

83. Grammar: Before the items that are complete sentences, write S. Before those that contain fragments, write F. Before a run-on sentence, write RS. 1. We often walk through the woods, picking berries and listening to the birds. Is this a run-on sentence?

84. Physics: A certain lightbulb has a tungsten filament with a resistance of 28 Ω when cold and 144 Ω when hot. If the equation R = R0[1 + αΔT] can be used over the large temperature range involved here, find the temperature of the filament when it is hot.

94. Chemistry: Calculate the pH of 175.0 mL of H2O following the addition of 0.57 mL of 1.39 M NaOH.

95. Science: Describe the apparent motion of the stars and planets that is caused by the revolution and rotation of the Earth.

96. Physics: A motorboat travels at a speed of 40 km/h relative to the surface of the water. If it travels upstream against a current of 12 km/h, what is the boat's speed relative to the shore?

97. Analytic Geometry: Find the equation of the parabola whose axis is horizontal, whose vertex is on the y-axis, and which passes through (2, 4) and (8, -2).

98. Physical science: A player uses a hockey stick to increase the speed of a 0.200 kg hockey puck by 6 m/s in 2 seconds. How much did the hockey puck accelerate? How much force was exerted on the puck? How much force did the puck exert on the hockey stick?

Who is the antagonist in the book Death Be Not Proud? Also, what type of conflict is it, and what is the climax of the story? Is the climax when Johnny starts having a stiff neck and they discover he has a tumor in his brain?
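As a worked illustration of the reasoning question 1 above calls for, here is a small numerical sketch (added here, using standard textbook constants for water; not part of the original listing):

    # Ice/water mixture (question 1): assume c_water = 4186 J/(kg*K) and
    # latent heat of fusion L_f = 334 kJ/kg (standard textbook values).
    c = 4186.0          # J/(kg*K)
    Lf = 334e3          # J/kg

    m_ice, m_water, T_water = 0.150, 0.300, 50.0
    Q_available = m_water * c * T_water   # heat released cooling water to 0 C
    Q_melt = m_ice * Lf                   # heat needed to melt all the ice

    if Q_melt <= Q_available:
        # all the ice melts; leftover heat warms the combined 0.45 kg of water
        T_final = (Q_available - Q_melt) / ((m_ice + m_water) * c)
    else:
        T_final = 0.0                     # mixture stays at 0 C, ice remains

    print(round(T_final, 1))              # about 6.7 C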
# American Institute of Mathematical Sciences

March 2017, 10(1): 263-298. doi: 10.3934/krm.2017011

## On the classical limit of a time-dependent self-consistent field system: Analysis and computation

1 Department of Mathematics, University of Wisconsin-Madison, 480 Lincoln Drive, Madison, WI 53706, USA
2 Department of Mathematics, Institute of Natural Sciences and MOE Key Lab in Scientific and Engineering Computing, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai, 200240, China
3 Department of Mathematics, Statistics, and Computer Science, M/C 249, University of Illinois at Chicago, 851 S. Morgan Street, Chicago, IL 60607, USA
4 Department of Mathematics, Duke University, Box 90320, Durham, NC 27708, USA

Dedicated to Peter Markowich on the occasion of his 60th birthday

Received: October 2015. Revised: January 2016. Published: November 2016.

Fund Project: This work was partially supported by NSF grants DMS-1522184 and DMS-1107291: NSF Research Network in Mathematical Sciences KI-Net: Kinetic description of emerging challenges in multiscale problems of natural sciences. C.S. acknowledges support by the NSF through grant numbers DMS-1161580 and DMS-1348092.

We consider a coupled system of Schrödinger equations, arising in quantum mechanics via the so-called time-dependent self-consistent field method. Using Wigner transformation techniques we study the corresponding classical limit dynamics in two cases. In the first case, the classical limit is only taken in one of the two equations, leading to a mixed quantum-classical model which is closely connected to the well-known Ehrenfest method in molecular dynamics. In the second case, the classical limit of the full system is rigorously established, resulting in a system of coupled Vlasov-type equations. In the second part of our work, we provide a numerical study of the coupled semi-classically scaled Schrödinger equations and of the mixed quantum-classical model obtained via Ehrenfest's method. A second order (in time) method is introduced for each case. We show that the proposed methods allow time steps independent of the semi-classical parameter(s) while still capturing the correct behavior of physical observables. It also becomes clear that the order of accuracy of our methods can be improved in a straightforward way.

Citation: Shi Jin, Christof Sparber, Zhennan Zhou. On the classical limit of a time-dependent self-consistent field system: Analysis and computation. Kinetic & Related Models, 2017, 10(1): 263-298. doi: 10.3934/krm.2017011

Figure captions:

• The diagram of semi-classical limits: the iterated limit and the classical limit.
• Reference solution: $\Delta x=\Delta y=\frac{2\pi}{32768}$ and $\Delta t=\frac{0.4}{4096}$. Upper picture: fix $\Delta y=\frac{2\pi}{32768}$ and $\Delta t=\frac{0.4}{4096}$; take $\Delta x=\frac{2\pi}{16384},\frac{2\pi}{8192},\ldots,\frac{2\pi}{8}$. Lower picture: fix $\Delta x=\frac{2\pi}{32768}$ and $\Delta t=\frac{0.4}{4096}$; take $\Delta y=\frac{2\pi}{16384},\frac{2\pi}{8192},\ldots,\frac{2\pi}{8}$.
These results show that, when $\delta=O(1)$ and $\varepsilon\ll 1$, the meshing strategy $\Delta x=O(\delta)$ and $\Delta y=O(\varepsilon)$ is sufficient for obtaining spectral accuracy.
• Reference solution: $\Delta x=\frac{2\pi}{512}$, $\Delta y=\frac{2\pi}{16384}$, $\Delta t=\frac{0.4}{4096}$. SSP2: fix $\Delta x=\frac{2\pi}{512}$ and $\Delta y=\frac{2\pi}{16384}$; take $\Delta t=\frac{0.4}{1024},\frac{0.4}{512},\ldots,\frac{0.4}{8}$. These results show that, when $\delta=O(1)$ and $\varepsilon\ll 1$, the SSP2 method is unconditionally stable and is second order accurate in time.
• Fix $\Delta t=0.05$. For $\varepsilon=\frac{1}{64},\frac{1}{128},\ldots,\frac{1}{4096}$, take $\Delta x=\frac{2\pi\varepsilon}{16}$, respectively. The reference solution is computed with the same $\Delta x$, but $\Delta t=\frac{\varepsilon}{10}$. These results show that $\varepsilon$-independent time steps can be taken to obtain accurate physical observables, but not accurate wave functions.
• $\varepsilon=\frac{1}{512}$: first row, position density and current density of $\varphi^\varepsilon$. $\varepsilon=\frac{1}{2048}$: first row, position density and flux density of $\varphi^\varepsilon$; second row, position density and current density of $\psi^\varepsilon$.
• Fix $\Delta t=0.005$. For $\varepsilon=\frac{1}{256},\frac{1}{512},\ldots,\frac{1}{4096}$, take $\Delta x=\frac{\varepsilon}{8}$, respectively. The reference solution is computed with the same $\Delta x$, but $\Delta t=\frac{0.54\,\varepsilon}{4}$. These results show that $\varepsilon$-independent time steps can be taken to obtain accurate physical observables, but not accurate wave functions.
• Fix $\varepsilon=\frac{1}{256}$ and $\Delta t=\frac{0.4\,\varepsilon}{16}$. Take $\Delta x=\frac{2\pi\varepsilon}{32},\frac{2\pi\varepsilon}{16},\ldots,\frac{2\pi\varepsilon}{1}$, respectively. The reference solution is computed with the same $\Delta t$, but $\Delta x=\frac{2\pi\varepsilon}{64}$. These results show that, when $\delta=\varepsilon\ll 1$, the meshing strategy $\Delta x=O(\varepsilon)$ and $\Delta y=O(\varepsilon)$ is sufficient for obtaining spectral accuracy.
• Fix $\varepsilon=\frac{1}{1024}$ and $\Delta x=\frac{2\pi}{16}$. Take $\Delta t=\frac{0.4}{32},\frac{0.4}{64},\ldots,\frac{0.4}{1024}$, respectively. The reference solution is computed with the same $\Delta x$, but $\Delta t=\frac{0.4}{8192}$. These results show that, when $\delta=\varepsilon\ll 1$, the SSP2 method is unconditionally stable and is second order accurate in time.
• Fix $\Delta t=\frac{0.4}{64}$. For $\delta=\frac{1}{256},\frac{1}{512},\ldots,\frac{1}{4096}$, take $\Delta x=\frac{2\pi\varepsilon}{16}$, respectively. The reference solution is computed with the same $\Delta x$, but $\Delta t=\frac{\delta}{10}$. These results show that $\delta$-independent time steps can be taken to obtain accurate physical observables and classical coordinates, but not accurate wave functions.
• Fix $\delta=\frac{1}{1024}$ and $\Delta x=\frac{2\pi}{16}$. Take $\Delta t=\frac{0.4}{32},\frac{0.4}{64},\ldots,\frac{0.4}{1024}$, respectively. The reference solution is computed with the same $\Delta x$, but $\Delta t=\frac{0.4}{8192}$.
These results show that the SVSP2 method is unconditionally stable and is second order accurate in time. (A generic sketch of the underlying time-splitting idea follows these captions.)
Inverse Problems & Imaging, 2021, 15 (5) : 893-928. doi: 10.3934/ipi.2021021 [18] Gerasimenko Viktor. Heisenberg picture of quantum kinetic evolution in mean-field limit. Kinetic & Related Models, 2011, 4 (1) : 385-399. doi: 10.3934/krm.2011.4.385 [19] Thomas Chen, Ryan Denlinger, Nataša Pavlović. Moments and regularity for a Boltzmann equation via Wigner transform. Discrete & Continuous Dynamical Systems, 2019, 39 (9) : 4979-5015. doi: 10.3934/dcds.2019204 [20] Robert I. McLachlan, Ander Murua. The Lie algebra of classical mechanics. Journal of Computational Dynamics, 2019, 6 (2) : 345-360. doi: 10.3934/jcd.2019017 2020 Impact Factor: 1.432
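Not part of the paper's text: second-order-in-time claims like those above are typically verified by comparing errors at successive time steps against the reference solution; a minimal sketch (mine) of that check:

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Estimate p in err(dt) ~ C * dt**p from errors at dt and dt/refinement."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# Halving dt should divide the error by ~4 for a second-order scheme:
print(observed_order(4.0e-3, 1.0e-3))  # -> 2.0
```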
Statistics - Maple Programming Help

Statistics[Biplot] - generate biplots

Calling Sequence
Biplot(dataset, options, plotoptions)

Parameters
dataset - data set, DataFrame, or PCArecord
options - (optional) equation(s) of the form option=value, where option is one of arrows, arrowlabels, components, dimension, pcbiplot, points, pointlabels, or scale; specify options for generating the biplot
plotoptions - options to be passed to the plots[display] command

Options
The options argument can contain one or more of the options shown below. All unrecognized options will be passed to the plots[display] command. See plot[options] for details.
• arrows : truefalse or list; controls the display of arrows corresponding to each principal component. The default is true. If the arrows option is given as a list, the arrows are shown and any elements of the list are passed as plot options to the arrow constructor.
• arrowlabels : truefalse or list; specifies the labels shown on the arrows corresponding to each column of the data. The default is true. If the dataset is a DataFrame, then the biplot automatically uses the column names from the DataFrame as labels. If the dataset is a Matrix, then the arrowlabels must be provided as a list, otherwise no labels are shown. The default arrow labels can be overridden by specifying a list containing the new values.
• components : list; specifies the principal components used in the biplot. By default, Biplot uses the first two principal components for 2-D plots and the first three principal components for 3-D plots. The default is [1,2].
• dimension : integer; specifies the number of dimensions, either 2 or 3, of the resulting biplot. The default is 2.
• pcbiplot : truefalse; controls whether, with lambda = 1, observations are scaled up by $\sqrt{n}$ and variables are scaled down by $\sqrt{n}$. This is referred to as a "principal component biplot", Gabriel (1971).
• points : truefalse or list; controls the display of points corresponding to the individual rows of the principal components. The default is true. If the points option is given as a list, the points are shown and any elements of the list are passed as plot options to the plot constructor.
• pointlabels : truefalse or list; controls the display of point labels. The default is false. If the dataset is a DataFrame, the row names from the DataFrame are used. If the dataset is a Matrix, the numbers 1 through $n$ are used, where $n$ is the number of rows of the Matrix. The default point labels can be overridden by specifying a list containing the new values.
• scale : numeric value between 0 and 1; controls how the plot is scaled: the variables are scaled by ${\mathrm{\lambda }}^{\mathrm{scale}}$ and the observations are scaled by ${\mathrm{\lambda }}^{1-\mathrm{scale}}$, where lambda are the singular values computed by the principal component analysis. The default is 1.

Description
• The Biplot command generates a biplot for the specified set of data. A biplot is a method of data visualization suitable for the results of a principal components analysis.
• The first parameter, dataset, can be a numeric Matrix or DataFrame with 2 or more columns, or a record generated by a principal component analysis. In the case that dataset is either a Matrix or a DataFrame, a principal component analysis is run on the dataset and the results are used for the biplot.

Examples
> with(Statistics):
Generate a biplot for the Iris dataset.
> IrisDF := Import("datasets/iris.csv", base = datadir)
DataFrame(..., rows = [1, 2, ..., 150], columns = [Sepal Length, Sepal Width, Petal Length, Petal Width, Species])   (1)
> pca := PCA(IrisDF[[Sepal Length, Sepal Width, Petal Length, Petal Width]]):
A Biplot can also be used to show the first two components and the observations on the same diagram. The first principal component is plotted on the x-axis and the second on the y-axis.
> Biplot(pca, size = [600, "golden"])
From the biplot, it can be observed that petal width and length are highly correlated and their variability can be primarily attributed to the first component. Likewise, the first component also explains a large part of the sepal length. The variability in sepal width is attributed more to the second component.
It is also possible to generate a biplot displaying other principal components using the components option. For example, here is a plot of the third and fourth principal components:
> Biplot(pca, components = [3 .. 4], scale = 0.5)
It is possible to view the first three components using the dimension option. Also, the colorscheme option applies different colors based on the various levels in the "Species" column.
> Biplot(pca, dimension = 3, points = [colorscheme = ["valuesplit", IrisDF[Species]]], lightmodel = none, orientation = [-50, 50, 0])
The canada_crimes.csv dataset contains information on types of crimes committed per 100000 people:
> CCdata := Import("datasets/canada_crimes.csv", base = datadir)
DataFrame(..., rows = [Newfoundland and Labrador, Prince Edward Island, Nova Scotia, New Brunswick, Quebec, Ontario, Manitoba, Saskatchewan, Alberta, British Columbia, Yukon, Northwest Territories, Nunavut], columns = [Violent Crime, Property Crime, Other Criminal Code, Criminal Code Traffic, Federal Statute])   (2)
The pointlabels option controls whether the points in the biplot include labels or not. Additional options such as axes or size are passed to the plots:-display command.
> Biplot(PCA(CCdata, scale = true), points = false, pointlabels = true, arrows = [color = "Crimson"], axes = normal, size = [800, "golden"], view = [-1 .. 1, -0.5 .. 0.5])

References
Gabriel, K.R. (1971). The biplot graphical display of matrices with applications to principal component analysis. Biometrika, 58, 453-467.

Compatibility
• The Statistics[Biplot] command was introduced in Maple 2016.
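The Maple implementation is not shown on this page; as a cross-language illustration of the scale option described above (observations scaled by $\lambda^{1-\mathrm{scale}}$, variables by $\lambda^{\mathrm{scale}}$), here is a minimal NumPy sketch. The function name and the conventions (column centering, thin SVD) are my assumptions, not part of the Maple documentation:

```python
import numpy as np

def biplot_coords(X, scale=1.0, components=(0, 1)):
    """PCA biplot coordinates via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    i = list(components)
    points = U[:, i] * s[i] ** (1.0 - scale)   # observation scores
    arrows = Vt[i].T * s[i] ** scale           # variable loadings (arrows)
    return points, arrows

rng = np.random.default_rng(0)
pts, arr = biplot_coords(rng.normal(size=(150, 4)), scale=0.5)
```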
### gsdGet1x

Get an array from a GSD file

#### Description:

This routine returns the values of an array GSD item. The item must be specified by the file descriptor, item descriptor array, data array and item number.

<t> | <type>   | Fortran      | GSD
b   | char     | byte         | byte
l   | char     | logical*1    | logical
w   | short    | integer*2    | word
i   | int      | integer*4    | integer
r   | float    | real*4       | real
d   | double   | real*8       | double
c   | char[16] | character*16 | char

This routine does not convert between types. If the type of the GSD item does not match the type of the routine, then it returns with an error.

It is possible to get only part of the array. Although the part can be specified in terms of an N-dimensional array, this routine does not take a proper N-D section of the array. The caller can specify the start pixel in N dimensions and the end pixel in N dimensions. These two pixels will be converted to memory locations and all memory between the two is returned. This emulates the old GSD library. It is useful really only for parts of 1-D arrays, parts of rows, or single pixels.

#### Invocation

int gsdGet1{blwird}( void *file_dsc, void *item_dsc, char *data_ptr, int itemno, int ndims, int *dimvals, int *start, int *end, <type> *values, int *actvals );

#### Arguments

##### void *file_dsc (Given)
The GSD file descriptor.

##### void *item_dsc (Given)
The array of GSD item descriptors related to the GSD file.

##### char *data_ptr (Given)
The buffer with all the data from the GSD file.

##### int itemno (Given)
The number of the item in the GSD file.

##### int ndims (Given)
The dimensionality the calling routine uses to specify the start and end elements.

##### int *dimvals (Given)
The array of ndims dimensions (array sizes along each axis).

##### int *start (Given)
The array indices for the first element.

##### int *end (Given)
The array indices for the last element.

##### <type> *values (Returned)
The data values. The calling routine must make sure that sufficient memory is provided. Thus it must find out the data type and array size before calling this routine. If the data type is character, then the routine returns a byte buffer with all strings concatenated. There are no string terminators in the buffer and there is none at the end. Each string is 16 bytes long and immediately followed by the next string.

##### int *actvals (Returned)
The number of array values returned. This saves the caller from working out how many array elements correspond to start and end given the dimvals.

#### Returned Value

##### int gsdGet1<t>();
Status.
• [1:] Failure to read the item values.
• [2:] Numbered item cannot be found.
• [4:] Given start and end are inconsistent.
• [0:] Otherwise.

#### Prototype available via #include "gsd.h"
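The start/end-to-memory emulation described above can be made concrete. The following is a minimal Python sketch of mine, not part of the GSD API; column-major (Fortran-order) layout with 1-based indices is my assumption, based on the library's Fortran heritage:

```python
def linear_offset(indices, dimvals):
    """Column-major (Fortran-order) offset of an N-D pixel, 1-based indices."""
    offset, stride = 0, 1
    for idx, dim in zip(indices, dimvals):
        offset += (idx - 1) * stride
        stride *= dim
    return offset

dims = [8, 4]                       # an 8x4 array
lo = linear_offset([3, 2], dims)    # start pixel
hi = linear_offset([6, 2], dims)    # end pixel
count = hi - lo + 1                 # actvals: every element between the two
print(lo, hi, count)                # 10 13 4
```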
# Cosmic survey finds global appeal

Partners line up to join the Large Synoptic Survey Telescope.

The Large Synoptic Survey Telescope (artist's impression) will map the sky every three nights. Credit: T. Mason/Mason Productions/LSST Corp.

The past few years have not been the best of times for building observatories. But in a world of budget constraints and schedule delays, the Large Synoptic Survey Telescope (LSST) is bucking the trend. The US-led project to build the world's most powerful sky-mapping machine has nailed down international partnerships to fund project operations, which are intended to start in 2022. The commitments could help it to secure a final blessing from a key group: the board of the US National Science Foundation (NSF), which met this week in Washington DC.

From its perch atop Cerro Pachón in Chile, the proposed 8.36-metre telescope would map the entire southern sky every three nights, generating a wealth of data on transient events such as supernovae and passing asteroids, and helping to discern the nature of dark energy, which is accelerating the expansion of the Universe. Massive computing centres would store the data and allow astronomers around the world to access them remotely. Snapshots of portions of the sky would be released every minute, and more detailed maps would come out once a year.

The project reflects a shift in astronomy from the study of individual objects to surveys and big data. It has wide appeal: in 2010, it came out top of a decadal survey of US funding priorities in astronomy and astrophysics. The telescope is expected to produce many more data than the Sloan Digital Sky Survey, a highly productive northern survey. In less than two nights, the LSST will cover the same amount of sky as the Sloan managed in 8 years.

Organizers are confident that they will secure construction money. Aided by a total of US$30 million from philanthropists Bill Gates and Charles Simonyi, the project has already cast its primary mirror. The US Department of Energy (DOE) has committed $160 million towards a 3.2-gigapixel camera, and the NSF expects to be able to provide $466 million to build the rest of the telescope. But the foundation is concerned about the high cost of operating the data centres that will deal with the telescope's output of 13 terabytes of data per night. Anthony Tyson, a physicist at the University of California, Davis, and director of the LSST project, says that in 2011, the NSF told him to shift the emphasis of his international fund-raising efforts from construction to operations.

The project is pioneering an innovative partnership model. In most astronomical consortia, members get a share of the telescope time that is proportional to the money they have put in. But with the LSST, institutions buy access to data: $20,000 in annual support secures access for a principal investigator, two postdoctoral researchers and unlimited graduate students. "It's a good deal, right?" says Sidney Wolff, president of the non-profit LSST Corporation in Tucson, Arizona.

Credit: Source: LSST Corp.

Tyson found that recruiting partners was easy.
He says that word would get out among astronomers in a country, and multiple institutions would soon be asking to join. "It mushroomed," he says. "It was limited purely by the number of hours I could stay awake." By the end of April this year, he had met his goal: 68 letters of intent from institutions across 26 nations, enough to cover nearly one-third of the annual operations costs of $37 million (see 'Sky mappers'). However, the first round of fund-raising has been closed to new partners, and Tyson says that some astronomers in countries such as France are disappointed that they missed out. (Astronomers in the United States and the project's host country, Chile, will have free, unlimited access to the data.)

The international support has reassured the NSF: as Nature went to press, the board was expected to approve the project on 18 July. Approval would allow the NSF to ask Congress for construction funding from 2014. "We're fairly confident," says Steven Kahn, a physicist at SLAC National Accelerator Laboratory in Menlo Park, California, and deputy director of the LSST project. "We've had lots of hurdles put in our path, and we've jumped over them."

But the project could still be endangered if the NSF and the DOE don't get along. In 2010, the foundation pulled out of a plan to build an underground laboratory for DOE experiments in South Dakota. "There's still a lot of nervousness about interagency collaboration," says David MacFarlane, an astrophysicist at SLAC and chairman of the board of the LSST Corporation. But the agencies have drawn up a formal agreement that could help to reassure the NSF board that the collaboration is on solid ground.

Andy Woodsworth, a physicist in Victoria, Canada, who at the end of May chaired an external review of the project, says that the LSST has already found its footing. "The time has come for this sort of survey," he says.

Hand, E. Cosmic survey finds global appeal. Nature 487, 284 (2012). https://doi.org/10.1038/487284a
# Module is Submodule of Itself ## Theorem Let $\left({G, +_G, \circ}\right)_R$ be an $R$-module. Then $\left({G, +_G, \circ}\right)_R$ is a submodule of itself. ## Proof Follows directly from the fact that a group is a subgroup of itself. $\blacksquare$
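A one-line expansion of the proof (my addition, not part of the ProofWiki page): a subset $H \subseteq G$ is a submodule precisely when $(H, +_G)$ is a subgroup of $(G, +_G)$ and $\lambda \circ h \in H$ for all $\lambda \in R$ and $h \in H$. Taking $H = G$, the first condition is the quoted fact that a group is a subgroup of itself, and the second holds trivially because $\circ$ maps $R \times G$ into $G$.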
# Supersymmetric Path Integrals I: Differential Forms on the Loop Space

#### Authors: Florian Hanisch, Matthias Ludewig (2017)

In this paper, we construct an integral map for differential forms on the loop space of Riemannian spin manifolds. In particular, the even and odd Bismut-Chern characters are integrable by this map, with their integrals given by indices of Dirac operators. We also show that our integral map satisfies a version of the localization principle in equivariant cohomology. This should provide a rigorous background for supersymmetry proofs of the Atiyah-Singer index theorem.
# AbsoluteOptions prints error messages in V10

Bug introduced in 10.0.0 and fixed in 10.0.2

I am trying to use AbsoluteOptions in V10, but I am getting the Out with some errors, as seen in this picture: does anyone have the same issue?

– Same issue in Linux version. – alephalpha Jul 10 '14 at 8:42
– Yes. I get it too. Windows 7. – WalkingRandomly Jul 10 '14 at 10:00
– I can confirm it on Win 8.1. – Silvia Jul 10 '14 at 10:48
– Reproduced on the Wolfram Programming Cloud. – RunnyKine Jul 10 '14 at 14:12
– Errors still present in 10.0.1. – Mr.Wizard Sep 17 '14 at 15:54
– It is not really fixed: try Plot[Sin[x], {x, 0, 4}, PlotTheme -> "Scientific"] // AbsoluteOptions and see here. – gwr Dec 12 '14 at 17:36
Thread: rotation matrix

2022-09-22, 03:01 #12
bhelmes

Is it possible to calculate a corresponding rotation matrix from a vector?

Quote (Originally Posted by Dr Sardonicus): No, sir. If M is a 2x2 matrix, scalar multiplication of M by k produces a matrix with determinant k^2*det(M). Thus, if M is a nonsingular 2x2 matrix, det((1/det(M))M) is 1/det(M). If M is 2x2 and det(M) is not a square, no scalar multiple of M will have determinant 1.

A peaceful, early morning, especially for Dr Sardonicus,

let $p=31$, $u_1=2$, $v_1=3$, so that the norm of $(u_1,v_1)$ is $u_1^2+v_1^2=13=12^{-1} \bmod 31$ and $13^2=20^{-1} \bmod 31$.

Is it possible, from linear algebra, to calculate a corresponding rotation matrix from this vector?

The calculated target is:

$13\begin{pmatrix} 27 & 2 \\ -2 & 27 \end{pmatrix}\begin{pmatrix} 2 \\ 3 \end{pmatrix}=\begin{pmatrix} 5 \\ 9 \end{pmatrix}, \qquad 13\begin{pmatrix} 14 & 17 \\ -17 & 14 \end{pmatrix}\begin{pmatrix} 2 \\ 3 \end{pmatrix}=\begin{pmatrix} 4 \\ 11 \end{pmatrix}, \qquad 13\begin{pmatrix} 24 & 8 \\ -8 & 24 \end{pmatrix}\begin{pmatrix} 2 \\ 3 \end{pmatrix}=\begin{pmatrix} 6 \\ 15 \end{pmatrix}$

http://localhost/devalco/unit_circle/system_tangens.php (the red coloured boxes on the right; all calculations checked, and everything seems to be all right.)

My first try: Let $p=31$ and let

$M_1 = 13 M_1^* = 13\begin{pmatrix} a^* & b^* \\ -b^* & a^* \end{pmatrix}$

1. with $\det(M_1) = 13^2 \det(M_1^*) = 1$, so that $\det(M_1^*)=20 \bmod 31$, therefore $a^{*2}+b^{*2}=20$;
2. with $M_1 (u_1,v_1)^T=(u_2,v_2)^T$ and norms $u_1^2+v_1^2=u_2^2+v_2^2=13 \bmod p$, so that

$13\begin{pmatrix} a^* & b^* \\ -b^* & a^* \end{pmatrix}\begin{pmatrix} u_1 \\ v_1 \end{pmatrix}=\begin{pmatrix} u_2 \\ v_2 \end{pmatrix}$

This is more a fragment and should point in one direction; as it is too late for me, I hope that someone could finish the calculation.
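Not part of the thread: the search space is small enough to enumerate. A brute-force Python sketch (names mine) that finds matrices of the posted form, assuming $p = 31$ and $(u_1, v_1) = (2, 3)$; it recovers the three target matrices above among its output:

```python
# Find a, b with a^2 + b^2 = 20 (mod 31), so that det(13 * M*) = 1, and map
# (2, 3) with M = 13 * [[a, b], [-b, a]] (mod 31).
p = 31
u1, v1 = 2, 3
for a in range(p):
    for b in range(p):
        if (a * a + b * b) % p != 20:
            continue                        # need det(M*) = a^2 + b^2 = 20
        u2 = 13 * (a * u1 + b * v1) % p
        v2 = 13 * (-b * u1 + a * v1) % p
        # norm is preserved: 13^2 * 20 = 280 = 1 (mod 31)
        assert (u2 * u2 + v2 * v2) % p == (u1 * u1 + v1 * v1) % p
        print(f"a={a}, b={b}: ({u1},{v1}) -> ({u2},{v2})")
```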
## Our 2015 – 2016 Poster

The image shows a fractal set (called "Christiane's Hair" in a recent paper) that is constructed by vertically "stacking" Cantor sets with continuously changing scaling ratios. The binary tree structure of the Cantor set is thereby emphasized, with "gaps" at a certain size scale separating the branches of the binary tree at that size scale. The construction also highlights the fact that the Cantor set is constructed by "ripping apart" the interval and inserting gaps. While each horizontal slice of the image is self-similar, the entire set is not, since the scaling ratios vary. The image is colored in a self-similar manner, according to the percentage of 0's and 1's in the "address" of the given point (the proportion of left or right branches taken in traversing the binary tree).

Fractals are mathematical objects that exhibit some type of scaling behaviour. The most commonly encountered (and popular) fractals are geometric sets, although there exists a very large variety of fractal constructions. Fractals have been studied within "pure" mathematics as examples of objects with very interesting behaviour (for example, see below for some properties of the Cantor set). Fractals have also been useful in many areas of applied science, in characterizing or describing complicated phenomena. Fractals can be obtained as the supports of invariant measures of iterated function systems (IFS) with probabilities. Invariant measures also describe the long-term statistical behaviour of orbits of iterated function systems.

The Cantor (ternary) set is perhaps the oldest known example of a fractal set and is commonly used as an example of a set with somewhat startling properties. One way to obtain the Cantor set is to start with a closed interval of unit length, and remove the middle third, leaving two scaled-down closed intervals each with length equal to one-third. Then these closed intervals have their middle thirds removed, and the process is repeated ad infinitum.

The Cantor set is closed and compact. It is dense-in-itself, since every open set containing a point from the Cantor set contains other points from the Cantor set. Thus it is not scattered. However, it is totally disconnected: given any two points in the Cantor set, there exist two open sets in the plane that are disjoint and each set contains one of the given points. The Cantor set is uncountable, despite a geometrical way to obtain it that involves countably many steps. There is no length left after removing all the middle thirds. The scaling dimension is ln 2/ln 3, approximately 0.63. This dimension between 0 and 1 is consistent with the fact that there are too many points to count, but no length to measure.

– Viqar Husain
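A minimal sketch (mine, not from the poster) of the middle-thirds construction described above; it generates the intervals of the n-th approximation and shows the remaining length $(2/3)^n$ shrinking toward zero:

```python
def cantor_intervals(n):
    """Intervals of the n-th approximation of the middle-thirds Cantor set."""
    intervals = [(0.0, 1.0)]
    for _ in range(n):
        next_intervals = []
        for lo, hi in intervals:
            third = (hi - lo) / 3.0
            next_intervals.append((lo, lo + third))   # keep the left third
            next_intervals.append((hi - third, hi))   # keep the right third
        intervals = next_intervals
    return intervals

ivals = cantor_intervals(5)
print(len(ivals), sum(hi - lo for lo, hi in ivals))   # 32 intervals, length (2/3)**5
```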
# Calculate the volume of hydrogen required for complete hydrogenation of 0.25 dm^3 of ethyne at STP?

Jul 18, 2015

You'd need ${\text{0.50 dm}}^{3}$ of hydrogen.

#### Explanation:

Like with any stoichiometry problem, the key tool you have at your disposal is the mole ratio. However, an interesting thing takes place when you're dealing with gases that are under the same conditions of pressure and temperature. In such cases, the mole ratio becomes the volume ratio.

The idea is that, since you're dealing with two ideal gases, you can use the ideal gas law equation to write

$P \cdot {V}_{1} = {n}_{1} \cdot R T$ $\to$ for ethyne;

$P \cdot {V}_{2} = {n}_{2} \cdot R T$ $\to$ for hydrogen.

The pressure and the temperature are the same for both gases, since the reaction presumably takes place at STP. If you divide these two equations, you'll get

$\frac{\cancel{P} \cdot {V}_{1}}{\cancel{P} \cdot {V}_{2}} = \frac{{n}_{1} \cdot \cancel{R T}}{{n}_{2} \cdot \cancel{R T}}$

This is equivalent to

${n}_{1} / {n}_{2} = {V}_{1} / {V}_{2}$

The mole ratio is equivalent to the volume ratio. All you need now is the balanced chemical equation for the reaction, which looks like this

${C}_{2} {H}_{2 \left(g\right)} + \textcolor{red}{2} {H}_{2 \left(g\right)} \to {C}_{2} {H}_{6 \left(g\right)}$

Notice that you have a $1 : \textcolor{red}{2}$ mole ratio between ethyne and hydrogen gas. This means that the reaction needs twice as many moles of hydrogen gas as of ethyne. This translates into volumes as well. In other words, the volume of hydrogen gas must be twice as big as the volume of ethyne.

$0.25 \cancel{{\text{dm}}^{3} \text{ ethyne}} \cdot \frac{\textcolor{red}{2} {\text{ dm}}^{3} \text{ hydrogen}}{1 \cancel{{\text{dm}}^{3} \text{ ethyne}}} = \textcolor{green}{{\text{0.50 dm}}^{3} \text{ hydrogen}}$
# Sensor Systems

# Measurement Domains

An oscilloscope is good for measuring delays in time, but is bad for high-precision amplitude measurements. A Bode analyzer is good for phase shifts, but bad for measuring delays. A multimeter is good for high-precision voltage measurements, but bad if the signal changes faster than every 10 seconds or so. In this way, every domain has its own vocabulary, toolset, mathematical models and application areas.

## Introduction

In figure 1 we see three screenshots of typical tools that are specific to three measurement domains. To make a convincing measurement that captures a problem or solution in a convincing way, the right tool must be selected. The same is true for the mathematics. In the next sections, the three most important domains are discussed.

Fig. 1: Three measurement tools: oscilloscope, bode analyzer and multimeter (from the National Instruments ELVIS toolbox)

## Amplitude Domain

### Amplitude domain: examples

• Measure battery voltage, cable resistance, etc. in a car
• Current clamp in Irms
• Measure temperature with a resistive sensor

### Amplitude domain: characteristics

• We talk about absolute values like Volts
• Gain to relate an output simply to an input
• Use RMS for powers and noise

$$U_{RMS}=\sqrt{\frac{1}{T}\int_{0}^{T} U^{2} \left ( t \right ) dt} \label{eq:RMS}$$

### Amplitude domain: models

• Assume things are (quasi) static
• Apply Kirchhoff's laws ($\Sigma I_{node}=0$, $\Sigma U_{loop}=0$)
• Notice voltage dividers:

$$U_{out}=U_{in} \frac{R_{out}}{R_{series}+R_{out}} \label{eq:VoltageDivider}$$

Fig. 2: The voltage divider

### Amplitude domain: equipment

• The multimeter is our high-precision friend
• The RMS mode is normally not "true RMS"
• A (handheld) multimeter is battery operated and is therefore perfectly differential: there are no ground problems
• The internal resistance may change per range (systematic errors)
• A function generator has a Thevenin equivalent with an internal resistance ($50 \Omega$)

Fig. 3: A function generator and a Thevenin equivalent circuit

## Time Domain

### Time domain: examples

Fig. 4: Time domain evaluation of a circuit

In the time domain, we have to deal with transients: rise times and delays.

• Design a controller board: connect a memory chip to the microcontroller
• Optimize a PID controller
• Work on echo cancellation

### Time domain: characteristics

• Signals are electrical voltages that change in time
• Low bandwidth signals change slowly
• We talk about rise times, overshoot, delays

### Time domain: models

• Differential equations make good models
• Example: RC network (remember?)

Fig. 5: An RC network as a low pass filter

To capture such a system in a mathematical equation, we first write down the component equations. In this case, that is Ohm's law for the resistor and the voltage-current relation for the capacitor. Next, we need the network equations. These follow from Kirchhoff's current and voltage laws.

$$\begin{matrix} I_{C}=C\frac{\partial U_{C}}{\partial t} \\ U_{R}=I_{R}R \\ I_{C}=I_{R} \\ U_{out}=U_{C}=U_{in}-U_{R} \end{matrix} \label{eq:RC_Network}$$

After combining the equations, we find

$$U_{out}+RC\frac{\partial U_{out}}{\partial t}=U_{in} \label{eq:RC_DiffEquation}$$

which can be solved analytically for a step input at t = 0 s as

$$U_{out}\left ( t \right ) = U_{in} \left ( 1-e^{-\frac{t}{RC}} \right ). \label{eq:RC_RC_AnSolution}$$
### Time domain: equipment

• Oscilloscope for analog signals
• Logic analyzers for digital signals
• Function generator or AWG to generate a signal
• Counter to count events
• Two options:
  • Event-based signals — need a trigger to locate the effect
  • Periodic signals — need a trigger to synchronize signals

## Frequency Domain

### Frequency domain: examples

• AM modulation as used in radio
• Mechanical vibrations (Tacoma Narrows Bridge 1940)
• Acoustics: wah-wah guitar pedal

### Frequency domain: characteristics

• Phase, gain (bode plot)
• Cut-off frequencies
• Spectrum
• "Impedance" and not "resistance"

### Frequency domain: models

• Laplace transforms (or Fourier) give easier mathematics than differential equations

As an example we look at the same RC network as used in figure 5. In impedance form, the component equations become

$$Z_{R}\left ( j\omega \right )=R, \qquad Z_{C}\left ( j\omega \right )=\frac{1}{j\omega C} \label{eq:RC_NetworkImpedanceComponentEq}$$

We could combine the component equations with the network equations (from Kirchhoff), but because all components have become simple impedances, we can also work with simple voltage dividers. Using a voltage divider, we can see that

$$U_{out}\left ( j\omega \right ) = \frac{Z_{C}\left ( j\omega \right )}{Z_{C}\left ( j\omega \right )+Z_{R}\left ( j\omega \right )}U_{in}\left ( j\omega \right ) \label{eq:RC_ImpedanceVoltageDivider}$$

into which the component equations in impedance form can be substituted easily:

$$U_{out}\left ( j\omega \right ) = \frac{1}{1+j\omega RC}U_{in}\left ( j\omega \right ). \label{eq:RC_AnZSolution}$$

The result is an expression of the frequency transfer function of the RC circuit as a filter. It has a shape as shown in figure 6.

Fig. 6: Frequency transfer function of the RC filter

### Frequency domain: equipment

• Oscilloscope
  • Not a good instrument for precision (U, t) measurements: good for signal shapes
  • Digital scopes (and so the scope function of MyDAQ) have discrete phenomena like aliasing
• Bode diagram
  • Shows phase and magnitude information as a function of frequency
  • Log-log plot shows R, L and C components as asymptotes easily

## Summary

Domain    | Characteristics                                                                    | Equipment                                | Mathematics
Amplitude | DC gain, hysteresis, RMS values, offset, $V_{bias}$, $V_{diode}$, $h_{fe}$, $\beta$ | Multimeter                               | Kirchhoff
Time      | rise time, peaks, events, period, delay                                            | Counter, interval analyzer, oscilloscope | differential equations
Frequency | phase, gain, transfer functions, bandwidth                                         | Oscilloscope, spectrum analyzer          | Fourier, Laplace, FFT

Tab. 1: Every measurement domain has its own tools, mathematics and vocabulary
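The time-domain model above is easy to cross-check numerically. A minimal sketch (mine, not from the page) that integrates the RC differential equation with forward Euler and compares the result against the analytic step response; component values are arbitrary:

```python
import math

R, C, U_in = 1e3, 1e-6, 1.0       # 1 kOhm, 1 uF -> tau = 1 ms
tau, dt = R * C, 1e-6
u, t = 0.0, 0.0
for _ in range(5000):             # forward-Euler integration up to t = 5*tau
    u += dt * (U_in - u) / tau    # from U_out + RC * dU_out/dt = U_in
    t += dt
print(u, U_in * (1 - math.exp(-t / tau)))   # both close to 0.993
```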
# Single electron transfer periodic DFT in VASP

I have the following question: I am currently studying the electrocatalytic CO2RR on Cu materials using periodic DFT calculations, and I am using the CHE model to describe the proton-electron transfer reactions, as follows:

$$\ce{CO2^* + e- + H+ -> COOH^*}$$

$$\Delta G= G(\ce{COOH^*})-G(*)-G(\ce{CO2})-G(\ce{e- + H+})=G(\ce{COOH^*})-G(*)-G(\ce{CO2})-\frac{1}{2}G(\ce{H2}\text{(gas)})$$

Nonetheless, I would like to ask how to describe reactions where I have only an electron transfer, for example:

$$\ce{COOH^* + e- -> COOH- (aq)} + *.$$

How do I calculate the Gibbs free energy of the added electron, and how do I add the extra electron to the $\ce{COOH-}$ (aq)?
Opuscula Math. 40, no. 6 (2020), 667-683
https://doi.org/10.7494/OpMath.2020.40.6.667

Opuscula Mathematica

Quasilinearization method for finite systems of nonlinear RL fractional differential equations

Zachary Denton, Juan Diego Ramírez

Abstract. In this paper the quasilinearization method is extended to finite systems of Riemann-Liouville fractional differential equations of order $$0\lt q\lt 1$$. Existence and comparison results for the linear Riemann-Liouville fractional differential systems are recalled and modified where necessary. Using upper and lower solutions, sequences are constructed that are monotone, such that the weighted sequences converge uniformly and quadratically to the unique solution of the system. A numerical example illustrating the main result is given.

Keywords: fractional differential systems, lower and upper solutions, quasilinearization method.

Mathematics Subject Classification: 34A08, 34A34, 34A45.

• Zachary Denton (corresponding author), https://orcid.org/0000-0002-4233-7045, North Carolina A&T State University, Department of Mathematics and Statistics, 1601 E Market St, Greensboro, NC 27411, USA
• Communicated by Marek Galewski.
• Revised: 2020-09-13. Accepted: 2020-10-25. Published online: 2020-12-01.
# Concise Implementation of Linear Regression

When I try to use Huber Loss, I am getting an error which I am unable to decode. Can you please help me out?

Hi @Rosetta, thanks for raising the issue! HuberLoss may have some implementation issues, we will tackle it.

The Gluon model weight matrix W.shape is (1, 2), which is different from the W.shape (2, 1) in the last section. I guess that the Gluon model uses the $XW^T + b$ equation, so in this code example:

```python
w = net[0].weight.data()
print('Error in estimating w', true_w.reshape(w.shape) - w)
b = net[0].bias.data()
print('Error in estimating b', true_b - b)
```

is w.T needed or not?

Hi @liwei, you are correct! Gluon uses $XW^T + b$. Even though the shapes are flipped, it doesn't affect the final value.

trainer.step normalizes the loss by 1/batch_size. When we already take loss.mean(), the loss gets normalized already, so there is no need to normalize again in step.

Hi @sahu.vaibhav, trainer.step() will update the weights, while loss measures the difference between truth and predictions. That's why we use loss.mean().

Thanks @goldpiggy… I was trying to answer the question.

I am getting this error:

TypeError: Operator `abs` registered in backend is known as `abs` in Python. This is a legacy operator which can only accept legacy ndarrays, while received an MXNet numpy ndarray. Please call `as_nd_ndarray()` upon the numpy ndarray to convert it to a legacy ndarray, and then feed the converted array to this operator.

What can be the cause? It was working fine when I used L2Loss, but this error popped up when I used Huber's Loss.

I'm getting the same error. Any way to solve this? Any help would be much appreciated.

In order to solve the issue of "TypeError: Operator abs registered in backend is known as abs", I've edited the python file mxnet/gluon/loss.py installed in my package and replaced class HuberLoss with MyHuberLoss. The reason for the failure, as far as I understand it, is described in the docstring of the hybrid_forward function implemented in class MyHuberLoss. Then I used MyHuberLoss in my notebook in order to experiment with the Huber loss function. Here is the hack and the modified class:

```python
import mxnet
from mxnet.gluon import loss
from mxnet.gluon.loss import _reshape_like
from mxnet.gluon.loss import _apply_weighting
import mxnet.numpy as np


class MyHuberLoss(loss.Loss):
    r"""Calculates smoothed L1 loss that is equal to L1 loss if absolute
    error exceeds rho but is equal to L2 loss otherwise. Also called
    SmoothedL1 loss.

    .. math::
        L = \sum_i \begin{cases}
        \frac{1}{2 {rho}} ({label}_i - {pred}_i)^2 &
            \text{ if } |{label}_i - {pred}_i| < {rho} \\
        |{label}_i - {pred}_i| - \frac{{rho}}{2} &
            \text{ otherwise }
        \end{cases}

    label and pred can have arbitrary shape as long as they have the same
    number of elements.

    Parameters
    ----------
    rho : float, default 1
        Threshold for trimmed mean estimator.
    weight : float or None
        Global scalar weight for loss.
    batch_axis : int, default 0
        The axis that represents mini-batch.

    Inputs:
        - **pred**: prediction tensor with arbitrary shape
        - **label**: target tensor with the same size as pred.
        - **sample_weight**: element-wise weighting tensor. Must be broadcastable
          to the same shape as pred. For example, if pred has shape (64, 10)
          and you want to weigh each sample in the batch separately,
          sample_weight should have shape (64, 1).

    Outputs:
        - **loss**: loss tensor with shape (batch_size,). Dimensions other
          than batch_axis are averaged out.
    """

    def __init__(self, rho=1, weight=None, batch_axis=0, **kwargs):
        super(MyHuberLoss, self).__init__(weight, batch_axis, **kwargs)
        self._rho = rho

    def hybrid_forward(self, F, pred, label, sample_weight=None):
        '''This function uses array operators from module mxnet.ndarray,
        recorded in variable F that is given as a function parameter.

        Parameters pred and label, in the context of this exercise, both use
        array type mxnet.numpy.ndarray. The triggered error shows that module
        mxnet.ndarray does not support operators such as abs and sqrt applied
        to mxnet.numpy.ndarray arrays. Then, in order to keep the directed
        graph safe, rather than changing array types to suit mxnet.ndarray,
        I change module F to mxnet.numpy, which is compliant with the
        mxnet.numpy.ndarray array type.
        '''
        is_mxnet_numpy = False
        if type(label) == mxnet.numpy.ndarray or type(pred) == mxnet.numpy.ndarray:
            is_mxnet_numpy = True

        label = _reshape_like(F, label, pred)
        if is_mxnet_numpy:
            F = mxnet.numpy

        loss = F.abs(label - pred)
        loss = F.where(loss > self._rho, loss - 0.5 * self._rho,
                       (0.5 / self._rho) * F.square(loss))
        loss = _apply_weighting(F, loss, self._weight, sample_weight)
        if is_mxnet_numpy:
            loss_mean = F.mean(loss, axis=self._batch_axis)
        else:
            loss_mean = F.mean(loss, axis=self._batch_axis, exclude=True)
        return loss_mean
```

As a result, using Huber loss took twice the number of epochs of L2 loss to reach the same precision for the average error over the whole dataset. I think this is due to the mixed slopes of the Huber loss function. Checking the documentation: when the error between prediction and true value is less than or equal to the rho parameter, the loss function behaves as the L2 loss function. When the error is greater than the rho parameter, the loss function becomes a straight line whose slope (hence gradient) has a lower magnitude than the L2 slope. I've no other idea…

@akpd2l @rafribeiro @FBT, the current stable version of mxnet.gluon.loss.HuberLoss does not support operations on numpy arrays, but one can implement loss functions from scratch as it was done in 3.2.6. First, recall that the Huber loss function is defined as

```
huber_loss(y_hat, y; rho) = 0.5/rho * (y_hat - y)**2  if abs(y_hat - y) <= rho
                            else abs(y_hat - y) - 0.5*rho
```

It becomes L2 loss for relatively small absolute residuals, and L1 loss for large ones. So, it is smooth and less influenced by outliers. The switch is controlled by the (hyper)parameter rho. One can choose it experimentally or by measuring the variability of the inliers (e.g. rho=median(y - median(y))). Now, define a new python function

```python
def huber_loss(y_hat, y, rho=1.):
    loss = np.abs(y_hat - y)
    loss = np.where(loss <= rho, 0.5/rho * loss**2, loss - 0.5*rho)
    return loss
```

and in 3.3.7 replace l = loss(net(X), y) with l = huber_loss(net(X), y).

@FBT, when comparing two different loss functions used in training, it is more reasonable to choose one common measure for evaluation. In 3.3.7, loss is used for both optimization (line l = loss(net(X), y)) and evaluation (line l = loss(net(features), labels)). Instead, choose a new loss function for evaluation (e.g. L1 loss as it is more descriptive) and compare performances of MyHuberLoss and gluon.loss.L2Loss during training:

```python
loss_eval = gluon.loss.L1Loss()
l2loss = gluon.loss.L2Loss()
huber_loss = MyHuberLoss(rho=0.5)  # don't forget to adjust rho

num_epochs = 3
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = huber_loss(net(X), y)  # or l = l2loss(net(X), y)
        l.backward()
        trainer.step(batch_size)
    l = loss_eval(net(features), labels)
    print(f'epoch {epoch + 1}, loss {l.mean().asnumpy():f}')
```

In our case, you will observe that the results are closely the same (why?).

For sure simplicity is a far more preferable option for an implementation such as the proposed huber_loss. Thanks for this point. But how does such an implementation deal with backward compatibility with legacy code? This was the reason for hacking the class to support both mxnet.numpy.ndarray and mxnet.ndarray array types (see the docstring of the hybrid_forward function). Concerning the usage of different loss functions, I understand the use of L1 loss for interpretability. But at this stage, the data is a random toy dataset, so it is difficult (for me) to find interpretability.
Thanks for this information concerning the usage of Huber loss to take outliers in the data model into account. The results on my side from using either the Huber or the L2 loss function are quite different. Below are the results and parameters I used.

Reasons for differences: please note that in the huber_loss implementation function, the loss is returned as an array, while in MyHuberLoss the loss is returned as a scalar, because of the mean applied to the array before returning. This changes how the trainer.step() function is used. When the mean is returned from inside the loss function, we should use trainer.step(1); when an array is returned, such as in huber_loss, then trainer.step(batch_size) has to be used so that gradients are calculated from the average batch loss. I guess this last point is related to question 1.

A question: suppose you use batch_size = 90. What happens then for the last batch, sized 10, when using trainer.step(batch_size)?

```
# L1 loss
epoch 1, loss 4.149518
epoch 2, loss 3.457200
epoch 3, loss 2.769157

# Huber loss from MyHuberLoss
epoch 1, loss 4.144689
epoch 2, loss 3.453153
epoch 3, loss 2.767166

# Huber loss from huber_loss
epoch 1, loss 3.938834
epoch 2, loss 3.009948
epoch 3, loss 3.315474

# L2 loss
epoch 1, loss 1.691334
epoch 2, loss 0.590921
epoch 3, loss 0.206460
```

Parameters, training and evaluation:

```python
batch_size = 100

# Feed-forward network
net = nn.Sequential()
# One unique layer, connected with all nodes from input layer
net.add(nn.Dense(1))   # restored: this line is implied by the comment above

# Parameters initialization
net.initialize(init.Normal(sigma=0.01))

# Optimizer
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

l1loss = gluon.loss.L1Loss()           # evaluation metric
lossInstance = gluon.loss.L2Loss()     # training loss; swap in a variant below
rho = 0.5
# lossInstance = MyHuberLoss(rho=rho)  # don't forget to adjust rho
# lossInstance = huber_loss

num_epochs = 3
for epoch in range(num_epochs):
    for X, y in data_iter:
        # Compute error from feed-forward step
        if lossInstance is huber_loss:
            l = lossInstance(net(X), y, rho=rho)
        else:
            l = lossInstance(net(X), y)
        l.backward()
        # Learning stage: update weights and biases
        if isinstance(lossInstance, MyHuberLoss):
            trainer.step(1)
        else:
            trainer.step(batch_size)
    # Compute error over the full dataset
    l = l1loss(net(features), labels)
    print(f'epoch {epoch + 1}, loss {l.mean().asnumpy():f}')
```

Since this chapter is for beginners, I didn't consider inheriting from the base mxnet.gluon.loss.Loss class to implement a general loss functor for every backend, and followed the previous paragraph's instructions. If my memory serves me correctly, the book introduces the first custom loss with the Gluon API in chapter 9.

Following the Gluon API, loss functors are expected to return arrays of shape (batch_size,) or (..., batch_size) (in case of multiple tasks) for Numpy NDArrays. mean is applied over axis 1, which averages loss values over tasks, not batch entries. In our case it means that loss(net(X), y) is of shape (batch_size,), as y is of shape (batch_size, 1). For example,

```python
>>> y = np.array([[1, -1, 4, 3]]).reshape(4, 1)  # shape = (batch_size, 1) as in load_array
>>> y_hat = np.array([[1, 0, 5, 4]]).reshape(4, 1)
>>> loss = gluon.loss.L1Loss()
>>> loss(y_hat, y)
array([0., 1., 3., 2.])
```

As you can see, averaging was not performed because we have only one regression task, therefore we need to pass batch_size steps into trainer.step and call mean in print(f'epoch {epoch + 1}, loss {l.mean().asnumpy():f}') manually. Accordingly, I think you should reimplement MyHuberLoss so that you don't have to check loss types to determine trainer steps.
I believe this will also shed some light on your issues. The load_array function from 3.3.2 is based on mxnet.gluon.data.DataLoader, which has the option last_batch. The function calls DataLoader with the default last_batch, therefore the last batch will be of shape (10, number_of_features).

```python
import mxnet
from mxnet.gluon import loss
from mxnet.gluon.loss import _reshape_like
from mxnet.gluon.loss import _apply_weighting
import mxnet.numpy as np


class MyHuberLoss(loss.Loss):
    r"""Calculates smoothed L1 loss, equal to L1 loss if the absolute error
    exceeds rho and to L2 loss otherwise. (Docstring otherwise unchanged from
    the first posting of MyHuberLoss above.)
    """

    def __init__(self, rho=1, weight=None, batch_axis=0, **kwargs):
        super(MyHuberLoss, self).__init__(weight, batch_axis, **kwargs)
        self._rho = rho

    def hybrid_forward(self, F, pred, label, sample_weight=None):
        is_mxnet_numpy = False
        # if type(label) == mxnet.numpy.ndarray or type(pred) == mxnet.numpy.ndarray:
        if mxnet.util.is_np_array():
            # Type is mxnet.numpy.ndarray
            is_mxnet_numpy = True
            # Use appropriate functions from module mxnet.numpy
            F = mxnet.numpy
            label = label.reshape(pred.shape)
        else:
            # This is a legacy mx.nd.NDArray; the default module
            # (mxnet.ndarray) given as parameter is used
            label = _reshape_like(F, label, pred)

        loss = F.abs(label - pred)
        loss = F.where(loss > self._rho, loss - 0.5 * self._rho,
                       (0.5 / self._rho) * F.square(loss))
        loss = _apply_weighting(F, loss, self._weight, sample_weight)
        if is_mxnet_numpy:
            # Mean applies on all axes except the batch axis
            loss_mean = F.mean(loss, axis=tuple(range(1, loss.ndim)))
        else:
            loss_mean = F.mean(loss, axis=self._batch_axis, exclude=True)
        return loss_mean
```

Concerning the return from the Loss class, you're right, thanks. MyHuberLoss has been changed accordingly. The return format of the loss is an array shaped (batch_size,), according to the documentation.
But this does not change the results deeply, and so it does not explain why there is a strong difference between using the huber_loss function and the MyHuberLoss class. Computing the gradient with the average of the loss is equivalent (on paper) to computing the average gradient using the loss array: the gradient operator is linear under addition and multiplication with a scalar, and that is what the mean operator does.

Concerning the case where batch_size=90 (the case where the last batch is of smaller size than all the others), the last_batch option does not seem to solve this issue. Updating parameters with trainer.step(batch_size) on a loss array of size 10 will not provide the expected result when batch_size=90. Using trainer.step(1) after calculating the mean for each batch is a valid option in this case and is more relevant. A better solution is to feed trainer.step with the size of the loss array, rather than batch_size.

The difference in results while using huber_loss is due to y.shape and y_hat.shape not being the same! The following implementation leads to the same results:

```python
def huber_loss(y_hat, y, rho=1.):
    loss = np.abs(y_hat - y.reshape(y_hat.shape))
    loss = np.where(loss <= rho, 0.5/rho * loss**2, loss - 0.5*rho)
    return loss
```

As described below:

```
# L1 loss
epoch 1, loss 4.147878
epoch 2, loss 3.456719
epoch 3, loss 2.767898

# Huber loss from MyHuberLoss
epoch 1, loss 4.146686
epoch 2, loss 3.455244
epoch 3, loss 2.767972

# Huber loss from huber_loss
epoch 1, loss 4.147504
epoch 2, loss 3.459582
epoch 3, loss 2.778563
```
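One way to act on the "feed trainer.step with the size of the loss array" suggestion above — a minimal sketch of mine, not code from the thread, assuming the names (net, data_iter, trainer, huber_loss, loss_eval) from the training loop earlier in the discussion:

```python
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = huber_loss(net(X), y)   # loss array of shape (current_batch_size,)
        l.backward()
        trainer.step(X.shape[0])    # normalize by the actual batch size, so a
                                    # short final batch is handled correctly
    l = loss_eval(net(features), labels)
    print(f'epoch {epoch + 1}, loss {l.mean().asnumpy():f}')
```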
# A problem by Sreejato Bhattacharya

Level pending

Calvin and Lino play a modified game of Nim. First, they randomly choose a positive integer $$n$$ between $$10$$ and $$100$$ (inclusive). They then take a pile of stones having $$n$$ stones. The players take turns alternately removing a certain number of stones from the pile. The rules are:

• At the first move, the first player removes a positive integer number of stones, but he cannot remove $$10$$ stones or more.
• At the subsequent moves, each player has to remove a positive integer number of stones that is at most $$10$$ times the number of stones removed by the other player in the last turn.
• The player who removes the last stone wins.

Calvin goes first. Assuming each player plays optimally, the probability that Lino wins the game can be expressed as $$\dfrac{a}{b}$$, where $$a, b$$ are positive coprime integers. Find $$a+b$$.

Details and assumptions

At each move, a player has to remove a positive number of stones. Removing nothing from the pile isn't considered a valid move.
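Not part of the problem statement: the game is small enough to solve by brute force. A minimal memoized sketch (mine) that enumerates the values of $$n$$ for which the second player wins under optimal play:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones, max_take):
    """True if the player to move, allowed to take 1..max_take, can force a win."""
    for take in range(1, min(stones, max_take) + 1):
        if take == stones or not wins(stones - take, 10 * take):
            return True
    return False

# First move: fewer than 10 stones may be removed, so max_take = 9.
losing_for_first_player = [n for n in range(10, 101) if not wins(n, 9)]
print(losing_for_first_player, len(losing_for_first_player), "out of", 101 - 10)
```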
Physics equations/24-Electromagnetic Waves/Q:displacementCurrent

c24ElectromagneticWaves_displacementCurrent_v1

A circular capacitor of radius 4.2 m has a gap of 8 mm, and a charge of 45 μC. What is the electric field between the plates?
a) 5.16E+04 N/C (or V/m) b) 6.25E+04 N/C (or V/m) c) 7.57E+04 N/C (or V/m) d) 9.17E+04 N/C (or V/m) e) 1.11E+05 N/C (or V/m)

copies

===2===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 3.3 m has a gap of 16 mm, and a charge of 68 μC. What is the electric field between the plates?}
-a) 1.26E+05 N/C (or V/m) -b) 1.53E+05 N/C (or V/m) -c) 1.85E+05 N/C (or V/m) +d) 2.24E+05 N/C (or V/m) -e) 2.72E+05 N/C (or V/m)
===3===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 4.9 m has a gap of 11 mm, and a charge of 85 μC. What is the electric field between the plates?}
+a) 1.27E+05 N/C (or V/m) -b) 1.54E+05 N/C (or V/m) -c) 1.87E+05 N/C (or V/m) -d) 2.26E+05 N/C (or V/m) -e) 2.74E+05 N/C (or V/m)
===4===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 4.4 m has a gap of 18 mm, and a charge of 36 μC. What is the electric field between the plates?}
-a) 4.55E+04 N/C (or V/m) -b) 5.52E+04 N/C (or V/m) +c) 6.68E+04 N/C (or V/m) -d) 8.10E+04 N/C (or V/m) -e) 9.81E+04 N/C (or V/m)
===5===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 3.4 m has a gap of 15 mm, and a charge of 63 μC. What is the electric field between the plates?}
-a) 1.62E+05 N/C (or V/m) +b) 1.96E+05 N/C (or V/m) -c) 2.37E+05 N/C (or V/m) -d) 2.88E+05 N/C (or V/m) -e) 3.48E+05 N/C (or V/m)
===6===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 3.7 m has a gap of 8 mm, and a charge of 89 μC. What is the electric field between the plates?}
-a) 1.93E+05 N/C (or V/m) +b) 2.34E+05 N/C (or V/m) -c) 2.83E+05 N/C (or V/m) -d) 3.43E+05 N/C (or V/m) -e) 4.16E+05 N/C (or V/m)
===7===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 4.4 m has a gap of 18 mm, and a charge of 62 μC. What is the electric field between the plates?}
-a) 9.50E+04 N/C (or V/m) +b) 1.15E+05 N/C (or V/m) -c) 1.39E+05 N/C (or V/m) -d) 1.69E+05 N/C (or V/m) -e) 2.05E+05 N/C (or V/m)
===8===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 3.6 m has a gap of 8 mm, and a charge of 53 μC. What is the electric field between the plates?}
-a) 6.82E+04 N/C (or V/m) -b) 8.27E+04 N/C (or V/m) -c) 1.00E+05 N/C (or V/m) -d) 1.21E+05 N/C (or V/m) +e) 1.47E+05 N/C (or V/m)
===9===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 4.8 m has a gap of 14 mm, and a charge of 75 μC. What is the electric field between the plates?}
-a) 5.43E+04 N/C (or V/m) -b) 6.58E+04 N/C (or V/m) -c) 7.97E+04 N/C (or V/m) -d) 9.66E+04 N/C (or V/m) +e) 1.17E+05 N/C (or V/m)
===10===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 4.3 m has a gap of 7 mm, and a charge of 47 μC. What is the electric field between the plates?}
-a) 7.54E+04 N/C (or V/m) +b) 9.14E+04 N/C (or V/m) -c) 1.11E+05 N/C (or V/m) -d) 1.34E+05 N/C (or V/m) -e) 1.63E+05 N/C (or V/m)
===11===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 4.1 m has a gap of 14 mm, and a charge of 24 μC. What is the electric field between the plates?}
-a) 4.24E+04 N/C (or V/m) +b) 5.13E+04 N/C (or V/m) -c) 6.22E+04 N/C (or V/m) -d) 7.53E+04 N/C (or V/m) -e) 9.13E+04 N/C (or V/m)
===12===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 4.6 m has a gap of 12 mm, and a charge of 55 μC. What is the electric field between the plates?}
-a) 6.37E+04 N/C (or V/m) -b) 7.71E+04 N/C (or V/m) +c) 9.34E+04 N/C (or V/m) -d) 1.13E+05 N/C (or V/m) -e) 1.37E+05 N/C (or V/m)
===13===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 3.7 m has a gap of 10 mm, and a charge of 41 μC. What is the electric field between the plates?}
+a) 1.08E+05 N/C (or V/m) -b) 1.30E+05 N/C (or V/m) -c) 1.58E+05 N/C (or V/m) -d) 1.91E+05 N/C (or V/m) -e) 2.32E+05 N/C (or V/m)
===14===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 3.7 m has a gap of 10 mm, and a charge of 12 μC. What is the electric field between the plates?}
-a) 2.15E+04 N/C (or V/m) -b) 2.60E+04 N/C (or V/m) +c) 3.15E+04 N/C (or V/m) -d) 3.82E+04 N/C (or V/m) -e) 4.63E+04 N/C (or V/m)
===15===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 3.2 m has a gap of 12 mm, and a charge of 84 μC. What is the electric field between the plates?}
-a) 1.37E+05 N/C (or V/m) -b) 1.66E+05 N/C (or V/m) -c) 2.01E+05 N/C (or V/m) -d) 2.43E+05 N/C (or V/m) +e) 2.95E+05 N/C (or V/m)
===16===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 3.9 m has a gap of 19 mm, and a charge of 66 μC. What is the electric field between the plates?}
-a) 1.29E+05 N/C (or V/m) +b) 1.56E+05 N/C (or V/m) -c) 1.89E+05 N/C (or V/m) -d) 2.29E+05 N/C (or V/m) -e) 2.77E+05 N/C (or V/m)
===17===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 4.4 m has a gap of 12 mm, and a charge of 72 μC. What is the electric field between the plates?}
-a) 6.21E+04 N/C (or V/m) -b) 7.52E+04 N/C (or V/m) -c) 9.11E+04 N/C (or V/m) -d) 1.10E+05 N/C (or V/m) +e) 1.34E+05 N/C (or V/m)
===18===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 3.5 m has a gap of 14 mm, and a charge of 21 μC. What is the electric field between the plates?}
+a) 6.16E+04 N/C (or V/m) -b) 7.47E+04 N/C (or V/m) -c) 9.05E+04 N/C (or V/m) -d) 1.10E+05 N/C (or V/m) -e) 1.33E+05 N/C (or V/m)
===19===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 3.3 m has a gap of 14 mm, and a charge of 11 μC. What is the electric field between the plates?}
-a) 2.04E+04 N/C (or V/m) -b) 2.47E+04 N/C (or V/m) -c) 3.00E+04 N/C (or V/m) +d) 3.63E+04 N/C (or V/m) -e) 4.40E+04 N/C (or V/m)
===20===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 4.2 m has a gap of 12 mm, and a charge of 94 μC. What is the electric field between the plates?}
+a) 1.92E+05 N/C (or V/m) -b) 2.32E+05 N/C (or V/m) -c) 2.81E+05 N/C (or V/m) -d) 3.41E+05 N/C (or V/m) -e) 4.13E+05 N/C (or V/m)
===21===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 4.6 m has a gap of 12 mm, and a charge of 45 μC. What is the electric field between the plates?}
-a) 6.31E+04 N/C (or V/m) +b) 7.65E+04 N/C (or V/m) -c) 9.26E+04 N/C (or V/m) -d) 1.12E+05 N/C (or V/m) -e) 1.36E+05 N/C (or V/m)
===22===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 3.1 m has a gap of 9 mm, and a charge of 11 μC. What is the electric field between the plates?}
-a) 2.80E+04 N/C (or V/m) -b) 3.40E+04 N/C (or V/m) +c) 4.12E+04 N/C (or V/m) -d) 4.99E+04 N/C (or V/m) -e) 6.04E+04 N/C (or V/m)
===23===
{<!--c24ElectromagneticWaves_displacementCurrent_1-->A circular capacitor of radius 3.4 m has a gap of 7 mm, and a charge of 95 μC. What is the electric field between the plates?}
-a) 2.44E+05 N/C (or V/m) +b) 2.95E+05 N/C (or V/m) -c) 3.58E+05 N/C (or V/m) -d) 4.34E+05 N/C (or V/m) -e) 5.25E+05 N/C (or V/m)

c24ElectromagneticWaves_displacementCurrent_v1

A circular capacitor of radius 3.2 m has a gap of 13 mm, and a charge of 49 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.
a) 3.46E-11 Vs²m⁻¹ b) 4.20E-11 Vs²m⁻¹ c) 5.08E-11 Vs²m⁻¹ d) 6.16E-11 Vs²m⁻¹ e) 7.46E-11 Vs²m⁻¹

copies

===2===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.6 m has a gap of 12 mm, and a charge of 77 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 6.59E-11 Vs²m⁻¹ -b) 7.99E-11 Vs²m⁻¹ +c) 9.68E-11 Vs²m⁻¹ -d) 1.17E-10 Vs²m⁻¹ -e) 1.42E-10 Vs²m⁻¹
===3===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.5 m has a gap of 19 mm, and a charge of 13 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 1.35E-11 Vs²m⁻¹ +b) 1.63E-11 Vs²m⁻¹ -c) 1.98E-11 Vs²m⁻¹ -d) 2.40E-11 Vs²m⁻¹ -e) 2.91E-11 Vs²m⁻¹
===4===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.4 m has a gap of 8 mm, and a charge of 85 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 4.96E-11 Vs²m⁻¹ -b) 6.01E-11 Vs²m⁻¹ -c) 7.28E-11 Vs²m⁻¹ -d) 8.82E-11 Vs²m⁻¹ +e) 1.07E-10 Vs²m⁻¹
===5===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.3 m has a gap of 11 mm, and a charge of 66 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 6.85E-11 Vs²m⁻¹ +b) 8.29E-11 Vs²m⁻¹ -c) 1.00E-10 Vs²m⁻¹ -d) 1.22E-10 Vs²m⁻¹ -e) 1.47E-10 Vs²m⁻¹
===6===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 3.2 m has a gap of 19 mm, and a charge of 46 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
+a) 5.78E-11 Vs²m⁻¹ -b) 7.00E-11 Vs²m⁻¹ -c) 8.48E-11 Vs²m⁻¹ -d) 1.03E-10 Vs²m⁻¹ -e) 1.25E-10 Vs²m⁻¹
===7===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 3.2 m has a gap of 18 mm, and a charge of 82 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 5.79E-11 Vs²m⁻¹ -b) 7.02E-11 Vs²m⁻¹ -c) 8.51E-11 Vs²m⁻¹ +d) 1.03E-10 Vs²m⁻¹ -e) 1.25E-10 Vs²m⁻¹
===8===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 3.7 m has a gap of 17 mm, and a charge of 80 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 4.67E-11 Vs²m⁻¹ -b) 5.65E-11 Vs²m⁻¹ -c) 6.85E-11 Vs²m⁻¹ -d) 8.30E-11 Vs²m⁻¹ +e) 1.01E-10 Vs²m⁻¹
===9===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.1 m has a gap of 7 mm, and a charge of 50 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 2.92E-11 Vs²m⁻¹ -b) 3.53E-11 Vs²m⁻¹ -c) 4.28E-11 Vs²m⁻¹ -d) 5.19E-11 Vs²m⁻¹ +e) 6.28E-11 Vs²m⁻¹
===10===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.3 m has a gap of 19 mm, and a charge of 83 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 5.87E-11 Vs²m⁻¹ -b) 7.11E-11 Vs²m⁻¹ -c) 8.61E-11 Vs²m⁻¹ +d) 1.04E-10 Vs²m⁻¹ -e) 1.26E-10 Vs²m⁻¹
===11===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.8 m has a gap of 12 mm, and a charge of 29 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 2.05E-11 Vs²m⁻¹ -b) 2.48E-11 Vs²m⁻¹ -c) 3.01E-11 Vs²m⁻¹ +d) 3.64E-11 Vs²m⁻¹ -e) 4.42E-11 Vs²m⁻¹
===12===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.4 m has a gap of 17 mm, and a charge of 65 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 5.56E-11 Vs²m⁻¹ -b) 6.74E-11 Vs²m⁻¹ +c) 8.17E-11 Vs²m⁻¹ -d) 9.90E-11 Vs²m⁻¹ -e) 1.20E-10 Vs²m⁻¹
===13===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 3.8 m has a gap of 14 mm, and a charge of 61 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
+a) 7.67E-11 Vs²m⁻¹ -b) 9.29E-11 Vs²m⁻¹ -c) 1.13E-10 Vs²m⁻¹ -d) 1.36E-10 Vs²m⁻¹ -e) 1.65E-10 Vs²m⁻¹
===14===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.1 m has a gap of 8 mm, and a charge of 24 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 2.05E-11 Vs²m⁻¹ -b) 2.49E-11 Vs²m⁻¹ +c) 3.02E-11 Vs²m⁻¹ -d) 3.65E-11 Vs²m⁻¹ -e) 4.43E-11 Vs²m⁻¹
===15===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 3.8 m has a gap of 14 mm, and a charge of 83 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 7.11E-11 Vs²m⁻¹ -b) 8.61E-11 Vs²m⁻¹ +c) 1.04E-10 Vs²m⁻¹ -d) 1.26E-10 Vs²m⁻¹ -e) 1.53E-10 Vs²m⁻¹
===16===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.4 m has a gap of 16 mm, and a charge of 41 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 3.51E-11 Vs²m⁻¹ -b) 4.25E-11 Vs²m⁻¹ +c) 5.15E-11 Vs²m⁻¹ -d) 6.24E-11 Vs²m⁻¹ -e) 7.56E-11 Vs²m⁻¹
===17===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.8 m has a gap of 17 mm, and a charge of 73 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
+a) 9.17E-11 Vs²m⁻¹ -b) 1.11E-10 Vs²m⁻¹ -c) 1.35E-10 Vs²m⁻¹ -d) 1.63E-10 Vs²m⁻¹ -e) 1.98E-10 Vs²m⁻¹
===18===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.3 m has a gap of 14 mm, and a charge of 15 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 8.75E-12 Vs²m⁻¹ -b) 1.06E-11 Vs²m⁻¹ -c) 1.28E-11 Vs²m⁻¹ -d) 1.56E-11 Vs²m⁻¹ +e) 1.88E-11 Vs²m⁻¹
===19===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.5 m has a gap of 18 mm, and a charge of 92 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 7.88E-11 Vs²m⁻¹ -b) 9.54E-11 Vs²m⁻¹ +c) 1.16E-10 Vs²m⁻¹ -d) 1.40E-10 Vs²m⁻¹ -e) 1.70E-10 Vs²m⁻¹
===20===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 4.3 m has a gap of 12 mm, and a charge of 85 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 7.28E-11 Vs²m⁻¹ -b) 8.82E-11 Vs²m⁻¹ +c) 1.07E-10 Vs²m⁻¹ -d) 1.29E-10 Vs²m⁻¹ -e) 1.57E-10 Vs²m⁻¹
===21===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 3.7 m has a gap of 8 mm, and a charge of 34 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 2.40E-11 Vs²m⁻¹ -b) 2.91E-11 Vs²m⁻¹ -c) 3.53E-11 Vs²m⁻¹ +d) 4.27E-11 Vs²m⁻¹ -e) 5.18E-11 Vs²m⁻¹
===22===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 3.4 m has a gap of 8 mm, and a charge of 34 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 3.53E-11 Vs²m⁻¹ +b) 4.27E-11 Vs²m⁻¹ -c) 5.18E-11 Vs²m⁻¹ -d) 6.27E-11 Vs²m⁻¹ -e) 7.60E-11 Vs²m⁻¹
===23===
{<!--c24ElectromagneticWaves_displacementCurrent_2-->A circular capacitor of radius 3.9 m has a gap of 19 mm, and a charge of 78 μC. Compute the surface integral $c^{-2}\oint\vec E\cdot d\vec A$ over an inner face of the capacitor.}
-a) 4.55E-11 Vs²m⁻¹ -b) 5.51E-11 Vs²m⁻¹ -c) 6.68E-11 Vs²m⁻¹ -d) 8.09E-11 Vs²m⁻¹ +e) 9.80E-11 Vs²m⁻¹

c24ElectromagneticWaves_displacementCurrent_v1

A circular capacitor of radius 4.9 m has a gap of 17 mm, and a charge of 54 μC. The capacitor is discharged through a 9 kΩ resistor. What is the decay time?
a) 2.92E-04 s b) 3.54E-04 s c) 4.28E-04 s d) 5.19E-04 s e) 6.29E-04 s

copies

===2===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 4.6 m has a gap of 11 mm, and a charge of 60 μC. The capacitor is discharged through a 9 kΩ resistor. What is the decay time?}
-a) 3.28E-04 s -b) 3.97E-04 s +c) 4.82E-04 s -d) 5.83E-04 s -e) 7.07E-04 s
===3===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 3.7 m has a gap of 15 mm, and a charge of 36 μC. The capacitor is discharged through a 6 kΩ resistor. What is the decay time?}
-a) 1.04E-04 s -b) 1.26E-04 s +c) 1.52E-04 s -d) 1.85E-04 s -e) 2.24E-04 s
===4===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 3.3 m has a gap of 14 mm, and a charge of 43 μC. The capacitor is discharged through a 9 kΩ resistor. What is the decay time?}
+a) 1.95E-04 s -b) 2.36E-04 s -c) 2.86E-04 s -d) 3.46E-04 s -e) 4.20E-04 s
===5===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 4.6 m has a gap of 7 mm, and a charge of 18 μC. The capacitor is discharged through a 9 kΩ resistor. What is the decay time?}
-a) 6.25E-04 s +b) 7.57E-04 s -c) 9.17E-04 s -d) 1.11E-03 s -e) 1.35E-03 s
===6===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 3.1 m has a gap of 11 mm, and a charge of 76 μC. The capacitor is discharged through an 8 kΩ resistor. What is the decay time?}
+a) 1.94E-04 s -b) 2.36E-04 s -c) 2.85E-04 s -d) 3.46E-04 s -e) 4.19E-04 s
===7===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 3.6 m has a gap of 14 mm, and a charge of 98 μC. The capacitor is discharged through an 8 kΩ resistor. What is the decay time?}
-a) 1.40E-04 s -b) 1.70E-04 s +c) 2.06E-04 s -d) 2.50E-04 s -e) 3.02E-04 s
===8===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 4.3 m has a gap of 8 mm, and a charge of 12 μC. The capacitor is discharged through a 7 kΩ resistor. What is the decay time?}
-a) 3.07E-04 s -b) 3.71E-04 s +c) 4.50E-04 s -d) 5.45E-04 s -e) 6.61E-04 s
===9===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 4.3 m has a gap of 13 mm, and a charge of 44 μC. The capacitor is discharged through a 9 kΩ resistor. What is the decay time?}
-a) 2.00E-04 s -b) 2.43E-04 s -c) 2.94E-04 s +d) 3.56E-04 s -e) 4.31E-04 s
===10===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 4 m has a gap of 16 mm, and a charge of 48 μC. The capacitor is discharged through a 9 kΩ resistor. What is the decay time?}
-a) 1.16E-04 s -b) 1.41E-04 s -c) 1.71E-04 s -d) 2.07E-04 s +e) 2.50E-04 s
===11===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 4.8 m has a gap of 16 mm, and a charge of 89 μC. The capacitor is discharged through a 6 kΩ resistor. What is the decay time?}
-a) 1.98E-04 s +b) 2.40E-04 s -c) 2.91E-04 s -d) 3.53E-04 s -e) 4.27E-04 s
===12===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 4.1 m has a gap of 11 mm, and a charge of 51 μC. The capacitor is discharged through an 8 kΩ resistor. What is the decay time?}
+a) 3.40E-04 s -b) 4.12E-04 s -c) 4.99E-04 s -d) 6.05E-04 s -e) 7.33E-04 s
===13===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 3.8 m has a gap of 12 mm, and a charge of 56 μC. The capacitor is discharged through an 8 kΩ resistor. What is the decay time?}
+a) 2.68E-04 s -b) 3.24E-04 s -c) 3.93E-04 s -d) 4.76E-04 s -e) 5.77E-04 s
===14===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 4.2 m has a gap of 18 mm, and a charge of 97 μC. The capacitor is discharged through a 7 kΩ resistor. What is the decay time?}
+a) 1.91E-04 s -b) 2.31E-04 s -c) 2.80E-04 s -d) 3.39E-04 s -e) 4.11E-04 s
===15===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 4.7 m has a gap of 19 mm, and a charge of 27 μC. The capacitor is discharged through a 6 kΩ resistor. What is the decay time?}
-a) 1.60E-04 s +b) 1.94E-04 s -c) 2.35E-04 s -d) 2.85E-04 s -e) 3.45E-04 s
===16===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 4 m has a gap of 14 mm, and a charge of 24 μC. The capacitor is discharged through a 7 kΩ resistor. What is the decay time?}
-a) 1.84E-04 s +b) 2.23E-04 s -c) 2.70E-04 s -d) 3.27E-04 s -e) 3.96E-04 s
===17===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 3.3 m has a gap of 12 mm, and a charge of 63 μC. The capacitor is discharged through a 7 kΩ resistor. What is the decay time?}
-a) 9.94E-05 s -b) 1.20E-04 s -c) 1.46E-04 s +d) 1.77E-04 s -e) 2.14E-04 s
===18===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 3.2 m has a gap of 8 mm, and a charge of 12 μC. The capacitor is discharged through a 7 kΩ resistor. What is the decay time?}
+a) 2.49E-04 s -b) 3.02E-04 s -c) 3.66E-04 s -d) 4.43E-04 s -e) 5.37E-04 s
===19===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 4.9 m has a gap of 13 mm, and a charge of 35 μC. The capacitor is discharged through a 5 kΩ resistor. What is the decay time?}
+a) 2.57E-04 s -b) 3.11E-04 s -c) 3.77E-04 s -d) 4.57E-04 s -e) 5.53E-04 s
===20===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 4.1 m has a gap of 14 mm, and a charge of 71 μC. The capacitor is discharged through a 6 kΩ resistor. What is the decay time?}
-a) 1.65E-04 s +b) 2.00E-04 s -c) 2.43E-04 s -d) 2.94E-04 s -e) 3.56E-04 s
===21===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 3.2 m has a gap of 12 mm, and a charge of 33 μC. The capacitor is discharged through a 6 kΩ resistor. What is the decay time?}
+a) 1.42E-04 s -b) 1.73E-04 s -c) 2.09E-04 s -d) 2.53E-04 s -e) 3.07E-04 s
===22===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 3.4 m has a gap of 8 mm, and a charge of 64 μC. The capacitor is discharged through a 9 kΩ resistor. What is the decay time?}
+a) 3.62E-04 s -b) 4.38E-04 s -c) 5.31E-04 s -d) 6.43E-04 s -e) 7.79E-04 s
===23===
{<!--c24ElectromagneticWaves_displacementCurrent_3-->A circular capacitor of radius 3.1 m has a gap of 15 mm, and a charge of 73 μC. The capacitor is discharged through an 8 kΩ resistor. What is the decay time?}
-a) 6.62E-05 s -b) 8.02E-05 s -c) 9.71E-05 s -d) 1.18E-04 s +e) 1.43E-04 s

c24ElectromagneticWaves_displacementCurrent_v1

A circular capacitor of radius 3.3 m has a gap of 12 mm, and a charge of 93 μC. The capacitor is discharged through a 9 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)
a) 9.88E-09 Tesla b) 1.24E-08 Tesla c) 1.57E-08 Tesla d) 1.97E-08 Tesla e) 2.48E-08 Tesla

copies

===2===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.1 m has a gap of 11 mm, and a charge of 66 μC. The capacitor is discharged through a 6 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 6.33E-09 Tesla -b) 7.96E-09 Tesla -c) 1.00E-08 Tesla +d) 1.26E-08 Tesla -e) 1.59E-08 Tesla
===3===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.4 m has a gap of 15 mm, and a charge of 63 μC. The capacitor is discharged through an 8 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 7.92E-09 Tesla +b) 9.97E-09 Tesla -c) 1.26E-08 Tesla -d) 1.58E-08 Tesla -e) 1.99E-08 Tesla
===4===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4 m has a gap of 13 mm, and a charge of 89 μC. The capacitor is discharged through a 6 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 8.62E-09 Tesla -b) 1.09E-08 Tesla -c) 1.37E-08 Tesla -d) 1.72E-08 Tesla +e) 2.17E-08 Tesla
===5===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.3 m has a gap of 10 mm, and a charge of 46 μC. The capacitor is discharged through a 5 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
+a) 8.32E-09 Tesla -b) 1.05E-08 Tesla -c) 1.32E-08 Tesla -d) 1.66E-08 Tesla -e) 2.09E-08 Tesla
===6===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.1 m has a gap of 15 mm, and a charge of 90 μC. The capacitor is discharged through a 5 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 1.41E-08 Tesla -b) 1.78E-08 Tesla -c) 2.24E-08 Tesla +d) 2.82E-08 Tesla -e) 3.55E-08 Tesla
===7===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.6 m has a gap of 12 mm, and a charge of 52 μC. The capacitor is discharged through a 7 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 3.30E-09 Tesla -b) 4.15E-09 Tesla -c) 5.23E-09 Tesla +d) 6.58E-09 Tesla -e) 8.29E-09 Tesla
===8===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 3.6 m has a gap of 19 mm, and a charge of 98 μC. The capacitor is discharged through a 6 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 1.90E-08 Tesla -b) 2.40E-08 Tesla -c) 3.02E-08 Tesla -d) 3.80E-08 Tesla +e) 4.78E-08 Tesla
===9===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.6 m has a gap of 18 mm, and a charge of 44 μC. The capacitor is discharged through a 7 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 6.64E-09 Tesla +b) 8.36E-09 Tesla -c) 1.05E-08 Tesla -d) 1.32E-08 Tesla -e) 1.67E-08 Tesla
===10===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.9 m has a gap of 18 mm, and a charge of 45 μC. The capacitor is discharged through a 7 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 2.82E-09 Tesla -b) 3.54E-09 Tesla -c) 4.46E-09 Tesla -d) 5.62E-09 Tesla +e) 7.07E-09 Tesla
===11===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.3 m has a gap of 15 mm, and a charge of 21 μC. The capacitor is discharged through a 7 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 1.62E-09 Tesla -b) 2.04E-09 Tesla -c) 2.57E-09 Tesla -d) 3.23E-09 Tesla +e) 4.07E-09 Tesla
===12===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.7 m has a gap of 16 mm, and a charge of 12 μC. The capacitor is discharged through an 8 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 6.62E-10 Tesla -b) 8.33E-10 Tesla -c) 1.05E-09 Tesla -d) 1.32E-09 Tesla +e) 1.66E-09 Tesla
===13===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.9 m has a gap of 16 mm, and a charge of 46 μC. The capacitor is discharged through a 9 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
+a) 5.00E-09 Tesla -b) 6.29E-09 Tesla -c) 7.92E-09 Tesla -d) 9.97E-09 Tesla -e) 1.26E-08 Tesla
===14===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.9 m has a gap of 14 mm, and a charge of 56 μC. The capacitor is discharged through a 6 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 3.18E-09 Tesla -b) 4.00E-09 Tesla -c) 5.04E-09 Tesla -d) 6.34E-09 Tesla +e) 7.99E-09 Tesla
===15===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.8 m has a gap of 14 mm, and a charge of 55 μC. The capacitor is discharged through an 8 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 3.95E-09 Tesla -b) 4.97E-09 Tesla +c) 6.26E-09 Tesla -d) 7.88E-09 Tesla -e) 9.92E-09 Tesla
===16===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.4 m has a gap of 12 mm, and a charge of 85 μC. The capacitor is discharged through an 8 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 5.39E-09 Tesla -b) 6.79E-09 Tesla -c) 8.55E-09 Tesla +d) 1.08E-08 Tesla -e) 1.35E-08 Tesla
===17===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 3.1 m has a gap of 9 mm, and a charge of 85 μC. The capacitor is discharged through a 5 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 2.33E-08 Tesla -b) 2.93E-08 Tesla +c) 3.69E-08 Tesla -d) 4.65E-08 Tesla -e) 5.85E-08 Tesla
===18===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.6 m has a gap of 15 mm, and a charge of 57 μC. The capacitor is discharged through a 9 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 4.43E-09 Tesla -b) 5.57E-09 Tesla +c) 7.02E-09 Tesla -d) 8.83E-09 Tesla -e) 1.11E-08 Tesla
===19===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4 m has a gap of 14 mm, and a charge of 78 μC. The capacitor is discharged through a 5 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 9.77E-09 Tesla -b) 1.23E-08 Tesla -c) 1.55E-08 Tesla -d) 1.95E-08 Tesla +e) 2.45E-08 Tesla
===20===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 3.5 m has a gap of 14 mm, and a charge of 88 μC. The capacitor is discharged through a 7 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 1.86E-08 Tesla -b) 2.34E-08 Tesla +c) 2.95E-08 Tesla -d) 3.72E-08 Tesla -e) 4.68E-08 Tesla
===21===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 3.9 m has a gap of 8 mm, and a charge of 55 μC. The capacitor is discharged through an 8 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 5.30E-09 Tesla +b) 6.67E-09 Tesla -c) 8.39E-09 Tesla -d) 1.06E-08 Tesla -e) 1.33E-08 Tesla
===22===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.8 m has a gap of 9 mm, and a charge of 53 μC. The capacitor is discharged through a 6 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 3.26E-09 Tesla -b) 4.11E-09 Tesla +c) 5.17E-09 Tesla -d) 6.51E-09 Tesla -e) 8.19E-09 Tesla
===23===
{<!--c24ElectromagneticWaves_displacementCurrent_4-->A circular capacitor of radius 4.1 m has a gap of 9 mm, and a charge of 79 μC. The capacitor is discharged through a 6 kΩ resistor. What is the maximum magnetic field at the edge of the capacitor? (There are two ways to do this; you should know both.)}
-a) 7.80E-09 Tesla -b) 9.82E-09 Tesla +c) 1.24E-08 Tesla -d) 1.56E-08 Tesla -e) 1.96E-08 Tesla
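All four question types above can be sanity-checked from the standard formulas. The script below is an editorial addition (not part of the quiz); it assumes the usual parallel-plate and RC-discharge results: E = Q/(ε0·πr²), c⁻²∮E·dA = μ0·Q, τ = RC with C = ε0·πr²/d, and B = μ0·I/(2πr) with initial discharge current I = Q/τ.

import math

EPS0 = 8.854e-12        # vacuum permittivity, F/m
MU0 = 4e-7 * math.pi    # vacuum permeability, H/m

def check(r, gap_mm, q_uC, R_kohm=None):
    """Recompute the quiz quantities for one capacitor."""
    A = math.pi * r**2                 # plate area, m^2
    d, q = gap_mm * 1e-3, q_uC * 1e-6
    out = {'E': q / (EPS0 * A),        # field between the plates, V/m
           'integral': MU0 * q}        # c^-2 * E * A reduces to mu0 * Q
    if R_kohm is not None:
        C = EPS0 * A / d               # parallel-plate capacitance, F
        tau = R_kohm * 1e3 * C         # RC decay time, s
        I = q / tau                    # initial discharge current, A
        out['tau'] = tau
        out['B_edge'] = MU0 * I / (2 * math.pi * r)  # field at the rim, T
    return out

# First version of each question family:
print(check(4.2, 8, 45))       # E ≈ 9.17E+04 V/m, option d)
print(check(3.2, 13, 49))      # integral ≈ 6.16E-11 Vs²m⁻¹, option d)
print(check(4.9, 17, 54, 9))   # tau ≈ 3.54E-04 s, option b)
print(check(3.3, 12, 93, 9))   # B_edge ≈ 2.48E-08 T, option e)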
# Chapter 7 - Section 7.3 - Simplifying Radicals, the Distance Formula, and Circles - 7.3 Exercises: 11

6

#### Work Step by Step

The product rule for radicals tells us that $\sqrt[n]{a}\times\sqrt[n]{b}=\sqrt[n]{ab}$ (when $\sqrt[n]{a}$ and $\sqrt[n]{b}$ are real numbers and $n$ is a natural number). That is, the product of two nth roots is the nth root of the product. Therefore, $\sqrt{18}\times\sqrt{2}=\sqrt{18\times2}=\sqrt{36}=6$. We know that $\sqrt{36}=6$ because $6^{2}=36$.
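A quick numeric sanity check of the product rule (an editorial addition), using Python:

import math

print(math.sqrt(18) * math.sqrt(2))  # ≈ 6 (up to floating-point rounding)
print(math.sqrt(18 * 2))             # 6.0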
# WooHoo

Create a math problem based on this gif.

Note by Lew Sterling Jr 6 years, 3 months ago

Sort by:

Given that $H$ and $W$ are real numbers and $\large{\theta=\pi\left(10^{\huge{\frac{\sin^{-1}\left(\log\left(\int_{0}^{\infty}x^{\left(W\cdot0 \cdot 0 \cdot H \cdot 0 \cdot 0\right)!}e^{-x}dx\right)\right)}{\cos\pi+i\sin\pi}+1}}\right)}$ $+1+2+3+4+6+7+8+9+10+11+12+13,$ find the value of $\tan\theta^{\circ}$. - 6 years, 2 months ago

$- \cot 1^{\circ}$ - 6 years, 2 months ago

Just...wow. - 6 years, 2 months ago

If all those little (let's call them fubbies) were allowed to stand wherever they wanted to, but only in one of the positions they're occupying right now, what would be the probability that they would get the wave perfect? Assume that all the fubbies have their own timing at which they'll put their hand up, unaffected by their position. Also assume that there are only $7$ of them, and ignore the ones behind. They make it too hard. - 6 years, 2 months ago

Nice Job. clapping - 6 years, 2 months ago

Hard, eh? :P - 6 years, 2 months ago

I understand it, and still trying to answer that question. xD - 6 years, 2 months ago

Only in the current position (as shown in gif) the wave is perfect? All other arrangements, its not? - 6 years, 2 months ago

Yup. - 6 years, 2 months ago
# Weyl Group Element $w$ fixing a root, and its presentation as product of simple reflections $w=s_1\dots s_n$ Let $$\Phi$$ be a root system and $$\gamma \in \Phi$$ a root. Let $$W$$ be the Weyl group and $$\Delta$$ a set of simple roots. Let $$w \in W$$ such that $$w(\gamma)=\gamma$$. Is it true that if $$w=s_1\dots s_n$$ with $$s_i$$ simple reflections, then $$s_i(\gamma)=\gamma$$ for $$i=1\dots n$$? I can't see a reason why this shouldn't be true, but am unable to prove it. Thanks! This is definitely not true. For instance, already in $$\Phi=B_2$$, each root has a root orthogonal to it, so for every root there is some nontrivial element (in fact, a reflection) of the Weyl group fixing it. But e.g. a simple root in $$B_2$$ is not fixed by any simple reflection. However, what you might want to know is the following. If we choose any point $$v$$ in the dominant chamber of our root system, then the stabilizer of $$v$$ is exactly the parabolic subgroup of $$W$$ generated by simple reflections that fix $$v$$. For a proof of this see Lemma 10.3B of Humphreys' "Introduction to Lie Algebras and Representation Theory" (https://www.springer.com/us/book/9780387900537).
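To make the $$B_2$$ counterexample fully explicit (a worked instance added for illustration, in standard coordinates): take simple roots $$\alpha_1 = e_1 - e_2$$ and $$\alpha_2 = e_2$$, and let $$\gamma = \alpha_1$$. The long root $$\beta = e_1 + e_2$$ is orthogonal to $$\gamma$$, so $$s_\beta(\gamma) = \gamma$$. In terms of simple reflections, $$s_\beta = s_2 s_1 s_2$$ (indeed $$s_2 s_1 s_2$$ sends $$e_1 \mapsto -e_2$$ and $$e_2 \mapsto -e_1$$, hence fixes $$e_1 - e_2$$), yet neither generator fixes $$\gamma$$: $$s_1(\gamma) = -\gamma$$ and $$s_2(\gamma) = e_1 + e_2$$.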
# Python (Numpy) Non-Elementwise Array Operations 1. Jan 20, 2014 ### thegreenlaser Python (numpy) question: I have two 2D numpy arrays, A[i,j] and B[k,l], but the indexes are unrelated to each other (A and B won't even have the same dimensions in general). I want to be able to add/multiply these two together to get a 4D matrix: C[i,j,k,l] = A[i,j] + B[k,l] or C[i,j,k,l] = A[i,j]*B[k,l]. Obviously I can do this by looping through all four indices and running the statement "C[i,j,k,l] = A[i,j] + B[k,l]" or "C[i,j,k,l] = A[i,j]*B[k,l]" at each iteration of the loop, but is there an elegant way of doing this using a numpy function rather than loops? Some sort of addition function that adds an N-dimensional array to an M-dimensional array to give an (N+M)-dimensional array of all possible pairwise additions of elements? My loops are taking a while to run, and I'm hoping that using a "proper" numpy array manipulation function rather than loops would speed things up (and clean up my code a bit). 2. Jan 20, 2014 ### AlephZero 3. Jan 21, 2014 ### thegreenlaser Thanks. It's kind of too bad there's no such operation... However, I've at least realized that rather than looping through all four indices i, j, k and l, I can just loop through two (e.g. i and j) and then run the statement "C[i,j,:,:] = A[i,j] + B" or "C[i,j,:,:] = A[i,j]*B" each time. That at least cuts out a decent chunk of the loop iterations. In fact, in my case I was able to cut out almost all of the loop iterations since k and l have much larger ranges than i and j. 4. Jan 21, 2014 ### Staff: Mentor Is there some reason you need to add matrices in the way you described? The reason that there isn't a function like what you're asking for is that the usual matrix operations of addition and multiplication are very limited in how they operate. Two matrices can be added only if they are both the same size - same number of rows and columns. Two matrices A and B can be multiplied only if the number of columns of A is the same as the number of rows of B. Are you doing this just to do it, or do you have some reason to combine matrices in the way you described? 5. Jan 22, 2014 ### thegreenlaser Here's a really simple example with 1D matrices of why I would want to do this. Say I have N different sources, and I'm trying to calculate a function corresponding to each source: fn(x) where $$f_n(x) = x - s_n$$ Now, in a program, the variable "x" would actually be a 1xM matrix of values, so fn(x) would actually be an NxM matrix with values f[n,m] = x[m] - s[n]. My situation is more complicated, but it's the same sort of thing. The indices i and j correspond to x and y positions and the indices k and l correspond to different sources and I'm calculating the field created by each source. 6. Jan 22, 2014 ### Staff: Mentor The notation isn't helping any that I can see. An example of what you're trying to do might be more helpful. Suppose that s is a vector with 5 elements (N = 5), and that x is a vector with 7 elements (M = 7). Here's a possible s vector: s = <1, 2, 3, 4, 5>. And here's a possible x vector: x = <2, 4, 6, 8, 10, 12, 14>. How do you see f1(x) being calculated? The subtraction you show doesn't make sense to me, because x and s are of different sizes. 7. Jan 22, 2014 ### thegreenlaser f1(x) = <1,3,5,7,9,11,13> I don't think you're really understanding what I'm asking, so let me try a more concrete example. Say I have three charged particles located on the x-axis at s1 = -3, s2 = 0, and s3 = 5. 
The electric field due to charge n at some position x will depend on the distance away from that source, which is given by x - sn, right? So En(x) would depend on the quantity (x - sn). Now, I'm writing a program, and I want to find each En(x) at a bunch of points between x = -10 and x = +10 so that I can plot the electric field for each charge. So, in the program, I create an array of 5 points x = [-10, -5, 0, 5, 10] that I want to calculate each En field at. In the end, I'll have a 3x5 array "E" of points, where E[n,m] is the field due to the nth source at position x[m]. To calculate that value, I need to calculate x[m] - s[n]. I'm subtracting the number s[n] (the position of the nth source) from the number x[m] (the position I'm calculating En at), and I'm doing that for each combination of n and m.

Maybe this makes it clearer: E[n,:] is an array which corresponds to En(x) calculated at a bunch of different x values (since you can't use a continuous variable in programming, you have to pick an array of values to calculate the function at). Hopefully that makes sense? If I'm solving this problem with pen and paper, there's no point where I'm adding/subtracting N- or M-dimensional vectors. I think the term "array" is probably more helpful than the word "vector" in this context.

8. Jan 22, 2014

### thegreenlaser

This thread is kind of getting off track though... without worrying about why I need to do it, can we just trust that I do need to perform such a calculation? To get back on track... I'm looking for a numpy function that accomplishes the same thing as the following piece of code:

Code (Text):

# A, B, and C are numpy arrays
# A is Ni by Nj
# B is Nk by Nl
# C is Ni by Nj by Nk by Nl
for i in range(Ni):
    for j in range(Nj):
        for k in range(Nk):
            for l in range(Nl):
                C[i,j,k,l] = A[i,j] + B[k,l]

9. Jan 22, 2014

### Staff: Mentor

Based on this, what you're doing is picking one component of s and subtracting it from each component of x. Here is some C code that would do this operation.

Code (Text):

for (j = 0; j < M; j++) {
    f1(j) = x(j) - s(1);
}

Here is some code that would calculate all of the values in the f array. Edit: Changed the code below to address a bug reported in post #12.

Code (Text):

for (i = 0; i < N; i++) {
    for (j = 0; j < M; j++) {
        f(i, j) = x(j) - s(i);
    }
}

When the code above is done, you'll have a vector of function values <f1, f2, ..., fN>, where each component is calculated in the way that you calculated f1(x). Let's stay with the simpler example until I get a better understanding of what you're trying to do. Does the above do what you're trying to do? "Vector" and "array" are pretty much synonymous when the array is one-dimensional.

10. Jan 23, 2014

### thegreenlaser

Yes, exactly, this is basically what I have currently. See my last post; the python code in that post does exactly what I want it to do, it's just that numpy functions tend to run a lot faster than loops, which is why I'm looking for a numpy function to accomplish the same thing as that loop.

Last edited by a moderator: Jan 23, 2014

11. Jan 23, 2014

### Staff: Mentor

As AlephZero said early on in this thread, there's not much demand for what you're trying to do, so there's not likely to be a special function to do it. If you want the code to run faster, write your code in a compiled language such as C or C++. My understanding is that python is interpreted, and interpreted languages inherently run much more slowly than compiled languages.

12.
Jan 23, 2014

### AlephZero

There must be some misunderstanding, or a typo, there. Do you really mean something like

Code (Text):

f(i) += x(j) - s(i);

or

Code (Text):

f(i,j) = x(j) - s(i);

Either way, the bottom line is you are calculating i*j values, so the only optimization you can make is to reduce the overhead of stepping through the loops, in an interpreted language. You could change the first option to something like

Code (Text):

f(i) = sum(x(:)) - N*s(i)

(assuming there is a built-in math function called sum) or change the second option to use array slices instead of the inner loop, something like

Code (Text):

f(i,:) = x(:) - s(i);

(note, I'm not a Python programmer so my syntax is probably wrong in those examples!)

13. Jan 23, 2014

### Staff: Mentor

Code (Text):

f(i,j) = x(j) - s(i);

When I made the transition from calculating f1(x) to calculating all of the f values, I failed to take that into account. Here's the corrected version:

Code (Text):

for (i = 0; i < N; i++) {
    for (j = 0; j < M; j++) {
        f(i, j) = x(j) - s(i);
    }
}

14. Jan 23, 2014

### thegreenlaser

Yes, I'm aware of this. This is why I want to use a built-in numpy function rather than looping through the array. As I understand it, the built-in functions are pre-compiled rather than interpreted, which is why they tend to run much faster than the interpreted loops.

I guess the answer is just that there is no such function. I'll try to find another way to make my code run faster... I think it's possible using a package to write part of a python script in C so that it runs fast. I'll look into that. Thanks for the help.

15. Dec 7, 2016

### hautahi

Hi, just stumbled upon this. This can be implemented using outer product and reshaping commands in numpy:

Code (Python):

import numpy as np

s, x = [1,2,3,4,5], [2,4,6,8,10,12,14]
N, M = len(s), len(x)
fp = np.repeat(x,N) - np.outer(np.ones([M,1]),s).flatten()
f = fp.reshape([N,M],order='F')
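For completeness, a note added in editing: modern NumPy handles the original 4-D question directly through broadcasting, with no explicit outer products or reshapes.

import numpy as np

A = np.arange(6.0).reshape(2, 3)     # Ni x Nj
B = np.arange(20.0).reshape(4, 5)    # Nk x Nl

# Two length-1 axes make the shapes (2,3,1,1) and (4,5) broadcast
# to (2,3,4,5), i.e. C[i,j,k,l] = A[i,j] + B[k,l]:
C_add = A[:, :, None, None] + B
C_mul = A[:, :, None, None] * B

# The ufunc .outer methods produce the same arrays:
assert np.array_equal(C_add, np.add.outer(A, B))
assert np.array_equal(C_mul, np.multiply.outer(A, B))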
# Chapter 11 - Additional Topics - 11.4 - Complex Numbers - Problem Set 11.4: 55

#### Work Step by Step

Complex numbers are "complex" because they combine two types of numbers--real numbers and imaginary numbers. After all, when we add 1 and $2i$, we are adding a real and an imaginary number. Every real number $a$ can be written as the complex number $a+0i$, so the real numbers are a subset of the complex numbers, as they are included in the set of complex numbers.
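As a small added illustration, Python's built-in complex type makes the subset relationship concrete: a real number is just a complex number whose imaginary part is zero.

z = complex(3, 0)      # the real number 3 viewed as a complex number
print(z == 3)          # True
print((1 + 2j).real, (1 + 2j).imag)  # 1.0 2.0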
# Governance policies for a blue bio-economy

A blue bio-economy aims to optimize the development and utilization of coastal and marine ecosystem services for the benefit of society in a market context. The concept of ecosystem services considered here is elaborated in the article Ecosystem services.

## Pricing of ecosystem services

The free market economy does not guarantee that optimal use is made of the services that ecosystems potentially provide[1]. Ecosystem services that benefit a specific group of users can (and in most cases will) be developed and enhanced through market mechanisms, to the extent that user value exceeds the production costs. This is less the case for ecosystem services that benefit society in general; without government intervention, these services are usually either underdeveloped or compromised by overexploitation (the 'Tragedy of the Commons'). Government intervention is also necessary when the market price of ecosystem services does not take into account the social costs of negative side effects associated with these services. The unpriced side effects associated with the utilization of ecosystem services are called externalities; externalities can be positive, but more often they are negative. A positive externality is the provision of regulatory or cultural services associated with an ecosystem service driven by market demand (see Ecosystem services). Pollution and habitat degradation are examples of negative externalities. Evaluating the net societal impact of the marketing of ecosystem services on the coastal and marine environment is not always straightforward, because impacting activities generally have beneficial effects on some environmental aspects and detrimental effects on others. Several methods have been developed to set a price on coastal and marine ecosystem services; see e.g. Multifunctionality and Valuation in coastal zones: concepts, approaches, tools and case studies and other articles in the category Evaluation and assessment in coastal management. Although there is no single unambiguous method, we will assume here that it is possible to assess whether the net societal effect of externalities is beneficial or detrimental.

## Categories of ecosystem services

The type of intervention that is most appropriate and effective depends on the type of ecosystem service and on local social and cultural conditions. Some principles have been elaborated by Hasselström and Gröndahl (2021)[2] for eutrophication reduction and carbon storage issues. These principles can also be applied to other ecosystem services. They distinguish four categories of ecosystem services based on the market value $M$ of the ecosystem service, the production costs $P$ and the net societal impact $E$ of the externalities associated with the utilization of the ecosystem service. The net societal impact (the sum of associated beneficial and detrimental externalities) can be positive or negative. The four categories of ecosystem services, illustrated with some possible examples, are:

1. $E \gt 0, \quad M \gt P.$
Possible examples (only positive externalities indicated):

• M = Promoting beach tourism, P = Beach nourishment and sewage treatment, E = Coastal protection, ecosystem health
• M = Food provision, P = Farming of molluscs in eutrophic waters, E = Reduction of excess nutrients
• M = Medical applications, P = Seaweed farming, E = Reduction of excess nutrients
• M = Protection of coastal assets, P = Restoration of mangrove forests, E = Contribution to carbon sequestration and other ecosystem services

2. $E \lt 0, \quad M \gt P.$ Possible examples (only negative externalities indicated):

• M = Food provision, P = Fishery, E = Food web disturbance (e.g. trophic cascade effects), habitat degradation/loss, biodiversity loss and species extinction (see Effects of fisheries on marine biodiversity)
• M = Coastal tourism, P = Real estate development, E = Habitat degradation, landscape alteration
• M = Shipping, P = Canalization of estuaries, E = Loss of habitats and associated ecosystem functionalities

3. $E \gt 0, \quad M \lt P.$ Possible examples (only positive externalities indicated):

• M = Food provision, P = Harvesting invasive species (e.g. jellyfish), E = Food web restoration
• M = Renewable energy, P = Biomass from seaweed cultivation, E = Carbon sequestration, biofertilizer and other ecosystem services (see Seaweed (macro-algae) ecosystem services)
• M = Non-marketable services, e.g. water quality, landscape, P = Restoration of coastal wetlands, E = Contribution to carbon sequestration, biodiversity and other ecosystem services

4. $E \lt 0, \quad M \lt P.$

## Policy instruments for a blue bio-economy

Categories 1 and 4 do not need any governmental intervention. Category 1 will normally occur in a liberal market and delivers net beneficial side effects to society. However, the benefit of category 1 ecosystem services for society can possibly be enhanced. Targeting government subsidies to ecosystem service providers to increase the benefit of positive externalities is an option, but entails the risk of spending public money on services that would have been provided anyway[2]. Labels certifying that the ecosystem service has been produced without harm to the environment, so-called eco-certification, are a market-based instrument to limit possible detrimental side effects. The higher price consumers are ready to pay for certified products generates an incentive for producers to take all necessary measures to prevent any environmental damage their activities could cause[3]. Category 4 will not happen in a liberal market economy.

Category 2 will normally occur in a liberal market economy, but the negative side effects may be considered socially unacceptable. Several types of governmental intervention can address this issue:

• Setting a legally binding and enforceable limit on the production of negative side effects, or the obligation to completely phase out certain negative side effects. Such an intervention is in line with the Precautionary Principle. Although emission standards for polluting substances exist in many countries, enforcement is often insufficient to ensure full compliance[4].
• Obligation to apply the Best Available Techniques (BAT) or Best Available Techniques Not Entailing Excessive Costs (BATNEEC). BAT and BATNEEC are key elements of environmental laws in many countries around the world, especially for setting emission limit values and other permit conditions in preventing and controlling industrial emissions[5].
• Compensatory measures.
Under EU law, actors carrying out activities that cause harm to protected species or protected habitats that cannot reasonably be avoided are obliged to take compensatory measures that neutralize these adverse environmental effects.

• The polluter pays principle (PPP). The user of the ecosystem service is obliged to pay a tax or fine for repairing any damage caused to the environment. This principle, when applied to companies that make use of ecosystem services, generates a strong incentive for these actors to mitigate negative side effects. The PPP can be extended to include assurance or performance bonds, which are an economic instrument to ensure that the worst-case cost of damage is covered. Assurance premiums provide an incentive to limit impacts and to repair any damage the activity has produced. For example, lower insurance costs for the use of more environmentally friendly fishing gear provide an added incentive for their earlier adoption and development[6].

Fig. 1. Governmental policies for a blue bio-economy, adapted after Hasselström and Gröndahl (2021)[2].

• Market-Based Instruments (MBI). The most commonly applied MBI is CAT - Cap and Trade (not to be confused with Common Asset Trust), based on tradable allowances (emission rights) to cause a certain amount of environmental damage over a certain period of time. CAT encourages actors to take mitigation measures and sell permits that are no longer needed, so that emission reductions are made where they are least costly. Reduction of environmental damage is achieved by limiting the number of allowances. Ensuring actors do not exceed their allowances is crucial for the credibility of the CAT mechanism and requires rigorous monitoring and validation. CAT is implemented in practice to reduce CO2 emissions via the so-called ETS - Emission Trading System. Measures to remove emitted pollutants from the environment (compensation schemes, e.g. carbon sequestration) are promoted in the ETS by granting tradable allowances to such measures (see Blue carbon sequestration). Other applications of CAT are conceivable, for example as a way to reduce the emission of nutrients[2]. In the fishing industry, tradable fishing rights are referred to as Individually Transferable Quotas (ITQ): the right to harvest a certain proportion of the total allowable catch (TAC). ITQs represent catch shares where the shares are transferable; shareholders have the freedom to buy, sell and lease quota shares. As the economic value of quota shares increases when fish stocks are well managed (higher TAC), ITQ shares create an economic incentive for environmental stewardship. However, ITQs do not exclude 'free-riding' behavior[7].

Category 3 ecosystem services will not normally be taken up in a liberal market economy, even if they provide significant societal benefits. Financial compensation is needed here to promote the development or strengthening of these ecosystem services. Instruments are:

• PES - Payment for Ecosystem Services. Government subsidies (PES) can give service providers the necessary incentives to develop or strengthen these ecosystem services, provided the financial compensation is at least equal to the production costs of the service. In some cases, PES grants are awarded by private parties if the ecosystem service delivers benefits to their business. PES grants are conditionally linked to the delivery of agreed ecosystem services and should not exceed the value of the ecosystem services provided[8].
PES differs from a public-private partnership (PPP in a different sense than the polluter pays principle above) in that the subsidized ecosystem service benefits society as a whole (no exclusive exploitation rights).
• CAT - Cap and Trade. Ecosystem services that mitigate the impact of environmental damage by economic activities can be (co)financed by selling allowances for a certain amount of damage. This requires the existence or the creation of a CAT market for this type of damage. This is the case for the greenhouse gas Emission Trading System, which can provide credits for local blue carbon projects to enhance carbon sequestration in coastal wetlands, e.g. mangrove forests, salt marshes, seagrass meadows (see Blue carbon sequestration). An international organization (Verra) has established generally accepted Verified Carbon Standards for certifying blue carbon projects through assessment of the amount of sequestered carbon. The potential of blue carbon financing of coastal wetland creation, conservation and maintenance is currently largely unused[9]. An increase in the market price of carbon credits would provide a strong stimulus for blue carbon projects[10]. Verified blue carbon projects can be (co-)financed by selling credits on the international market, or they can be government funded to meet national targets for reducing greenhouse gas emissions. However, the high costs associated with labor-intensive carbon measurement and monitoring for blue carbon certification are a concern for small-scale projects, for which PES funding or other funding schemes may be preferred[11].
A blue bioeconomy would also be boosted by the establishment of an international trading system for nutrient offset credits. In this way, financial resources could be raised for the restoration and maintenance of coastal habitats that provide other ecosystem services in addition to eutrophication mitigation. Currently (2022) there are few initiatives to develop market-based tools for nutrient offsetting and other ecosystem services in coastal areas[2].
Governance policies fostering a blue bio-economy discussed in this article are summarized in Fig. 1.
## Related articles
Ecosystem services
Multifunctionality and Valuation in coastal zones: concepts, approaches, tools and case studies
Blue carbon sequestration
Seaweed (macro-algae) ecosystem services
## References
1. Austen, M.C., Andersen, P., Armstrong, C., Döring, R., Hynes, S., Levrel, H., Oinonen, S. and Ressurreiçao, A. 2019. Valuing marine ecosystems - taking into account the value of ecosystem benefits in the blue economy. Future Science Brief 5 of the European Marine Board, Ostend, Belgium. ISBN 9789492043696
2. Hasselström, L. and Gröndahl, F. 2021. Payments for nutrient uptake in the blue bioeconomy – When to be careful and when to go for it. Marine Pollution Bulletin 167, 112321
3. Froger, G., Boisvert, V., Méral, P., Le Coq, J-F., Caron, A. and Aznar, O. 2015. Market-Based Instruments for Ecosystem Services between Discourse and Reality: An Economic and Narrative Analysis. Sustainability 7, 11595-11611
4. Giles, C. 2020. Next Generation Compliance: Environmental Regulation for the Modern Era. Part 2: Noncompliance with Environmental Rules Is Worse Than You Think. Harvard Law School, Environmental and Energy Law Program
5. OECD 2017. Report on OECD project on best available techniques for preventing and controlling industrial chemical pollution, Activity: policies on BAT or similar concepts across the world. Health and Safety Publications, Series on Risk Management No. 40, ENV/JM/MONO(2017)12
6.
Innes, J., Pascoe, S., Wilcox, C., Jennings, S. and Paredes, S. 2015. Mitigating undesirable impacts in the marine environment: a review of market-based management measures. Front. Mar. Sci. 2: 76
7. Garrity, E.J. 2020. Individual Transferable Quotas (ITQ), Rebuilding Fisheries and Short-Termism: How Biased Reasoning Impacts Management. Systems 8, 7
8. Wunder, S. 2015. Revisiting the concept of payments for environmental services. Ecological Economics 117: 234-243
9. Macreadie, P.I. et al. 2022. Operationalizing marketable blue carbon. One Earth 5: 485-492
10. Zeng, Y., Friess, D.A., Sarira, T.V., Siman, K. and Koh, L.O. 2021. Current Biology 31: 1-7
11. Friess, D.A., Howard, J., Huxham, M., Macreadie, P.I. and Ross, F. 2022. Capitalizing on the global financial interest in blue carbon. PLOS Clim 1(8): e0000061
The main author of this article is Job Dronkers. Please note that others may also have edited the contents of this article.
Citation: Job Dronkers (2022): Governance policies for a blue bio-economy. Available from http://www.coastalwiki.org/wiki/Governance_policies_for_a_blue_bio-economy [accessed on 30-11-2022]
Dielectron azimuthal anisotropy at mid-rapidity in Au + Au collisions at $\sqrt{s_{_{NN}}} = 200$ GeV
The STAR collaboration. Phys. Rev. C 90 (2014) 064904, 2014.
Abstract
We report on the first measurement of the azimuthal anisotropy ($v_2$) of dielectrons ($e^{+}e^{-}$ pairs) at mid-rapidity from $\sqrt{s_{_{NN}}} = 200$ GeV Au+Au collisions with the STAR detector at RHIC, presented as a function of transverse momentum ($p_T$) for different invariant-mass regions. In the mass region $M_{ee}\!<1.1$ GeV/$c^2$ the dielectron $v_2$ measurements are found to be consistent with expectations from $\pi^{0}$, $\eta$, $\omega$ and $\phi$ decay contributions. In the mass region $1.1\!<M_{ee}\!<2.9$ GeV/$c^2$, the measured dielectron $v_2$ is consistent, within experimental uncertainties, with that from the $c\bar{c}$ contributions.
• Figure 19 (exp. data), 10.17182/hepdata.96269.v1/t1: The dielectron $v_2$ in the $\pi^0$ Dalitz decay region as a function of $p_T$ in different centralities from Au + ...
• Figure 19 (sim. data), 10.17182/hepdata.96269.v1/t2: Expected dielectron $v_2$ from $\pi^0$ Dalitz decay as a function of $p_T$ in different centralities from Au + Au collisions ...
• Figure 21 a (exp. data), 10.17182/hepdata.96269.v1/t3: The dielectron $v_2$ as a function of $p_T$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV for ...
• Figure 21 b (exp. data), 10.17182/hepdata.96269.v1/t4: The dielectron $v_2$ as a function of $p_T$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV for ...
• Figure 21 c (exp. data), 10.17182/hepdata.96269.v1/t5: The dielectron $v_2$ as a function of $p_T$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV for ...
• Figure 21 d (exp. data), 10.17182/hepdata.96269.v1/t6: The dielectron $v_2$ as a function of $p_T$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV for ...
• Figure 21 e (exp. data), 10.17182/hepdata.96269.v1/t7: The dielectron $v_2$ as a function of $p_T$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV for ...
• Figure 21 f (exp. data), 10.17182/hepdata.96269.v1/t8: The dielectron $v_2$ as a function of $p_T$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV for ...
• Figure 21 a (sim. data), 10.17182/hepdata.96269.v1/t9: The dielectron $v_2$ as a function of $p_T$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV expected ...
• Figure 21 b (sim. data), 10.17182/hepdata.96269.v1/t10: The dielectron $v_2$ as a function of $p_T$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV expected ...
• Figure 21 c (sim. data), 10.17182/hepdata.96269.v1/t11: The dielectron $v_2$ as a function of $p_T$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV expected ...
• Figure 21 d (sim. data), 10.17182/hepdata.96269.v1/t12: The dielectron $v_2$ as a function of $p_T$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV expected ...
• Figure 21 e (sim. data), 10.17182/hepdata.96269.v1/t13: The dielectron $v_2$ as a function of $p_T$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV expected ...
• Figure 21 f (sim. data), 10.17182/hepdata.96269.v1/t14: The dielectron $v_2$ as a function of $p_T$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV expected ...
• Figure 22 (exp. data), 10.17182/hepdata.96269.v1/t15: The $p_T$-integrated dielectron $v_2$ as a function of $M_{ee}$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV.
• Figure 22 (hadron data), 10.17182/hepdata.96269.v1/t16: The $v_2$ of hadrons $\pi$, $K$, $p$, $\phi$ and $\Lambda$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 ...
• Figure 22 (sim. data), 10.17182/hepdata.96269.v1/t17: The $p_T$-integrated dielectron $v_2$ as a function of $M_{ee}$ in minimum-bias Au + Au collisions at $\sqrt{s_{NN}}$ = 200 GeV, ...
# Averaging/smoothing a process variable for a readout - LCD

Dacr0n
Member

Hey all! I have an Arduino microcontroller acting as a PID thermostat. I also have a 2x16 LCD screen hooked up... the first line displays the "setpoint" and the second line displays the current temp or "process variable". All is working fine, except the sample rate of the PID script is causing the numbers on the second line to be all scrambled. What I am wondering is how I can write a little piece of code that takes the "process variable" (current temp) and averages or smooths it out for the display, instead of changing the digits every 200 milliseconds like the sample rate does. And FYI, the sample rate must be 200 ms, so I can't change that.

Here's what my current code looks like:

Code:
/********************************************************
 * PID Simple Example
 * Reading analog input 0 to control analog PWM output
 ********************************************************/
#include <PID_Beta6.h>
#include <LiquidCrystal.h>

// Define the variables we'll be connecting to
double Setpoint, Input, Output;

// Specify the links and initial tuning parameters
PID myPID(&Input, &Output, &Setpoint, 2, 3, 0);

// We connect the LCD as on the page
// http://arduino.cc/en/Tutorial/LiquidCrystalBlink
// except that instead of pin 5 (which is already in use)
// we use pin 8, so we engage pins 12, 11, 8, 4, 3, 2.
LiquidCrystal lcd(12, 11, 8, 4, 3, 2);

const int onPin = 5;   // input pin to trigger the heater (leave these the same)
const int upPin = 6;   // input pin to increase temp
const int downPin = 7; // input pin to decrease temp
int buttonState = 0;   // variable for reading the pin status

// We also need previous states for the Up and Down buttons,
// because a person is slower than the processor: while a button
// is held, loop() cycles several times and the value would
// otherwise increase by more than 1.
int DownPushed = 0;
int UpPushed = 0;
int OnPushed = 0;
int PID_ON = 0; // note: never set anywhere below, so the '*' indicator stays off

unsigned long lastTime;

void setup()
{
  Serial.begin(19200);

  // initialize the variables we're linked to
  Input = analogRead(0);
  Setpoint = 100;

  // declare the pushbuttons as inputs
  pinMode(onPin, INPUT);
  pinMode(upPin, INPUT);
  pinMode(downPin, INPUT);

  // turn the PID on STANDBY
  myPID.SetMode(MANUAL);
  Output = 0;
  myPID.SetSampleTime(200);
  myPID.SetTunings(2, 3, 0);
  myPID.SetOutputLimits(0, 200);
  lastTime = millis();

  // set up the LCD's number of columns and rows
  lcd.begin(16, 2);
}

void loop()
{
  buttonState = digitalRead(onPin);
  // Changed behaviour: previously the heater only ran while the on
  // button was held; the intent is a toggle - push once for on,
  // push a second time for off.
  if (buttonState == HIGH) {
    myPID.SetMode(AUTO);   // turn LED/HEATER on
  } else {
    myPID.SetMode(MANUAL); // turn LED/HEATER off
    Output = 0;
  }

  // (Bug fix: a duplicate, uninitialized local "unsigned long lastTime;"
  // used to shadow the global variable here; it has been removed.)

  // These lines expect 250 ms for a "push", i.e. holding the button
  // for more than 1/4 second changes the setpoint by 1.
  if (digitalRead(upPin) == HIGH) {
    if (millis() - lastTime >= 250) {
      Setpoint += 1;
      lastTime = millis();
    }
  }
  if (digitalRead(downPin) == HIGH) {
    if (millis() - lastTime >= 250) {
      Setpoint -= 1;
      lastTime = millis();
    }
  }

  Input = analogRead(0);
  myPID.Compute();
  analogWrite(10, Output); // PWM output to the heater driver on pin 10

  // output to the LCD: show '*' if the heater is on, a blank if not
  lcd.setCursor(0, 0);
  if (PID_ON == 1) {
    lcd.print("*");
  } else {
    lcd.print(" ");
  }
  lcd.print("--SET--: ");
  lcd.print((int)Setpoint);
  lcd.setCursor(0, 1);
  lcd.print(" --AIR--: ");
  lcd.print(Input);
}

I have to write some sort of code that averages that last line, "Input", and displays the process variable (temperature) every 1000 or 2000 ms (1 or 2 seconds) instead of every 200 ms, which leaves me with just a blur of numbers. I have no clue what to do.

A friend told me: "there are two different issues involved here. One is the "busyness" of the display updating every 200ms. The other is the "bounciness" of the data. Smoothing the data by itself is not enough, in my experience, since a value that only changes by one or two counts, but changes 5 times a second, is still distracting. So the display updates should be further apart as well as having the data smoothed."

Another friend told me: "one way to get around that is to have a cycle counter that triggers your display: you display if the counter has reached a pre-defined number. but within each cycle, you continue to update the average. the interesting thing about exponential smoothing is that at any given point, the "average" contains information about all the past measurements. it is just that the older measurements are weighted exponentially less than the newer ones."

Is there any advice you'd recommend? I am kind of a rookie at coding and would appreciate any help! Maybe how to apply the advice those people gave me? Many thanks!

Here is a clip of what the display does. The top number is fine because that represents whatever the setpoint is... the readout below is what I need to make a little more readable. YouTube - LCD

Dacr0n
Member

Someone told me: "Increment a counter every pass. On the fifth pass, set the counter to zero, print out the average." But I'm a bit lost as to how to do that.

3v0
Coop Build Coordinator
Forum Supporter

Make an array of 5 elements.
Replace the oldest reading in the array with each new reading. Each time you store a new reading, add all the elements of the array and divide by 5. Display that value.

A moving average may be better. Do as above but use a 10 element array and average all 10 for a reading. Display this average as frequently or infrequently as you like.

Mosaic
Well-Known Member

To keep things efficient, try to sample using 4, 8, or 16 samples etc.; that way, calculating the average is quick when done with binary division (a shift), although a hardware divider might be quicker. A moving average is better, as 3v0 said, but you have more housekeeping: you have to keep rolling the array data to make space for the next new sample and then calculate the average. Basically you create an array stack that you keep pushing a new sample onto, so the rest of the data keeps moving down the stack as new data is pushed on top of it. If you have a 16 element stack, the bottom sample is sixteen samples old. This results in a smoothing of the data and compensates for any spikes or transients quite well based on the historical samples. Further, you average every time a sample is taken, so your data display is updated faster: you don't wait for 16 new samples.
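Putting the two pieces of advice together (smooth every sample, but redraw the LCD only once per second), a minimal sketch could look like the following. It reuses Input and lcd from the thermostat code above; the smoothing factor and the 1000 ms refresh interval are illustrative assumptions, not values from the thread:

Code:
// Exponential smoothing plus a throttled LCD refresh (illustrative sketch).
// Assumes the globals Input and lcd from the thermostat code above.

double smoothedInput = 0;                   // running exponentially weighted average
const double alpha = 0.2;                   // smoothing factor (assumed); smaller = smoother
unsigned long lastDisplayTime = 0;          // time of the last LCD redraw, in ms
const unsigned long displayInterval = 1000; // redraw once per second (assumed)

void updateDisplay(double newSample) {
  // Fold every 200 ms sample into the running average...
  smoothedInput = alpha * newSample + (1.0 - alpha) * smoothedInput;

  // ...but only redraw the LCD once per displayInterval.
  if (millis() - lastDisplayTime >= displayInterval) {
    lastDisplayTime = millis();
    lcd.setCursor(0, 1);
    lcd.print(" --AIR--: ");
    lcd.print((int)smoothedInput); // cast to int to drop jittery decimals
    lcd.print("   ");              // pad with blanks to erase leftover digits
  }
}

In loop(), the last two lcd.print lines would then be replaced by a single call to updateDisplay(Input). The exponential average folds every past sample into one running value, so no array is needed; the five-element array suggested above works just as well and is easier to reason about.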
# Three blocks with masses m, 2m, and 3m are connected by strings as shown in the figure. After an upward force F is applied on block m, the masses move upward at constant speed v. What is the net force on the block of mass 2m? (g is the acceleration due to gravity)
1. 2mg
2. 3mg
3. 6mg
4. zero
Subtopic: Application of Laws

An explosion breaks a rock into three parts in a horizontal plane. Two of them go off at right angles to each other. The first part of mass 1 kg moves with a speed of 12 ms–1 and the second part of mass 2 kg moves with a speed of 8 ms–1. If the third part flies off with a speed of 4 ms–1, then its mass is:
1. 5 kg
2. 7 kg
3. 17 kg
4. 3 kg
Subtopic: Newton's Laws

A car of mass 1000 kg negotiates a banked curve of radius 90 m on a frictionless road. If the banking angle is 45°, the speed of the car is:
1. 20 ms-1
2. 30 ms-1
3. 5 ms-1
4. 10 ms-1

A person of mass 60 kg is inside a lift of mass 940 kg and presses a button on the control panel. The lift starts moving upwards with an acceleration of 1.0 ms-2. If g = 10 ms-2, the tension in the supporting cable is:
1. 9680 N
2. 11000 N
3. 1200 N
4. 8600 N
Subtopic: Application of Laws

A body of mass M hits a rigid wall normally with velocity v and bounces back with the same velocity. The impulse experienced by the body is:
1. 1.5Mv
2. 2Mv
3. zero
4. Mv
Subtopic: Newton's Laws

A block of mass m is in contact with the cart C as shown in the figure. The coefficient of static friction between the block and the cart is $\mu$. The acceleration $\alpha$ of the cart that will prevent the block from falling satisfies:
1. $\alpha > \frac{mg}{\mu}$
2. $\alpha > \frac{g}{\mu m}$
3. $\alpha \ge \frac{g}{\mu}$
4. $\alpha < \frac{g}{\mu}$
Subtopic: Friction

A gramophone record is revolving with an angular velocity $\omega$. A coin is placed at a distance r from the centre of the record. The static coefficient of friction is $\mu$. The coin will revolve with the record if:
1. $r = \mu g \omega^2$
2. $r < \frac{\omega^2}{\mu g}$
3. $r \le \frac{\mu g}{\omega^2}$
4. $r \ge \frac{\mu g}{\omega^2}$
Subtopic: Friction

An explosion blows a rock into three parts. Two parts go off at right angles to each other: the first part, 1 kg, moving with a velocity of 12 ms-1 and the second part, 2 kg, moving with a velocity of 8 ms-1. If the third part flies off with a velocity of 4 ms-1, its mass would be:
1. 5 kg
2. 7 kg
3. 17 kg
4. 3 kg
Subtopic: Newton's Laws

The mass of a lift is 2000 kg. When the tension in the supporting cable is 28000 N, its acceleration is: (g = 10 m/s2)
1. 30 ms-2 downwards
2. 4 ms-2 upwards
3. 4 ms-2 downwards
4. 14 ms-2 upwards
Subtopic: Application of Laws

A body, under the action of a force $\vec{F} = 6\hat{i} - 8\hat{j} + 10\hat{k}$, acquires an acceleration of 1 ms-2. The mass of this body must be:
1. 2√10 kg
2. 10 kg
3. 20 kg
4. 10√2 kg
Subtopic: Newton's Laws
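As a quick worked check of the first two problems (added here for illustration): the blocks move at constant speed, so their acceleration, and hence the net force on the 2m block, is zero; and in the explosion the third fragment's momentum must cancel the resultant of the two perpendicular momenta:
$$F_{\text{net on }2m} = 2m \cdot 0 = 0, \qquad p_3 = \sqrt{(1\times 12)^2 + (2\times 8)^2} = 20\ \text{kg m s}^{-1}, \qquad m_3 = \frac{20}{4} = 5\ \text{kg}.$$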
# Shouldn't All Alternating Series Diverge?

The ${n}$th-term test for divergence everywhere I've seen basically states that, if the limit of the ${n}$th term as ${n}$ approaches infinity is not 0 or doesn't exist, the series diverges. However, this test does not give any precondition on the ${n}$th term (e.g. that it has to be positive), so it should be applicable to alternating series as well. So, shouldn't all alternating series diverge, since the limit of the $n$th term of the series, ${a_n =(-1)^nb_n}$ (including the ${(-1)^n}$), as $n$ approaches infinity usually does not exist? Won't this contradict those series that pass the Alternating Series Test (AST)? Or does the AST supersede the ${n}$th-term test in such cases (if so, why isn't this mentioned anywhere)?

• What would make you think that the limit of $(-1)^nb_n$ does not exist? – David C. Ullrich Mar 10 '16 at 17:48
• The limit of $b_n$ must be $0$ for the alternating series test. From this it follows that $(-1)^nb_n$ has limit $0$. – André Nicolas Mar 10 '16 at 17:48
• @David I mistakenly thought that, since the limit of ${(-1)^n}$ doesn't exist, the limit of ${(-1)^nb_n}$ would also not exist. I checked Wolfram Alpha for the limits of ${a_n}$ for convergent alternating series, and they are indeed 0, so the nth-term test remains valid after all! – Leponzo Mar 10 '16 at 18:09
• Let's stick to integers: $(-1)^x$ gets us into trouble for non-integer $x$. If $b_n$ has a non-zero limit, then sure, $(-1)^n b_n$ does not have a limit. But if $b_n$ is after a while close to $0$, then so is $(-1)^nb_n$. – André Nicolas Mar 10 '16 at 18:22
• Two comments. First, of course it was clear what your error was; I was trying to get you to find it yourself. The problem was remembering things without the details: It's a Fact that if the limits of $a_n$ and $b_n$ exist then the limit of $a_nb_n$ exists, and equals the product of the limits. But that Fact says nothing about limits not existing. Second: You checked Wolfram Alpha? That's very sad. Also dangerous - WA gets things wrong sometimes. If you think about the definition of limits it's very easy to see that if $\lim b_n=0$ then $\lim (-1)^nb_n=0$. – David C. Ullrich Mar 10 '16 at 18:33

Well, obviously there are convergent alternating series, so there must be some mistake here. We know that, for $\sum a_n$ to converge, the condition $$\lim_{n\to\infty}a_n=0$$ is necessary (but not sufficient!). So if we take $a_n=(-1)^nb_n$, then if $\sum (-1)^nb_n$ converges, we must have $$\lim_{n\to\infty}(-1)^nb_n=0$$ which happens if and only if $$\lim_{n\to\infty}b_n=0.$$ So we can indeed have an alternating sequence whose series converges, but we need $b_n\to 0$ for $\sum (-1)^nb_n$ to converge. An example: $$\sum_{k=1}^{\infty}(-1)^k\frac1k=-\log(2).$$ On the other hand, we can make a series like $$c_n= \begin{cases} \frac{1}{n} \text{ if } n \text{ is even}\\ \frac{1}{2n} \text{ otherwise } \end{cases}$$ for which $$\sum_{n=1}^\infty (-1)^nc_n$$ actually diverges to $+\infty$, even though $c_n \to 0$. We again see: for $\sum (-1)^nb_n$ to converge, the condition $\lim b_n=0$ is necessary, but not sufficient.
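To see concretely why this last example diverges even though $c_n \to 0$, pair consecutive terms: for $n = 2k$ and $n = 2k+1$,
$$c_{2k} - c_{2k+1} = \frac{1}{2k} - \frac{1}{2(2k+1)} = \frac{k+1}{2k(2k+1)} \ge \frac{1}{2(2k+1)},$$
so the partial sums of $\sum (-1)^n c_n$ dominate a harmonic-type series and tend to $+\infty$. The Alternating Series Test does not apply here because $c_n$ is not monotonically decreasing.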
# Is this functional weakly continuous?

Take a $C^1$ function $G \colon \mathbb{R}\to \mathbb{R}$ and define a functional $$\mathcal{G}(u)=\int_0^1G(u(t))\, dt, \quad u \in H^1(0, 1).$$ We then have $\mathcal{G}\in C^1\big(H^1(0, 1)\to \mathbb{R}\big)$. Now, I would like to apply Weierstrass's theorem to this functional, and so I need to show that it is weakly lower semicontinuous.

Question 1 Is it true?

Some course notes I'm reading act as if $\mathcal{G}$ were weakly continuous, because they claim the differential $$\mathcal{G}' \colon H^1(0, 1) \to \big[ H^1(0, 1) \big] '$$ is weak-strong continuous. (This trivially implies the claim.) To show that, they first compute $$\langle \mathcal{G}'(u), v \rangle = \int_0^1 G'(u)v\, dt,$$ which is clear to me, and then factor the mapping $$u \in H^1 \mapsto \mathcal{G}'(u) \in \big[ H^1 \big]'$$ as $$u \in H^1 \mapsto u \in L^\infty \mapsto G'\circ u \in L^\infty \mapsto \mathcal{G}'(u) \in \big[ H^1 \big]';$$ then, since the first embedding is compact (so they say) and the other arrows are continuous, the whole mapping is weak-strong continuous.

Question 2 This reasoning seems wrong to me, because the embedding $H^1(0, 1) \hookrightarrow L^\infty(0, 1)$ is not compact. Am I wrong?

The embedding $H^1(0,1) \hookrightarrow L^\infty(0,1)$ is indeed compact. This follows from general Sobolev embedding theorems, but in this special case it makes a nice exercise in using the Arzelà-Ascoli theorem. Leave a comment if you want hints.

With a clear mind it is easy! Take a bounded sequence $u_n \in H^1(0,1)$. Then $u_n$ is equibounded and equi-Hölder continuous, and so it has a uniformly convergent subsequence. In fact $$u_n(x)-u_n(y)=\int_y^x u_n'(s)\, ds$$ so that $$\lvert u_n(x) -u_n(y)\rvert \le \lVert u_n \rVert_{H^1}\lvert x-y\rvert^{\frac{1}{2}}$$ which proves equicontinuity. Now observe that $H^1$ is embedded in $L^\infty$, and so an $H^1$-bounded sequence is $L^\infty$-bounded also. This proves equiboundedness. – Giuseppe Negro May 31 '11 at 17:56
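The embedding claim used for equiboundedness can be spelled out in one line: there is some $y \in (0,1)$ with $|u_n(y)| \le \lVert u_n \rVert_{L^2}$ (otherwise $\int_0^1 |u_n|^2$ would exceed $\lVert u_n \rVert_{L^2}^2$), and then for every $x$, by Cauchy-Schwarz,
$$|u_n(x)| \le |u_n(y)| + \int_0^1 |u_n'(s)|\, ds \le \lVert u_n \rVert_{L^2} + \lVert u_n' \rVert_{L^2} \le 2 \lVert u_n \rVert_{H^1}.$$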
# 2 Tips To Master Italian Pronouns With Examples And English Translation

Italian pronouns are a great example. In fact, if you've never studied a foreign language before, you might not even know what a pronoun is. Almost every time you express a thought, you use pronouns. We use these words to avoid making our sentences too long and repetitive. One of the coolest things about learning Italian, or any new language, is that it makes you reexamine your own language and gives you a better understanding of how words work together.

Pronouns are essential for fluency in Italian. However, they function differently than English pronouns. The purpose of this post is to help you understand what pronouns are and why they are so important. I will also share 2 tips to help you master the four types of Italian pronouns and become proficient in the language.

Want to improve your Italian quickly and have fun at the same time? That's what I thought.

## What Are Pronouns Anyway?

In order to understand how and when pronouns work in Italian, let's first take a look at what pronouns are in English. Pronouns are words that replace one or more nouns. "My brother" might be referred to as "him." "The picture frame" might be referred to as "it." In every sentence you say about yourself, you refer to yourself as "me" or "I" instead of using your full name!

As an example of how you would sound without pronouns: "Sam got a new blanket. Sam loves the new blanket. The new blanket is soft and green." You can express the same ideas much more concisely by using "he" and "it": "Sam got a new blanket. He loves it. It is soft and green."

Now that you understand what a pronoun is, let's take a look at the 4 types of pronouns in Italian. The following topics will be discussed in this post:
• Subject pronouns
• Possessive pronouns
• Direct object pronouns
• Indirect object pronouns
Let's get started!

### Subject Pronouns In Italian

It is likely that you already know how to use personal pronouns if you have been studying Italian for some time. These are the words you see in a verb conjugation chart.

$$\color{red}{\mathbf{Io \longrightarrow I}}$$
$$\color{red}{\mathbf{Tu \longrightarrow you}}$$
$$\color{red}{\mathbf{Lui \longrightarrow he}}$$
$$\color{red}{\mathbf{Lei \longrightarrow she}}$$
$$\color{red}{\mathbf{Noi \longrightarrow we}}$$
$$\color{red}{\mathbf{Voi \longrightarrow you\ (plural)}}$$
$$\color{red}{\mathbf{Loro \longrightarrow they}}$$

Let's look at an example:

$$Lucrezia\ e\ Daniela$$ vogliono mangiare la pizza (Lucrezia and Daniela want to eat pizza)

This can be shortened to:

$$Loro$$ vogliono mangiare la pizza (They want to eat pizza)

The verb conjugation communicates the necessary information, such as person and number, without using subject pronouns at the beginning of a sentence. As a result, we could simply say: $$Vogliono$$ mangiare la pizza, and it would still mean "They want to eat pizza."

There are 3 cases where you must be sure not to omit the pronoun in Italian. The pronoun should be kept if:
• You need it for clarity
• It is modified by anche (also)
• You want to emphasize the subject or compare it with another subject

### Possessive Pronouns In Italian

Next, let's look at Italian possessive pronouns. Possessive pronouns (pronomi possessivi) replace nouns that have been modified by possessive adjectives.
There are columns for gender and singular vs. plural in the chart. However, you don't choose the possessive pronoun based on the gender of the speaker. Rather, you choose based on the $$gender\ of\ the\ object$$ that belongs to them, and on whether it's $$one\ object\ or\ multiple$$.

Here are some examples to clarify all that. Assume I have five books. I would refer to them as i miei libri (my books) because libro (book) is a masculine word in Italian and I have more than one of them. Let's say Giovanni is talking about his mother. He would refer to her as $$la\ mia$$ mamma (my mother) because he only has one mother and she's female.

Is the logic of possessive pronouns beginning to make sense to you? Here are a few more examples:

1. Nisha has one ring, so she calls it $$il\ mio$$ anello (my ring)
2. If Rani wants to talk about Nisha's ring, he would call it $$il\ suo$$ anello (her ring)
3. If Nisha is talking to me about my books, she would call them $$i\ tuoi$$ libri (your books)

For this reason, it is important to understand not only the spelling, pronunciation, and meaning of a new word in Italian, but also its gender.

### Direct Object Pronouns In Italian

We will now examine direct object pronouns, which replace the name of a person or object. They are always paired with transitive verbs, which are verbs that take an object, such as:

$$\color{red}{\mathbf{capire \longrightarrow to\ understand}}$$
$$\color{red}{\mathbf{mangiare \longrightarrow to\ eat}}$$
$$\color{red}{\mathbf{scrivere \longrightarrow to\ write}}$$
$$\color{red}{\mathbf{rompere \longrightarrow to\ break}}$$

Here are the Italian direct object pronouns:

$$\color{red}{\mathbf{Mi \longrightarrow me}}$$
$$\color{red}{\mathbf{Ti \longrightarrow you}}$$
$$\color{red}{\mathbf{Lo \longrightarrow him/it}}$$
$$\color{red}{\mathbf{La \longrightarrow her/it}}$$
$$\color{red}{\mathbf{Ci \longrightarrow us}}$$
$$\color{red}{\mathbf{Vi \longrightarrow you\ (plural)}}$$
$$\color{red}{\mathbf{Li \longrightarrow them\ (masc.)}}$$
$$\color{red}{\mathbf{Le \longrightarrow them\ (fem.)}}$$

A direct object pronoun can be used in a variety of situations. Suppose your classmate asks you if you know one of her friends: Conosci Robert? (Do you know Robert?) A possible response: No, non $$lo$$ conosco (No, I don't know him).

The direct object pronoun is also used in the everyday phrase $$Ci\ vediamo$$. It means "See you later," although the direct translation would be "We will see each other."

### Indirect Object Pronouns In Italian

Last but not least, let's talk about indirect object pronouns. Some of these words coincide with the direct object pronouns, but some are different, and all of them are used differently. The indirect object is the $$person\ or\ thing\ that\ something\ is\ done\ to\ or\ for$$. In the sentence "I bought Molly a bouquet of flowers," $$the\ bouquet\ of\ flowers\ would\ be\ the\ direct\ object$$ because it's what you bought, and the $$indirect\ object\ would\ be\ Molly$$ since she is the person you bought the flowers for. Molly was not bought by you – that's an important distinction!
The indirect object pronouns are as follows:

$$\color{red}{\mathbf{Mi \longrightarrow to/for\ me}}$$
$$\color{red}{\mathbf{Ti \longrightarrow to/for\ you}}$$
$$\color{red}{\mathbf{Gli \longrightarrow to/for\ him\ or\ it\ (masc.)}}$$
$$\color{red}{\mathbf{Le \longrightarrow to/for\ her\ or\ it\ (fem.)}}$$
$$\color{red}{\mathbf{Ci \longrightarrow to/for\ us}}$$
$$\color{red}{\mathbf{Vi \longrightarrow to/for\ you\ (plural)}}$$
$$\color{red}{\mathbf{Loro \longrightarrow to/for\ them}}$$

Here are a few examples. In the following sentences, the indirect object pronouns are bolded. The corresponding phrase in English appears at the end of the sentence, while in Italian it appears in the middle.

• L'insegnante $$mi$$ ha spiegato la storia → The teacher explained the story $$to\ me.$$
• Federica $$ti$$ ha portato un regalo → Federica brought a gift $$for\ you.$$
• $$Gli$$ spiegherò l'idea → I will explain the idea $$to\ him.$$
• Paolo $$le$$ sta dicendo qualcosa → Paolo is saying something $$to\ her.$$

## 2 Tips For Learning Italian Pronouns

Pronouns are short words, usually two or three letters long. Using them correctly will make a huge difference in your level of fluency in Italian.

### Take the time to be patient with yourself

Since these words are so similar to one another, mastering them takes time. Don't give up if they don't stick in your mind as fast as you expect. As long as you keep going, you'll get there eventually.

### When it comes to pronouns, practice makes perfect

If you always use the full noun instead of a pronoun, you can easily avoid using pronouns altogether, but then you will never learn them. It takes practice to become good at Italian pronouns, and you might make some mistakes as a result. Learning from them is worth it. To become accustomed to using these words in real life, use them in your Italian conversations. You'll sound more natural and be easier to understand.

## FAQs

How many pronouns does Italian have?
Italian has seven personal subject pronouns: four singular and three plural. As the conjugation usually determines the grammatical person, personal subject pronouns are usually dropped.

Does Italian have pronouns?
The Italian subject pronouns are equivalent to the English I, you, he, and she. Third-person pronouns include lui (or egli), lei (or ella), esso and essa (it), and loro (or essi). Lui, lei, and loro are common in spoken language, while egli, ella, and essi are used almost exclusively in literature.

What are the 6 subject pronouns in Italian?
In modern Italian, he, she, and they are generally expressed by lui, lei, and loro, respectively. The words egli, ella, essi, and esse appear more often in written Italian than in spoken Italian. The words esso and essa are rarely used.

How do I learn Italian pronouns?
Unlike in English, in Italian the object pronoun goes directly before the verb (e.g. Lo chiamo). In a negative sentence, non goes first, with the pronoun right before the verb (e.g. Non lo chiamo).
How to formulate the Traveling Salesman Problem (TSP) as an Integer Linear Program (ILP)?

Consider this distance matrix of an asymmetric TSP instance:
$$\begin{matrix} & c_0 & c_1 & c_2\\ c_0 & 0 & 1 & 2\\ c_1 & 2 & 0 & 1\\ c_2 & 1 & 2 & 0 \end{matrix}$$
One optimal tour (with a total distance of 3) is clearly:
$$c_0 \rightarrow c_1 \rightarrow c_2 \rightarrow c_0$$
Now we can reformulate this as an ILP. Let $e_{i,j}\in\{0,1\}$ denote the presence or absence of the directed edge from $c_i$ to $c_j$ in the tour (0 - edge is not in the tour, 1 - edge is in the tour). Then we obtain the ILP by:
1. setting the objective $$\text{minimize}\;\;\; 1\cdot e_{0,1} + 2\cdot e_{0,2} + 2\cdot e_{1,0} + 1\cdot e_{1,2} + 1\cdot e_{2,0} + 2\cdot e_{2,1}$$
2. ensuring that we arrive at city $c_i$ from exactly one other city and depart from $c_i$ to exactly one other city: \begin{align*} e_{0,1} + e_{0,2} = 1\\ e_{1,0} + e_{1,2} = 1\\ e_{2,0} + e_{2,1} = 1\\ e_{1,0} + e_{2,0} = 1\\ e_{0,1} + e_{2,1} = 1\\ e_{0,2} + e_{1,2} = 1 \end{align*}
3. ensuring we do not find a set of minor tours (we want a complete tour through all 3 cities) by enforcing the following constraints on the artificial variables $u_i$ (Miller-Tucker-Zemlin subtour elimination constraints): \begin{align*} u_1 - u_2 + 2\cdot e_{1,2} \leq 1\\ u_2 - u_1 + 2\cdot e_{2,1} \leq 1\\ \end{align*}
Is this transformation correct? It's probably not, because when I try to use a solver it says that the ILP is infeasible. I tried to follow the steps on Wikipedia: https://en.wikipedia.org/wiki/Travelling_salesman_problem#Integer_linear_programming_formulation

Edit 1
We can verify that there must be a solution to the ILP by the following considerations: There are exactly 2 solutions to the equation system under 2. \begin{align*} \text{(1)}\;\;\;e_{0,1} = e_{1,2} = e_{2,0} = 1 \land e_{1,0} = e_{2,1} = e_{0,2} = 0\\ \text{(2)}\;\;\;e_{0,1} = e_{1,2} = e_{2,0} = 0 \land e_{1,0} = e_{2,1} = e_{0,2} = 1\\ \end{align*} Since there are 6 variables and each variable can take only 2 values, we can easily verify this by testing all $2^6$ variable assignments. Now we can observe that either $e_{1,2} = 0 \land e_{2,1} = 1$ or $e_{1,2} = 1 \land e_{2,1} = 0$ holds for the solutions. In the first case we can satisfy the inequalities under 3. by $u_1 = 2 \land u_2 = 1$, in the latter case by $u_1 = 1 \land u_2 = 2$.

Edit 2
This is annoying. I used another tool (LPSolve) to solve the exact same ILP and got the solution instantly. It seems that my transformation was correct and the first tool is buggy.

• Welcome to CS.SE! I recommend proofreading your question to make sure the equations come out right, as Latex can be confusing -- some of yours were missing line breaks. I fixed it up, so it should be good now. – D.W. Dec 16, 2015 at 0:24
• What have you tried? If you already know of an optimal tour, you should be able to plug that into your ILP formulation to get a particular value for each variable, then test by hand if it satisfies all of your equations, and check by hand whether the problem is indeed satisfiable as you have formulated it. If it violates one of the equations you can then double-check your reasoning for that equation. Basically, I recommend you do some basic trouble-shooting on your own before asking here, then edit your question to show us what sanity checks you've tried. – D.W. Dec 16, 2015 at 0:25

You must have entered your ILP program into the ILP solver wrong. It's easy to verify that the ILP program is feasible, using the procedure I outlined in the comments.
In particular, from the tour $c_0 \to c_1 \to c_2 \to c_0$ we get the assignment $e_{0,1}=e_{1,2}=e_{2,0}=1$ and $e_{1,0}=e_{2,1}=e_{0,2}=0$. Setting $u_1=0$ and $u_2=1$, we find a feasible solution: you can easily check by hand that every one of your equations is satisfied. So, you've done something wrong in entering it into the tool. In the future, you should do that kind of sanity check on your own before asking here.

• I will :) But my actual question was not whether the ILP is solvable. I wanted to know if the transformation itself is correct. I'm new to this stuff and not really sure, although the steps seem to be simple. Concerning the ILP tool: I'm quite sure that I entered it correctly. Would it be appropriate to add the code here in another edit? Dec 16, 2015 at 18:20
• @Paule, OK, I suggest that you do two things: (a) find the bug in the way you entered it into the ILP, and correct that, and then remove that part of the question, (b) edit the question to clarify what exactly your question is. If this doesn't answer your question, I'm not sure what your question is. You might want to see if you can articulate what you want to know that isn't already covered in en.wikipedia.org/wiki/… and other standard explanations of Miller-Tucker-Zemlin. – D.W. Dec 16, 2015 at 20:47
• No, I'm afraid that debugging your ILP code isn't on-topic for this site. We are for questions about computer science (e.g., concepts, algorithms) but not about code. I don't know if that kind of question would be on-topic anywhere on the Stack Exchange network. – D.W. Dec 16, 2015 at 20:47
• You were right. It's a bug... not in my code but in the tool I use. I entered the ILP in another tool (LPSolve) and got the solution instantly. Dec 17, 2015 at 11:40
• I also made a small change to make the question clearer (I hope). Thanks for your help. Dec 17, 2015 at 11:49
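Spelling out the check of the Miller-Tucker-Zemlin constraints for the assignment in the answer ($e_{1,2}=1$, $e_{2,1}=0$, $u_1=0$, $u_2=1$):
$$u_1 - u_2 + 2\cdot e_{1,2} = 0 - 1 + 2 = 1 \le 1, \qquad u_2 - u_1 + 2\cdot e_{2,1} = 1 - 0 + 0 = 1 \le 1,$$
so both subtour-elimination inequalities hold, and the degree constraints hold because this is solution (1) from Edit 1.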
# Elliptic curve with a degree 2 isogeny to itself?

I've come across the following question, which I think must be easy for experts: is there a complex elliptic curve $E$ with an isogeny of degree 2 to itself? Of course one can ask the same question for isogenies whose degree is not a square, or for higher dimensional abelian varieties, etc.

Expanding on Francois's answer, $E$ has an endomorphism of degree 2 if and only if its endomorphism ring $R=\operatorname{End}(E)$, which is an order in an imaginary quadratic field, has an element of norm 2. There are exactly three such orders, namely $\mathbb{Z}[i]$, $\mathbb{Z}[\sqrt{-2}]$, and $\mathbb{Z}[(1+\sqrt{-7})/2]$. So up to isomorphism over $\overline{\mathbb{Q}}$, there are exactly three elliptic curves with endomorphisms of degree 2. Equations for these curves and their degree 2 endomorphisms are given in Advanced Topics in the Arithmetic of Elliptic Curves, Proposition II.2.3.1. There are similarly only finitely many curves with a cyclic isogeny of fixed higher degree $d$. Using Vélu's formulas, one could probably write them all down for small values of $d$.

Yes, but the elliptic curve needs to have complex multiplication, since the multiplication-by-$n$ map has degree $n^2$. For an explicit example, you can take $E=\mathbf{C}/(\mathbf{Z}+i\mathbf{Z})$ with the isogeny being multiplication by $1+i$.
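One can check directly that each of the three orders contains an element of norm 2, as the first answer asserts:
$$N(1+i)=1^2+1^2=2, \qquad N(\sqrt{-2})=0^2+2\cdot 1^2=2, \qquad N\!\left(\frac{1+\sqrt{-7}}{2}\right)=\frac{1^2+7\cdot 1^2}{4}=2.$$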
# The best is not there

Published:

My favorite interpretation of Eduardo Scarpetta's play "Miseria e nobiltà" ("Poverty and Nobility") is the one by Totò, the Prince of Laughter. The video above shows an extract from the comedy film based on the play, at the end of which Totò pronounces the words "la migliore non c'è" ("the best is not there"), speaking about the chairs in his house.

I find this line hilarious for its spectacular absurdity. It is evident that, among the chairs of a house, there must be a best one, although it might not be unique. In fact, if we let $S$ be the non-empty finite set of chairs in a house, we can show by induction that it contains its supremum, $\sup S$, namely that there is a best chair.

• If $S$ contains a single chair $x$, then $\sup S = x \in S$.
• Suppose that, if $S$ contains $n$ chairs, then $\sup S = \max S \in S$. Now, assume $S$ contains $n+1$ chairs and let $x$ be an element of $S$. By the induction hypothesis, as $S^\prime = S \setminus \{x\}$ contains $n$ chairs, it contains its supremum; let's call it $x^\prime \in S^\prime$. Then: $$\sup S = \sup \left( S^\prime \cup \{x\} \right) = \sup \{x^\prime, x\} = \max \{x^\prime, x\} \in S$$ as $x^\prime \in S^\prime \subseteq S$ and $x \in S$.

Would the line "la migliore non c'è" still sound spectacularly absurd if Totò lived in the Grand Hilbert Hotel, assuming there is a chair in each room?
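In the infinite hotel the line stops being absurd, because an infinite set of chairs need not contain its supremum. For instance, if the chair in room $n$ has quality
$$q_n = 1 - \frac{1}{n}, \quad n \in \mathbb{N}, \qquad \sup_{n \in \mathbb{N}} q_n = 1 \notin \{\, q_n : n \in \mathbb{N} \,\},$$
then no single chair attains the supremum, and "la migliore non c'è" becomes literally true.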
# Burning my tongue?

by Pengwuino
Tags: burning, tongue

PF Gold P: 7,125
So what exactly is the scientific definition of "burning your tongue"? I hate this, I wish we could evolve to eat food 800 degrees F. But no really, what exactly is going on when you burn your tongue?

PF Gold P: 8,961
Are you talking about actual thermal burning, or spicy burning?

P: 15,325
Amongst other things: You damage the cells in your tongue, causing some of them to burst. This releases enzymes that alert repair systems. One of the systems is the lymph system, which floods the area and cells with fluid, causing the swelling. The swelling and engorged cells are much of the feeling that lasts from a burned tongue.

PF Gold P: 7,125

## Burning my tongue?

The one Dave is talking about. Thanks for the info

P: 15,325
As an aside, I am beginning to recognize Pengwuino's posts by subject alone. As soon as I read 'burning my tongue' I thought 'that sounds like a Pengwuino post'.
P.S. You should not put something in your mouth that is 800F.
P.P.S. Note that cooking substitutions can produce unpredictable results: 20 min. @ 200F is not the same as 5 min. @ 800F.

PF Gold P: 7,125
Yah, and things like "Why did I run into the wall?" sound like pure fish babble!

Emeritus PF Gold P: 12,257
Quote by DaveC426913: As an aside, I am beginning to recognize Pengwuino's posts by subject alone. As soon as I read 'burning my tongue' I thought 'that sounds like a Pengwuino post'.
Molten cheese is the worst.

Mentor P: 6,044
Quote by Moonbear: Molten cheese is the worst.
This is because cheese has a fairly high heat capacity. The amount of heat energy $Q$ added to a substance of mass $m$ that undergoes a temperature change $\Delta T$ is $Q = c m \Delta T$, where $c$ is the substance's (specific) heat capacity. Cake has a lower heat capacity than cheese and grease (from the pepperoni), so if cake and pizza are both sampled fresh out of the same oven, the pizza does more damage, since it has the capacity to supply more energy to the tongue and mouth. As we experience life, we continually "do the experiment" (as I will on Wednesday night while watching the hockey game), and we build up a storehouse of information about heat capacities (and other things) that becomes part of our intuition of how to eat.

P: 19
....As we experience life, we continually "do the experiment"...
Such an expression always helps me find the fun for myself, which is why I am pretty much spending time around on boards like this :-)

P: 15,325
Once, in a restaurant I ate a "cheese dream" (tomato on cheese on toast). It was so scalding hot, I burned my lip and tongue, and in the process dragged the tomato slice off the bread, landing on the back of my hand. It burned me so badly, I had a tomato-slice-shaped red mark on the back of my hand for days.
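To put rough numbers on George Jones's point (the specific heat values here are assumed for illustration, not measured): 20 g of molten cheese at 90 °C cooling to mouth temperature (37 °C), with $c \approx 3\ \text{J g}^{-1}\text{K}^{-1}$, delivers
$$Q = c m \Delta T \approx 3 \times 20 \times 53 \approx 3200\ \text{J},$$
roughly double what the same mass of drier cake with $c \approx 1.5\ \text{J g}^{-1}\text{K}^{-1}$ would supply from the same oven.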
# Linear independence of translates of a function $\{ \phi(\cdot - x) : x \in \mathcal F \}$

Let $0 \neq \phi \in L^2(\mathbb R^n)$ be a square-integrable function and $\mathcal F \subset \mathbb R^n$ a finite set. If we are in the one-dimensional setting $n=1$, then the set of translates of $\phi$ by $\mathcal F$, i.e. $\{ \phi(\cdot - x) : x \in \mathcal F \} \subset L^2(\mathbb R)$, is linearly independent. I found this result in the book Christensen, An introduction to frames and Riesz bases, p. 228. I was wondering if this statement holds in arbitrary dimensions, i.e. without the restriction of $n$ to $n=1$. If yes, does somebody know a reference for this?

• This is the same as asking if finitely many exponentials $e^{ix\cdot t}$, $x\in F$, are linearly independent (take Fourier transforms). The answer is yes, and for example the second answer here can be adapted (consider the exponent with largest real part; if there are several, we have reduced the dimension by one, so an induction on the dimension takes care of that): math.stackexchange.com/questions/1451281/… Apr 3 at 17:38
• Not really important, but asking whether every finite subfamily of a family $(f_i)_{i\in I}$ is linearly independent is exactly the same as asking whether $(f_i)_{i\in I}$ is so: linear combinations by definition only involve finitely many elements, so linear independence likewise. Apr 3 at 19:02

I do not know what Christensen's proof is, but here is a simple proof that works in $R^n$. Suppose that the translates are linearly dependent: $$\sum c_j\phi(x-t_j)\equiv 0,$$ where the $t_j$ are all distinct. Take the Fourier transform; a shift corresponds to multiplication by an exponential: $$\sum c_j e^{-it_j\cdot s}\hat{\phi}(s)\equiv 0.$$ But the multiplier $$m(s):=\sum c_j e^{-it_j\cdot s}$$ is a non-zero entire function, since all $t_j$ are distinct, so its zeros make a proper analytic subset of $R^n$, and such a set must be of zero measure. Therefore $\hat{\phi}(s)=0$ almost everywhere, so $\phi=0.$

• If I see it correctly, the argument here is that since the Fourier transform of $\phi$ is not the zero function, it must be non-zero on a set of positive Lebesgue measure. Are we then using that an entire function of several complex variables which vanishes on a set in $\mathbb R^n$ of positive Lebesgue measure must be the zero function? If yes, is there a reference for this? Apr 4 at 10:21

A standard proof of the independence of the exponentials $$f_j(x)=e^{ix\cdot \xi_j}$$ is the following. Take a vector $$\omega \in \mathbb R^n$$ such that $$\omega\cdot (\xi_k-\xi_j)\neq 0$$ for $$k \neq j$$ (such a vector exists, by induction on the number of vectors) and let $$D=\omega \cdot \nabla$$. Then $$D(f_j)=i\omega \cdot \xi_j f_j$$ so that the $$f_j$$ are eigenvectors associated to distinct eigenvalues of a linear operator.

• @Eremenko I agree that it is easier to use the Wronskian if $n=1$. For general $n$ I do not see how to use it easily, and induction on the dimension is puzzling. Apr 4 at 6:50
• The Wronskian of any number of exponentials is easily computed: it is an exponential times a Vandermonde. If functions are linearly independent on a line then they are surely linearly independent in the whole space. Take a line $x=\{ x_0t:t\in R\}$ such that $x_0\cdot(\xi_j-\xi_k)\neq 0$ for all $j \neq k$. Apr 4 at 15:15
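The Wronskian computation mentioned in the last comment, written out: for distinct exponents $\lambda_1,\dots,\lambda_n$,
$$W\big(e^{\lambda_1 t},\dots,e^{\lambda_n t}\big) = \det\big(\lambda_j^{\,i-1} e^{\lambda_j t}\big)_{i,j} = e^{(\lambda_1+\cdots+\lambda_n)t} \prod_{1\le j<k\le n} (\lambda_k-\lambda_j) \neq 0,$$
an exponential times a Vandermonde determinant, which never vanishes.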
# What is Convex about Locally Convex Spaces?

This might be a silly question, but what motivates the name "locally convex" for locally convex spaces? The definition in terms of semi-norms seems to have nothing to do with convexity or with the other definition involving neighborhood bases -- and the neighborhood basis definition makes little sense to me either, because it refers to sets which are "absorbent", "balanced", and convex. Why the restrictions? And then why aren't they called "locally absorbent, balanced, and convex spaces"? And why do we never hear about the terms absorbent or balanced in any other context?

Also, I know that Banach spaces are locally convex, but this just confuses me further -- what do Banach spaces have to do with convexity? And why are locally convex spaces a natural generalization of Banach spaces?

I have some vague ideas -- the Hahn-Banach theorem (and hyperplanes) are used a lot in convex programming, and the "p-norm" is only a norm for $p \ge 1$, the same values for which $x^p$ is a convex function -- are norms somehow "convex", does this follow from the triangle inequality? Then why aren't arbitrary complete metric spaces locally convex?

Any insights would be greatly appreciated.

• Here is a link between convex sets and semi-norms: a closed convex set is the intersection of all the half spaces that contain it. Half spaces can be viewed as $p^{-1}([0;\infty[)$ where $p$ is a linear form. Now the typical example of a semi-norm is the absolute value of a linear form. So the equivalence between the two definitions is not so surprising. I suppose the restrictions "absorbent" and "balanced" are there to make things work and avoid pathological counterexamples. And "locally absorbent, balanced and convex space" would be a bit too long... May 23, 2016 at 14:13
• Having a local base that is both absorbent and balanced is a compulsory requirement for a topological vector space. Hence we call a space that admits such a local base with the additional property of convexity a "locally convex space". May 23, 2016 at 14:30
• Generally, a space is called "locally foo" if every point has a neighbourhood basis consisting of neighbourhoods that are "foo". Locally connected, for example. Locally convex thus means every point has a neighbourhood basis consisting of convex sets. May 23, 2016 at 14:31
• For more information on topological vector spaces, you may wish to consult Rudin's Functional Analysis. The book covers the properties "absorbent" and "balanced" very early. May 23, 2016 at 14:34

The only reason the question seems silly is that you include the answer! A locally convex TVS is one that has a basis at the origin consisting of balanced absorbing convex sets. The reason for the emphasis on "convex" is that that's what distinguishes locally convex TVSs from other TVSs: every TVS has a local base consisting of balanced absorbing sets.

Regarding "Also, I know that Banach spaces are locally convex, but this just confuses me further -- what do Banach spaces have to do with convexity? And why are locally convex spaces a natural generalization of Banach spaces?", surely this is clear. Balls in a normed space are convex (as well as balanced and absorbing).

Why aren't arbitrary complete metric spaces locally convex? Huh? The notion of convexity makes no sense in a general metric space.

• This clears up most of my confusion. Why does the notion of convexity make no sense in a general metric space?
Or do you mean that convexity as a notion only makes sense for linear spaces (vector spaces, not necessarily topological), because otherwise "the line between two points" is not defined? May 23, 2016 at 14:54
• Again I'm puzzled, because you've answered your own question. $C$ is convex if for every $x,y\in C$ and $t\in[0,1]$ we have $tx+(1-t)y\in C$, and in a general metric space there is no such thing as $tx+(1-t)y$. May 23, 2016 at 15:00
• Only for vector spaces, not necessarily topological? Just to confirm, since I guess I don't trust my own judgment enough. May 23, 2016 at 15:44
• I can think of a large number of things the question "Only for vector spaces, not necessarily topological?" might mean. If what you're asking is something that hasn't already been answered then ask again, a little more explicitly... May 23, 2016 at 15:48
• Yes, as you say, convexity makes sense in any vector space. Otoh "locally convex" only makes sense for topological vector spaces. May 23, 2016 at 15:56

There are two different yet equivalent definitions of locally convex spaces: one in which the topology is induced by a family of semi-norms, and one in terms of a local base of absorbent, balanced and convex sets. The equivalence between the two definitions is rather long to prove, but you can find it in Rudin's Functional Analysis. This might be of interest to you: https://en.wikipedia.org/wiki/Minkowski_functional

Why are locally convex spaces a natural generalisation of Banach spaces? The topology of a Banach space is induced by the norm, and a norm gives a family of semi-norms, namely the family containing a single semi-norm which happens to be a norm.

What do Banach spaces have to do with convexity? Every $x$ in a Banach space has a local base of convex sets, namely the balls centered at $x$ of radius $r > 0$. Every open neighbourhood of $x$ contains one such ball centered at $x$.
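And to spell out the point about norms and the triangle inequality: every ball in a normed space is convex, because if $\lVert x - x_0\rVert \le r$, $\lVert y - x_0\rVert \le r$ and $t \in [0,1]$, then
$$\lVert t x + (1-t) y - x_0 \rVert \le t \lVert x - x_0 \rVert + (1-t) \lVert y - x_0 \rVert \le r.$$
This is why Banach spaces (and, more generally, all normed spaces) are locally convex, while a general complete metric space has no linear structure in which convexity could even be stated.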
# Geometry and Topology Commons™

## All Articles in Geometry and Topology

987 full-text articles.

A Stronger Strong Schottky Lemma For Euclidean Buildings, 2023 The Graduate Center, City University of New York

#### A Stronger Strong Schottky Lemma For Euclidean Buildings, Michael E. Ferguson

##### Dissertations, Theses, and Capstone Projects

We provide a criterion for two hyperbolic isometries of a Euclidean building to generate a free group of rank two. In particular, we extend the application of a Strong Schottky Lemma to buildings given by Alperin, Farb and Noskov. We then use this extension to obtain an infinite family of matrices that generate a free group of rank two. In doing so, we also introduce an algorithm that terminates in finite time if the lemma is applicable for pairs of certain kinds of matrices acting on the Euclidean building for the special linear group over certain discretely valued fields.

2023 College of the Holy Cross

#### Translation Of: Familles De Surfaces Isoparamétriques Dans Les Espaces À Courbure Constante, Annali Di Mat. 17 (1938), 177–191, By Élie Cartan., Thomas E. Cecil

##### Mathematics Department Faculty Scholarship

This is an English translation of the article "Familles de surfaces isoparamétriques dans les espaces à courbure constante" which was originally published in Annali di Matematica 17, 177–191 (1938), by Élie Cartan. A note from Thomas E. Cecil, translator: This is an unofficial translation of the original paper, which was written in French. All references should be made to the original paper. Mathematics Subject Classification Numbers: 53C40, 53C42, 53B25

Spectral Sequences And Khovanov Homology, 2023 Dartmouth College

#### Spectral Sequences And Khovanov Homology, Zachary J. Winkeler

##### Dartmouth College Ph.D Dissertations

In this thesis, we will focus on two main topics; the common thread between both will be the existence of spectral sequences relating Khovanov homology to other knot invariants. Our first topic is an invariant MKh(L) for links in thickened disks with multiple punctures. This invariant is different from but inspired by both the Asaeda-Przytycki-Sikora (APS) homology and its specialization to links in the solid torus. Our theory will be constructed from a Z^n-filtration on the Khovanov complex, and as a result we will get various spectral sequences relating MKh(L) to Kh(L), AKh(L), and APS(L). Our …

2023 University of Kentucky

#### Slices Of C_2, Klein-4, And Quaternionic Eilenberg-Mac Lane Spectra, Carissa Slone

##### Theses and Dissertations--Mathematics

We provide the slice (co)towers of $$\Sigma^{V} H_{C_2}\underline{M}$$ for a variety of $$C_2$$-representations $$V$$ and $$C_2$$-Mackey functors $$\underline{M}$$. We also determine a characterization of all 2-slices of equivariant spectra over the Klein four-group $$C_2\times C_2$$. We then describe all slices of integral suspensions of the equivariant Eilenberg-MacLane spectrum $$H\underline{\mathbb{Z}}$$ for the constant Mackey functor over $$C_2\times C_2$$. Additionally, we compute the slices and slice spectral sequence of integral suspensions of $$H\underline{\mathbb{Z}}$$ for the group of equivariance $$Q_8$$. Along the way, we compute the Mackey functors $$\underline{\pi}_{k\rho} H_{K_4}\underline{\mathbb{Z}}$$ and $$\underline{\pi}_{k\rho} H_{Q_8}\underline{\mathbb{Z}}$$.
2023 Institute of Applied Mathematics and Mechanics of the NAS of Ukraine #### On The Uniqueness Of Continuation Of A Partially Defined Metric, Evgeniy Petrov ##### Theory and Applications of Graphs The problem of continuation of a partially defined metric can be efficiently studied using graph theory. Let $G=G(V,E)$ be an undirected graph with the set of vertices $V$ and the set of edges $E$. A necessary and sufficient condition under which the weight $w\colon E\to\mathbb R^+$ on the graph $G$ has a unique continuation to a metric $d\colon V\times V\to\mathbb R^+$ is found. 2022 The University of Western Ontario #### Multi-Trace Matrix Models From Noncommutative Geometry, Hamed Hessam ##### Electronic Thesis and Dissertation Repository Dirac ensembles are finite dimensional real spectral triples where the Dirac operator is allowed to vary within a suitable family of operators and is assumed to be random. The Dirac operator plays the role of a metric on a manifold in the noncommutative geometry context of spectral triples. Thus, integration over the set of Dirac operators within a Dirac ensemble, a crucial aspect of a theory of quantum gravity, is a noncommutative analog of integration over metrics. Dirac ensembles are closely related to random matrix ensembles. In order to determine properties of specific Dirac ensembles, we use techniques from random … (R1518) The Dual Spherical Curves And Surfaces In Terms Of Vectorial Moments, 2022 Ordu University #### (R1518) The Dual Spherical Curves And Surfaces In Terms Of Vectorial Moments, Süleyman Şenyurt, Abdussamet Çalışkan ##### Applications and Applied Mathematics: An International Journal (AAM) In the article, the parametric expressions of the dual ruled surfaces are expressed in terms of the vectorial moments of the Frenet vectors. The integral invariants of these surfaces are calculated. It is seen that the dual parts of these invariants can be stated by the real terms. Finally, we present examples of the ruled surfaces with bases such as helix and Viviani’s curves. 2022 Embry-Riddle Aeronautical University #### Manufacturability And Analysis Of Topologically Optimized Continuous Fiber Reinforced Composites, Jesus A. Ferrand ##### Doctoral Dissertations and Master's Theses Researchers are unlocking the potential of Continuous Fiber Reinforced Composites for producing components with greater strength-to-weight ratios than state of the art metal alloys and unidirectional composites. The key is the emerging technology of topology optimization and advances in additive manufacturing. Topology optimization can fine tune component geometry and fiber placement all while satisfying stress constraints. However, the technology cannot yet robustly guarantee manufacturability. For this reason, substantial post-processing of an optimized design consisting of manual fiber replacement and subsequent Finite Element Analysis (FEA) is still required. To automate this post-processing in two dimensions, two (2) algorithms were developed. The … Classifications Of Dupin Hypersurfaces In Lie Sphere Geometry, 2022 College of the Holy Cross #### Classifications Of Dupin Hypersurfaces In Lie Sphere Geometry, Thomas E. Cecil ##### Mathematics Department Faculty Scholarship This is a survey of local and global classification results concerning Dupin hypersurfaces in Sn (or Rn) that have been obtained in the context of Lie sphere geometry. The emphasis is on results that relate Dupin hypersurfaces to isoparametric hypersurfaces in spheres. 
Along with these classification results, many important concepts from Lie sphere geometry, such as curvature spheres, Lie curvatures, and Legendre lifts of submanifolds of Sn (or Rn), are described in detail. The paper also contains several important constructions of Dupin hypersurfaces with certain special properties.

2022 The University of Western Ontario

#### Automorphism-Preserving Color Substitutions On Profinite Graphs, Michal Cizek

##### Electronic Thesis and Dissertation Repository

Profinite groups are topological groups which are known to be Galois groups. Their free product was extensively studied by Luis Ribes and Pavel Zaleskii using the notion of a profinite graph and having profinite groups act freely on such graphs. This thesis explores a different approach to studying profinite groups using profinite graphs, namely with the notion of automorphisms and colors. It contains a generalization to profinite graphs of the theorem of Frucht (1939) that shows that every finite group is a group of automorphisms of a finite connected graph, and establishes a profinite analog of the theorem …

On The Thom Isomorphism For Groupoid-Equivariant Representable K-Theory, 2022 Dartmouth College

#### On The Thom Isomorphism For Groupoid-Equivariant Representable K-Theory, Zachary J. Garvey

##### Dartmouth College Ph.D Dissertations

This thesis proves a general Thom Isomorphism in groupoid-equivariant KK-theory. Through formalizing a certain pushforward functor, we contextualize the Thom isomorphism to groupoid-equivariant representable K-theory with various support conditions. Additionally, we explicitly verify that a Thom class, determined by pullback of the Bott element via a generalized groupoid homomorphism, coincides with a Thom class defined via equivariant spinor bundles and Clifford multiplication. The tools developed in this thesis are then used to generalize a particularly interesting equivalence of two Thom isomorphisms on TX, for a Riemannian G-manifold X.

2022 University of Tennessee, Knoxville

#### Numerical Studies Of Correlated Topological Systems, Rahul Soni

##### Doctoral Dissertations

In this thesis, we study the interplay of Hubbard U correlation and topological effects in two different bipartite lattices: the dice and the Lieb lattices. Both these lattices are unique as they contain a flat energy band at E = 0, even in the absence of Coulombic interaction. When interactions are introduced both these lattices display an unexpected multitude of topological phases in our U-λ phase diagram, where λ is the spin-orbit coupling strength. We also study ribbons of the dice lattice and observed that they qualitatively display all properties of their two-dimensional counterpart. This includes flat bands near …

Rendezvous Numbers Of Compact And Connected Spaces, 2022 University of Northern Iowa

#### Rendezvous Numbers Of Compact And Connected Spaces, Kevin Demler, Bill Wood Ph.D.

##### Summer Undergraduate Research Program (SURP) Symposium

The concept of a rendezvous number was originally developed by O. Gross in 1964, and was expanded upon greatly by J. Cleary, S. Morris, and D. Yost in 1986. This number exists for every metric space, yet very little is known about it, and its exact value for most spaces is not known. Furthermore, its exact value is difficult to calculate, and in most cases we can only find bounds for the value. We focused on their arguments using convexity and applied it to shapes in different metrics and graphs.
Using sets of points that stood out (vertices, midpoints) as …

Left-Separation Of Ω1, 2022 University of Northern Iowa

#### Left-Separation Of Ω1, Lukas Stuelke, Adrienne Stanley Ph.D.

##### Summer Undergraduate Research Program (SURP) Symposium

A topological space is left-separated if it can be well-ordered so that every initial segment is closed. Here, we show that all countable ordinal numbers are left-separated. We then prove that a similar method could not work for ω1, using the pressing-down lemma. We finish by showing that a left-separating well-ordering on ω1 necessarily leads to a contradiction.

2022 East Tennessee State University

#### Bbt Acoustic Alternative Top Bracing Cadd Data Set-Norev-2022jun28, Bill Hemphill

##### STEM Guitar Project's BBT Acoustic Kit

This electronic document file set consists of an overview presentation (PDF-formatted) file and companion video (MP4) and CADD files (DWG & DXF) for laser cutting the ETSU-developed alternate top bracing designs and marking templates for the STEM Guitar Project's BBT (OM-sized) standard acoustic guitar kit. The three (3) alternative BBT top bracing designs in this release are (a) a one-piece base for the standard kit's (Martin-style) bracing, (c) an X-braced fan-style bracing similar to traditional European or so-called 'classical' acoustic guitars. The CADD data set for each of the three (3) top bracing designs includes …

On A Relation Between Ado And Links-Gould Invariants, 2022 Louisiana State University at Baton Rouge

##### LSU Doctoral Dissertations

In this thesis we consider two knot invariants: the Akutsu-Deguchi-Ohtsuki (ADO) invariant and the Links-Gould invariant. Both are based on the Reshetikhin-Turaev construction and as such share a lot of similarities. Moreover, they are both related to the Alexander polynomial and may be considered generalizations of it. By experimentation we found that for many knots, the third-order ADO invariant is a specialization of the Links-Gould invariant. The main result of the thesis is a proof of this relation for a large class of knots, specifically closures of braids with five strands.

2022 University of Massachusetts Amherst

#### General Covariance With Stacks And The Batalin-Vilkovisky Formalism, Filip Dul

##### Doctoral Dissertations

In this thesis we develop a formulation of general covariance, an essential property for many field theories on curved spacetimes, using the language of stacks and the Batalin-Vilkovisky formalism. We survey the theory of stacks, both from a global and formal perspective, and consider the key example in our work: the moduli stack of metrics modulo diffeomorphism. This is then coupled to the Batalin-Vilkovisky formalism–a formulation of field theory motivated by developments in derived geometry–to describe the associated equivariant observables of a theory and to recover and generalize results regarding current conservation.

Unomaha Problem Of The Week (2021-2022 Edition), 2022 University of Nebraska at Omaha

#### Unomaha Problem Of The Week (2021-2022 Edition), Brad Horner, Jordan M. Sahs

##### UNO Student Research and Creative Activity Fair

The University of Omaha math department's Problem of the Week was taken over in Fall 2019 from faculty by the authors.
The structure: each semester (Fall and Spring), three problems are given per week for twelve weeks, with each problem worth ten points - mimicking the structure of arguably the most well-regarded university math competition around, the Putnam Competition, with prizes awarded to top-scorers at semester's end. The weekly competition was halted midway through Spring 2020 due to COVID-19, but relaunched again in Fall 2021, with massive changes. Now there are three difficulty tiers to POW problems, roughly corresponding to …

Introduction To Classical Field Theory, 2022 Department of Physics, Utah State University

#### Introduction To Classical Field Theory, Charles G. Torre

##### All Complete Monographs

This is an introduction to classical field theory. Topics treated include: Klein-Gordon field, electromagnetic field, scalar electrodynamics, Dirac field, Yang-Mills field, gravitational field, Noether theorems relating symmetries and conservation laws, spontaneous symmetry breaking, Lagrangian and Hamiltonian formalisms.

(R1898) A Study On Inextensible Flows Of Polynomial Curves With Flc Frame, 2022 Ordu University

#### (R1898) A Study On Inextensible Flows Of Polynomial Curves With Flc Frame, Süleyman Şenyurt, Kemal Eren, Kebire Hilal Ayvacı

##### Applications and Applied Mathematics: An International Journal (AAM)

In this paper, we investigate the inextensible flows of polynomial space curves in R3. We show that the necessary and sufficient conditions for an inextensible curve flow can be expressed as a partial differential equation involving the curvatures. Also, we express the time evolution of the Frenet-like curve (Flc) frame. Finally, an example of the evolution of the polynomial curve with Flc frame is given and graphed.
# Execution of printf with ++ operators in C

In some problems we find printf() statements containing arguments with the ++ operator. In competitive examinations we often meet this kind of question, where we must predict the output of the code. In this section we will look at an example of such a question and try to figure out the answer.

## Example Code

```c
#include <stdio.h>

int main() {
   volatile int x = 20;
   printf("%d %d\n", x, x++);
   x = 20;
   printf("%d %d\n", x++, x);
   x = 20;
   printf("%d %d %d ", x, x++, ++x);
   return 0;
}
```

Now we will try to work out the output. Strictly speaking, the C standard does not specify the order in which function arguments are evaluated, and modifying x in one argument while reading it in another unsequenced argument is undefined behavior, so the printed values can legitimately differ between compilers. That said, many common compilers evaluate the parameters of printf() from right to left, which is what the output below reflects. In the first printf() statement the last parameter is x++, so it is evaluated first: it yields 20 and then increments x from 20 to 21. The second-to-last argument is evaluated next and yields 21. The other lines are evaluated in the same manner. For ++x, the value is incremented before it is read; for x++, the current value is yielded first and the increment happens afterwards. Check the output below with this in mind.

## Output

21 20
20 20
22 21 21
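Because the snippet above mixes reads and unsequenced writes of x inside single printf() calls, its output is compiler-dependent. For contrast, here is a minimal well-defined rewrite (added for illustration; not part of the original exercise). Its results differ from the table above on purpose, because the evaluation order is now explicit:

```c
#include <stdio.h>

int main(void) {
    int x = 20;

    int before = x;                 /* capture the old value once        */
    x++;                            /* increment in its own statement    */
    printf("%d %d\n", x, before);   /* well-defined: prints "21 20"      */

    x = 20;
    printf("%d %d\n", x, x);        /* reading x twice is fine: "20 20"  */
    x++;                            /* increment afterwards              */

    x = 20;
    int a = x;                      /* a = 20                            */
    int b = x++;                    /* b = 20, x becomes 21              */
    int c = ++x;                    /* x becomes 22, c = 22              */
    printf("%d %d %d\n", a, b, c);  /* always prints "20 20 22"          */
    return 0;
}
```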
# syntax error when using tex4ht on Latex file that uses the standalone package [closed]

When I try to use htlatex to convert a latex file to html, where the latex file uses the standalone package, a syntax error is thrown from the file keyval.tex. The Latex file works fine with pdflatex and with latex. So the issue seems to be tex4ht-only, or some incompatibility between tex4ht and the standalone package. I sent an email on this to the tex4ht mailing list, but there was no resolution. I am hoping someone here can help. I tried to find the problem myself, but my Latex skills are not good enough to debug this and I have no idea why this happens.

It is a very simple setup. Just one small latex file. That is all. htlatex reports an error in /usr/share/texlive/texmf-dist/tex/generic/xkeyval/keyval.tex at line 227 (end of file), saying that there is an extra \else. Here is the file, and the command and the error message.

\documentclass{book}%
\usepackage{standalone}% this causes an error with htlatex
\begin{document}
hello
\end{document}

This is the command used

htlatex main.tex

This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian) restricted \write18 enabled. entering extended mode LaTeX2e <2011/06/27> Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, lo (./main.tex (/usr/share/texlive/texmf-dist/tex/latex/base/book.cls Document Class: book 2007/10/19 v1.4h Standard LaTeX document class (/usr/share/texlive/texmf-dist/tex/latex/base/bk10.clo)) (/usr/share/texmf/tex/generic/tex4ht/tex4ht.sty) (/usr/share/texmf/tex/generic/tex4ht/usepackage.4ht) (/usr/share/texlive/texmf-dist/tex/latex/standalone/standalone.sty (/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifluatex.sty) (/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifpdf.sty) (/usr/share/texlive/texmf-dist/tex/generic/ifxetex/ifxetex.sty) (/usr/share/texlive/texmf-dist/tex/latex/xkeyval/xkeyval.sty (/usr/share/texlive/texmf-dist/tex/generic/xkeyval/xkeyval.tex (/usr/share/texlive/texmf-dist/tex/generic/xkeyval/keyval.tex)))

! Extra \else.
l.227 \else
?

When commenting out the standalone package, the error goes away.

specs: 4ht.c (2009-01-31-07:34 kpathsea)

>latex -v
pdfTeX 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)

There is no other way to convert this to html. I tried Latex2html and it does not support this package.

.. Warning: No implementation found for package: standalone. *** preamble done ***

I tried latexml and it also does not support this. I tried plastex and it also does not support this.

>plastex main.tex
plasTeX version 0.9.1
( /usr/lib/pymodules/python2.7/plasTeX/Packages/book.pyc )
( /usr/share/texlive/texmf-dist/tex/latex/standalone/standalone.sty
WARNING: unrecognized command/environment: ifstandalone
-

## closed as too localized by Martin Scharrer♦ Jan 28 '13 at 21:51

This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question.

what would you want standalone to do in html output? Can't you simply not use the package in that case?
–  David Carlisle Jan 28 '13 at 16:59
@DavidCarlisle, I need this package, since I am building a tree that has latex files and I'd like to be able to build each separately, but also be able to build it all as one document. Please see this tex.stackexchange.com/questions/94822/… The idea is that I can make an HTML tree from the same latex files using htlatex, and also be able to make one PDF file from the whole tree using pdflatex. –  Nasser Jan 28 '13 at 17:01
I figured out where the issue comes from. Some \if..tex macros don't get properly defined and therefore TeX can't match the \else and \fi to them. It turned out that this was caused because htlatex defines \IfFileExists with a different internal behaviour, which breaks its use in standalone. I fixed this now in the repository. The fix will be part of the next release. Note that I never intended or tested standalone for these "special" TeX compilers, only for the normal ones (pdflatex, latex, lualatex and xelatex) –  Martin Scharrer Jan 28 '13 at 21:42
@MartinScharrer: I seem to be encountering the exact same problem with standalone 2012/09/15 v1.1b. I am using TeXLive2014 (updated as of today). –  Peter Grill Oct 1 '14 at 6:16
@MartinScharrer fyi, This issue is still in TL 2015. thanks. –  Nasser Jun 21 at 11:30
# Tag Info

67 The gamma has a property shared by the lognormal; namely that when the shape parameter is held constant while the scale parameter is varied (as is usually done when using either for models), the variance is proportional to mean-squared (constant coefficient of variation). Something approximating this occurs fairly often with financial data, or indeed, with ...

52 The (right) tail of a distribution describes its behavior at large values. The correct object to study is not its density--which in many practical cases does not exist--but rather its distribution function $F$. More specifically, because $F$ must rise asymptotically to $1$ for large arguments $x$ (by the Law of Total Probability), we are interested in how ...

46 First, combine any sums having the same scale factor: a $\Gamma(n, \beta)$ plus a $\Gamma(m,\beta)$ variate form a $\Gamma(n+m,\beta)$ variate. Next, observe that the characteristic function (cf) of $\Gamma(n, \beta)$ is $(1-i \beta t)^{-n}$, whence the cf of a sum of these distributions is the product $$\prod_{j} \frac{1}{(1-i \beta_j t)^{n_j}}.$$ When ...

39 Jacobians--the absolute determinants of the change of variable function--appear formidable and can be complicated. Nevertheless, they are an essential and unavoidable part of the calculation of a multivariate change of variable. It would seem there's nothing for it but to write down a $k+1$ by $k+1$ matrix of derivatives and do the calculation. There's a ...

38 As for qualitative differences, the lognormal and gamma are, as you say, quite similar. Indeed, in practice they're often used to model the same phenomena (some people will use a gamma where others use a lognormal). They are both, for example, constant-coefficient-of-variation models (the CV for the lognormal is $\sqrt{e^{\sigma^2} -1}$, for the gamma it's ...

35 Both the gamma and Weibull distributions can be seen as generalisations of the exponential distribution. If we look at the exponential distribution as describing the waiting time of a Poisson process (the time we have to wait until an event happens, if that event is equally likely to occur in any time interval), then the $\Gamma(k, \theta)$ distribution ...

33 The log-linked gamma GLM specification is identical to exponential regression: $$E[y \vert x,z] = \exp \left( \alpha + \beta \cdot x +\gamma \cdot z \right)=\hat y$$ This means that $E[y \vert x=0,z=0]=\exp(\alpha)$. That's not a very meaningful value (unless you centered your variables to be mean zero beforehand). There are at least three ways to ...

32 The gamma and the lognormal are both right skew, constant-coefficient-of-variation distributions on $(0,\infty)$, and they're often the basis of "competing" models for particular kinds of phenomena. There are various ways to define the heaviness of a tail, but in this case I think all the usual ones show that the lognormal is heavier. (What the first person ...

31 That's a good question. In fact, why people don't use generalised linear models (GLMs) more is also a good question. Warning note: Some people use GLM for general linear model, not what is in mind here. It does depend where you look. For example, gamma distributions have been popular in several of the environmental sciences for some decades and so ...

29 When you're considering simple parametric models for the conditional distribution of data (i.e.
the distribution of each group, or the expected distribution for each combination of predictor variables), and you are dealing with a positive continuous distribution, the two common choices are Gamma and log-Normal. Besides satisfying the specification of the ...

25 An alternative is the approach of Kooperberg and colleagues, based on estimating the density using splines to approximate the log-density of the data. I'll show an example using the data from @whuber's answer, which will allow for a comparison of approaches. set.seed(17) x <- rexp(1000) You'll need the logspline package installed for this; install it if ...

23 One solution, borrowed from approaches to edge-weighting of spatial statistics, is to truncate the density on the left at zero but to up-weight the data that are closest to zero. The idea is that each value $x$ is "spread" into a kernel of unit total area centered at $x$; any part of the kernel that would spill over into negative territory is removed and ...

22 Imagine you're the newly appointed manager of a flower shop. You've got a record of last year's customers – the frequency with which they shop and how long since their last visit. You want to know how much business the listed customers are likely to bring in this year. There are a few things to consider: [assumption (ii)] Customers have different shopping ...

21 Well, quite clearly the log-linear fit to the Gaussian is unsuitable; there's strong heteroskedasticity in the residuals. So let's take that out of consideration. What's left is lognormal vs gamma. Note that the histogram of $T$ is of no direct use, since the marginal distribution will be a mixture of variates (each conditioned on a different set of values ...

20 As Prof. Sarwate's comment noted, the relations between squared normal and chi-square are a very widely disseminated fact - as should also be the fact that a chi-square is just a special case of the Gamma distribution: $$X \sim N(0,\sigma^2) \Rightarrow X^2/\sigma^2 \sim \chi^2_1 \Rightarrow X^2 \sim \sigma^2\chi^2_1 = \text{Gamma}\left(\tfrac{1}{2},\, 2\sigma^2\right)$$

20 This one (maybe surprisingly) can be done with easy elementary operations (employing Richard Feynman's favorite trick of differentiating under the integral sign with respect to a parameter). We are supposing $X$ has a $\Gamma(\alpha,\beta)$ distribution and we wish to find the expectation of $Y=\log(X)$. First, because $\beta$ is a scale parameter, its ...

19 The answer by @whuber is quite nice; I will essentially restate his answer in a more general form which connects (in my opinion) better with statistical theory, and which makes clear the power of the overall technique. Consider a family of distributions $\{F_\theta : \theta \in \Theta\}$ which constitute an exponential family, meaning they admit a density ...

18 Some background The $\chi^2_n$ distribution is defined as the distribution that results from summing the squares of $n$ independent random variables $\mathcal{N}(0,1)$, so: $$\text{If }X_1,\ldots,X_n\sim\mathcal{N}(0,1)\text{ and are independent, then }Y_1=\sum_{i=1}^nX_i^2\sim \chi^2_n,$$ where $X\sim Y$ denotes that the random variables $X$ and $Y$ have ...

17 I used the gamma.shape function of the MASS package as described by Balajari (2013) in order to estimate the shape parameter afterwards and then adjust coefficient estimates and predictions in the GLM. I advise you to read the lecture as it is, in my opinion, very clear and interesting concerning the use of the gamma distribution in GLMs. glmGamma <- glm(...
17 If $\beta_1 X \sim \mathrm{Gamma}(\alpha_1, 1)$ and $\beta_2 Y \sim \mathrm{Gamma}(\alpha_2, 1)$, then according to Wikipedia $$\dfrac{\beta_1X}{\beta_2Y} \sim \text{Beta Prime distribution}(\alpha_1, \alpha_2).$$ In addition, as shorthand you write $\beta'(\alpha_1, \alpha_2)$. Now the Wiki page also describes the density of the general Beta-prime distribution $\beta'(\alpha_1,\ldots$ ...

16 Let us address the question posed, This is all somewhat mysterious to me. Is the normal distribution fundamental to the derivation of the gamma distribution...? No mystery really, it is simply that the normal distribution and the gamma distribution are members, among others, of the exponential family of distributions, which family is defined by the ability to ...

16 The Welch–Satterthwaite equation could be used to give an approximate answer in the form of a gamma distribution. This has the nice property of letting us treat gamma distributions as being (approximately) closed under addition. This is the approximation in the commonly used Welch's t-test. (The gamma distribution can be viewed as a scaled chi-square ...

16 Both the MLEs and moment based estimators are consistent and so you'd expect that in sufficiently large samples from a gamma distribution they'd tend to be quite similar. However, they won't necessarily be alike when the distribution is not close to a gamma. Looking at the distribution of the log of the data, it is roughly symmetric - or indeed actually ...

16 Residuals in glm's such as with the gamma family are not normally distributed, so simply a QQ plot against the normal distribution isn't very helpful. To understand this, note that the usual linear model given by $$y_i = \beta_0 + \beta_1 x_1 + \dotso +\beta_p x_p + \epsilon$$ has a very special form, the observation can be decomposed as an expected ...

15 Gamma regression is in the GLM framework and so you can get many useful quantities for diagnostic purposes, such as deviance residuals, leverages, Cook's distance, and so on. They are perhaps not as nice as the corresponding quantities for log-transformed data. One thing that gamma regression avoids compared to the lognormal is transformation bias. Jensen's ...

15 Question 1 The way you calculate the density by hand seems wrong. There's no need for rounding the random numbers from the gamma distribution. As @Pascal noted, you can use a histogram to plot the density of the points. In the example below, I use the function density to estimate the density and plot it as points. I present the fit both with the points and ...

15 I am guessing that you are looking for a positive, continuous probability distribution with infinite mean and with a maximum density away from zero. I thought that by analogy with a Gamma distribution ($p(x) \propto x^a \exp(-x) \, dx$), we could try something with a rational (polynomial) rather than an exponential tail. After a little bit of lazy R and ...

14 The usual gamma GLM contains the assumption that the shape parameter is constant, in the same way that the normal linear model assumes constant variance. In GLM parlance the dispersion parameter, $\phi$ in $\text{Var}(Y_i)=\phi\text{V}(\mu_i)$, is normally constant. More generally, you have $a(\phi)$, but that doesn't help. It might perhaps be possible ...

14 Let's start with the p.d.f. of a gamma-distributed random variable $X$, where $\alpha$ is the shape parameter and $\beta$ is the rate parameter (the p.d.f.
is a little bit different if $\beta$ is a scale parameter; both parameters are strictly positive): $$f_X(x) = \frac{x^{\alpha - 1} \beta^{\alpha} e^{-\beta x}}{\Gamma(\alpha)}$$ Now let $\alpha = \ldots$

14 Height, for instance, is often modelled as being normal. Maybe the height of men is something like 5 foot 10 with a standard deviation of 2 inches. We know negative height is unphysical, but under this model, the probability of observing a negative height is essentially zero. We use the model anyway because it is a good enough approximation. All models ...

Only top voted, non community-wiki answers of a minimum length are eligible
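The answer excerpted as 20 above sets up Feynman's differentiation-under-the-integral trick for $E[\log X]$ but is cut off before the result; for reference, a short sketch of where it leads (a standard result, stated here in the rate parametrization, not taken from the excerpt):

```latex
% For X ~ Gamma(alpha, beta) with rate beta, start from the normalizing identity
\[
  \int_0^\infty x^{\alpha-1} e^{-\beta x}\,dx \;=\; \Gamma(\alpha)\,\beta^{-\alpha}.
\]
% Differentiating both sides with respect to alpha brings down a log(x):
\[
  \int_0^\infty (\log x)\, x^{\alpha-1} e^{-\beta x}\,dx
  \;=\; \frac{\partial}{\partial\alpha}\,\Gamma(\alpha)\beta^{-\alpha}
  \;=\; \Gamma(\alpha)\beta^{-\alpha}\bigl(\psi(\alpha)-\log\beta\bigr),
\]
% where psi = Gamma'/Gamma is the digamma function.  Dividing by the
% normalizing constant turns the left side into the expectation:
\[
  \mathbb{E}[\log X] \;=\; \psi(\alpha) - \log\beta .
\]
```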
### Basic reproduction number $$R_0$$

#### Overview

$$R_0$$, the basic reproduction number, is defined as the average number of secondary cases expected to arise from a single infected individual in a wholly susceptible population. $$R_{eff}$$, the effective reproduction number, refers to the expected number of secondary cases to arise from an arbitrary case at any point in time. $$R_{eff}$$ is expected to change over the course of an outbreak. Containment will occur when $$R_{eff}<1$$.

Estimating $$R_0$$ and $$R_{eff}$$ in this outbreak is challenging because:

1. There is little information from the first few infection generations
2. The distribution of incubation period and time from presentation of symptoms to hospitalization are not exponentially distributed
3. Interventions and policies intended to curtail the outbreak have affected the unfolding process and are therefore reflected in the case notification data.

#### Takeoff estimators

We have considered "takeoff" estimators (e.g. Wearing et al. https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020174). Wearing et al. show that the dynamics of an epidemic in the early phases are related to $$R_0$$. A regression line through a plot of case notifications over time, from the start of the outbreak on 1 December 2020 (neglecting the first couple points), clearly does not go through the origin, suggesting that the epidemic was already in its exponential phase by Day 47 (Jan 16). If $$\gamma$$ were constant during this period after Day 47 (which it's probably not, due to increasing case isolation), the number of cases would be expected to grow according to the proportionality $$\log(X_t) \propto (R_0-1)\gamma t.$$ Although $$\gamma$$ is changing, we can nevertheless use this method to provide upper and lower bounds on $$R_{eff}$$ by looking at the plausible range of $$\gamma$$ (i.e., its minimum at $$\gamma \approx 1/7$$ and its maximum at $$\gamma \approx 1$$).

A second approach looks at the first significant report in these data (41 cumulative cases on Day 33, Jan 3). During the pre-exponential phase of the epidemic, the takeoff rate is given by $$R_0 = \lambda_2(\lambda_2/(\sigma m)+1)^m/(\gamma (1-(\lambda_2/(\gamma n)+1)^{-n})),$$ where $$\lambda_2$$ is the slope of the takeoff at the beginning of the epidemic, $$1/\sigma$$ is the average latent period, $$1/\gamma$$ is the average infectious period and $$m$$ and $$n$$ are the parameters of Erlang distributions for these intervals (Wearing and Rohani). For an assumed initial case on Day $$t_1=1$$ and $$x=41$$ cases on Day $$t=33$$, we have $$\lambda_2 \approx \frac{\log x}{t-t_1} = \frac{\log 41}{33-1} = 0.11$$. Inserting into the formula yields the estimate: $$\hat R_0 \approx 2.28$$.

### Case detection rate ($$q$$)

We have sought to estimate the case detection rate by adjusting the case fatality rate for cases reported in Wuhan from January 1 - January 19 according to the presumed actual case fatality rate of 3%, as the difference between these two estimates is primarily due to the under-reporting of less severe cases. For a derivation of our estimator, see https://github.com/CEIDatUGA/ncov-parameters. The following plot shows the estimated probability distribution of $$q$$.

### Incubation period ($$1/\sigma$$)

Travelers who acquired the infection in a location where transmission was occurring and developed symptoms at a later time provide an opportunity to estimate the incubation period.
Using data from the Be Outbreak Prepared (BOP) linelist, we estimate the incubation period to have a mean of 5.25 days. By maximum likelihood, we have determined that $$k=2$$ is the most likely shape parameter for an Erlang distributed onset to isolation interval.

### Lag between symptom onset and isolation

A key determinant of transmission is the lag between the onset of symptoms and isolation. The inverse of this quantity is the isolation rate. By maximum likelihood, we have determined that $$k=2$$ is the most likely shape parameter for an Erlang distributed onset to hospitalization interval. We determined that $$k=3$$ is the most likely shape parameter for an Erlang distributed onset to confirmation interval.
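A small numerical sketch of the takeoff formula above, added here for illustration. The inputs are assumptions pieced together from this page ($$\sigma = 1/5.25$$ from the incubation estimate, $$\gamma = 1/7$$ at the slow end of the plausible range, Erlang shapes $$m = n = 2$$), not necessarily the values behind the reported $$\hat R_0 \approx 2.28$$:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Assumed inputs: lambda2 from 41 cases in 32 days as in the text;
       sigma = 1/5.25 (latent); gam = 1/7 (infectious); m = n = 2. */
    const double lambda2 = log(41.0) / 32.0;          /* takeoff slope        */
    const double sigma = 1.0 / 5.25, gam = 1.0 / 7.0; /* 1/latent, 1/infectious */
    const double m = 2.0, n = 2.0;                    /* Erlang shape params  */

    double numer = lambda2 * pow(lambda2 / (sigma * m) + 1.0, m);
    double denom = gam * (1.0 - pow(lambda2 / (gam * n) + 1.0, -n));

    /* With these assumed inputs this prints an R0 of roughly 2.8; the
       reported 2.28 presumably reflects the authors' own parameter choices. */
    printf("lambda2 = %.3f, R0 = %.2f\n", lambda2, numer / denom);
    return 0;
}
```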
# 3.1.1: Normal Form in Two Variables

Consider the differential equation \label{eqnplane}\tag{3.1.1.1} a(x,y)u_{xx}+2b(x,y)u_{xy}+c(x,y)u_{yy}+\mbox{terms of lower order}=0 in $$\Omega\subset\mathbb{R}^2$$. The associated characteristic differential equation is \label{planechar}\tag{3.1.1.2} a\chi_x^2+2b\chi_x\chi_y+c\chi_y^2=0.

We show that an appropriate coordinate transform will sometimes simplify equation (\ref{eqnplane}) in such a way that we can solve the transformed equation explicitly. Let $$z=\phi(x,y)$$ be a solution of (\ref{planechar}). Consider the level sets $$\{(x,y):\ \phi(x,y)=const.\}$$ and assume $$\phi_y\not=0$$ at a point $$(x_0,y_0)$$ of the level set. Then there is a function $$y(x)$$ defined in a neighborhood of $$x_0$$ such that $$\phi(x,y(x))=const.$$ It follows $$y'(x)=-\dfrac{\phi_x}{\phi_y},$$ which implies, see the characteristic equation (\ref{planechar}), \label{quadratic}\tag{3.1.1.3} ay'^2-2by'+c=0. Then, provided $$a\not=0$$, we can calculate $$\mu:=y'$$ from the (known) coefficients $$a$$, $$b$$ and $$c$$: \label{mu}\tag{3.1.1.4} \mu_{1,2}=\dfrac{1}{a}\left(b\pm\sqrt{b^2-ac}\right). These solutions are real if and only if $$ac-b^2\le0$$. Equation (\ref{eqnplane}) is hyperbolic if $$ac-b^2<0$$, parabolic if $$ac-b^2=0$$ and elliptic if $$ac-b^2>0$$. This follows from an easy discussion of the eigenvalues of the matrix $$\left(\begin{array}{cc} a&b\\ b&c \end{array}\right),$$ see an exercise.

### Normal form of a hyperbolic equation

Let $$\phi$$ and $$\psi$$ be solutions of the characteristic equation (\ref{planechar}) such that \begin{eqnarray*} y_1'\equiv\mu_1&=&-\dfrac{\phi_x}{\phi_y}\\ y_2'\equiv\mu_2&=&-\dfrac{\psi_x}{\psi_y}, \end{eqnarray*} where $$\mu_1$$ and $$\mu_2$$ are given by (\ref{mu}). Thus $$\phi$$ and $$\psi$$ are solutions of the linear homogeneous equations of first order \begin{eqnarray} \label{phi}\tag{3.1.1.5} \phi_x+\mu_1(x,y)\phi_y&=&0\\ \label{psi}\tag{3.1.1.6} \psi_x+\mu_2(x,y)\psi_y&=&0. \end{eqnarray} Assume $$\phi(x,y)$$, $$\psi(x,y)$$ are solutions such that $$\nabla\phi\not=0$$ and $$\nabla\psi\not=0$$, see an exercise for the existence of such solutions.

Consider two families of level sets defined by $$\phi(x,y)=\alpha$$ and $$\psi(x,y)=\beta$$, see Figure 3.1.1.1.

Figure 3.1.1.1: Level sets

These level sets are characteristic curves of the partial differential equations (\ref{phi}) and (\ref{psi}), respectively, see an exercise of the previous chapter.

Lemma. (i) Curves from different families cannot touch each other. (ii) $$\phi_x\psi_y-\phi_y\psi_x\not=0$$.

Proof. (i): $$y_2'-y_1'\equiv\mu_2-\mu_1=-\dfrac{2}{a}\sqrt{b^2-ac}\not=0.$$ (ii): $$\mu_2-\mu_1=\dfrac{\phi_x}{\phi_y}-\dfrac{\psi_x}{\psi_y}.$$ $$\Box$$

Proposition 3.1. The mapping $$\xi=\phi(x,y)$$, $$\eta=\psi(x,y)$$ transforms equation (\ref{eqnplane}) into \label{normhyp}\tag{3.1.1.7} v_{\xi\eta}=\mbox{lower order terms}, where $$v(\xi,\eta)=u(x(\xi,\eta),y(\xi,\eta))$$.

Proof. The proof follows from a straightforward calculation. \begin{eqnarray*} u_x&=&v_\xi\phi_x+v_\eta\psi_x\\ u_y&=&v_\xi\phi_y+v_\eta\psi_y\\ u_{xx}&=&v_{\xi\xi}\phi_x^2+2v_{\xi\eta}\phi_x\psi_x+v_{\eta\eta}\psi_x^2+\mbox{lower order terms}\\ u_{xy}&=&v_{\xi\xi}\phi_x\phi_y+v_{\xi\eta}(\phi_x\psi_y+\phi_y\psi_x)+v_{\eta\eta}\psi_x\psi_y+\mbox{lower order terms}\\ u_{yy}&=&v_{\xi\xi}\phi_y^2+2v_{\xi\eta}\phi_y\psi_y+v_{\eta\eta}\psi_y^2+\mbox{lower order terms}.
\end{eqnarray*} Thus $$au_{xx}+2bu_{xy}+cu_{yy}=\alpha v_{\xi\xi}+2\beta v_{\xi\eta}+\gamma v_{\eta\eta}+l.o.t.,$$ where \begin{eqnarray*} \alpha:&=&a\phi_x^2+2b\phi_x\phi_y+c\phi_y^2\\ \beta:&=&a\phi_x\psi_x+b(\phi_x\psi_y+\phi_y\psi_x)+c\phi_y\psi_y\\ \gamma:&=&a\psi_x^2+2b\psi_x\psi_y+c\psi_y^2. \end{eqnarray*} The coefficients $$\alpha$$ and $$\gamma$$ are zero since $$\phi$$ and $$\psi$$ are solutions of the characteristic equation. Since $$\alpha\gamma-\beta^2=(ac-b^2)(\phi_x\psi_y-\phi_y\psi_x)^2,$$ it follows from the above lemma that the coefficient $$\beta$$ is different from zero. $$\Box$$

Example 3.1.1.1: Consider the differential equation $$u_{xx}-u_{yy}=0.$$ The associated characteristic differential equation is $$\chi_x^2-\chi_y^2=0.$$ Since $$\mu_1=-1$$ and $$\mu_2=1$$, the functions $$\phi$$ and $$\psi$$ satisfy the differential equations \begin{eqnarray*} \phi_x+\phi_y&=&0\\ \psi_x-\psi_y&=&0. \end{eqnarray*} Solutions with $$\nabla\phi\not=0$$ and $$\nabla\psi\not=0$$ are $$\phi=x-y,\ \ \psi=x+y.$$ Then the mapping $$\xi=x-y,\ \ \eta=x+y$$ transforms the differential equation into $$v_{\xi\eta}(\xi,\eta)=0.$$ Assume $$v\in C^2$$ is a solution, then $$v_\xi=f_1(\xi)$$ for an arbitrary $$C^1$$ function $$f_1(\xi)$$. It follows $$v(\xi,\eta)=\int_0^\xi\ f_1(\alpha)\ d\alpha+g(\eta),$$ where $$g$$ is an arbitrary $$C^2$$ function. Thus each $$C^2$$-solution of the differential equation can be written as $$(\star)$$ $$v(\xi,\eta)=f(\xi)+g(\eta)$$, where $$f,\ g\in C^2$$. On the other hand, for arbitrary $$C^2$$-functions $$f$$, $$g$$ the function $$(\star)$$ is a solution of the differential equation $$v_{\xi\eta}=0$$. Consequently every $$C^2$$-solution of the original equation $$u_{xx}-u_{yy}=0$$ is given by $$u(x,y)=f(x-y)+g(x+y),$$ where $$f,\ g\in C^2$$.

### Contributors

• Integrated by Justin Marshall.
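As a quick check (added here; not part of the original notes), one can verify directly that the d'Alembert form obtained in the example does solve the original equation:

```latex
% Direct check that u(x,y) = f(x-y) + g(x+y) solves u_xx - u_yy = 0
% for arbitrary f, g in C^2:
\[
  u_x = f'(x-y) + g'(x+y), \qquad u_y = -f'(x-y) + g'(x+y),
\]
\[
  u_{xx} = f''(x-y) + g''(x+y), \qquad u_{yy} = f''(x-y) + g''(x+y),
\]
% so u_xx - u_yy = 0 identically.
```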
# Centroid

In Geometry, the centroid is an important concept related to a triangle. A triangle is a three-sided bounded figure with three interior angles. Based on the sides and angles, a triangle can be classified into different types such as

• Scalene triangle
• Isosceles triangle
• Equilateral triangle
• Acute-angled triangle
• Obtuse-angled triangle
• Right-angled triangle

The centroid is an important property of a triangle. Let us discuss the definition of centroid, formula, properties and centroid for different geometric shapes in detail.

## Centroid Definition

The centroid is the centre point of the object. The point in which the three medians of the triangle intersect is known as the centroid of a triangle. It is also defined as the point of intersection of all the three medians. The median is a line that joins the midpoint of a side and the opposite vertex of the triangle. The centroid of the triangle separates each median in the ratio 2 : 1. It can be found by taking the average of the x-coordinates and y-coordinates of all the vertices of the triangle.

### Centroid Theorem

The centroid theorem states that the centroid of the triangle is at 2/3 of the distance from the vertex to the mid-point of the sides. Suppose PQR is a triangle having a centroid V. S, T and U are the midpoints of the sides of the triangle PQ, QR and PR, respectively. Hence as per the theorem: QV = 2/3 QU, PV = 2/3 PT and RV = 2/3 RS

### Centroid of A Right Angle Triangle

The centroid of a right angle triangle is the point of intersection of the three medians, drawn from the vertices of the triangle to the midpoints of the opposite sides.

### Centroid of a Square

The point where the diagonals of the square intersect each other is the centroid of the square. As we all know, the square has all its sides equal, hence it is easy to locate the centroid in it. See the below figure, where O is the centroid of the square.

## Properties of centroid

The properties of the centroid are as follows:

• The centroid is the centre of the object
• It is the centre of gravity
• It should always lie inside the object
• It is the point of concurrency of the medians

## Centroid Formula

Let's consider a triangle. If the three vertices of the triangle are A(x1, y1), B(x2, y2), C(x3, y3), then the centroid of the triangle can be calculated by taking the average of the X and Y coordinates of all three vertices. Therefore, the centroid of a triangle can be written as:

Centroid of a triangle = ((x1+x2+x3)/3, (y1+y2+y3)/3)

### Centroid Formula For Different Shapes

Here, the centroid formulas are listed for different geometrical shapes.

| Shape | x̄ | ȳ | Area |
| --- | --- | --- | --- |
| Triangular area | – | h/3 | bh/2 |
| Quarter-circular area | 4r/3π | 4r/3π | πr²/4 |
| Semicircular area | 0 | 4r/3π | πr²/2 |
| Quarter-elliptical area | 4a/3π | 4b/3π | πab/4 |
| Semielliptical area | 0 | 4b/3π | πab/2 |
| Semiparabolic area | 3a/8 | 3h/5 | 2ah/3 |
| Parabolic area | 0 | 3h/5 | 4ah/3 |
| Parabolic spandrel | 3a/4 | 3h/10 | ah/3 |

### Examples of How to Calculate Centroid

Find the solved examples below, to find the centroid of triangles with the given values of vertices.

Question 1: Find the centroid of the triangle whose vertices are A(2, 6), B(4, 9), and C(6, 15).
Solution: Given: A(x1, y1) = A(2, 6), B(x2, y2) = B(4, 9), C(x3, y3) = C(6, 15)

We know that the formula to find the centroid of a triangle is ((x1+x2+x3)/3, (y1+y2+y3)/3). Now, substitute the given values in the formula:

Centroid of a triangle = ((2+4+6)/3, (6+9+15)/3) = (12/3, 30/3) = (4, 10)

Therefore, the centroid of the triangle for the given vertices A(2, 6), B(4, 9), and C(6, 15) is (4, 10).

Question 2: Find the centroid of the triangle whose vertices are A(1, 5), B(2, 6), and C(4, 10).

Solution: Given, A(1, 5), B(2, 6), and C(4, 10) are the vertices of a triangle ABC. By the formula of the centroid we know:

Centroid = ((x1+x2+x3)/3, (y1+y2+y3)/3)

Putting in the values, we get:

Centroid = ((1+2+4)/3, (5+6+10)/3) = (7/3, 11/3)

Hence, the centroid of the triangle having vertices A(1, 5), B(2, 6), and C(4, 10) is (7/3, 11/3).

Question 3: If the vertices of a triangle PQR are (2, 1), (3, 2) and (-2, 4), find its centroid.

Solution: Given, (2, 1), (3, 2) and (-2, 4) are the vertices of triangle PQR. By the formula of the centroid we know:

Centroid = ((x1+x2+x3)/3, (y1+y2+y3)/3)

Putting in the values, we get:

Centroid, O = ((2+3-2)/3, (1+2+4)/3) = (3/3, 7/3) = (1, 7/3)

Hence, the centroid of the triangle having vertices (2, 1), (3, 2) and (-2, 4) is (1, 7/3).
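As a check on the centroid theorem stated earlier (added here for illustration), verify the 2 : 1 ratio on the triangle from Question 1:

```latex
% Triangle A(2,6), B(4,9), C(6,15) with centroid G = (4,10).
% Midpoint of the side BC opposite A:
\[
  M = \left(\tfrac{4+6}{2},\; \tfrac{9+15}{2}\right) = (5, 12).
\]
% Along the median from A to M:
\[
  \vec{AG} = (4-2,\; 10-6) = (2,4), \qquad
  \vec{GM} = (5-4,\; 12-10) = (1,2),
\]
% so AG : GM = 2 : 1, i.e. AG = (2/3) AM, exactly as the centroid
% theorem predicts.
```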
# LaTeX Error: Command \textquotedbl unavailable in encoding HE8 I was testing today the SVN version of LyX 1.6.0 and 1.5.7. Due to a change in the way the double quotation mark (“) is handled, adding it to Hebrew text resulted in the following LaTeX error: `LaTeX Error: Command \textquotedbl unavailable in encoding HE8`
# PARSE_JSON

Interprets an input string as a JSON document, producing a VARIANT value.

See also: TRY_PARSE_JSON

## Syntax

PARSE_JSON( <expr> )

## Arguments

expr

An expression of string type (e.g. VARCHAR) that holds valid JSON information.

## Returns

The returned value is of type VARIANT and contains a JSON document.

## Usage Notes

• This function supports an input expression with a maximum size of 8 MB compressed.
• If the input is NULL, the output is also NULL. However, if the input string is 'null', then it is interpreted as a JSON null value so that the result is not SQL NULL, but a valid VARIANT value containing null. An example is included in the Examples section below.
• When parsing decimal numbers, PARSE_JSON attempts to preserve exactness of the representation by treating 123.45 as NUMBER(5,2), not as a DOUBLE. However, numbers using scientific notation (e.g. 1.2345e+02) or numbers which cannot be stored as fixed-point decimals due to range or scale limitations are stored as DOUBLE. Because JSON does not represent values such as TIMESTAMP, DATE, TIME, or BINARY natively, these have to be represented as strings.
• In JSON, an object (also called a “dictionary” or a “hash”) is an unordered set of key-value pairs.
• TO_JSON and PARSE_JSON are (almost) converse or reciprocal functions.
• The PARSE_JSON function takes a string as input and returns a JSON-compatible variant.
• The TO_JSON function takes a JSON-compatible variant and returns a string.

The following is (conceptually) true if X is a string containing valid JSON:

X = TO_JSON(PARSE_JSON(X));

For example, the following is (conceptually) true:

'{"pi":3.14,"e":2.71}' = TO_JSON(PARSE_JSON('{"pi":3.14,"e":2.71}'))

However, the functions are not perfectly reciprocal for two reasons:

• The order of the key-value pairs in the string produced by TO_JSON is not predictable.
• The string produced by TO_JSON can have less whitespace than the string passed to PARSE_JSON.

The following are equivalent JSON, but not equivalent strings:

• {"pi": 3.14, "e": 2.71}
• {"e":2.71,"pi":3.14}

## Examples

This shows an example of storing different types of data in a VARIANT column by calling PARSE_JSON to parse strings.

Create and fill a table. Note that the INSERT statement uses the PARSE_JSON function.
create or replace table vartab (n number(2), v variant); insert into vartab select column1 as n, parse_json(column2) as v from values (1, 'null'), (2, null), (3, 'true'), (4, '-17'), (5, '123.12'), (6, '1.912e2'), (7, '"Om ara pa ca na dhih" '), (8, '[-1, 12, 289, 2188, false,]'), (9, '{ "x" : "abc", "y" : false, "z": 10} ') AS vals; Query the data: select n, v, typeof(v) from vartab; +---+------------------------+------------+ | N | V | TYPEOF(V) | |---+------------------------+------------| | 1 | null | NULL_VALUE | | 2 | NULL | NULL | | 3 | true | BOOLEAN | | 4 | -17 | INTEGER | | 5 | 123.12 | DECIMAL | | 6 | 1.912000000000000e+02 | DOUBLE | | 7 | "Om ara pa ca na dhih" | VARCHAR | | 8 | [ | ARRAY | | | -1, | | | | 12, | | | | 289, | | | | 2188, | | | | false, | | | | undefined | | | | ] | | | 9 | { | OBJECT | | | "x": "abc", | | | | "y": false, | | | | "z": 10 | | | | } | | +---+------------------------+------------+ The following example shows NULL handling for PARSE_JSON and TO_JSON: SELECT TO_JSON(NULL), TO_JSON('null'::VARIANT), PARSE_JSON(NULL), PARSE_JSON('null'); +---------------+--------------------------+------------------+--------------------+ | TO_JSON(NULL) | TO_JSON('NULL'::VARIANT) | PARSE_JSON(NULL) | PARSE_JSON('NULL') | |---------------+--------------------------+------------------+--------------------| | NULL | "null" | NULL | null | +---------------+--------------------------+------------------+--------------------+ The following examples demonstrate the relationship among PARSE_JSON, TO_JSON, and TO_VARIANT: Create a table and add VARCHAR, generic VARIANT, and JSON-compatible VARIANT data. The INSERT statement inserts a VARCHAR value, and the UPDATE statement generates a JSON value that corresponds to that VARCHAR. CREATE or replace TABLE jdemo2 (varchar1 VARCHAR, variant1 VARIANT, variant2 VARIANT); INSERT INTO jdemo2 (varchar1) VALUES ('{"PI":3.14}'); UPDATE jdemo2 SET variant1 = PARSE_JSON(varchar1); This query shows that TO_JSON and PARSE_JSON are conceptually reciprocal functions: SELECT varchar1, PARSE_JSON(varchar1), variant1, TO_JSON(variant1), PARSE_JSON(varchar1) = variant1, TO_JSON(variant1) = varchar1 FROM jdemo2; +-------------+----------------------+--------------+-------------------+---------------------------------+------------------------------+ | VARCHAR1 | PARSE_JSON(VARCHAR1) | VARIANT1 | TO_JSON(VARIANT1) | PARSE_JSON(VARCHAR1) = VARIANT1 | TO_JSON(VARIANT1) = VARCHAR1 | |-------------+----------------------+--------------+-------------------+---------------------------------+------------------------------| | {"PI":3.14} | { | { | {"PI":3.14} | True | True | | | "PI": 3.14 | "PI": 3.14 | | | | | | } | } | | | | +-------------+----------------------+--------------+-------------------+---------------------------------+------------------------------+ However, the functions are not exactly reciprocal; differences in whitespace or order of key-value pairs can prevent the output from matching the input. 
For example: SELECT TO_JSON(PARSE_JSON('{"b":1,"a":2}')), TO_JSON(PARSE_JSON('{"b":1,"a":2}')) = '{"b":1,"a":2}', TO_JSON(PARSE_JSON('{"b":1,"a":2}')) = '{"a":2,"b":1}' ; +--------------------------------------+--------------------------------------------------------+--------------------------------------------------------+ | TO_JSON(PARSE_JSON('{"B":1,"A":2}')) | TO_JSON(PARSE_JSON('{"B":1,"A":2}')) = '{"B":1,"A":2}' | TO_JSON(PARSE_JSON('{"B":1,"A":2}')) = '{"A":2,"B":1}' | |--------------------------------------+--------------------------------------------------------+--------------------------------------------------------| | {"a":2,"b":1} | False | True | +--------------------------------------+--------------------------------------------------------+--------------------------------------------------------+ Although both PARSE_JSON and TO_VARIANT can take a string and return a variant, they are not equivalent. The following code uses PARSE_JSON to update one column and TO_VARIANT to update the other column. (The update to column variant1 is unnecessary because it was updated earlier using an identical function call; however, the code below updates it again so that you can see side-by-side which functions are called to update the columns). UPDATE jdemo2 SET variant1 = PARSE_JSON(varchar1), variant2 = TO_VARIANT(varchar1); The query below shows that the output of PARSE_JSON and the output of TO_VARIANT are not the same. In addition to the trivial difference(s) in whitespace, there are significant differences in quotation marks. SELECT variant1, variant2, variant1 = variant2 FROM jdemo2; +--------------+-----------------+---------------------+ | VARIANT1 | VARIANT2 | VARIANT1 = VARIANT2 | |--------------+-----------------+---------------------| | { | "{\"PI\":3.14}" | False | | "PI": 3.14 | | | | } | | | +--------------+-----------------+---------------------+
# Multiline equation without number

I am trying to have a multiline equation without any numbering, not even on the last line.

\begin{align}
a = x \nonumber \\
b = y
\end{align}

However, this is producing an equation number on the last line, which I would like not to happen. I have also tried the equation and equation* environments with no luck. Thanks.

P.S. What's the markup for rendering LaTeX code? I can't seem to find it in the help page.

• erh, align* should be obvious. There is no markup for rendering LaTeX code. Only source code Jun 19, 2013 at 15:33
• It might be good to start reading the manual for the amsmath package and browsing align. Jun 19, 2013 at 18:06

\documentclass{article}
\usepackage{amsmath} % for the "align" and "align*" environments
\begin{document}
\begin{align*} % the "starred" equation environments produce no equation numbers
a &= b\\ % if no alignment is needed, use the gather* instead of the align* env.
&= c
\end{align*}
\end{document}

If you do use the unstarred version, remember that \nonumber operates on only one line, so to omit numbers entirely, \nonumber (or \notag) must be input for every line.
# Math

How do we integrate ln| 1 - cot x | from 0 to pi/4?

1. 1-cotx = 1-(cosx/sinx) = 1 - ((1/2)(cot(x/2))) - ((1/2)(sec(x/2))) -- how do we simplify this inside the "ln"? Can we subtract them by putting ln (log e) in front of them?

2. I think you're stuck. There's just no way to get things down to just a product. According to wolframalpha, the antiderivative is not elementary. http://www.wolframalpha.com/input/?i=integral+log(1-cotx)+dx
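For what it's worth, although the antiderivative is not elementary, the definite integral from 0 to pi/4 does have a closed form; a sketch (added here) using the identity $\cos x - \sin x = \sqrt{2}\cos(x+\pi/4)$:

```latex
% On (0, pi/4) we have cot x > 1, so |1 - cot x| = (cos x - sin x)/sin x and
\[
  I = \int_0^{\pi/4} \ln\lvert 1-\cot x\rvert\,dx
    = \int_0^{\pi/4} \ln(\cos x-\sin x)\,dx - \int_0^{\pi/4} \ln\sin x\,dx .
\]
% Using cos x - sin x = sqrt(2) cos(x + pi/4), then substituting
% u = x + pi/4 and afterwards v = pi/2 - u:
\[
  \int_0^{\pi/4} \ln(\cos x-\sin x)\,dx
  = \frac{\pi}{8}\ln 2 + \int_{\pi/4}^{\pi/2} \ln\cos u\,du
  = \frac{\pi}{8}\ln 2 + \int_0^{\pi/4} \ln\sin v\,dv .
\]
% The two log-sine integrals cancel, leaving
\[
  I = \frac{\pi}{8}\ln 2 .
\]
```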
### Top 10 Arxiv Papers Today in Optics

##### #1. Self-stabilizing laser sails based on optical metasurfaces
###### Joel Siegel, Anthony Wang, Sergey G. Menabde, Mikhail A. Kats, Min Seok Jang, Victor Watson Brar

This article investigates the stability of 'laser sail'-style spacecraft constructed from dielectric metasurfaces with areal densities $<$1 g/m$^2$. We show that the microscopic optical forces exerted on a metasurface by a high-power laser (100 GW) can be engineered to achieve passive self-stabilization, such that the sail is optically trapped inside the drive beam and self-corrects against angular and lateral perturbations. The metasurfaces we study consist of a patchwork of beam-steering elements that reflect light at different angles and efficiencies. These properties are varied for each element across the area of the metasurface, and we use optical force modeling tools to explore the behavior of several metasurfaces with different scattering properties as they interact with beams that have different intensity profiles. Finally, we use full-wave numerical simulation tools to extract the actual optical forces that would be imparted on Si/SiO$_{2}$ metasurfaces consisting of more than 400 elements, and we compare those results to our... more | pdf | html

###### Tweets
qraal: [1903.09077] Self-stabilizing laser sails based on optical metasurfaces https://t.co/8xKzsHL3TP
mickeykats: New preprint: "Self-stabilizing laser sails based on optical metasurfaces" https://t.co/uyZGTVjHeX This is a collaboration with + led by Victor Brar's group in the Physics Dept @UWMadison. On my group's side, funded by @AFOSR https://t.co/HF5bF0RLNt
CondensedPapers: Self-stabilizing laser sails based on optical metasurfaces. https://t.co/GeSWBPFU0v
StarshipBuilder: Self-stabilizing laser sails based on optical metasurfaces https://t.co/qS1DBmOWVB
AFOSR: RT @mickeykats: New preprint: "Self-stabilizing laser sails based on optical metasurfaces" https://t.co/uyZGTVjHeX This is a collaboration…
nyrath: RT @StarshipBuilder: Self-stabilizing laser sails based on optical metasurfaces https://t.co/qS1DBmOWVB
A_J_Higgins: RT @StarshipBuilder: Self-stabilizing laser sails based on optical metasurfaces https://t.co/qS1DBmOWVB

###### Other stats
Sample Sizes: None.
Authors: 6
Total Words: 6775
Unique Words: 2022

##### #2. Cascaded Rotational Doppler Effect
###### Junhong Deng, King Fai Li, Wei Liu, Guixin Li

We propose and substantiate experimentally the cascaded rotational Doppler effect for interactions of spinning objects with light carrying angular momentum. Based on the law of parity conservation for electromagnetic interactions, we reveal that the frequency shift can be doubled by cascading two rotational Doppler processes which are mirror images of each other. This effect is further verified experimentally with a rotating half-wave plate, where the mirror-imaging process is achieved by reflecting the frequency-shifted circularly polarized wave off a mirror with a quarter-wave plate in front of it. The mirror symmetry, and thus parity conservation, guarantees that this doubled frequency shift can be further multiplied with more successive mirror-imaging conjugations, with photons carrying spin and/or orbital angular momentum, which could be widely applied for the detection of rotating systems ranging from molecules to celestial bodies with high precision and sensitivity. more | pdf | html

###### Tweets
PhysicsPaper: Cascaded Rotational Doppler Effect. https://t.co/obJUnaYEef
###### Other stats
Sample Sizes: None.
Authors: 4
Total Words: 3252
Unique Words: 1163

##### #3. Proper scaling for triangular aperture in OAM measurement
###### Dina Grace C. Banguilan, Nathaniel Hermosa

A standard triangular aperture for measuring the orbital angular momentum (OAM) of light by diffraction usually has a fixed and limited radius R. This poses a crucial issue, since for an increasing topological charge m of an OAM beam, the radius r of the beam also increases. Here, we prove our supposition experimentally. We use a dynamic triangular aperture that can be programmed to have a different characteristic R to diffract beams of various OAM values. By analysing the diffraction patterns with 2D correlation, we find a minimum bound for R. For a constant initial waist $w$ in the spatial light modulator and a constant position z of the aperture system, we find that the radius of the aperture is unique for each m value. Interestingly, this R scales according to the beam's rms radius reported in the literature. We also show that with a larger aperture, a smearing effect can be seen in the diffraction patterns, which becomes a setback in discerning fringes for the measurement of the topological charge value and thus of the OAM of light.... more | pdf | html

###### Tweets
PhysicsPaper: Proper scaling for triangular aperture in OAM measurement. https://t.co/x8Esm1cwtC

###### Other stats
Sample Sizes: None.
Authors: 2
Total Words: 5030
Unique Words: 1716

##### #4. The first evidence of current-injection organic semiconductor laser with field-effect transistor
###### Thangavel Kanagasekaran, Hidekazu Shimotani, Keiichiro Kasai, Shun Onuki, Rahul D. Kavthe, Ryotaro Kumashiro, Nobuya Hiroshiba, Tienan Jin, Naoki Asao, Katsumi Tanigaki

The laser is one of the most important discoveries of the 20th century, and inorganic semiconductor lasers (ISCLs) are the most frequently used in applications today. Organic semiconductor lasers (OSCLs) have many attractive features compared to ISCLs, such as flexibility, human friendliness, a feasible and inexpensive production process, light weight, and multicolor emission. However, electrically driven OSCLs (el-OSCLs) have not yet been realized, although they are possible in an optically driven mode. Here, we report that an el-OSCL can be realized in a field-effect transistor (FET) structure. The FET el-OSCL with distributed feedback (DFB) construction is made using a BP3T single crystal as a lasing medium, electrostatically laminated on a silicon substrate modified with periodically patterned polystyrene by a hot-embossing method. An emergent sharp-linewidth laser emission spectrum at the resolution limit of the detector, and a non-linear increase in intensity above a threshold current density of ca. 1 kA cm-2, is... more | pdf | html

###### Tweets
PhysicsPaper: The first evidence of current-injection organic semiconductor laser with field-effect transistor. https://t.co/V02Yi9jrfM

###### Other stats
Sample Sizes: None.
Authors: 10
Total Words: 4058
Unique Words: 1469

##### #5. Analytical vs. Numerical Langevin Description of Noise in Small Lasers
###### G. L. Lippi, J. Mørk, G. P. Puccioni

We compare the analytical and numerical predictions of noise in nano- and microcavity lasers obtained from a rate equation model with stochastic Langevin noise.
Strong discrepancies are found between the two approaches, and these are critically analyzed and explained on the basis of general considerations and through comparison with the numerical predictions of a Stochastic Laser Simulator. While the analytical calculations give reliable predictions, the numerical results are entirely incorrect and thus unsuitable for predicting the dynamics and statistical properties of small lasers. more | pdf | html

###### Tweets
PhysicsPaper: Analytical vs. Numerical Langevin Description of Noise in Small Lasers. https://t.co/TUAm9MrP49

###### Other stats
Sample Sizes: None.
Authors: 3
Total Words: 3442
Unique Words: 1418

##### #6. High optical magnetism of dodecahedral plasmonic meta-atoms
###### Véronique Many, Romain Dézert, Etienne Duguet, Alexandre Baron, Vikas Jangid, Virginie Ponsinet, Serge Ravaine, Philippe Richetti, Philippe Barois, Mona Tréguer-Delapierre

The generation in artificial composites of a magnetic response to light comparable in magnitude with the natural electric response may offer an invaluable control parameter for the fine steering of light at the nanoscale. In many experimental realizations, however, the magnetic response of artificial meta-atoms is too weak, so there is a need for new designs with increased magnetic polarizability. Numerical simulations show that geometrical plasmonic nanostructures based on the ideal model of Platonic solids are excellent candidates for the production of strong optical magnetism in visible light. Inspired by this model, we developed a bottom-up approach to synthesize plasmonic nano-clusters made of twelve gold patches located at the centers of the faces of a dodecahedron. The scattering of the electric and magnetic dipoles induced by light is measured across the whole visible range. The ratio of the magnetic to electric response at resonance is found to be three times higher than its counterpart measured on disordered plasmonic... more | pdf | html

###### Tweets
PhysicsPaper: High optical magnetism of dodecahedral plasmonic meta-atoms. https://t.co/TKt16IGpDR

###### Other stats
Sample Sizes: [12, 46812]
Authors: 10
Total Words: 5732
Unique Words: 2050

##### #7. Self-assembled nanostructured metamaterials
###### Virginie Ponsinet, Alexandre Baron, Emilie Pouget, Yutaka Okazaki, Reiko Oda, Philippe Barois

The concept of metamaterials emerged in the 2000s with the achievement of artificial structures enabling non-conventional propagation of electromagnetic waves, such as negative phase velocity or negative refraction. The electromagnetic response of metamaterials is generally based on the presence of optically resonant elements (or meta-atoms) of sub-wavelength size and well-designed morphology, so as to provide the desired electric and magnetic optical properties. Top-down technologies based on lithography techniques have been used intensively to fabricate a variety of efficient electric and magnetic resonators operating from microwave to visible-light frequencies. However, the technological limits of the top-down approach are reached in visible light, where a huge number of nanometre-sized elements is required. We show here that the bottom-up fabrication route, based on the combination of nanochemistry and the self-assembly methods of colloidal physics, provides an excellent alternative for the large-scale synthesis of complex... more | pdf | html

###### Tweets
PhysicsPaper: Self-assembled nanostructured metamaterials. https://t.co/yJZkwgs7EF
###### Other stats
Sample Sizes: None.
Authors: 6
Total Words: 4920
Unique Words: 2010

##### #8. Spectral singularities of a potential created by two coupled microring resonators
###### Vladimir V. Konotop, Barry C. Sanders, Dmitry A. Zezyulin

Two microring resonators, one with gain and one with loss, coupled to each other and to a bus waveguide, create an effective non-Hermitian potential for light propagating in the waveguide. Due to the geometry, coupling for each microring resonator yields two counter-propagating modes with equal frequencies. We show that such a system enables the implementation of many types of scattering peculiarities, which are either of second or of fourth order. The spectral singularities separate parameter regions where the spectrum is either purely real or else comprises complex eigenvalues; hence, they represent the points of a phase transition. By modifying the gain-loss relation for the resonators, such an optical scatterer can act as a laser or as a coherent perfect absorber, be unidirectionally reflectionless or transparent, and support bound states either growing or decaying in time. These characteristics are observed for a discrete series of incident-radiation wavelengths. more | pdf | html

###### Tweets
PhysicsPaper: Spectral singularities of a potential created by two coupled microring resonators. https://t.co/CoB58Jm7f8

###### Other stats
Sample Sizes: None.
Authors: 3
Total Words: 4699
Unique Words: 1451

##### #9. Tunable MEMS VCSEL on Silicon substrate
###### Hitesh Kumar Sahoo, Thor Ansbæk, Luisa Ottaviano, Elizaveta Semenova, Fyodor Zubov, Ole Hansen, Kresten Yvind

We present the design, fabrication, and characterization of a MEMS VCSEL which utilizes a silicon-on-insulator wafer for the microelectromechanical system and encapsulates the MEMS by direct InP wafer bonding, which improves the protection and control of the tuning element. This procedure enables a more robust fabrication, a larger free spectral range, and facilitates bidirectional tuning of the MEMS element. The MEMS VCSEL device uses a high-contrast grating mirror on a MEMS stage as the bottom mirror, a wafer-bonded InP layer with quantum wells for amplification, and a deposited dielectric DBR as the top mirror. A 40 nm tuning range and a mechanical resonance frequency in excess of 2 MHz are demonstrated. more | pdf | html

###### Tweets
PhysicsPaper: Tunable MEMS VCSEL on Silicon substrate. https://t.co/ixE8KBVsYu

###### Other stats
Sample Sizes: None.
Authors: 7
Total Words: 6074
Unique Words: 1875

##### #10. Plane-wave scattering by an ellipsoid composed of an orthorhombic dielectric-magnetic material with arbitrarily oriented constitutive principal axes
###### H. M. Alkhoori, A. Lakhtakia, J. K. Breakall, C. F. Bohren

The extended boundary condition method can be formulated to study plane-wave scattering by an ellipsoid composed of an orthorhombic dielectric-magnetic material whose relative permittivity dyadic is a scalar multiple of its relative permeability dyadic, when the constitutive principal axes are arbitrarily oriented with respect to the shape principal axes. Known vector spherical wavefunctions are used to represent the fields in the surrounding matter-free space. After deriving closed-form expressions for the vector spherical wavefunctions for the scattering material, the internal fields are represented as superpositions of those vector spherical wavefunctions.
The unknown scattered-field coefficients are related to the known incident-field coefficients by a transition matrix. The total scattering and absorption efficiencies are strongly affected by the orientation of the constitutive principal axes relative to the shape principal axes, and the effect of the orientational mismatch between the two sets of principal axes is... more | pdf | html

###### Tweets
PhysicsPaper: Plane-wave scattering by an ellipsoid composed of an orthorhombic dielectric-magnetic material with arbitrarily oriented constitutive principal axes. https://t.co/etQUuzJzIk

###### Other stats
Sample Sizes: None.
Authors: 4
Total Words: 8359
Unique Words: 1835
# Magnetic field around an infinitely long wire

1. Aug 17, 2007
### ehrenfest
Hello, I am trying to integrate
$$\frac{\mu_0 I}{4\pi} \int \frac{d\vec{l} \times \vec{r}}{r^2}$$
in order to get the magnetic field at a point a distance R from a wire with current I. Here r is the distance between the differential length and the point. I integrate over the entire wire (which becomes an angle from 0 to 2 pi). I am off by a factor of 1/pi from the correct answer of mu_0 I/(2 pi R). Is my integral or my integration wrong?

2. Aug 17, 2007
### learningphysics
Your integral is wrong... in the numerator it should be $$d\vec{l} \times \hat{r}$$. $$\hat{r}$$ is different from $$\vec{r}$$: $$\hat{r}$$ denotes a unit vector in the direction of $$\vec{r}$$.

3. Aug 18, 2007
### ehrenfest
You're right. That is what I meant. I am still off by the factor of 1/pi. I tried to change it to an integral with respect to an angle and got:
$$\frac{\mu_0 I}{4\pi} \int_0^{2\pi} \sin \theta \, \frac{r}{r^2} \, d\theta$$
Is that correct?
Last edited: Aug 18, 2007

4. Aug 18, 2007
### pardesi
No. Why do you take the angle to be $$0$$ to $$2\pi$$? The angles should be $$\theta_{1}$$ to $$\theta_{2}$$, where these are the angles made by the wire's end points with the point. In the case of an infinite wire they turn out to be $$-\frac{\pi}{2}, \frac{\pi}{2}$$ (or $$0, \pi$$), depending on how you integrate.

5. Aug 18, 2007
### learningphysics
Are you mixing up the big R with the little r? If that is supposed to be r (which is varying), then the integral is wrong. And the limits should be 0 and pi. I get
$$\frac{\mu_0 I}{4\pi R}\int_0^{\pi} \sin \theta \, d\theta$$
Write everything in terms of R (which is constant) and $$\theta$$, and after taking all the constants outside, inside the integral you should just get $$\sin(\theta)$$.
The way I did it was by setting x = -R/tan(theta), calculating dx (which is dl) in terms of d(theta), and also setting r = R/sin(theta).
For some reason latex is messing up for me.
Last edited: Aug 18, 2007

6. Aug 18, 2007
### ehrenfest
OK. So my integral should be
$$\int \frac{\mu_0 I \,(d\vec{l} \times \hat{r})}{4 \pi r^2} = \int \frac{\mu_0 I \sin(\theta)\, dl \, r}{4 \pi r^2}$$
I am confused about your solution. I am not sure why you introduced x. Anyway, with your trig substitution I get dx = R csc^2(theta) dtheta, and then all the sines cancel!

7. Aug 18, 2007
### learningphysics
There shouldn't be an r in the numerator of the fraction on the right. I think that's what's causing the trouble.

8. Aug 18, 2007
### ehrenfest
YES! Because r-hat is the unit vector. I see. Thanks.

9. Aug 18, 2007
### learningphysics
You're welcome.
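For reference, a compact version of the calculation the thread converges on (standard Biot-Savart setup; theta is the angle between the current element and the unit vector toward the field point):

$$dB = \frac{\mu_0 I}{4\pi}\,\frac{\sin\theta\,dl}{r^2},\qquad l=-\frac{R}{\tan\theta},\quad dl=\frac{R}{\sin^2\theta}\,d\theta,\quad r=\frac{R}{\sin\theta},$$

$$B=\frac{\mu_0 I}{4\pi}\int_0^{\pi}\frac{\sin\theta}{(R/\sin\theta)^2}\cdot\frac{R}{\sin^2\theta}\,d\theta=\frac{\mu_0 I}{4\pi R}\int_0^{\pi}\sin\theta\,d\theta=\frac{\mu_0 I}{2\pi R}.$$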
# Tag Archives: IBP

## Integration by Parts

I suddenly have college degrees to my name. In some sense, I think that I should feel different - but all I've really noticed is that I have much less to do. Fewer deadlines, anyway. So now I can blog again! Unfortunately, I won't quite be able to blog as much as I might like, as I will be traveling quite a bit this summer. In a few days I'll hit Croatia.

Georgia Tech is magnificent at helping its students through their first few tough classes. Although the average size of each of the four calculus classes is around 150 students, they are broken up into 30-person recitations with a TA (usually a good thing, but no promises). Some classes have optional 'Peer Led Undergraduate Study' programs, where TA-level students host additional hours to help students master exercises over the class material. There is free tutoring available in many of the freshman dorms on most, if not all, nights of the week. If that doesn't work, there is also free tutoring available from the Office of Minority Education or the Department of Success Programs - the host of the so-called 1-1 Tutoring program (I was a tutor there for two years). One can schedule 1-1 appointments between 8 am and something like 9 pm, and you can choose your tutor. For the math classes, each professor and TA holds office hours, and there is a general TA lounge where most questions can be answered, regardless of whether one's TA is there. Finally, there is also the dedicated 'Math Lab,' a place where 3-4 highly educated math students (usually math grad students, though there are a couple of math seniors) are available each hour between 10 am and 4 pm (something like that - I had Thursday from 1-2 pm, for example). It's a good theory.

During Dead Week, the week before finals, I had a group of Calc I students during my Math Lab hour. They were asking about integration by parts - when in the world is it useful? At first, I had a hard time saying something that they accepted as valuable - it's an engineering school, and the things I find interesting do not appeal to the general engineering population of Tech. I thought back over my years at Tech (as this was my last week as a student there, it put me in a very nostalgic mood), and I realized that I associate IBP most with my quantum mechanics classes with Dr. Kennedy. In general, the way to solve those questions was to find some sort of basis of eigenvectors, normalize everything, take more inner products than you want, integrate by parts until it becomes meaningful, and then exploit as much symmetry as possible. Needless to say, that didn't satisfy their question.

There are the very obvious answers. One derives Taylor's formula and its error term with integration by parts:

$\begin{array}{rl} f(x) &= f(0) + \displaystyle \int_0^x f'(x-t) \,dt\\ &= f(0) + xf'(0) + \displaystyle \int_0^x tf''(x-t)\,dt\\ &= f(0) + xf'(0) + \frac{x^2}{2}f''(0) + \displaystyle \int_0^x \frac{t^2}{2} f'''(x-t)\,dt \end{array}$

... and so on. But in all honesty, Taylor's theorem is rarely used to estimate values of a function by hand, and arguing that it is useful to know at least the bare bones of the theory behind one's field is an uphill battle. This would prevent me from mentioning the derivation of the Euler-Maclaurin formula as well.

I appealed to aesthetics: Taylor's theorem says that $\displaystyle \sum_{n\ge0} x^n/n! = e^x$, but repeated integration by parts yields that $\displaystyle \int_0^\infty x^n e^{-x} dx=n!$. That's sort of cool - and not as obvious as it might appear at first.
Although I didn't mention it then, we also have the pretty result that n integrations by parts give $\displaystyle \int_0^1 \dfrac{ (-x\log x)^n}{n!} dx = (n+1)^{-(n+1)}$. Summing over n, and remembering the Taylor expansion for $e^x$, one gets that $\displaystyle \int_0^1 x^{-x} dx = \displaystyle \sum_{n=1}^\infty n^{-n}$.

Finally, I decided to appeal to that part of the student that wants only to do well on tests. For a differentiable function $f$ and its inverse $f^{-1}$, we have:

$\displaystyle \int f(x)dx = xf(x) - \displaystyle \int xf'(x)dx = xf(x) - \displaystyle \int f^{-1}(f(x))f'(x)dx = xf(x) - \displaystyle \int f^{-1}(u)du$.

In other words, knowing the integral of $f$ gives the integral of $f^{-1}$ very cheaply, and this is why we use integration by parts to integrate things like $\ln x$, $\arctan x$, etc. Similarly, one gets the reduction formulas necessary to integrate $\sin^n (x)$ or $\cos^n (x)$. If one believes that being able to integrate things is useful, then these are useful. There is of course the other class of functions such as $\cos(x)\sin(x)$ or $e^x \sin(x)$, where one integrates by parts twice and solves for the integral. I still think that's really cool - sort of like getting something for nothing.

And at the end of the day, they were satisfied. But this might be the crux of the problem that explains why so many Tech students, despite having so many resources for success, still fail - they have to trudge through a whole lot of 'useless theory' just to get to the 'good stuff.'
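To make that last trick concrete, here is the standard worked example (my addition, not in the original post). Integrating by parts twice,

$$I=\int e^x\sin x\,dx = e^x\sin x-\int e^x\cos x\,dx = e^x\sin x-e^x\cos x-\int e^x\sin x\,dx,$$

so $2I = e^x(\sin x-\cos x)$ and $I=\tfrac12 e^x(\sin x-\cos x)+C$: the unknown integral reappears on the right and you simply solve for it.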
IN WHICH Ross Rheingans-Yoo, a sometimes-poet and erstwhile student of Computer Science and Math, occasionally writes on things of interest.

# Reading Feed (last update: April 14)

A collection of things that I was happy I read. Views expressed by linked authors are chosen because I think they're interesting, not because I think they're correct, unless indicated otherwise.

### (13)

Blog: Marginal Revolution | The importance of local milieus - "We find suggestive evidence that co-locating with future inventors may impact the probability of becoming an inventor. The most consistent effect is found for place of higher education; some positive effects are also evident from birthplace, whereas no consistent positive effect can be derived from individuals' high school location."

Blog: Shtetl-Optimized | How to upper-bound the probability of something bad - an algorithmist's guideline.

Blog: The Unit of Caring | Anonymous asked: you have the most hilariously naive politics i've ever seen... - "[in conclusion...] And I think anon is wrong about whether I need to grow a backbone."

# What is there to say?

My grandfather was a career scientist at Oak Ridge National Labs for 36 years. He was an international traveler and an international collaborator, advancing human knowledge of materials science as best he knew how -- by sharing what he knew with fellow seekers of truth, regardless of nationality. As a young man, he left a country rent by war to seek an education -- and a home -- and a future in the United States. Here he raised three sons, international travelers and collaborators themselves -- a businessman, a public servant, and a professor of Law.

I can't count the friends I have with friends and colleagues, seeking an education -- seeking a future -- seeking to advance the knowledge of all mankind -- who have had my nation slam our door in their faces this weekend. I feel sick for what my nation has done in my name,

# Re-Thinking Prejudices

I've decided that this post is retroactively part 1 of ? of a recurring series on approaching debates with a mind toward actually changing minds and the world.

There's a statue visible from the window of my office, a poem inscribed near its base:

Not like the brazen giant of Greek fame,
With conquering limbs astride from land to land;
Here at our sea-washed, sunset gates shall stand
A mighty woman with a torch, whose flame
Is the imprisoned lightning, and her name
Mother of Exiles. From her beacon-hand
Glows world-wide welcome; her mild eyes command
The air-bridged harbor that twin cities frame.
"Keep, ancient lands, your storied pomp!" cries she

I lift my lamp beside the golden door!
A Simple Physical Analysis of the Greenhouse Effect and Climate Change

Keywords
Greenhouse Effect, Blackbody Radiation, Climate Change, Solar Radiation, physical modelling, physical analysis

Abstract
This paper draws on physical theories such as thermodynamics and quantum mechanics to develop parameters including the absorptivity, reflectivity, and transmissivity of the atmosphere. On these parameters a physical modelling equation has been established to quantitatively analyze the ongoing global warming and climate change situation. This theoretical research reaffirms the severe effect of the greenhouse gases that are already well known to contribute to global warming and climate change. This paper also reveals the strong possibility that, with more complex and advanced modelling based on multi-layer scenarios, such as gradated layers of the planet's atmosphere, a much more accurate and detailed analysis for predicting the future course of global warming will be possible.

Introduction
It is evident, given the change in global temperature during the past few decades, that the earth is becoming warmer; the interesting issue, for many scientists, is to determine whether such rises in temperature are due to anthropogenic causes or natural ones. There have been remarkable fluctuations of global temperature throughout the long history of the planet Earth [1]. Ice ages and warm periods have alternated, with interglacial periods in between. Considering the past fluctuations, however, the recent abrupt rise in temperature is considered to be part of a grand process in which the earth endures a rise in global average temperature [2]. Several previous studies on the issue, which together form the dominant theory, posit that present global warming may mostly be attributed to rapidly expanding human industrial activity [3].

To clearly understand global warming, however, it is necessary to distinguish the principal solar radiation energy that warms the earth from the radiation emitted by the earth back to space. Normally these two types of radiation are ideally balanced, yet once this balance is broken by greenhouse gases or water vapour, the earth's temperature rises and, as a result, a new balance is soon established [4]. This paper focuses on the exact balance of solar and terrestrial radiation and the effect of the greenhouse gases in restoring a new equilibrium on the earth. To that end, a simple physical model has been created to explain the fundamental physics behind radiation energy and blackbody radiation, which are then combined to analyze the resulting solar radiation. This research also explores the effect of the earth's atmosphere, through its absorption of terrestrial radiation, on the temperature of the earth's surface. Finally, the model, extended with reflectivity and transmissivity, has been used to calculate and analyze changes in the temperature of the surface of the earth.

Theory

Radiation Theory in Physics
Radiation is energy transferred by electromagnetic waves. The intensity of the radiation emitted by an object is determined by its temperature and other properties. Radiation flux density is defined as the radiation energy per unit area and time [4]. Let E and F be the radiation energy and radiation flux density, respectively; then equation (1) below is obtained, in which A is area and t is time.
Figure 1 illustrates the flux density:

F = dE/(dA dt)                                                   (1)

Figure 1. Radiation energy and flux density [4]

The radiation flux density depends on the direction of radiation. The intensity, by contrast, is the flux through a unit area oriented perpendicular to the beam, per unit solid angle. Hence, equation (2) is given:

I = dF/(dΩ cos Θ)                                                (2)

where Ω is the solid angle and Θ is the angle between the radiation ray and the normal to the surface. Figure 2 illustrates the solid angle:

Figure 2. Illustration of the solid angle [4]

If the flux is uniform in all directions (isotropic), then F = πI. Now consider the monochromatic flux of wavelength λ and let Fλ and Iλ be the flux and intensity for that particular wavelength; then F = ∫Fλ dλ and I = ∫Iλ dλ [4]. If the frequency v is used instead of λ, then F = ∫Fv dv and I = ∫Iv dv, in which v = c/λ, with c being the speed of light. When radiation passes through matter, there is absorption, transmission, and reflection. Let radiation with wavelength λ have absorptivity aλ, reflectivity rλ, and transmissivity τλ. The following equation (3) is obtained [5]:

aλ + rλ + τλ = 1                                                 (3)

Theory of Blackbody Radiation
A blackbody is an idealized object that absorbs all the electromagnetic waves radiated onto it [6]. Hence, rλ = τλ = 0 and aλ = 1. Yet even blackbodies emit electromagnetic waves and thus show a frequency distribution of radiation. Photons must be considered in calculating this frequency distribution. Since the photon is a boson, it satisfies the Bose-Einstein distribution with chemical potential 0, and equation (4) is obtained [7]. Figure 3 shows how the Bose-Einstein distribution is defined:

Figure 3. Bose-Einstein Distribution [11]

Before substituting, here are brief explanations of the variables from statistical thermodynamics:

- kB is the Boltzmann constant, the value that links the temperature of a gas, like the atmosphere, with the average kinetic energy of its particles.
- β = 1/(kB T) defines the thermodynamic beta (or "coldness") as the reciprocal of the thermodynamic temperature.
- ħ = h/2π is the reduced Planck constant, which describes the angular momentum of a particle, a photon in this case; all values of angular momentum are quantised, meaning they must be a multiple of ħ.
- ω = 2πf defines the angular frequency as 2π multiplied by the frequency (rotations per second).

n(ω) = 1/(e^(βħω) - 1)                                           (4)

where n(ω) is the Bose-Einstein distribution function of a photon, as described by the graph above. Thus, the average number of photons with a wave vector between k and k + dk in a cavity of volume V is given by:

dN = (V k²/π²) · dk/(e^(βħck) - 1)                               (5)

The mean energy of photons with frequencies between ω and ω + dω is given by the product of the number of photons and the energy of a single photon, ħω, as follows [9]:

u(ω) dω = (ħω³/(π²c³)) · dω/(e^(βħω) - 1)                        (6)
Therefore, the total average energy density over all frequencies is given by:

u = ∫₀^∞ (ħω³/(π²c³)) · dω/(e^(βħω) - 1) = (π²kB⁴/(15ħ³c³)) T⁴   (7)

The radiation power density, defined as the energy emitted per unit area and unit time, of photons whose frequency is between ω and ω + dω is given by equation (8), using the fact that the flux radiated from the surface of a blackbody is related to the energy density as F(ω) dω = (c/4) u(ω) dω, where the factor c/4 comes from the solid-angle integration:

F(ω) dω = (c/4) u(ω) dω                                          (8)

This finally results in the total radiation power density F:

F = σT⁴                                                          (9)

where σ = π²kB⁴/(60ħ³c³) is known as the Stefan-Boltzmann constant. When the blackbody is not completely ideal, the flux density acquires a constant factor, and the equation becomes [10]:

F = εσT⁴                                                         (10)

which is known as the Stefan-Boltzmann law [7]; ε is the emissivity of the blackbody, which has a value between 0 and 1, with values closer to 1 for a more nearly ideal blackbody. Figure 3 illustrates the spectra produced by blackbody radiation and Figure 4 illustrates the Stefan-Boltzmann law:

Figure 3. Spectra from a blackbody [12]
Figure 4. Stefan-Boltzmann law with blackbody radiation [13]

Modelling Solar Radiation and the Surface Temperature of the Earth
The temperature of the earth is determined by how much energy comes in from space and how much returns to space. When the incoming radiation is greater than the earth's outgoing radiation energy, the global temperature rises. To quantify this, let Isolar be the intensity of solar radiation per unit area, which is approximately 1,370 W/m² [4]. The power intercepted by the earth is then the product of Isolar and πR²Earth, the cross-sectional area of the earth with radius REarth. Figure 5 illustrates the structure of the earth's absorptivity, transmissivity and reflectivity.

Figure 5. Structure of the earth's absorptivity, transmissivity and reflectivity [14]

Now, if the earth reflects the incident radiation with reflectivity rEarth, the power density absorbed by the surface of the earth is (1 - rEarth) Isolar; the reflectivity of the earth is approximately 0.3 [7]. The total power absorbed by the earth is therefore given by equation (11):

φin = (1 - rEarth) Isolar πR²Earth                               (11)

On the other hand, the power of the radiation the earth emits back into space is the product of the flux density Fout and the surface area of the earth. Now assume that the earth is a blackbody and has no atmosphere. Then, according to the Stefan-Boltzmann law, the following equation is obtained [8]:

Fout = εσT⁴Earth                                                 (12)

where TEarth is the temperature of the surface of the earth. Hence, the power emitted by the earth is given by:

φout = 4πR²Earth εσT⁴Earth                                       (13)

For the earth to be in energy balance, φin = φout must be satisfied, and thus:

(1 - rEarth) Isolar πR²Earth = 4πR²Earth εσT⁴Earth               (14)

Solving this equation for the surface temperature of the earth yields (15):

TEarth = [(1 - rEarth) Isolar / (4εσ)]^(1/4)                     (15)

With ε = 1, for which the earth is an ideal blackbody, the surface temperature of the earth is approximately 255 K, or -18 °C. The average surface temperature of the earth is known to be about 15 °C, so there is a difference of approximately 33 °C [4]. For this to be rationally explained, the greenhouse effect of the atmosphere must be considered.
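As a quick numerical check of equation (15), a minimal sketch (using the paper's quoted constants; variable names are mine):

# No-atmosphere blackbody estimate of the earth's surface temperature, eq. (15).
sigma = 5.6696e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
I_solar = 1370.0    # solar constant, W m^-2
r_earth = 0.30      # planetary reflectivity (albedo)
eps = 1.0           # emissivity (ideal blackbody)

T_earth = ((1 - r_earth) * I_solar / (4 * eps * sigma)) ** 0.25
print(f"T_earth = {T_earth:.1f} K")   # ~255 K, i.e. about -18 C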
Results

The Greenhouse Effect
It is well known that nitrogen and oxygen, which compose the majority of the atmosphere, neither absorb nor emit radiation energy, while trace gases like water vapour, carbon dioxide, and methane absorb a significant portion of the radiation energy, causing the greenhouse effect. Figure 6 shows how the greenhouse effect arises on the earth:

Figure 6. Greenhouse effect made by heat trapped by greenhouse gases in the atmosphere [15]

Figure 7. Single-Layer Model, in which there is a single layer of atmosphere on the Earth [7]

As seen in Figure 7, in the Single-Layer Model solar radiation enters the earth, the earth's radiation is emitted out to space, and the single specified layer of atmosphere reabsorbs the earth's radiation and re-emits it both up and down. The layer passes short-wavelength solar radiation without any absorption; in contrast, the earth's outgoing radiation is mostly confined to the infrared, so the layer absorbs those rays and simultaneously emits radiation at the atmospheric temperature both out to space and back down to the surface [5]. For radiative equilibrium to be established, there must be energy balance within the atmospheric layer, and likewise on the earth's surface, giving equation (18):

σT⁴ground = Iup,atmosphere + Idown,atmosphere = 2σT⁴atmosphere   (18)

Therefore,

Tground = 2^(1/4) Tatmosphere                                    (19)

Finally, energy balance should likewise be established for the entire planet. Since

Iup,atmosphere = Iin,solar,                                      (20)

σT⁴atmosphere = (1 - rEarth) Isolar / 4                          (21)

If the earth is assumed to be an ideal blackbody, then by equation (21) Tatmosphere is about 255 K. Then, since Tground = 2^(1/4) Tatmosphere, Tground becomes about 303 K. This is 30 °C, which is 48 °C higher than in the case where the earth's atmosphere was assumed not to exist. This means the earth's atmosphere plays an essential role in maintaining a habitable temperature on the earth.

Now assume that there is reflection and transmission, rather than complete absorption, in the atmosphere. Let the longwave reflectivity, transmissivity, and absorptivity be r, τ, and a, so that a + r + τ = 1. With this definition, in the Single-Layer Model above, Iup,atmosphere is replaced with Iup,atmosphere + τIup,ground, and Idown,atmosphere with Idown,atmosphere + rIup,ground. With the atmospheric emissivity equal to its absorptivity a (Kirchhoff's law), the equilibrium condition at the top of the atmosphere becomes:

(1 - rEarth) Isolar / 4 = a σT⁴atmosphere + τ σT⁴ground          (22)

and solving the coupled balances for the surface temperature finally gives:

Tground = [(1 - rEarth) Isolar / (2σ(1 - r + τ))]^(1/4)          (23)

Table 1 shows the calculated results when the reflectivity r and transmissivity τ vary from 0.1 to 0.9, under the condition ε = 1, with r in the rows and τ in the columns.

Table 1: Surface temperature of the Earth depending on reflectivity and transmissivity.

Discussion
From Table 1 it is evident that when τ - r = 0.2, the result becomes closest to 288 K, which is the present average surface temperature of the earth. It is also known that a higher concentration of carbon dioxide in the atmosphere reduces the transmissivity of the earth's atmosphere. Table 1 also confirms that as the transmissivity decreases, the earth's surface temperature rises. This shows that greenhouse gases such as carbon dioxide and water vapour are the most plausible cause of the global warming of the earth. The calculation based on the no-atmosphere model gave a surface temperature of -18 °C, which is 33 °C lower than the known value of the average surface temperature, 15 °C [5].
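A small sketch reproducing the kind of sweep behind Table 1 (assuming the single-layer balance reconstructed in equation (23) above; the paper does not show its exact computation, so this is illustrative):

# Surface temperature vs. longwave reflectivity r and transmissivity tau,
# single-layer model with S = (1 - r_earth) * I_solar / 4 absorbed at the surface.
sigma, I_solar, r_earth = 5.6696e-8, 1370.0, 0.30
S = (1 - r_earth) * I_solar / 4          # absorbed solar flux per unit area, W m^-2

def t_ground(r, tau):
    """Surface temperature (K) for atmospheric longwave reflectivity r and transmissivity tau."""
    return (2 * S / (sigma * (1 - r + tau))) ** 0.25

for r in (0.1, 0.2, 0.3):
    row = [f"{t_ground(r, tau):5.1f}" for tau in (0.1, 0.3, 0.5)]
    print(f"r = {r:.1f}:", *row)         # combinations with tau - r = 0.2 land near 288 K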
This difference explains well the essential role of the atmosphere in the stability of the earth's temperature. The calculation based on the single-layer model gives a surface temperature 15 °C higher than the known value [5]. This error may be mostly due to the assumption that the absorptivity is 100%. When the model was modified so that reflectivity and transmissivity parameters are included in the equation, the accepted value of the surface temperature of the earth was recovered when the difference between transmissivity and reflectivity is 0.2. It is clear that as reflectivity increases or transmissivity decreases, the surface temperature increases, and that the reduction of the transmissivity for the earth's radiation due to the increase of greenhouse gases is the cause of global warming on the planet.

Conclusion
The modelling of climate change has been applied here to the global warming of the earth, based on the theory of solar and terrestrial radiation under the assumption that the earth is a quasi-ideal blackbody. Two models were introduced: one assuming no atmosphere, and one with a single-layered atmosphere. By comparing the two models, it has been reaffirmed that the earth's atmosphere is critically essential for the stability of the earth's surface temperature, and that greenhouse gases have a negative effect on the earth's self-adjusting climate system and hydrological cycles. More advanced models, with 2 or 3 layers, are expected to allow more specific and better results for more precise analysis and prediction. Simple as it may be, the single-layered model explains the behaviour of the earth's climatological cycle quantitatively well and is expected to open new challenges for more advanced research with the advent of AI technology.

Acknowledgements
I would like to thank my beloved parents, who never cease to encourage me when I face unexpected difficulty. I also wish to extend my gratitude to my physics teacher, who helped me finish the paper despite his utmost busy schedule.

Variable | Interpretation | Value | Unit
E | energy of a photon | | J
h | Planck's constant | 6.62608×10⁻³⁴ | J·s
c | speed of electromagnetic radiation | 2.99792458×10⁸ | m·s⁻¹
v | frequency of electromagnetic radiation | | Hz
I | intensity of radiation | |
T | surface temperature of object | | K
A | surface area of object | | m²
σ | Stefan-Boltzmann constant | 5.6696×10⁻⁸ | W·m⁻²·K⁻⁴
P | power emitted from hot object | | W
ε | emissivity of blackbody | |
a | absorptivity | |
r | reflectivity | |
τ | transmissivity | |
kB | Boltzmann constant | 1.38066×10⁻²³ | J·K⁻¹
k | wave vector | | m⁻¹
F | radiation flux (power density) | | W·m⁻²
RE | radius of the Earth | | m
n(ω) | Bose-Einstein distribution function of a photon | |
β | 1/(kB T) | |
Isolar | solar constant | 1.36×10³ | W·m⁻²
rEarth | reflectivity of the Earth | 0.30 |

Table 2. Summary of the physical quantities used in this paper

References
1. Schmunk, B. Robert, 2019, Global Warming from 1880 to 2019, NASA/GSFC GISS.
2. Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley, 2013, Summary for Policymakers: 5th Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
3. http://repositorio.cenpat-conicet.gob.ar:8081/xmlui/bitstream/handle/123456789/497/climateChange.pdf?sequence=1
4. Houghton, John, 2012, Intergovernmental Panel on Climate Change, Cambridge University Press.
5. Reif, Frederick, 1965, Fundamentals of Statistical and Thermal Physics, Waveland Press.
6. Nave, C.R., 2017, Blackbody Radiation, Georgia State University Department of Physics and Astronomy.
7. Gedeon, Mike, 2018, Thermal Emissivity and Radiative Heat Transfer, Materion Technical Tidbits.
8. Archer, David, 2007, Global Warming: Understanding the Forecast, John Wiley and Sons.
9. Kittel, Charles, 1969, Thermal Physics, John Wiley and Sons.
10. Gasiorowicz, Stephen, 1974, Quantum Physics, John Wiley and Sons.
11. Fleagle, Robert G., Businger, Joost A., 1963, An Introduction to Atmospheric Physics, Academic Press, New York.
12. Blundell, Steven J., Blundell, Katherine M., 2010, Concepts in Thermal Physics, Oxford University Press.
13. Nave, C.R., 2017, Distributing Energy Among Bosons, Georgia State University Department of Physics and Astronomy.
14. Bergman, Theodore L., Lavine, Adrienne S., Incropera, Frank P., 2011, Fundamentals of Heat and Mass Transfer, John Wiley & Sons.
15. Fitzpatrick, R., 2006, Plasma Physics: An Introduction, University of Texas.
16. Pidwirny, M., 2006, Fundamentals of Physical Geography.
17. Nave, C.R., 2017, Greenhouse Effect, Georgia State University Department of Physics and Astronomy.
# Splitting a wide table or a longtable into two blocks

I have a very wide table (code below). I would like to put a small gap between the data under "First Group" and "Second Group" - two of the main multicolumn column headers. I tried putting a double pipe (||) between these two columns where I specify the longtable. It splits the table, but it behaves very weirdly in the multicolumn rows. How can I make the table split cleanly from top to bottom, hopefully without having to wade through the miles of code looking for &s? Thanks for your time!

Code for my table follows:

\documentclass[6pt]{article}
\usepackage[portrait, total={5.45in, 8.5in}, top=1.25in, bottom=1.25in, right=1.25in, left=1.5in, centering]{geometry}
\usepackage{longtable}
\usepackage{bm}
\usepackage[table]{xcolor}
\usepackage[none]{hyphenat}
\usepackage[T1]{fontenc}
\usepackage[default]{cantarell}
\usepackage{booktabs}
\pagestyle{empty}
\renewcommand{\familydefault}{\sfdefault}
\renewcommand{\arraystretch}{1.25}
\usepackage{arydshln}
\newcolumntype{x}[1]{>{\raggedright}p{#1}}
\setlength{\tabcolsep}{4pt}
\begin{document}
\begin{center}
\scriptsize{\textbf{A very wide table with two groups}}\end{center}
\setlength\LTleft{0in}
\setlength\LTright{1.25in}
\setlength\LTpre{-0.3cm}
\setlength\LTpost{0in}
\newcommand{\CTPanel}[1]{%
\multicolumn{1}{>{\columncolor{white}}r|}{#1}}
\centering
\begin{longtable}{lp{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}||p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}}
\hiderowcolors
&\multicolumn{9}{c}{First Group}&\multicolumn{9}{c}{Second Group}\\\cmidrule(lr){2-10}\cmidrule(lr){11-19}
something&\multicolumn{2}{c}{AB} & \multicolumn{3}{c}{ABCD} & \multicolumn{4}{c||}{ABCD EFGH}&\multicolumn{2}{c}{AB} & \multicolumn{3}{c}{ABCD} & \multicolumn{4}{c}{ABCD EFGH}\\\cmidrule(lr){2-3}\cmidrule(lr){4-6}\cmidrule(lr){7-10}\cmidrule(lr){11-12}\cmidrule(lr){13-15}\cmidrule(lr){16-19}
Characteristics & + & - & I & II & III & $L^{A}$ & $L^{B}$ & $H^{+}$ & TN & + & - & I & II & III & $L^{A}$ & $L^{B}$ & $H^{+}$ & TN\\
\specialrule{0.02em}{0.1em}{0em}
\specialrule{0.02em}{0em}{0em}
\endfoot
\hline
\showrowcolors
some variable &+&+&-&+&+&-&-&-&-&-&+&+&-&-&+&+&-&-\\\hline
some other variable &+&+&-&+&+&-&-&-&-&-&+&+&-&-&+&+&-&-\\\hline
\end{longtable}
\end{document}

Resulting table (after edits suggested by D. Carlisle): [image]

Based on David Carlisle's answer and comments, this hack gives a better impression of the longtable being split into two distinct blocks. It uses a wide vertical white rule to split the parts of the table. Hence it looks better if all the table rows (or just alternate rows) are coloured with a colour other than white.

\begin{longtable}{lp{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}!{\color{white}\vrule width 5pt}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}p{0.3cm}}

One more interesting fact I learnt through this question is that it is possible to make vertical rules play well with \multicolumn as well as the tricky booktabs package, by using the vrule option.
For example, this code:

\begin{longtable}{lp{0.3cm}!{\color{tableShade3}\vrule}
p{0.3cm}!{\color{white}\vrule width 4pt}
p{0.3cm}!{\color{white}\vrule}
p{0.3cm}!{\color{white}\vrule}
p{0.3cm}!{\color{white}\vrule}
p{0.3cm}!{\color{white}\vrule}
p{0.3cm}!{\color{white}\vrule}
p{0.3cm}!{\color{white}\vrule}
p{0.3cm}!{\color{white}\vrule}
p{0.3cm}!{\color{white}\vrule}
p{0.3cm}!{\color{white}\vrule}}

Not perfect, but functional and somewhat less ugly than the | or || solution.

• Whenever you use a \multicolumn that ends in a column that has any | or @{...} material in its right-hand edge, you need to re-insert it. So if you have |ll||ll|, then typically you will need \multicolumn{2}{|c||}{heading for 1st 2 columns}&. Your vertical lines are there, but shifted up by strange amounts; whatever \cmidrule(lr){2-10}\cmidrule(lr){11-19} does, it really doesn't like vertical rules. If I comment out each line that has those, the vertical lines shift back into position. You could just use standard LaTeX \cline instead, possibly? - David Carlisle Feb 27 '12 at 14:36

• I used \cmidrule from the booktabs package. The main advantage of using \cmidrule is that the rule does not touch the "walls" of the cell. Using \cline makes it appear as a continuous rule, indistinguishable from \hline. \cmidrule comes with a small space before and after the rule, separating it from the text in the cells. I wonder if this space is interfering with the vertical rules. - Ariel Feb 27 '12 at 15:42
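A minimal illustration of the point in David Carlisle's comment (a toy tabular of my own, not the original table): the || in the column preamble is swallowed by a spanning cell unless it is re-inserted in the \multicolumn's own format.

\documentclass{article}
\begin{document}
\begin{tabular}{|ll||ll|}
\hline
% the multicolumn spans a column whose right edge carries "||",
% so the "||" must be re-inserted in the multicolumn's own preamble:
\multicolumn{2}{|c||}{first group} & \multicolumn{2}{c|}{second group}\\
\hline
a & b & c & d\\
\hline
\end{tabular}
\end{document}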
# Group theory required for further study in von Neumann algebras (MathOverflow)

Question (Jiang): After over half a year's study of operator algebras (especially von Neumann algebras), working through the exercises in Fundamentals of the Theory of Operator Algebras 1, 2 by Kadison, I was told that the current research focus is on II_1 factors, and that a certain background in group theory is necessary, such as the free product, specific group constructions, and ergodic actions. So I want to know: are there any good books on the group theory that might be necessary for further study of von Neumann theory?

Answer (ght): There are several books that are excellent. Here are a few of them:

- An Invitation to von Neumann Algebras by V. Sunder
- Theory of $C^{*}$-Algebras and von Neumann Algebras by B. Blackadar
- Finite von Neumann Algebras and Masas by A. Sinclair and R. Smith

For free probability I recommend:

- Free Random Variables: A Noncommutative Probability Approach to Free Products with Applications to Random Matrices, Operator Algebras, and Harmonic Analysis by K. Dykema, A. Nica and D. Voiculescu
- Lectures on the Combinatorics of Free Probability by A. Nica and R. Speicher

I hope it helps!

Answer (Jon Bannon): As for a book on group theory that may be useful or interesting to read for further study of $II_{1}$ factors, I think that de la Harpe's book Topics in Geometric Group Theory is good for this. The reason I say this is that geometric group theory is concerned with the "large scale" structure of groups, and concerns ways that groups can be equivalent that are weaker than group isomorphism. A lot of contemporary $II_{1}$ factor theory is also concerned with weak equivalence of groups and their measure-preserving actions. I'll say a bit more below, for context.

Before I do, though, let me mention that you should check out Sorin Popa's ICM talk, Deformation and rigidity for group actions and von Neumann algebras, a preprint listed on his website (http://www.math.ucla.edu/~popa/), and read all of it. This gives a really good intuition about a big part of what's going on in the subject right now, and says everything I'll say here and more.

One classical construction of a $II_{1}$ factor using a group $G$ is the (left) group von Neumann algebra, i.e. the commutant of the right regular representation of an i.c.c. group G (one in which all nontrivial conjugacy classes are infinite) in $B(\ell^2 G)$. If two i.c.c. groups are isomorphic, then certainly their group von Neumann algebras are too.
On the other hand, it is very difficult in general to tell whether the group von Neumann algebra construction "remembers" the group used to construct it. For example, any two i.c.c. amenable groups have isomorphic group von Neumann algebras, so if you begin with an i.c.c. amenable group and whip up the group von Neumann algebra, it won't remember which group you used, but will only remember the amenability. It turns out that this construction is also sensitive to Kazhdan's Property (T) and to the Haagerup property (a weak amenability that is strongly non-(T)), in the sense that these properties are reflected in the structure of the von Neumann algebra. This construction is also sensitive to freeness, as in the freeness of the generators of a group. (See Gabriel's answer above.) Gromov's hyperbolicity is also reflected in the structure of the group von Neumann algebra, in that this "large scale" property severely governs the structure of the von Neumann algebra: the construction applied to Gromov hyperbolic groups gives rise to solid factors. These all seem to be "global" group properties, and this is why I'm suggesting geometric group theory.

We're sort of listening for echoes of the group in the von Neumann algebra built from it...

The broad question is: what properties of a group survive the construction of a $II_{1}$ factor using that group?

Another classical construction of a $II_{1}$ factor is the group-measure space construction, which in modern terms is the way we build the crossed-product von Neumann algebra from a discrete group and an ergodic measure-preserving action of that group on a standard Borel space. Check out Popa's above-mentioned ICM talk for more weak equivalences for groups surrounding this construction.

If you look at other ways of constructing groups, you can consider the same question.

Good luck with your study of von Neumann algebras!
# Find the Surface Area of a Sphere of Radius 7 cm - Geometry

Find the surface area of a sphere of radius 7 cm.

#### Solution

Radius of the sphere: r = 7 cm.

Surface area of the sphere: S = $4\pi r^2 = 4 \times \frac{22}{7} \times 7^2$ = 616 cm².

Thus, the surface area of the sphere is 616 cm².

Concept: Surface Area, Volume, and Capacity

#### APPEARS IN
Balbharati Mathematics 2 Geometry 10th Standard SSC Maharashtra State Board, Chapter 7 Mensuration, Practice set 7.1 | Q 4 | Page 145
# Online 50 Lions Online Pokies Spins Real Money

This is a theoretical percentage which indicates how much of the overall bets can be expected to be paid back as winnings. For example, if you bet £100 on a slot with an RTP of 98%, the theoretical return would be £98. Of course, this is purely a mathematical calculation, often based on millions of spins.

• You can claim a ten free spins no deposit bonus upon registering a brand-new account.
• Institutional commitment (slot machines in Las Vegas) to hsat program goals.
• Yes, you will need to register with an online casino before you are able to begin using your free spins.

There are also many great-looking 3D slots out there as well, and some of the biggest payouts at CaesarsGames.com. If you're planning a Disney vacation, this system will also accept Famicom games. When you make an account on this online casino, you will see the list of payment methods. The difference between them is usually processing time, which can vary from a couple of hours to a week, and those details are on the website. Also, the Foxy Games minimum deposit is usually £10 to £15, but it depends on the method of your choice.

## 50 Lions Online Pokies | A Massive Selection Of Casino Games

Moreover, each game has been developed to a high standard; nothing has been compromised in order to bring you quick, simple gaming. Players also have the option of streaming the games rather than downloading them, which means saving their data package if they're outside of Wi-Fi range. There is a computer-accessible webpage that gives you a lot of information about what's on offer, but there isn't much to interact with when you visit. It feels like Foxy Casino Mobile is still a work in progress and is something that will develop over time.

## Free Spins No Deposit

No-deposit free spins are very useful to a punter for a lot of reasons. They are the simplest way to enjoy casino winnings without any financial obligations on your part. In addition, you will have the freedom to test-run a particular game to find out whether you like it or not, all free of charge. Almost all casino free spins bonuses have a cap on the maximum amount of winnings you can claim. The casino needs to be sure that it has a chance to win back the money it just gave you and, with any luck, secure some of your own cash, too. The differentiating factors for each of these varieties usually have to do with the method and details of how the casino doles out the spins.

## Foxy Games Mobile Casino

However, to withdraw the funds, various things will be required of you depending on the online casino. Some will require that you wager your winnings obtained with the free spins before a withdrawal becomes possible. In other cases, you have to deposit a small sum into your casino account after earning from free spins, so that your account can be verified. Coin Master 50-spin rewards most commonly appear during in-game events, like those that reward you for raiding or battling other players. There's also a small chance to get this number from daily links, so bookmark this web page and check back often. Plus, you'll get rewards with your second, third, and fourth deposits. Sign up for 7Bit Casino and you can claim a huge seventy-five free spins no deposit bonus to use on the Aloha King Elvis slot.
While there is no definite answer, it's best to only play a high-volatility slot if you have a fairly large amount of bonus money - such as with a casino bonus. That's because a low bankroll can often be wiped out before any winning spins on a high-variance slot actually land. This is often the case when wagering free-spins winnings, as you don't usually start with quite as much. To beat the wagering requirements, you need to use the bonus money to place bets until you've reached a certain amount.

## Why Does Bingo Diamond Give Away 150 Free Spins?

Twin Casino gives you 400 free spins across two deposits (deposit as little as €$10 on each to get the full 400) and €$400 in bonuses if you register today. Be aware that the gambling trade is filled with scammers who try to mislead players by promising massive payouts, leaving players to lose their hard-earned cash.

## Vegas Mobile Casino

To ensure you're getting the best bonus, don't be dazzled by the promise of 200 free spins if it comes with lots of T&Cs. Consider which offer will work out best overall, and then you can play safe in the knowledge that you're getting a great deal as well as having fun. The Foxy Dynamite free slot is a Bally 5-reel, 99-changeable-lines video game.
# Change-making problem - counterexample for greedy algorithm

Let $D$ be a set of denominations and $m$ the largest element of $D$. We say $c$ is a counterexample if the greedy algorithm gives an answer different from the optimal one. I found a statement that if the greedy algorithm is non-optimal for a given set, the smallest counterexample $c$ is smaller than $2m - 1$. Is this really true, and how can it be proved? If not, is there a relatively small range in which to look for the smallest counterexample?

- How do you define the optimal solution? What objective function do you optimize? – Yury Dec 1 '12 at 3:50
- @Yury: You want to minimize the number of coins used to make up the given amount. – Brian M. Scott Dec 1 '12 at 7:24
- I think you need some assumptions, e.g. for coins $\{2-\sqrt{2}, 1, \sqrt{2}\}$, the first natural number for which the greedy algorithm doesn't work is $3 > 2\sqrt{2}$. If you are talking about coins $\in \mathbb{N}$, then I remember something similar, but I can't recollect the source. The articles mentioned by Hendrik look promising; however, it was not there that I saw it. – dtldarek Dec 1 '12 at 11:12

## 2 Answers

The paper "Optimal Bounds for the Change-Making Problem" (by Kozen and Zaks, TCS 1994) gives a bound of $x < c_m + c_{m-1}$, where $x$ is the counterexample and $c_m$ and $c_{m-1}$ are the largest and second-largest coins. They claim the bound is optimal. (I just browsed the paper, and I did not take the time to understand whether this automatically means you cannot express it in terms of the largest coin value alone.)

Jeffrey Shallit (famous for his paper "What this country needs is an 18c piece") in 1998 posted a bibliography on the subject: mathforum. Shallit adds "... it is a problem in which it is notoriously easy to make wrong conclusions -- some of them have even been published". Good luck with your investigations.

I recently came up with a solution that seemed to show that if the following two conditions are satisfied, the greedy algorithm yields the optimal solution.

a) The G.C.D. of all the denominations except 1 equals the second-smallest denomination.

b) The sum of any two consecutive denominations must be less than the next consecutive denomination, e.g. $c_2 + c_3 < c_4$ (where $c_1 = 1$ and $c_2, c_3, c_4$ are coin denominations in ascending order).

I understand this is not a complete solution. However, I believe that if these two conditions are met, the greedy algorithm will yield the optimal solution.

- Duplicate answer. As there, this would be more useful if accompanied by a proof. – robjohn Mar 6 '13 at 9:56
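The Kozen-Zaks bound makes a brute-force search practical: to test whether a denomination set admits a counterexample at all, it suffices to compare greedy against an exact dynamic program for every amount below $c_m + c_{m-1}$. A minimal sketch (the function names and the example set are my own illustration, not from the paper):

def greedy_count(coins, amount):
    """Number of coins the greedy algorithm uses (largest coin first)."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def optimal_count(coins, amount):
    """Exact minimum number of coins via dynamic programming."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        best[a] = min((best[a - c] + 1 for c in coins if c <= a), default=INF)
    return best[amount] if best[amount] < INF else None

def smallest_counterexample(coins):
    """Search all amounts below c_m + c_{m-1}, per the Kozen-Zaks bound."""
    top_two = sorted(coins)[-2:]
    for amount in range(1, sum(top_two)):
        if greedy_count(coins, amount) != optimal_count(coins, amount):
            return amount
    return None  # greedy is optimal for this denomination set

print(smallest_counterexample([1, 3, 4]))  # 6: greedy gives 4+1+1, optimal is 3+3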
# Create Beautiful Code Listings with Minted

## Learn how to make code listings in LaTeX look very aesthetic using the minted package

It is very common to have to write code listings in LaTeX in order to illustrate a given algorithm, especially in computer science-related manuals. For this purpose, LaTeX offers the verbatim environment, which lets you write code listings like this:

which is obtained by the following code:

% Code listing in LaTeX using verbatim
\documentclass{article}
\begin{document}
\begin{verbatim}
def show_verbatim():
    print("Hello verbatim world!")

show_verbatim()
\end{verbatim}
\end{document}

The contents of the environment are literal and therefore insensitive to control sequences, environments, etc. It also has an in-line equivalent, \verb, which lets you enclose the text inside any kind of delimiters. This kind of listing looks rather dull; we all want the nice color schemes we are used to in our IDEs and text editors. These colors not only make the code more attractive, they make it more readable, especially for long listings. In this tutorial I'm going to show you how to use the minted package to produce nice-looking and customizable code listings with highlighted syntax.

## Installation

In this section we are going to see quickly how to install minted. Please note: if you can load the minted package via \usepackage{minted} and compile without any errors, then you should skip to the next section.

The main difference between minted and similar packages is that the former makes use of the additional software Pygments, which provides much better syntax highlighting. To install Pygments you need Python 2.6 or later installed on your computer. In case you don't have Python already, you can download it from here. Once you have Python installed, you can use the pip tool to install Pygments: just run pip install pygments in the command line as an administrator.

minted also requires a series of LaTeX packages to be installed and up to date in your system: keyval, ifthen, kvoptions, calc, fancyvrb, ifplatform, fvextra, pdftexcmds, upquote, etoolbox, float, xstring 2.3, xcolor, lineno, framed, shellesc (for luatex 0.87+). All of them come with the usual LaTeX distributions, so you probably don't have to worry about this.

Once you have all of this, you can install minted like any other package, from your LaTeX package manager or from CTAN.

Since minted needs to call Pygments, which is an external program, you need to allow the LaTeX processor to do so by passing it the -shell-escape option. So you need to call the processor like this:

pdflatex -shell-escape yourfile.tex

The same flag applies to other compilers if you are using them. If all of this seems too much for you and you don't want to worry about anything I have written about so far, I strongly recommend you use Overleaf.

## Code Highlighting in LaTeX

This is an example of minted basic usage. We use the minted environment to write some Python code:

% Using Minted for code listing
\documentclass{article}
\usepackage{minted}
\begin{document}
\begin{listing}
\begin{minted}{Python}
def hello_world():
    print("Hello floating world!")
\end{minted}
\caption{Floating listing.}
\label{lst:hello}
\end{listing}
\end{document}

Compiling the first code yields the following result:

So we see that the use of the minted package is straightforward: we only have to start a minted environment and put inside braces the language highlighting we want.

### Code Listing of a File in LaTeX

Instead of going smaller, we can go bigger, printing and highlighting whole files.
For this purpose there is the \inputminted{tex}{filename.tex} command, where you pass the language highlighting and the file you want to input, and this file is written as a block of minted code.

% Using Minted for file code listing
\documentclass{article}
\usepackage{minted}
\begin{document}
\inputminted{tex}{ex1.tex}
\end{document}

where the ex1.tex file contains the following code:

% ex1.tex file content
\documentclass{article}
\usepackage{minted}
\begin{document}
\begin{listing}[htbp]
\begin{minted}{Python}
def hello_world():
    print("Hello floating world!")
\end{minted}
\caption{Floating listing.}
\label{lst:hellow}
\end{listing}
\end{document}

Compiling the previous code, we get the following result:

### Inline Code Listing in LaTeX

To typeset code inline, the command \mintinline{c}|int i| is provided. Again, the delimiters can be changed according to the listed code. Here is an example:

% Inline code listing
\documentclass{article}
\usepackage{minted}
\begin{document}
Here is an example of inline code \mintinline{c}|int i|.
\end{document}

Compiling this code yields:

## Minted Command and Environment Options

All the above-mentioned minted code highlighting commands accept the same set of options, as a comma-separated list of key=value pairs. Here is a list of the most remarkable ones:

• autogobble (boolean): Automatically remove all common leading whitespace from code. If the automatic behavior does not suit you, you can instead use gobble (integer) and pass an integer to it, so that that number of characters is removed from the start of all lines.
• breaklines (boolean): Automatically break long lines in minted environments and \mint commands, and wrap longer lines in \mintinline. By default, automatic breaks occur only at space characters. You can set breakanywhere to true to enable breaking anywhere. There are further breaking options (such as breakbytoken, breakbytokenanywhere, breakbefore and breakafter) that we are not going to explore here. Refer to the package documentation on CTAN for more details.
• bgcolor (string): Sets the background color of the listing. The string must be the name of a previously-defined color.
• codetagify (list of strings): Highlight special code tags in comments and docstrings. The default list of code tags is: XXX, TODO, BUG, and NOTE.
• curlyquotes (boolean): When your keyboard doesn't have left and right quotes, the backtick and the single quotation mark ' are used as left and right quotes in most places (for example, in LaTeX). minted writes them literally by default, but if this option is set to true, they are substituted by the curly left and right quotes.
• escapeinside (string): This makes minted escape to LaTeX between the two characters specified in the string. All code between the two characters will be interpreted as LaTeX and typeset accordingly. The escape characters need not be identical. This has a few exceptions, however: escaping does not work inside strings and comments (in the latter case, you may want to use the texcomments option).
• fontfamily, fontseries, fontsize, fontshape: These self-descriptive options let you customize the font used inside the minted environment.
• linenos (boolean): Enables line numbers. It can be accompanied by numberblanklines (boolean), which enables or disables the numbering of blank lines.
• mathescape (boolean): Enables the usual math mode inside comments.
• samepage (boolean): Forces the whole listing to appear on the same page, even if it doesn't fit.
• showspaces (boolean): Enables visible spaces, rendered with a small horizontal line. You can redefine the visible-space character by passing a new macro to the option space (macro); by default, the \textvisiblespace macro is used.
• stripall (boolean): This option strips all leading and trailing whitespace from the input.
• tabsize (integer): This is the number of spaces a tab will be converted to, unless obeytabs (boolean) is active, in which case tabs will be preserved. If you decide to use tabs, similar options for showing tabs and setting the visible-tab character are available.

For every boolean option, you can omit the =true part and just write the option name.

## Floating Listings

In certain situations, or for certain tastes, it is better to treat code listings as floating objects. For this purpose, minted provides the listing environment, which you can wrap around any source code block. As is usual for floating objects, you can provide a \label{} and a \caption{}, as well as the usual placement specifiers; check the first example. As happens with figures and tables, you can create an index of all of the floating listings in the document using \listoflistings.

In case you are going to use floating listings and reference them within the text, you may want to change how LaTeX counts them: you can choose whether the chapter or the section should be used. In order to do that, you have to load the package with the chapter or section option:

% List of listings numbering
% Count follows sections' numbering
\usepackage[section]{minted}
% Count follows chapters' numbering
\usepackage[chapter]{minted}

## Customizing the Syntax Highlighting

If you are reading this you probably like programming, and in that case you almost surely like to customize the color schemes of your code editors; the appearance of your minted listings shouldn't be any less customizable. Instead of the default minted style, you may set the style you want using \usemintedstyle[language]{stylename}. This sets stylename for the given language; if you don't specify any language, the style is set for the document as a whole (a short example follows at the end of this post). To try out all the available stylesheets with any given language you can visit the Pygments website. In fact, you can also see the list of all supported languages there, which are probably more than you have heard of.

And that's all for the minted package. I'm sure from now on your listings in LaTeX will look very aesthetic, which will make reading your documents even more pleasing.
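As promised above, here is a minimal illustration of \usemintedstyle (monokai and bw are standard Pygments style names; any other Pygments style works the same way):

% Choosing Pygments styles with \usemintedstyle
\documentclass{article}
\usepackage{minted}
\usemintedstyle{monokai}     % document-wide style
\usemintedstyle[php]{bw}     % override: black-and-white for PHP listings only
\begin{document}
\begin{minted}{python}
print("styled with monokai")
\end{minted}
\end{document}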
# Gamma of Put from Call

Ford stock is trading around $16.40. The call option at the $16 strike has a gamma of 0.617. What is the gamma of the put at the $16 strike?
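The question as posted has no answer attached; the standard route (a general fact of option pricing, not specific to this problem) is put-call parity. For European options with the same strike and expiry,

$$C - P = S - K e^{-rT},$$

and differentiating twice with respect to the stock price $S$ gives

$$\frac{\partial^2 C}{\partial S^2} - \frac{\partial^2 P}{\partial S^2} = 0 \quad\Longrightarrow\quad \Gamma_{\text{put}} = \Gamma_{\text{call}} = 0.617.$$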
# Is it decidable whether a given Turing machine moves its head more than 481 cells away from the left-end marker, on input ε?

While reading some problems on decidability, I came across the following resource: https://www.isical.ac.in/~ansuman/flat2018/tm-more-undecidable.pdf

Here, on page 12, it is written that the problem is decidable, with the following argument: "Yes. Simulate M for up to m^481 · 482 · k steps. If M visits the 482nd cell, accept, else reject."

I am quite confused by the step count. Can anyone please explain what this means, or maybe point me to some resources where I can find a proper explanation? Image of the slide

The count m^481 · 482 · k is the number of distinct configurations the machine can be in while its head stays within the first 482 cells: at most m^481 possible contents of the 481 writable cells (where m is the size of the alphabet; the left-end marker is fixed), times 482 possible head positions, times k states. In particular, if a configuration $$c$$ yields a run that gets back to $$c$$ at some point, then the run is stuck in a loop (note that this is not a necessary condition in general: a run on an unbounded tape might not halt while never repeating a configuration). But a run confined to finitely many cells that lasts longer than the number of distinct configurations must, by the pigeonhole principle, repeat one. So if M has not visited the 482nd cell within m^481 · 482 · k steps, it has repeated a configuration without leaving the region, and therefore it never will leave it.
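To make the argument concrete, here is a schematic sketch of the decision procedure (my own illustration: the transition-table encoding `delta` is an assumption, and the bound is astronomically large, so this demonstrates decidability rather than a practical algorithm):

def exceeds_481(delta, start_state, m, k):
    """Decide whether the TM ever moves its head past cell 481 on blank input.

    delta: dict mapping (state, symbol) -> (new_state, written_symbol, move),
    with move in {-1, +1}; m = alphabet size, k = number of states.
    Cell 0 holds the left-end marker '#'.
    """
    tape = ["#"] + ["_"] * 481           # marker plus 481 writable cells
    state, head = start_state, 0
    bound = (m ** 481) * 482 * k         # number of distinct confined configurations
    for _ in range(bound + 1):
        if head > 481:
            return True                  # reached the 482nd cell: accept
        key = (state, tape[head])
        if key not in delta:
            return False                 # machine halts inside the region: reject
        state, tape[head], move = delta[key]
        head += move
    return False                         # pigeonhole: a configuration repeated, so reject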
useR: The Week in Review

That's it for #useR2018. After 6 keynotes, 132 parallel sessions, many more lightning talks and posters, and an all-important conference dinner, we've reached the end of the week. This was my first proper conference since 2015. I had almost forgotten how it felt to be surrounded by hundreds of people who are just as passionate (if not more so) about your tiny area of specialised knowledge as you are. I took notes for the three tutorials I went to, but I wanted to take a moment to review the week as a whole, including the talks that stood out to me. You can find my tutorial notes below:

All talks and tutorials were recorded, so keep an eye out for them on the useR2018 site. The #rstats community is active on Twitter, so check out the #useR2018 hashtag as well.

A quick personal note

I'd like to declare my own biggest success and biggest failure of the conference:

• Biggest personal win: I posted notes from each of the three tutorials I went to! This forced me to learn more about PCA, which is a big win.
• Biggest personal not-win: I didn't present anything. I told myself this was because I had nothing to present, but I'm not so sure that's true.

Talk highlights

• Steph de Silva spoke of R as not just a language but a community. And in this community you go from seeing R as something you use to seeing R as something you share. The R subculture(s) have real power to contribute in a world where data affects decisions.
• Rob Hyndman's new fable package looks super easy to use. It's a tidyverse-compatible replacement for his extremely popular forecast package. He calls it "fable" because "fables aren't true but they tell you something useful about reality, just like a forecast."
• Thomas Lin Pedersen is rewriting the gganimate package and it looks so cool. He described visualisation as existing on a spectrum between static, interactive, and animated. Traditional media (eg. newspapers) use static visualisation and modern journalism websites use interactive visualisation, but animated visualisation is often found in social media.
• Katie Sasso introduced standalone Shiny applications using Electron. I am so keen to try these out! Imagine being able to distribute a Shiny app to someone without them needing to so much as install R.
• Nicholas Tierney's maxcovr package makes it easier to solve the maximal coverage location problem in R. His choice of example was apt: Brisbane offers free public wifi in and around the CBD, and the maxcovr package can be used to identify optimal placement of new routers to improve the coverage population and area.
• Roger D. Peng spoke about teaching R. I loved the quote from John Chambers on S (R's predecessor): "We wanted users to begin in an interactive environment, where they did not consciously think of themselves as programming… they should be able to slide gradually into programming." This is the R value proposition in a nutshell for me: you don't have to jump into the developer side of things, but if you want to start going down that path it's a gradual transition.
• Jenny Bryan spoke about code smells and feels. Whenever I read something cool and useful about R I look at the author and there's a good chance it's Jenny. I like that the advice in her talk is more "use this in moderation" rather than the prescriptive "don't do this".
• Danielle Navarro shared her experience in teaching R to psychology students. This one resonated, especially her emphasis on student fear. Student fear stops learning before it can begin!
• Martin Mächler of the R Core team discussed an often-neglected topic: numerical precision. It was a chance to get into the guts of R. He also gave the following bizarre example: unique((1:10)/10 - (0:9)/10) ## [1] 0.1 0.1 0.1 0.1 “10.0 times 0.1 is hardly ever 1.0” - The Elements of Programming Style by Brian W. Kernighan and P. J. Plauger Wrapping up Thank you to the organisers and to everyone who contributed to the conference. I met a tonne of people here and I can’t mention everyone. Thank you to the following people for existing and for making my #useR2018 experience extra-special: Adam Gruer, Dale Maschette, Emily Kothe, John Ormerod, Steph de Silva, Charles T. Gray, and Ben Harrap.
Before learning how to solve quadratic inequalities, it is important that you:

Quick review of the graphs and equations of linear inequalities.

The graphs of quadratic inequalities follow the same general relationship. Greater-than inequalities are the region above the equation's graph, and less-than inequalities are made up of the region underneath the graph of the equation.

#### What is the solution of a quadratic inequality?

Below, you will learn a formula for solving quadratic inequalities. First, it's important to try to understand what a quadratic inequality is and what its solution is. So let us explore a graphical solution for a quadratic inequality.

We will examine the quadratic inequality $$y > x^2 -1$$. The yellow region represents the graph of the quadratic inequality. The red line segment from $$(-1, 0)$$ to $$(1, 0)$$ represents the solution itself, graphically. The solution, graphically, is always where the graph of the inequality overlaps with the x-axis.

Diagram 7

The same basic concepts apply to quadratic inequalities like $$y < x^2 -1$$ from Diagram 8. This is the same quadratic equation, but the inequality has been changed to $$\red <$$. In this case, we have drawn the graph of the inequality using a pink color, and that represents the graph of the inequality. The solution, graphically, is always where the graph of the inequality overlaps with the x-axis.

Diagram 8

### General Formula for Solutions of Quadratic Inequalities

The table below gives two general formulas that express the solution of a quadratic inequality for a parabola that opens upwards (i.e. a > 0) whose roots are r1 and r2.

The Greater Than Inequality: 0 > ax² + bx + c — Solution: {x | r1 < x < r2}

The Less Than Inequality: 0 < ax² + bx + c — Solution: {x | x < r1 or x > r2}

We can reproduce these general formulas for inequalities that include the quadratic itself (i.e. ≥ and ≤).

The Greater Than Inequality: 0 ≥ ax² + bx + c — Solution: {x | r1 ≤ x ≤ r2}

The Less Than Inequality: 0 ≤ ax² + bx + c — Solution: {x | x ≤ r1 or x ≥ r2}
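Here is a small sketch of how those solution sets fall out of the roots numerically (the helper name is mine, not from the page):

import math

def solve_quadratic_inequality(a, b, c):
    """Solution sets of ax^2 + bx + c < 0 and > 0 for an upward parabola (a > 0)."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # no real roots: ax^2 + bx + c > 0 everywhere (for a > 0)
    r1 = (-b - math.sqrt(disc)) / (2 * a)
    r2 = (-b + math.sqrt(disc)) / (2 * a)
    return {
        "ax^2+bx+c < 0": f"{{x | {r1} < x < {r2}}}",
        "ax^2+bx+c > 0": f"{{x | x < {r1} or x > {r2}}}",
    }

# y = x^2 - 1: roots at -1 and 1, matching Diagrams 7 and 8
print(solve_quadratic_inequality(1, 0, -1))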
# Determinant of the variance-covariance matrix

1. Oct 10, 2009

### kingwinner

Let ∑ be the variance-covariance matrix of a random vector X. The first component of X is X1, and the second component of X is X2. Then

det(∑) = 0
⟺ the inverse of ∑ does not exist
⟺ there exists c ≠ 0 such that (c1)(X1) + (c2)(X2) = d almost surely (i.e., (c1)(X1) + (c2)(X2) is equal to some constant d a.s.)

I don't understand the last part. Why is it true? How can we prove it? Any help is appreciated! :)

2. Oct 10, 2009

Write
$$\Sigma = \begin{bmatrix} \sigma_1^2 & \sigma_{12}\\ \sigma_{21} & \sigma_2^2\end{bmatrix}$$
and then write down the expression for its determinant, noting that it equals zero. Now, take
$$D = c_1X_1 + c_2X_2$$
and use the usual rules to write out the variance of $$D$$ in terms of $$c_1, c_2$$ and the elements of $$\Sigma$$. Compare the determinant to the expression just obtained - you should see why the statement is true.
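Filling in the comparison the answer hints at (a standard computation, not spelled out in the original thread): with $$\sigma_{12} = \sigma_{21}$$,

$$\det \Sigma = \sigma_1^2\sigma_2^2 - \sigma_{12}^2, \qquad \operatorname{Var}(D) = c^{\mathsf T}\Sigma c = c_1^2\sigma_1^2 + 2c_1c_2\sigma_{12} + c_2^2\sigma_2^2 .$$

If $$\det\Sigma = 0$$, take $$c = (\sigma_{12}, -\sigma_1^2)$$; then $$\operatorname{Var}(D) = \sigma_1^2(\sigma_1^2\sigma_2^2 - \sigma_{12}^2) = 0$$, so $$D$$ is almost surely equal to its mean. (If $$\sigma_1^2 = 0$$, so that this $$c$$ is zero, then $$X_1$$ is itself a.s. constant and $$c = (1, 0)$$ works.) Conversely, if $$c_1X_1 + c_2X_2 = d$$ a.s. for some $$c \ne 0$$, then $$c^{\mathsf T}\Sigma c = \operatorname{Var}(D) = 0$$ with $$c \ne 0$$, so $$\Sigma$$ is singular and $$\det\Sigma = 0$$.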
# From disintegration to conditioning

There is a paper "Conditioning as disintegration" by J. T. Chang and D. Pollard, which seems to construct the regular conditional probability from the disintegration. In particular, from Definition 1, Theorem 1 and Theorem 2.(iii) in that paper, we can summarize a theorem as follows:

Theorem. Let $$\Omega$$ be a Polish space, $$\mathcal F = \mathcal B(\Omega)$$ be the Borel $$\sigma$$-field for $$\Omega$$, and $$\mathbf P$$ be a probability measure on $$(\Omega,\mathcal F)$$. Let $$(E,\mathcal E)$$ be a measurable space, with $$\mathcal E$$ countably generated and containing all the singleton sets. Let $$X:(\Omega,\mathcal F) \to (E,\mathcal E)$$ be a random element. Denote by $$P_X := X_*\mathbf P = \mathbf P\circ X^{-1}$$ the pushforward measure of $$X$$ on $$(E,\mathcal E)$$. Then there is a family $$\{\mathbf P^x\}_{x\in E}$$ of probability measures on $$(\Omega,\mathcal F)$$, such that:

• For every $$x\in E$$, the probability measure $$\mathbf P^x$$ concentrates on the event $$\{X = x\}$$.
• For all $$A\in\mathcal F$$, the mapping $$\mathbf P^\cdot(A): (E,\mathcal E)\to [0,1]$$ is measurable.
• For all $$A\in\mathcal F$$ and $$B\in\mathcal E$$,
$$\mathbf P\left(A\cap X^{-1}(B)\right) = \int_B \mathbf P^x(A)\, P_X (dx).$$

Moreover, the family $$\{\mathbf P^x\}_{x\in E}$$ is uniquely determined up to an almost sure equivalence: if $$\{\mathbf Q^x\}_{x\in E}$$ is another family of probability measures on $$(\Omega,\mathcal F)$$ that satisfies the above conditions, then
$$P_X\{x\in E: \mathbf P^x \ne \mathbf Q^x\} = 0.$$

Here is the problem. Consider the special case where $$E=\Omega$$ and $$\mathcal E$$ is a sub-$$\sigma$$-field of $$\mathcal F$$ that contains all singletons. Since $$\Omega$$ is second countable, its Borel $$\sigma$$-field $$\mathcal F$$ must be countably generated and contain all singletons. As a sub-$$\sigma$$-field of $$\mathcal F$$, $$\mathcal E$$ is also countably generated. Let $$X = \mathrm{Id}$$. Then $$P_\mathrm{Id} = \mathbf P$$ and $$\sigma(\mathrm{Id}) = \mathcal E$$. Now all assumptions in the theorem are fulfilled. Hence, we get a $$\mathbf P$$-a.s. unique family of probability measures $$\{\mathbf P^\omega\}_{\omega\in\Omega}$$ on $$(\Omega,\mathcal F)$$ satisfying:

1. For every $$\omega\in\Omega$$, the probability measure $$\mathbf P^\omega$$ concentrates on the singleton $$\{\omega\}$$.
2. For all $$A\in\mathcal F$$, the mapping $$\mathbf P^\cdot(A): (\Omega,\mathcal E)\to [0,1]$$ is measurable.
3. For all $$A\in\mathcal F$$ and $$B\in\mathcal E$$,
$$\mathbf P\left(A\cap B\right) = \int_B \mathbf P^\omega(A)\, \mathbf P (d\omega).$$

Statements 2 and 3 are exactly the formulation of conditional probability, that is, $$\mathbf P^\omega(A) = \mathbf P(A\mid \mathcal E)(\omega)$$. However, if we combine them with statement 1, then something quite strange happens. Indeed, since $$\mathbf P^\omega$$ concentrates on $$\{\omega\}$$, we have $$\mathbf P^\omega(A) = \mathrm{1}_A(\omega)$$ for all $$A\in\mathcal F$$, while this should hold only for $$A\in\mathcal E$$, since $$\mathbf P^\omega$$ is the conditional probability by statement 3. Besides, the mapping $$\mathbf P^\cdot(A) = \mathrm{1}_A: (\Omega,\mathcal E)\to [0,1]$$ is measurable only for $$A\in\mathcal E$$, not for all $$A\in\mathcal F$$ as claimed in statement 2.

So where does it go wrong? Any comments or hints will be appreciated. TIA...

EDIT: Here are some further remarks:

1.
I just claimed that "as a sub-$$\sigma$$-field of $$\mathcal F$$, $$\mathcal E$$ is also countably generated". This is wrong. See e.g. here for a counterexample.

2. Thanks to the comment by @aduh, the problem reduces to whether it must be $$\mathcal E = \mathcal F$$. Or does there exist a proper sub-$$\sigma$$-field of $$\mathcal F$$ that is countably generated and contains all singletons? I posted this as another question on Math.SE.

Conclusion: Under my assumptions, $$\mathcal E$$ must coincide with $$\mathcal F$$. So the problem is trivial. See the accepted answer given by @GEdgar in the "another question" I mentioned for details.

• It's not true that $\mathcal E$ is c.g. if $\mathcal F$ is. How do you know that your assumptions don't entail that $\mathcal E = \mathcal F$? Sep 21 '20 at 3:35
• @aduh Thank you for your comment. I am not sure whether it must be $\mathcal E = \mathcal F$. I posted another question for this on the site. Please see the EDIT part at the end. Sep 21 '20 at 10:42
• Right, good. And it looks like GEdgar's answer solves the problem. In your case, $\mathcal E = \mathcal F$. Sep 21 '20 at 20:17
• @aduh Yes, I see. Thank you, and enjoy this discussion! Sep 21 '20 at 20:43
• You might consider writing an answer to this question so that it's not left unanswered.

As said in the Conclusion part at the end of the question, we can prove $$\mathcal E = \mathcal F$$ following the lines of @GEdgar. More precisely, we can prove the following theorem:

Theorem. Let $$\Omega$$ be a Polish space, $$\mathcal F=\mathcal B(\Omega)$$ be the Borel $$\sigma$$-field for $$\Omega$$. If $$\mathcal E\subset \mathcal F$$ is a countably generated sub-$$\sigma$$-field containing all the singleton sets, then $$\mathcal E = \mathcal F$$.

Lemma. Let $$\Omega$$ be a Polish space, $$\mathcal F=\mathcal B(\Omega)$$ be the Borel $$\sigma$$-field for $$\Omega$$. If $$\mathcal E\subset \mathcal F$$ is a countably generated sub-$$\sigma$$-field, then a set $$A\in\mathcal F$$ belongs to $$\mathcal E$$ if and only if $$A$$ is a union of atoms of $$\mathcal E$$.
# My Easel ## August 18, 2007 ### Stayin’ alive Filed under: Uncategorized — Aditya Sengupta @ 1:22 am I found these very touching videos through this blog, which in turn I found a couple of hours ago through someone else’s blog. The blog itself is highly recommended reading. It’s not often I ask you to read a blog without having followed it for a while, but the posts there, the first few ones at least, are captivating. (For those who haven’t figured it out already, the title is from the song by the same name by the Bee Gees, again- highly recommended) ## August 16, 2007 ### Google: 9 Notions of Innovation Filed under: Uncategorized — Aditya Sengupta @ 10:47 pm I found this great podcast on iTunes U. This is Marissa Mayer, Vice President of Search Product and User Experience at Google, talking about what she calls the ‘9 Notions of Innovation’. I found a video on Youtube as well: Here is a gist of the main talking points during the presentation: 1. Ideas come from everywhere Google expects everyone to innovate, even the finance team 2. Share everything you can Every idea, every project, every deadline — it’s all accessible to everyone on the intranet 3. You’re brilliant, we’re hiring Founders Larry Page and Sergey Brin approve hires. They favor intelligence over experience 4. A license to pursue dreams Employees get a “free” day a week. Half of new launches come from this “20% time” 5. Innovation, not instant perfection Google launches early and often in small beta tests, before releasing new features widely 6. Don’t politic, use data Mayer discourages the use of “I like” in meetings, pushing staffers to use metrics 7. Creativity loves restraint Give people a vision, rules about how to get there, and deadlines 8. Worry about usage and users, not money Provide something simple to use and easy to love. The money will follow. 9. Don’t kill projects — morph them There’s always a kernel of something good that can be salvaged Anyone who knows me or has followed my blog (I’ve posted on life at Google earlier) will know just how much I respect the Google culture. I find it unsurprising that an entity that follows such a path towards innovation gains so much success and respect. My personal favourites are #2, 7 and 8. Although each point is one that I think should be followed everywhere, I see a particular need for these points to be highlighted amongst the people I work with, or don’t, or can’t. You get the idea. I wish this approach to innovation, and the Google philosophy, were more prevalent. Here is the original stream. Click the play button below: http://www.stanford.edu/group/edcorner/uploads/podcast/mayer060517.mp3 To attribute this as best I can: this cast is hosted by the Stanford Technology Ventures Program. ## August 9, 2007 ### K9 Express (part 2): Encounter of the first kind Filed under: Uncategorized — Aditya Sengupta @ 12:48 am A fair bit has been said in jest about animals that roam the halls of engineering colleges in Bombay. Here is proof. From my own class. No kidding. This dog strolls in during a lecture and decides to take a bit of a nap. Well, the dog wasn’t alone. These guys didn’t mind taking 40 winks themselves: Methinks this does not bode well for the professor. 
To be fair, it was the fag end of a very long day

## August 6, 2007

### The Real Fake Steve Jobs

Filed under: Uncategorized — Aditya Sengupta @ 10:30 pm

The latest buzz on the blogosphere is the unmasking of Fake Steve Jobs - the anonymous blogger who, for more than a year now, has assumed the self-interpreted persona of Apple CEO Steve Jobs. A very creative interpretation at that. 'The Secret Diary of Steve Jobs' is one of my favourite blogs - the kind one looks forward to each day. Funny, nay - hilarious in its parodical portrayal of Steve Jobs. And the blog is popular too: 700,000 hits last month. His readers have included, at one point or another, Bill Gates and Steve Jobs himself. The real one.

Well, I'm not too happy about the fact that he's been unmasked. I enjoyed the intrigue. And from what I can see from the post that did the dirty deed, not a lot of people are happy either. Read the comments.

So who is FSJ? Daniel Lyons. A reporter for Forbes. Read this for a more thorough account. His real blog is here. His colleagues at Forbes have been having a blast through all this.

Through all this, I hope that this does not dilute FSJ's voice. He claims it won't. He promises to come back 'badder than ever'. Well, in FSJ's very own inimitable style, peace and love. Namaste!

## August 5, 2007

### Math Puzzle

Filed under: Uncategorized — Aditya Sengupta @ 9:04 pm

I'm usually pretty good at pointing out the flaws in mathematical puzzles that give contradictory answers… the types that give you results like 1 equals 0, you get the idea. I just found this one that has had me stumped:

First we set: x = 0.999999999… (infinitely recurring)

Multiplying both sides by 10, we have: 10x = 9.999999999… (infinitely recurring)

Subtracting the first equation from the second one: 10x – x = 9.999999999… – 0.999999999…

Therefore, 9x = 9

We divide both sides by 9 to get: x = 1

So do we have, from the first statement, 1 = 0.999999999…?

Apparently, this is true! No kidding. Yeah, I was pretty surprised as well. I expected to find a hitch in the proof, an inconsistency of some sort. Nothing. Nada. Zilch. I looked it up online even. You'd be surprised how popular this issue is on the web. Amongst mathematicians at any rate. Wikipedia has a pretty exhaustive, and somewhat exhausting, article about this here. The image you see at the beginning of the post is from there. So is the alternative proof that follows:

$$0.333\dots = \frac{1}{3}$$
$$3 \times 0.333\dots = 3 \times \frac{1}{3} = \frac{3 \times 1}{3}$$
$$0.999\dots = 1$$

Oh, here is another interesting piece of information I found while looking up this puzzle, though quite a few of you probably know about this: any recurring (non-terminating repeating) decimal can be converted to a fraction. Use the method in the first proof. Here is a related page with some other elegant examples.

(Update: I hit the publish button before I meant to post. Here is the ending.)

This puzzle illustrates the somewhat philosophical issues in our interpretation of mathematics. While we inherently believe that the number 0.999999999… has a last 9 at infinity, one must realize that there is no last 9 and that the expansion of the number never ends. Stating that there is something at infinity is meaningless. We often treat infinity as if it were a number, or a location (a point on a number line). This is something we need to get past. This entire discussion curiously reminds me of a particular strip from Calvin and Hobbes.
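Coming back to that conversion trick: the "multiply by a power of ten and subtract" method mechanizes nicely. A small sketch (my own, assuming the repeating block starts right after the decimal point):

from fractions import Fraction

def recurring_to_fraction(block: str) -> Fraction:
    """Convert 0.(block)(block)... to a fraction.

    If x = 0.blockblock..., then 10^n * x - x = int(block),
    so x = int(block) / (10^n - 1), where n = len(block).
    """
    n = len(block)
    return Fraction(int(block), 10**n - 1)

print(recurring_to_fraction("9"))       # 1, i.e. 0.999... == 1
print(recurring_to_fraction("142857"))  # 1/7
print(recurring_to_fraction("12"))      # 4/33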
# Kontorovich-Lebedev transform

The integral transform

$$F(\tau) = \int_0^\infty K_{i\tau}(x)\, f(x)\, dx,$$

where $K_\nu$ is the Macdonald function. If $f$ is of bounded variation in a neighbourhood of a point $x = x_0 > 0$ and if

$$f(x)\ln x \in L\left(0, \frac{1}{2}\right), \qquad f(x)\sqrt{x} \in L\left(\frac{1}{2}, \infty\right),$$

then the following inversion formula holds:

$$\frac{f(x_0+) + f(x_0-)}{2} = \frac{2}{\pi^2 x_0} \int_0^\infty K_{i\tau}(x_0)\, \tau \sinh(\pi\tau)\, F(\tau)\, d\tau.$$

Let $f_i$, $i = 1, 2$, be real-valued functions with

$$f_i(x)\, x^{-3/4} \in L(0, \infty), \qquad f_i(x) \in L_2(0, \infty),$$

and let

$$F_i(\tau) = \int_0^\infty \frac{\sqrt{2\tau \sinh \pi\tau}}{\pi}\, \frac{K_{i\tau}(x)}{\sqrt{x}}\, f_i(x)\, dx.$$

Then

$$\int_0^\infty F_1(\tau) F_2(\tau)\, d\tau = \int_0^\infty f_1(x) f_2(x)\, dx$$

(Parseval's identity).

The finite Kontorovich–Lebedev transform has the form

$$F(\tau) = \frac{2\pi \sinh \pi\tau}{\pi^2 |I_{i\tau}(\alpha)|^2} \int_0^\alpha \left[ K_{i\tau}(\alpha) I_{i\tau}(x) - I_{i\tau}(\alpha) K_{i\tau}(x) \right] f(x)\, \frac{dx}{x},$$

$\tau > 0$, where $I_\nu$ is the modified Bessel function (see [3]).

The study of such transforms was initiated by M.I. Kontorovich and N.N. Lebedev (see [1], [2]).

#### References

[1] M.I. Kontorovich, N.N. Lebedev, "A method for the solution of problems in diffraction theory and related topics" Zh. Eksper. i Teor. Fiz., 8 : 10–11 (1938) pp. 1192–1206 (In Russian)
[2] N.N. Lebedev, Dokl. Akad. Nauk SSSR, 52 : 5 (1945) pp. 395–398
[3] Ya.S. Uflyand, E. Yushkova, Dokl. Akad. Nauk SSSR, 164 : 1 (1965) pp. 70–72
[4] V.A. Ditkin, A.P. Prudnikov, "Integral transforms and operational calculus", Pergamon (1965) (Translated from Russian)
# How can you model a finite mixture model where it is over a Dirac point mass? [duplicate]

I am trying to model the finite mixture model case where:

$$\theta \sim \sum_{h=1}^{k}\pi_h \delta_{\theta^{*}_h}$$

where $\theta^{*}_h \sim \mathrm{Gamma}(\alpha, \beta)$, $(\pi_1, \ldots, \pi_k) \sim \mathrm{Dirichlet}(1/k, \ldots, 1/k)$, and $\delta$ is the Dirac point mass. I am doing this in R, but I do not even know how to start. Would anyone have any idea how I can implement this? Thanks!

• (1) Does your notation mean that you draw $k$ independent values from a $\Gamma(\alpha,\beta)$ distribution and then independently weight them with $k$ values from a Dirichlet distribution? (2) Regardless, what exactly do you mean by "model ... a model"? What procedure are you contemplating implementing? – whuber Nov 10, 2015 at 20:46
• Hi, that is exactly what I am trying to do in the first part you described. I am trying to find a way to obtain draws from this weighted mixture of Dirichlet "weights" and the Dirac point masses at the Gamma draws. Would you know how I can proceed with this? Nov 10, 2015 at 21:25
• I'm still trying to figure out what you want to do. If you draw independent $\pi_h$ and $\theta^{*}_h$ for each $\theta$, then what you are doing is no different than drawing a single value from a $\Gamma(\alpha,\beta)$ distribution. If you draw $k$ values once and for all from the Gamma distribution and draw a new $(\pi_i)$ for each $\theta$, then you are choosing each of those $k$ Gamma values with equal probability. Shall we presume, then, that you are defining a discrete distribution by drawing exactly one set of $k$ Gamma values and one set of $\pi_i$ and sampling repeatedly from it? – whuber Nov 10, 2015 at 23:06
• Hi, that is exactly what I'm trying to do, sorry if I didn't make it clearer. Nov 12, 2015 at 2:35
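Following whuber's reading (draw the $k$ Gamma atoms and the Dirichlet weights once, then sample repeatedly from the resulting discrete distribution), a minimal R sketch might look like this. The variable names are mine, and the Dirichlet draw is obtained by normalizing Gamma variates, since base R has no rdirichlet:

set.seed(1)
k <- 5; alpha <- 2; beta <- 1

# Atoms: k i.i.d. Gamma(alpha, beta) draws, made once and for all
theta_star <- rgamma(k, shape = alpha, rate = beta)

# Weights: one Dirichlet(1/k, ..., 1/k) draw, via normalized Gamma(1/k, 1) variates
g  <- rgamma(k, shape = 1 / k, rate = 1)
pi <- g / sum(g)

# Sampling theta from the mixture: pick atom h with probability pi_h
n     <- 1000
theta <- sample(theta_star, size = n, replace = TRUE, prob = pi)

table(round(theta, 3)) / n   # empirical weights concentrate on the k atoms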
# Counting Point Mutations

solved by 25392

July 2, 2012, midnight by Rosalind Team

Topics: Alignment

## Evolution as a Sequence of Mistakes

Figure 1. A point mutation in DNA changing a C-G pair to an A-T pair.

A mutation is simply a mistake that occurs during the creation or copying of a nucleic acid, in particular DNA. Because nucleic acids are vital to cellular functions, mutations tend to cause a ripple effect throughout the cell. Although mutations are technically mistakes, a very rare mutation may equip the cell with a beneficial attribute. In fact, the macro effects of evolution are attributable to the accumulated result of beneficial microscopic mutations over many generations.

The simplest and most common type of nucleic acid mutation is a point mutation, which replaces one base with another at a single nucleotide. In the case of DNA, a point mutation must change the complementary base accordingly; see Figure 1.

Two DNA strands taken from the genomes of different organisms or species are homologous if they share a recent ancestor; thus, counting the number of bases at which homologous strands differ provides us with the minimum number of point mutations that could have occurred on the evolutionary path between the two strands.

We are interested in minimizing the number of (point) mutations separating two species because of the biological principle of parsimony, which demands that evolutionary histories should be as simply explained as possible.

## Problem

Figure 2. The Hamming distance between these two strings is 7. Mismatched symbols are colored red.

Given two strings $s$ and $t$ of equal length, the Hamming distance between $s$ and $t$, denoted $d_{\mathrm{H}}(s, t)$, is the number of corresponding symbols that differ in $s$ and $t$. See Figure 2.

Given: Two DNA strings $s$ and $t$ of equal length (not exceeding 1 kbp).

Return: The Hamming distance $d_{\mathrm{H}}(s, t)$.

## Sample Dataset

GAGCCTACTAACGGGAT
CATCGTAATGACGGCCT

## Sample Output

7
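Rosalind does not prescribe a language; one straightforward take in Python, pairing up corresponding symbols and counting mismatches:

def hamming(s: str, t: str) -> int:
    """Hamming distance between two equal-length strings."""
    assert len(s) == len(t), "strings must have equal length"
    return sum(a != b for a, b in zip(s, t))

print(hamming("GAGCCTACTAACGGGAT", "CATCGTAATGACGGCCT"))  # 7, matching the sample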
# Writing an HTTP POST body that has new lines

I am looking for general ways to simplify this. What it does is write a body and send an HTTP POST request to an endpoint. I had the method require a list just to append new lines for each string I build at an earlier point in time. The expected response is in JSON, so I'm also looking for ways to simplify capturing the response. If there are any small libraries that can do this, I can consider using them.

private void makeRequest(List<String> records) throws IOException {
    URL url = new URL("http://my-api.com/post-endpoint");
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setDoOutput(true);
    connection.setRequestMethod("POST");
    // Write one record per line to the request body
    try (OutputStream output = connection.getOutputStream()) {
        Iterator<String> iterator = records.iterator();
        while (iterator.hasNext()) {
            String json = iterator.next();
            output.write(json.getBytes(Charset.forName("UTF-8")));
            if (iterator.hasNext()) {
                output.write('\n');
            }
        }
    }
    // Read the response line by line (this declaration was missing above)
    BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
    String inputLine;
    StringBuilder response = new StringBuilder();
    while ((inputLine = in.readLine()) != null) {
        response.append(inputLine);
    }
    in.close();
    System.out.println("RESPONSE: " + response.toString());
}

### Iterating over a list

Using an Iterator to iterate over a list is very old-fashioned. It's better to use the for-each syntax when possible:

for (String record : records) {
    // ...
}

I noticed that the last element is treated differently from the others: a \n character is not appended after the last record. Is that really necessary? I doubt it. Trailing \n characters should not make the JSON invalid. Although the implementation may be correct to not print a trailing \n, it seems a bit pedantic.

If for some reason you need to keep the special treatment of the last line, you can still use the for-each technique with a twist:

if (!records.isEmpty()) {
    output.write(records.get(0).getBytes(Charset.forName("UTF-8")));
    for (String record : records.subList(1, records.size())) {
        output.write('\n');
        output.write(record.getBytes(Charset.forName("UTF-8")));
    }
}

That is, instead of treating the last item specially, we treat the first item specially, and then use .subList to effectively work with the tail of the list. Note that .subList does not copy the content of the list, so it's not inefficient.

And if you can use Java 8, then an even more compact solution is possible using String.join:

output.write(String.join("\n", records).getBytes(Charset.forName("UTF-8")));

### try-with-resources

You used try-with-resources when writing to the output stream, but not when reading from the input stream. It's always good to use it whenever possible:

try (BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
    StringBuilder response = new StringBuilder();
    String inputLine;
    while ((inputLine = in.readLine()) != null) {
        response.append(inputLine);
    }
    System.out.println("RESPONSE: " + response);
}

• @mtj thanks for the reminder about joining. An even simpler solution is using String.join. – janos Sep 2 '17 at 9:09
# Displaying detections in High-Res

If you are running an object detection model, e.g. MobileNet or YOLO, it usually requires a smaller frame for inferencing (e.g. 300x300 or 416x416). Instead of displaying bounding boxes on such small frames, you can also stream higher-resolution frames (e.g. the video output from ColorCamera) and display the bounding boxes on these high-res frames. There are several approaches to achieving that, and in this tutorial we will take a look at them.

## 1. Passthrough

Just use the small inferencing frame. Here we used the passthrough frame of MobileNetDetectionNetwork's output, so bounding boxes are in sync with the frame. Another option would be to stream preview frames from ColorCamera and sync on the host (or not sync at all). A 300x300 frame with detections is shown below. Demo code here.

## 2. Crop high resolution frame

A simple solution to the low-resolution frame is to stream high-resolution frames (e.g. the video output from ColorCamera) to the host and draw bounding boxes on them. For the bounding boxes to match the frame, the preview and video outputs should have the same aspect ratio, here 1:1. In the example, we downscale the 4K resolution to 720P, so the maximum square resolution is 720x720, which is exactly the resolution we used (camRgb.setVideoSize(720,720)). We could also use 1080P resolution and stream 1080x1080 frames back to the host. Demo code here.

## 3. Stretch the frame

A problem that we often encounter with models is that their aspect ratio is 1:1, not e.g. 16:9 like our camera resolution. This means that some of the FOV will be lost. In our Maximizing FOV tutorial we showed that changing the aspect ratio will preserve the whole FOV of the camera, but it will "squeeze"/"stretch" the frame, as you can see below. Demo code here.

## 4. Edit bounding boxes

To avoid stretching the frame (as that can have an effect on NN accuracy), we can also stream full-FOV video from the device and do inferencing on 300x300 frames. This, however, means that we have to re-calculate the bounding boxes to match the different aspect ratio of the image (a sketch of the remapping follows below). This approach does not preserve the whole aspect ratio, it only displays bounding boxes on whole-FOV video frames. Demo code here.

## Got questions? We're always happy to help with code or other questions you might have.
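For approach 4, the remapping itself is just coordinate arithmetic. A minimal sketch (my own helper, not the DepthAI API; it assumes detections arrive as coordinates normalized to the stretched square NN input and maps them onto the full-FOV frame):

import numpy as np

def map_bbox_to_frame(bbox, frame_w, frame_h):
    """Map a normalized (xmin, ymin, xmax, ymax) from the stretched square
    NN input back onto a full-FOV frame of size frame_w x frame_h.

    Because the NN saw the full FOV squeezed to 1:1, stretching the
    normalized coordinates by the frame dimensions undoes the squeeze.
    """
    xmin, ymin, xmax, ymax = np.clip(bbox, 0.0, 1.0)
    return (int(xmin * frame_w), int(ymin * frame_h),
            int(xmax * frame_w), int(ymax * frame_h))

# Example: a detection covering the middle of the NN input, on a 1920x1080 frame
print(map_bbox_to_frame((0.4, 0.4, 0.6, 0.6), 1920, 1080))
# -> (768, 432, 1152, 648)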
# Problems with pdb renumbering

1 post / 0 new

#1

Hello everyone,

I am learning to do antibody-antigen docking with SnugDock. The antigen I am working on has two chains (chain A and B). Before the antigen pdb file can be used for docking, it must be renumbered (changing it to a single A chain and numbering all residues continuously), am I right?

There are some renumbering scripts in the Rosetta package. I have tried some of them, but none of them works.

(1) The pdb_renumber.py script in the /app/rosetta_src_2017.08.59291_bundle/tools/protein_tools/scripts directory. The command:

/app/rosetta_src_2017.08.59291_bundle/tools/protein_tools/scripts/pdb_renumber.py input.pdb output.pdb

gives the following error:

value= (' ', 1, ' ')
Traceback (most recent call last):
File "/app/rosetta_src_2017.08.59291_bundle/tools/protein_tools/scripts/pdb_renumber_new.py", line 36, in <module>
residue.id=(' ',residue_id,' ')
File "/usr/lib/python2.7/site-packages/biopython-1.69-py2.7-linux-x86_64.egg/Bio/PDB/Entity.py", line 92, in id
" this entity.".format(self._id, value, value))
ValueError: Cannot change id from (' ', 1, ' ') to (' ', 1, ' '). The id (' ', 1, ' ') is already used for a sibling of this entity.

(2) The renumber_pdb.py script in the /app/rosetta_src_2017.08.59291_bundle/tools/ directory. The command:

python /app/rosetta_src_2017.08.59291_bundle/tools/renumber_pdb.py input.pdb output.pdb

gives the error:

Traceback (most recent call last):
File "/app/rosetta_src_2017.08.59291_bundle/tools/renumber_pdb.py", line 35, in <module>
TypeError: coercing to Unicode: need string or buffer, NoneType found

(3) The renumber_pdb.py script in the /app/rosetta_src_2017.08.59291_bundle/demos/protocol_capture/replica_docking/csrosetta3/com directory. The command:

/app/rosetta_src_2017.08.59291_bundle/demos/protocol_capture/replica_docking/csrosetta3/com/renumber_pdb.py input.pdb output.pdb

doesn't give any error, but the chain layout in the new pdb file it produces is just the same as in the input pdb file: there are still two chain identifiers (A and B), and the residues of the two chains are still numbered separately.

Could anyone tell me how to fix these pdb renumbering scripts, and do you know which script works properly?

Yeping Sun

Category:
Post Situation:
Sun, 2017-08-20 06:39
Sunyp_IM
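The ValueError in (1) is a known Biopython quirk: assigning a residue id that a sibling already holds raises immediately, so renumbering in place needs a two-pass trick (first park every residue at a temporary id, then assign the final ones). A minimal sketch of that workaround, my own illustration rather than one of the Rosetta scripts, which renumbers all residues of both chains continuously:

from Bio import PDB  # requires Biopython

parser = PDB.PDBParser(QUIET=True)
structure = parser.get_structure("antigen", "input.pdb")
model = structure[0]

residues = [res for chain in model for res in chain]

# Pass 1: move every residue to a temporary id so no two siblings clash
for i, res in enumerate(residues):
    res.id = (" ", 100000 + i, " ")

# Pass 2: assign the final continuous numbering across both chains
for i, res in enumerate(residues, start=1):
    res.id = (" ", i, " ")

io = PDB.PDBIO()
io.set_structure(structure)
io.save("output.pdb")

Merging the two chains under a single chain ID is a separate step (the residues would have to be moved into one Chain object before saving), so this sketch only handles the continuous numbering.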
# Tag Info

15

This problem originated with passengers using electronics (they call them PEDs - portable electronic devices) during flight. While all consumer electronics have to be qualified by a regulatory body (FCC, etc.) to prove they do not emit harmful interference, this doesn't mean they emit no interference, especially to high-gain, sensitive navigation equipment. ...

14

Actually, all the atoms are identical. The time at which it is observed to decay is not an intrinsic property of a given atom, but rather an effect of quantum mechanics. For any given time bin, there is some finite amplitude for a transition to a decayed state, which in turn corresponds to a finite probability, since the particle(s) emitted will escape from ...

12

The reason why alpha particles heavily dominate as the proton-neutron mix most likely to be emitted from most (not all!) radioactive components is the extreme stability of this particular combination. That same stability is also why helium dominates after hydrogen as the most common element in the universe, and why other higher elements had to be forged in ...

10

The field of the accelerated charge cannot change instantaneously. From "Gravitation" by Misner, Wheeler, and Thorne. So it does have an electric field - but why does accelerating a field (which is in the answer below) cause light to be made? Or rather, why wouldn't a constant velocity just do it? The field of a uniformly moving charge does not ...

9

It may have something to do with the growth rate of sunflowers. During peak growing times sunflowers can grow inches in a single day, which likely results in them drawing more water out of the ground, allowing them to concentrate the radioactive materials, through deposition in the plant matter, at a faster rate than other plant organisms. I would suspect ...

9

To pretty much everything you stated in your question, "no". That convection requires a medium is not the main difference; it is simply the most obvious aspect of what is a fundamentally different mechanism for transferring energy. Convection is the transfer of energy by movement of a medium, whereas radiation is the transfer of energy by, well, thermal ...

9

You can get an upper limit by simply treating the case of:
• The radioisotope structured as a point source.
• The whole dose still present, corrected for half-life.
• No shielding.

A simple $\frac{\text{presented area}}{4\pi r^2}$ for the acceptance (in this case the radiation is emitted in all directions, acceptance represents the fraction that hits the target, ...

8

Short answer: no. Longer answer: No, excepting neutrinos none of the products of radioactive decay has the penetrating power to pass through the atmosphere, and neutrino detection is not something we can do from satellites. To elaborate, the immediate products of radioactive decay are (some set of, depending on the decay in question) fission fragments, ...

8

Ultra-high energy cosmic rays all come from a very, very long way away (anything with the power to create them nearby would constitute a danger to life as we know it). I think the preferred mechanism these days is dynamic acceleration in the jets formed by active galactic nuclei, but don't quote me. Anyway, ultra-relativistic though they are, that means ...

8

Pasted text of the letter in English - the link also contains the original typed German letter. Open letter to the group of radioactive people at the Gauverein meeting in Tübingen. Zürich, Dec.
4, 1930 Physics Institute of the ETH Gloriastrasse Zürich Dear Radioactive Ladies and Gentlemen, As the bearer of these ...

8

The neutron is made of two down quarks and an up quark; the proton of two up quarks and a down quark. This leads to two effects that differentiate their masses. One is that the up and down quarks themselves have different masses. The other is that the proton is charged, and so quantum corrections involving virtual photons affect its mass. The details are ...

8

The simple answer is no, though as usual in physics things are a bit more complicated than that. There are several ways in which radionuclides decay: alpha decay, beta decay, gamma decay, and fission. These are all mediated by the weak and strong nuclear forces, though the electromagnetic force plays some part in alpha decay and nuclear fission. There is ...

8

Although none of these questions is an exact duplicate, there is a lot of overlap, and I hope we can avoid stringing this kind of stuff out indefinitely. The good news is that you're apparently being very cautious about the safety hazards of your planned matter-antimatter spaceship -- hazards that science fiction authors typically blithely ignore. Please let ...

8

You are always subjected to a low background of ionizing radiation from a number of natural and artificial sources, which include cosmic rays, trace amounts of radioactive nuclei in the air and in food, and indeed the ground. A good place to read up on this is the corresponding Wikipedia article. The radiation from the core, however, has no chance of ...

8

In fact, an electric charge at rest on the Earth's surface is accelerated, and this actually poses a challenge to the idea that a uniformly accelerated charge radiates. I believe this is still an open question. For example: One of the most familiar propositions of elementary classical electrodynamics is that "an accelerating charge radiates". In fact, ...

8

How do I stay alive to be killed by neutrinos? You wouldn't. The point is being made that even the beam of neutrinos with a supernova at one astronomical unit's distance would be intense enough that enough of them would interact with the matter of your body to be lethal. So even the neutrinos would get you if all the other stuff - notably $\gamma$s - didn't. ...

7

It is not a matter of "falling in": all s orbitals have non-trivial probability densities at the center. It is about energy balance in the nucleus. Kr-83 is a lower energy configuration than Rb-83 by enough to make up for the neutrino and the gamma(s). Evidently Kr-85 is not a sufficiently lower energy state than Rb-85.

7

The color of a surface doesn't reliably indicate the emissivity at non-visible wavelengths. The color in the visible spectrum is more of a side effect than anything. Most thermal radiation around body temperature or room temperature happens in the infrared region, not the visible, and that's not reliably indicated by visible color: The transparent ...

7

I hear that the WC-135 Constant Phoenix has recently been deployed to monitor radiation in the air around Japan. The Vela satellites, which were operational until 1984, and currently the DSP satellites, are intended to give immediate reports of nuclear bomb detonations and ICBM launches. The Vela satellites included gamma ray detectors, which accidentally ...

7

Schrödinger came up with the cat in 1935, which was relatively late in the development of quantum mechanics. Back in the 1920's there had been a lot more uncertainty.
The Copenhagen school had wanted to quantize the atom while leaving the electromagnetic field classical, as formalized in the Bohr-Kramers-Slater (BKS) theory. De Broglie's 1924 thesis ...

7

There are many reasons for this situation. The power produced is non-adjustable: the battery produces power at a nearly constant rate (slowly decaying with time). It cannot be increased, and if not consumed (or stored) the power is lost. (Mentioned by DumpsterDoofus.) Low power density: ${}^{63}\text{Ni}$ for instance produces ~5 W/kg (and kg here is just the mass of ...

6

Indeed, it's likely that the rice came from Japan, and if it did, it's pretty likely it came from the Fukushima region, which is famous across Japan and beyond for its rice - and other products. However, it could have been harvested before the tsunami. But as discussed here, Are these radioactive particle matter and air emissions dangerous, 2000 km from ...

6

There's certainly no problem being out in the Sun during an eclipse: there's nothing being emitted then that's not being emitted at other times. The danger is just that the relative darkness near totality may make it seem safe to look at the Sun, even when it's not. But as long as you don't look directly at the Sun, you're fine. During the time when an ...

6

There is nothing magical about lead for this purpose. The driving factor is the number of electrons per unit volume, which reduces (to a first approximation) to the mass density. You get very good (better than lead) shielding performance from gold, tungsten, mercury, etc., and quite reasonable performance from iron or copper. Question for the student: why ...

6

The bad news: space radiation is much harsher than the boring gamma rays from our primitive nuclear reactors. Space radiation has much higher energy levels, and you cannot completely shield it, even with 10 meters of lead (which is in fact not very effective for neutrons). The good news is that an individual gamma photon, for example, usually would not ...

6

Thermal neutrons capture on hydrogen and carbon with reasonable (i.e. not large, but significant) cross-sections (this is the delayed-event detection method of most organic liquid scintillator anti-neutrino detectors -- i.e. the ones that don't dope their scintillator with gadolinium). So though a "cloud" -- meaning a localized diffuse gas -- of neutrons can ...

6

As @dmckee says, the problem is complicated. It is complicated because it is not a solution of a potential describing one force, but a balance between electromagnetic forces and the strong force that is keeping the quarks within the nucleons. (In the nucleus the strong force is like a type of Van der Waals potential, a higher-order interaction, overflowing ...

6

Hint: You have the masses (from the parent nucleus mass you can get the mass of the daughter nucleus by subtracting the mass of the $\alpha$ particle), and the Q value, i.e. the energy that gets liberated; since $931.5 \ MeV \approx 1 \, amu$, you can see the excess mass is easily negligible, i.e. $< 0.1 \, amu$.

$^{212}Po \rightarrow \ ^{208}D + \ ^4\alpha$

momentum is zero ...

6

When people say that the decay rate depends critically on the $Q$ value, they're talking about alpha decays compared to other alpha decays. When you compare alpha decay to emission of other small clusters, the dependence on the atomic number $Z_c$ of the emitted cluster is much more prominent. The reason is as follows. In the Gamow model of alpha decay, we ...

5

First, just recall that one becquerel is one radioactive decay per second.
If your spinach has had 1-3 Bq/kg, then you should just completely forget about any threats - there is no danger whatsoever. The safe limits are 100-500 Bq/kg, see e.g. http://www.anarchyjapan.com/2011/04/for-now-no-changes-in-food-radiation-safety-levels/ which are 100+ times ... Only top voted, non community-wiki answers of a minimum length are eligible
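One step in the alpha-decay hint above is worth writing out; this worked step is an illustration added here, not part of the quoted answer. For a parent nucleus that decays at rest, conservation of momentum fixes how the liberated energy $Q$ is shared between the daughter nucleus and the $\alpha$ particle:

$p_\alpha = p_D = p, \qquad E_\alpha = \frac{p^2}{2m_\alpha}, \quad E_D = \frac{p^2}{2m_D} \quad\Rightarrow\quad E_\alpha = \frac{m_D}{m_D + m_\alpha}\,Q.$

For ${}^{212}\text{Po}$, with $m_D \approx 208\ \text{amu}$ and $m_\alpha \approx 4\ \text{amu}$, the $\alpha$ particle carries about $208/212 \approx 98\%$ of $Q$.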
# Quality Control of the Dataset Using the Forward Search

#### 2022-04-24

```r
library(forsearch)
```

## Introduction

Years ago, statisticians were primarily concerned with the development of statistical methods for the analysis of various kinds of data. They worked out the probability distributions and properties of new estimators, tests of hypotheses, and predictive methodologies. They then applied these methods to the data presented to them by subject-matter specialists, fitting the methods to the data as well as they could.

Some of these studies were created not only for scientific or engineering audiences but also drew the attention of regulatory authorities. It became more and more important that the analytical methods be shown to fit the data. Soon, statisticians were asked to help design the studies in such a way that available analytical methods could be applied more appropriately to their data. Study protocols went from being nice-to-have guidelines to being serious plans for the conduct of the study, and greater and greater detail was inserted into them. Statistical analysis plans (SAPs) were soon required to accompany the protocol and to be synchronized with it to an ever greater degree.

In 2014, Francis Collins and Lawrence Tabak, the Director of the National Institutes of Health and his principal deputy, published a report (reference below) addressing their concern that the system for ensuring the reproducibility of scientific research was failing. Their principal concern was that the statistical results of some preclinical studies could not be reproduced by subsequent investigators in the area. They assured their readers that they found very few cases where the lack of reproducibility was caused by misconduct; rather, it stemmed from existing practices and procedures, lack of laboratory staff training in statistical concepts and methods, failure to report critical aspects of experimental design, and the like. Scientific progress relies on the premise that science is objective, not subjective: another competent investigator should be able to reproduce the results of a study by simply reproducing the stated methods, using the same study materials. I will leave it to you to read their paper and those that reinforced or contradicted their prescriptions for corrective action.

This concern about study reproducibility is not limited to preclinical studies. It also applies to clinical trials and other large, expensive studies with substantial societal impact. If a second study does not fully support the first, which one should we believe?

One drawback of the NIH approach is that it relies on the follow-up study to divulge the problem. This might be months or years after the original study, after a great deal of money has been spent going down the wrong path. Also, the original study might be a large engineering study or a complicated clinical trial that may itself take many months or years to complete. Staff changes are bound to occur. Supplies used at the beginning of the study may not be identical to supplies used at the end. Study sites may come and go. In other words, the data analysis could be just fine, appropriate for the study as defined in the protocol and the statistical analysis plan; perhaps it is simply in the nature of the study that inappropriate or inconsistent data have been introduced. The protocol is supposed to describe the universe to which the study (and the data) applies. But how accurate is that description, considering the actual data?
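Before defining terms, it helps to see the shape of the forward search itself. The sketch below is a toy illustration, in base R, of the general idea described by Atkinson and Riani (2000); it is not the forsearch interface, the data are simulated, and all object names are invented for the example.

```r
## Toy forward search for a simple linear model (base R only; not the forsearch API).
## Idea: start from a small well-fitting subset, then grow it one observation at a
## time, refitting and always keeping the best-fitting points. Contaminated
## observations enter last and show up as a jump in the monitored statistic.
set.seed(1)
n <- 40
x <- runif(n, 0, 10)
y <- 2 + 0.5 * x + rnorm(n, sd = 0.4)
y[38:40] <- y[38:40] + 5                 # three deliberately contaminated points
dat <- data.frame(x, y)

m0 <- 5                                  # initial subset size
fit0 <- lm(y ~ x, data = dat)
subset_idx <- order(abs(residuals(fit0)))[seq_len(m0)]   # crude starting subset

max_res <- numeric(n - m0)               # largest residual inside the subset, per step
for (m in m0:(n - 1)) {
  fit <- lm(y ~ x, data = dat[subset_idx, ])
  res <- abs(dat$y - predict(fit, newdata = dat))        # residuals for ALL points
  subset_idx <- order(res)[seq_len(m + 1)]               # grow the subset by one
  max_res[m - m0 + 1] <- max(res[subset_idx])
}
plot(max_res, type = "b", xlab = "forward-search step",
     ylab = "largest absolute residual in subset")
```

Plotting the monitored statistic against the step number makes the three contaminated observations announce themselves at the end of the search. A production tool must do all of this far more carefully (robust starting subsets, many monitored statistics, proper inference), and that is the role of the package.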
Quality control of the data concerns the correctness of the data after it is first collected: proper, easily interpreted data-entry devices and software to get the data correctly into a format suitable for analysis, and proper storage of all of the data to prevent deliberate or inadvertent changes. These things are pretty obvious, but they are not uniformly available to all investigators, even today.

Scientific integrity is the adherence of the study conduct to the protocol, and the completeness of the protocol with regard to its scope. A study of 14- to 19-year-olds should not admit a 13-year-old who will be 14 before the end of the study, no matter how compelling her medical case. But once she is in, her data is part of the study. Unfortunately, it is virtually impossible to enumerate all the different ways in which the data quality control and scientific integrity of a study can be compromised.

What we are describing here is a software tool that is available now and that can help point out, in real time, possible discrepancies in the first study with regard to data quality or scientific integrity. This tool can and should be used prior to the statistical analysis of the first study.

To complicate matters further, regulatory agencies and others worry that data quality control is only undertaken after some unexpected, adverse findings are discovered in the study. Eliminating observations at that point has the appearance of sampling to a foregone conclusion, or of suppressing adverse results.

Diagnostics involves identifying inconsistent observations, determining their impact on the primary analyses of the study, and justifying their disposition: total acceptance of the observation, modification of its value or classification prior to acceptance, or removal. In the name of proper scientific integrity, the disposition of outliers must be well documented.

I believe that statisticians, especially those who work with studies and data every day, will have a bigger role in the quality control and scientific integrity of studies in the years to come. The forsearch R extension will be a useful tool in this area.

To Be Continued

## References

Atkinson, A and M Riani (2000). Robust Diagnostic Regression Analysis. Springer, New York.

Collins, FS and LA Tabak (2014). Policy: NIH plans to enhance reproducibility. Nature, 505, 612–613.

Pinheiro, JC and DM Bates (2000). Mixed-Effects Models in S and S-PLUS. Springer, New York.
# Bateman transform

In the mathematical study of partial differential equations, the Bateman transform is a method for solving the Laplace equation in four dimensions and the wave equation in three, using a line integral of a holomorphic function in three complex variables. It is named after the English mathematician Harry Bateman, who first published the result (Bateman 1904).

The formula asserts that if $f$ is a holomorphic function of three complex variables, then

$\phi(w,x,y,z) = \oint_\gamma f\left((w+ix)+(iy+z)\zeta,\;(iy-z)+(w-ix)\zeta,\;\zeta\right)\,d\zeta$

is a solution of the Laplace equation, where $\gamma$ is a closed contour in the complex $\zeta$-plane. This follows by differentiation under the integral sign. Furthermore, Bateman asserted that the most general solution of the Laplace equation arises in this way.
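The differentiation-under-the-integral step is worth spelling out; the following verification is a sketch in modern notation, not a quotation of Bateman. Write $u=(w+ix)+(iy+z)\zeta$ and $v=(iy-z)+(w-ix)\zeta$, so the integrand is $f(u,v,\zeta)$. Since $f$ is holomorphic, the chain rule gives

$\Delta\phi = \oint_\gamma \left( f_{uu}\,\nabla u\cdot\nabla u + 2f_{uv}\,\nabla u\cdot\nabla v + f_{vv}\,\nabla v\cdot\nabla v \right)\,d\zeta,$

where the formal gradients with respect to $(w,x,y,z)$ are $\nabla u=(1,\,i,\,i\zeta,\,\zeta)$ and $\nabla v=(\zeta,\,-i\zeta,\,i,\,-1)$. All three complex dot products vanish:

$\nabla u\cdot\nabla u = 1-1-\zeta^2+\zeta^2 = 0, \qquad \nabla v\cdot\nabla v = \zeta^2-\zeta^2-1+1 = 0, \qquad \nabla u\cdot\nabla v = \zeta+\zeta-\zeta-\zeta = 0,$

so the integrand is annihilated by the four-dimensional Laplacian term by term, and $\Delta\phi=0$ for any closed contour $\gamma$ and any holomorphic $f$.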
# Align equations horizontally with related vertical cells

I have a table like this:

\begin{tabular}{|l|c|r|}
\hline
\multicolumn{3}{|c|}{\textbf{Etumerkillinen}} \\
\hline
Tyyppi & Minimi & Maksimi \\
\hline
i8   & $-2^{7}$   & $2^{7}-1$   \\
i16  & $-2^{15}$  & $2^{15}-1$  \\
i32  & $-2^{31}$  & $2^{31}-1$  \\
i64  & $-2^{64}$  & $2^{63}-1$  \\
i128 & $-2^{127}$ & $2^{127}-1$ \\
\hline
\end{tabular}

The table is rendered like this: [screenshot]

I would like to align the math formulas so that the 2's, the 1's, and the negative signs line up horizontally with one another. Is this possible, and how would I achieve it?

• Welcome to TeX SX! If I understand well, you want the 2nd and 3rd columns' cells to be internally left-aligned and globally centred? May 7, 2020 at 8:38
• Be aware of the typo in the column Minimi: the field -2^{64}. May 7, 2020 at 13:01

I don't think you'll go wrong if you choose left-alignment for both data columns. To align the numbers in the second column on the 2, the - and the 1, you could split the single column into three distinct sub-columns. I would focus my efforts more on giving the table a more open "look", as is done on the right in the following screenshot.

\documentclass{article}
\usepackage{array} % for '\newcolumntype' macro
\newcolumntype{L}{>{$}l<{$}}
\newcolumntype{C}{>{${}}c<{{}$}}
\usepackage{booktabs}

\begin{document}
\begin{tabular}[t]{|l|L|L@{}C@{}L|}
\hline
\multicolumn{5}{|c|}{\textbf{Etumerkillinen}} \\
\hline
Tyyppi & $Minimi$ & \multicolumn{3}{c|}{Maksimi} \\
\hline
i8   & -2^{7^{\mathstrut}} & 2^{7}   & - & 1 \\
i16  & -2^{15}             & 2^{15}  & - & 1 \\
i32  & -2^{31}             & 2^{31}  & - & 1 \\
i64  & -2^{64}             & 2^{63}  & - & 1 \\
i128 & -2^{127}            & 2^{127} & - & 1 \\
\hline
\end{tabular}%
\begin{tabular}[t]{@{\kern2pt} l L L@{}C@{}L @{}} % cf barbara beeton's comments below
\toprule
\multicolumn{5}{c}{\textbf{Etumerkillinen}} \\
\cmidrule{1-5}
\multicolumn{1}{@{}l}{Tyyppi} & $Minimi$ & \multicolumn{3}{c@{}}{Maksimi} \\
\midrule
i8   & -2^{7}   & 2^{7}   & - & 1 \\
i16  & -2^{15}  & 2^{15}  & - & 1 \\
i32  & -2^{31}  & 2^{31}  & - & 1 \\
i64  & -2^{64}  & 2^{63}  & - & 1 \\
i128 & -2^{127} & 2^{127} & - & 1 \\
\bottomrule
\end{tabular}
\end{document}

• @albert - Good catch. Please see the revised answer. – Mico May 7, 2020 at 9:41
• Nit-pick. In the right-hand image, the first column, beginning with "i", looks squished tightly up against the left edge, especially since the "T" in the column header has some "air" on the left and the "i" is so narrow that it almost looks cut off. May 7, 2020 at 19:01
• Good plan. I'd be tempted to indent the "i"s even a little more, maybe as much as 2pt. May 7, 2020 at 19:45
• It's true that it's an optical illusion, but it's more obvious in the table on the right because the rules delineating the table start at exactly the same place as the table contents. The table on the left has a buffer. May 7, 2020 at 19:58
• Much better now! The "i" still looks skinny, but that can't be helped (and it would be much worse in sans serif). May 7, 2020 at 20:07

You can define a \2 macro and use it:

\def\2^#1{\hbox to1.6em{$2^{#1}$\hss}}

\begin{tabular}{|l|c|r|}
\hline
\multicolumn{3}{|c|}{\textbf{Etumerkillinen}} \\
\hline
Tyyppi & Minimi & Maksimi \\
\hline
i8   & $-\2^{7}$   & $\2^{7}-1$   \\
i16  & $-\2^{15}$  & $\2^{15}-1$  \\
i32  & $-\2^{31}$  & $\2^{31}-1$  \\
i64  & $-\2^{64}$  & $\2^{63}-1$  \\
i128 & $-\2^{127}$ & $\2^{127}-1$ \\
\hline
\end{tabular}

• Please make a complete example out of it and show the output in an image. May 7, 2020 at 9:46
• I have made the example complete, just as it is in the question. May 7, 2020 at 9:51

Another possible layout, with the formulæ aligned and centred in their respective columns. It requires eqparbox for the alignment, and collcell to use the former in tables:

\documentclass{article}
\usepackage{array}
\usepackage{eqparbox}
\newcommand{\eqmathboxM}[1]{\eqmakebox[M][l]{$#1$}}
\newcommand{\eqmathboxm}[1]{\eqmakebox[m][l]{$#1$}}
\usepackage{collcell}

\begin{document}
\setlength{\extrarowheight}{3pt}
\begin{tabular}[t]{|l|>{\collectcell\eqmathboxm}c<{\endcollectcell}|>{\collectcell\eqmathboxM}c<{\endcollectcell}|}
\hline
\multicolumn{3}{|c|}{\textbf{Etumerkillinen}} \\
\hline
Tyyppi & \multicolumn{1}{c|}{Minimi} & \multicolumn{1}{c|}{Maksimi} \\
\hline
i8   & -2^{7}   & 2^{7}\hfill-1   \\
i16  & -2^{15}  & 2^{15}\hfill-1  \\
i32  & -2^{31}  & 2^{31}\hfill-1  \\
i64  & -2^{64}  & 2^{63}\hfill-1  \\
i128 & -2^{127} & \eqmakebox[M]{$2^{127}-1$} \\
\hline
\end{tabular}
\end{document}